Paper Title
Discrete Word Embedding for Logical Natural Language Understanding
Paper Authors
Paper Abstract
We propose an unsupervised neural model for learning a discrete embedding of words. Unlike existing discrete embeddings, our binary embedding supports vector arithmetic operations similar to those of continuous embeddings. Our embedding represents each word as a set of propositional statements describing a transition rule in the classical/STRIPS planning formalism. This makes the embedding directly compatible with symbolic, state-of-the-art classical planning solvers.
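To make the claim about vector arithmetic on binary embeddings concrete, here is a minimal illustrative sketch. The abstract does not specify how the arithmetic is performed, so this sketch assumes a hypothetical clipped add/subtract operation on {0,1} vectors with Hamming-distance nearest-neighbor lookup, and uses made-up toy vectors; it is not the paper's actual method.

```python
# Illustrative sketch only: assumes analogy arithmetic on binary vectors via
# clipped addition/subtraction (a hypothetical choice, not taken from the paper),
# mirroring the familiar king - man + woman ~ queen example from continuous embeddings.
import numpy as np

# Hypothetical 8-bit binary embeddings for four words (made-up toy values).
emb = {
    "king":  np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.int8),
    "man":   np.array([1, 0, 0, 1, 0, 0, 0, 0], dtype=np.int8),
    "woman": np.array([1, 0, 0, 1, 1, 0, 0, 1], dtype=np.int8),
    "queen": np.array([1, 0, 1, 1, 1, 0, 1, 1], dtype=np.int8),
}

def analogy(a: str, b: str, c: str) -> np.ndarray:
    """Compute a - b + c elementwise and clip the result back into {0, 1}."""
    return np.clip(emb[a] - emb[b] + emb[c], 0, 1)

def nearest(vec: np.ndarray) -> str:
    """Return the vocabulary word with the smallest Hamming distance to vec."""
    return min(emb, key=lambda w: int(np.sum(emb[w] != vec)))

print(nearest(analogy("king", "man", "woman")))  # -> "queen" for these toy vectors
```

With these toy vectors the analogy resolves exactly to "queen"; the point is only that addition, subtraction, and nearest-neighbor lookup remain meaningful on binary vectors, which is the property the abstract attributes to the proposed embedding.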