Paper Title
Revisit Systematic Generalization via Meaningful Learning
Paper Authors
Paper Abstract
Humans can systematically generalize to novel compositions of existing concepts. Recent studies argue that neural networks appear inherently ineffective in such cognitive capacity, leading to a pessimistic view and a lack of attention to optimistic results. We revisit this controversial topic from the perspective of meaningful learning, an exceptional capability of humans to learn novel concepts by connecting them with known ones. We reassess the compositional skills of sequence-to-sequence models conditioned on the semantic links between new and old concepts. Our observations suggest that models can successfully one-shot generalize to novel concepts and compositions through semantic linking, either inductively or deductively. We demonstrate that prior knowledge plays a key role as well. In addition to synthetic tests, we further conduct proof-of-concept experiments in machine translation and semantic parsing, showing the benefits of meaningful learning in applications. We hope our positive findings will encourage excavating modern neural networks' potential in systematic generalization through more advanced learning schemes.