Paper Title
Compositional Languages Emerge in a Neural Iterated Learning Model
Paper Authors
Paper Abstract
The principle of compositionality, which enables natural language to represent complex concepts via a structured combination of simpler ones, allows us to convey an open-ended set of messages using a limited vocabulary. If compositionality is indeed a natural property of language, we may expect it to appear in communication protocols that are created by neural agents in language games. In this paper, we propose an effective neural iterated learning (NIL) algorithm that, when applied to interacting neural agents, facilitates the emergence of a more structured type of language. Indeed, these languages provide learning speed advantages to neural agents during training, which can be incrementally amplified via NIL. We provide a probabilistic model of NIL and an explanation of why the advantage of compositional languages exists. Our experiments confirm our analysis, and also demonstrate that the emergent languages substantially improve the generalization ability of neural agent communication.
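To make the iterated-learning dynamic behind NIL concrete, the following is a minimal, non-neural Python sketch: in each generation a new learner observes only a bottlenecked sample of the previous generation's meaning-message pairs and must generalize to unseen meanings, which gradually amplifies structure across generations. All names here (learning_phase, structure_score, BOTTLENECK, and so on) are illustrative assumptions; the paper's actual NIL algorithm trains neural speaker and listener agents and adds an interacting (game-playing) phase, both of which are omitted from this sketch.

```python
# Toy, non-neural illustration of the iterated-learning loop underlying NIL.
# The paper's algorithm uses neural agents plus an interacting phase (referential
# game trained with reinforcement learning); here only the transmission
# bottleneck and imitation step are modeled.

import random
from collections import Counter
from typing import Dict, Tuple

Meaning = Tuple[int, int]          # (color_id, shape_id)
Language = Dict[Meaning, str]      # meaning -> two-symbol message

ALPHABET = "abcd"
MEANINGS = [(c, s) for c in range(4) for s in range(4)]
BOTTLENECK = 8                     # meaning/message pairs each new learner observes


def random_language() -> Language:
    """Generation 0: an unstructured (holistic) language."""
    return {m: "".join(random.choices(ALPHABET, k=2)) for m in MEANINGS}


def learning_phase(parent: Language) -> Language:
    """A new learner imitates the parent from a limited sample and generalizes
    to unseen meanings by reusing the symbols most often paired with each
    attribute -- a crude stand-in for the neural learner's inductive bias."""
    observed = dict(random.sample(list(parent.items()), BOTTLENECK))
    first = {c: Counter(msg[0] for (pc, _), msg in observed.items() if pc == c)
             for c in range(4)}
    second = {s: Counter(msg[1] for (_, ps), msg in observed.items() if ps == s)
              for s in range(4)}
    child: Language = {}
    for (c, s) in MEANINGS:
        if (c, s) in observed:
            child[(c, s)] = observed[(c, s)]
        else:
            f = first[c].most_common(1)[0][0] if first[c] else random.choice(ALPHABET)
            g = second[s].most_common(1)[0][0] if second[s] else random.choice(ALPHABET)
            child[(c, s)] = f + g
    return child


def structure_score(lang: Language) -> float:
    """Fraction of meanings whose message matches a one-symbol-per-attribute
    code, i.e. how compositional the language looks."""
    best_first = {c: Counter(lang[(c, s)][0] for s in range(4)).most_common(1)[0][0]
                  for c in range(4)}
    best_second = {s: Counter(lang[(c, s)][1] for c in range(4)).most_common(1)[0][0]
                   for s in range(4)}
    hits = sum(lang[(c, s)] == best_first[c] + best_second[s] for (c, s) in MEANINGS)
    return hits / len(MEANINGS)


if __name__ == "__main__":
    random.seed(0)
    lang = random_language()
    for gen in range(10):
        print(f"generation {gen}: structure = {structure_score(lang):.2f}")
        lang = learning_phase(lang)   # NIL would also run an interacting phase here
```

Running the script typically shows the structure score rising toward 1.0 over generations, mirroring the abstract's claim that a learnability advantage for structured languages is incrementally amplified by iterated transmission.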