Paper Title

Developmental Network Two, Its Optimality, and Emergent Turing Machines

Paper Authors

Juyang Weng, Zejia Zheng, Xiang Wu

Paper Abstract

Strong AI requires the learning engine to be task-nonspecific and to automatically construct a dynamic hierarchy of internal features. By hierarchy, we mean, e.g., that short road edges and short bush edges amount to intermediate features of landmarks, whereas intermediate features from tree shadows are distractors that must be disregarded by the high-level landmark concept. By dynamic, we mean that the automatic selection of features while disregarding distractors is not static but based on dynamic statistics (e.g., because shadows are unstable in the context of a landmark). By internal features, we mean that they are not only sensory but also motor, so that context from the motor (state) integrates with sensory inputs to form a context-based logic machine. We present why strong AI is necessary for any practical AI system that must work reliably in the real world. We then present a new generation of Developmental Networks, DN-2. Among its many novelties beyond DN-1, the most important is that the inhibition area of each internal neuron is neuron-specific and dynamic. This enables DN-2 to automatically construct an internal hierarchy that is fluid, whose number of areas is not static as in DN-1. To optimally use the limited resources available, we establish that DN-2 is optimal in the sense of maximum likelihood under the conditions of limited learning experience and limited resources. We also present how DN-2 can learn an emergent Universal Turing Machine (UTM). Together with the optimality, we present the optimal UTM. Experiments on real-world vision-based navigation, maze planning, and audition used DN-2. They showed that DN-2 is general-purpose, using natural and synthetic inputs. Its automatically constructed internal representations focus on important features while remaining invariant to distractors and other irrelevant context concepts.
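To make the two mechanisms highlighted in the abstract more concrete, the following is a minimal, hypothetical sketch in Python: hidden neurons integrate bottom-up sensory input with top-down motor/state context, and lateral inhibition is neuron-specific and dynamic, realized here as top-k competition within each neuron's own neighborhood. This is not the authors' implementation of DN-2; the class name DN2Sketch, the top-k rule, the neighborhood size, and the amnesic-style learning rate are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' released code) of two ideas from the abstract:
# (1) hidden neurons respond to the concatenation of sensory input X and motor/state
#     context Z, and
# (2) inhibition is neuron-specific and dynamic: each neuron competes only within its
#     own neighborhood of rivals (a top-k rule), rather than within one fixed area.
import numpy as np


class DN2Sketch:
    def __init__(self, x_dim, z_dim, n_hidden, k=3, rng=None):
        self.rng = np.random.default_rng(rng)
        d = x_dim + z_dim
        # Each hidden (Y) neuron keeps one weight vector over [X, Z].
        self.w = self.rng.normal(size=(n_hidden, d))
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)
        self.age = np.zeros(n_hidden)  # per-neuron firing age
        # Neuron-specific inhibition neighborhoods (assumption: random, fixed here
        # for brevity; in general they could be updated as learning proceeds).
        self.neighbors = [
            np.append(self.rng.choice(n_hidden, size=min(8, n_hidden), replace=False), i)
            for i in range(n_hidden)
        ]
        self.k = k

    def respond(self, x, z):
        # Integrate sensory input with motor/state context and normalize.
        p = np.concatenate([x, z]).astype(float)
        p /= np.linalg.norm(p) + 1e-12
        pre = self.w @ p  # pre-responses (cosine similarity)
        y = np.zeros_like(pre)
        for i, nbrs in enumerate(self.neighbors):
            # Neuron i fires only if it is among the top-k within ITS OWN
            # inhibition neighborhood: a neuron-specific competition.
            rivals = pre[nbrs]
            thresh = np.sort(rivals)[-min(self.k, len(rivals))]
            if pre[i] >= thresh:
                y[i] = pre[i]
        return y, p

    def learn(self, x, z):
        y, p = self.respond(x, z)
        for i in np.nonzero(y)[0]:
            self.age[i] += 1
            lr = 1.0 / self.age[i]  # amnesic-average style learning rate (assumption)
            self.w[i] = (1 - lr) * self.w[i] + lr * p
            self.w[i] /= np.linalg.norm(self.w[i]) + 1e-12
        return y


if __name__ == "__main__":
    net = DN2Sketch(x_dim=16, z_dim=4, n_hidden=32, rng=0)
    x = np.random.default_rng(1).random(16)  # toy sensory frame
    z = np.eye(4)[2]                         # toy motor/state context
    y = net.learn(x, z)
    print("active hidden neurons:", np.nonzero(y)[0])
```

The design point the sketch tries to capture is that each hidden neuron competes only within its own inhibition neighborhood, so the effective grouping of neurons is not a fixed, predefined set of areas; under the abstract's description, this is what allows the internal hierarchy of DN-2 to remain fluid as statistics change.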
