Paper Title
Learning on Arbitrary Graph Topologies via Predictive Coding
Paper Authors
Paper Abstract
Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network. This process is highly effective when the goal is to minimize a specific objective function. However, it does not allow training on networks with cyclic or backward connections. This is an obstacle to reaching brain-like capabilities, as the highly complex heterarchical structure of the neural connections in the neocortex is potentially fundamental for its effectiveness. In this paper, we show how predictive coding (PC), a theory of information processing in the cortex, can be used to perform inference and learning on arbitrary graph topologies. We experimentally show how this formulation, called PC graphs, can be used to flexibly perform different tasks with the same network by simply stimulating specific neurons, and investigate how the topology of the graph influences the final performance. We conclude by comparing against simple baselines trained with BP.
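The abstract's core idea — relaxing node activities to minimize a prediction-error energy on a graph that may contain cycles, while "stimulating" (clamping) chosen neurons — can be illustrated with a minimal sketch. This is a generic predictive-coding formulation, not the paper's exact implementation: the energy \(E = \tfrac{1}{2}\sum_i (x_i - \mu_i)^2\) with predictions \(\mu_i = \sum_j W_{ij} f(x_j)\) is the standard PC setup, and the activation, node count, and step sizes below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of predictive coding (PC) on an arbitrary directed graph.
# Energy: E = 1/2 * sum_i (x_i - mu_i)^2, with mu_i = sum_j W[i, j] * f(x_j).
# All hyperparameters and the tanh activation are illustrative choices.

def f(x):
    return np.tanh(x)

def df(x):
    return 1.0 - np.tanh(x) ** 2

def pc_step(x, W, clamped, gamma=0.1):
    """One inference step: relax unclamped node values to lower the energy."""
    mu = W @ f(x)                # predictions arriving at each node
    eps = x - mu                 # per-node prediction errors
    # dE/dx_i = eps_i - f'(x_i) * sum_k W[k, i] * eps_k
    grad = eps - df(x) * (W.T @ eps)
    x_new = x - gamma * grad
    x_new[clamped] = x[clamped]  # stimulated neurons stay fixed at their value
    return x_new, eps

def learn_step(x, W, A, alpha=0.01):
    """One weight update: gradient descent on E (Hebbian-like outer product),
    masked by the adjacency matrix A so only existing edges change."""
    eps = x - W @ f(x)
    W += alpha * np.outer(eps, f(x)) * A
    return W

rng = np.random.default_rng(0)
n = 5
A = (rng.random((n, n)) < 0.5).astype(float)  # arbitrary topology, cycles allowed
np.fill_diagonal(A, 0.0)
W = 0.1 * rng.standard_normal((n, n)) * A

x = rng.standard_normal(n)
clamped = np.zeros(n, dtype=bool)
clamped[0] = True                             # stimulate neuron 0 with a "data" value
x[0] = 1.0

for _ in range(100):                          # inference: relax toward an equilibrium
    x, eps = pc_step(x, W, clamped)
W = learn_step(x, W, A)                       # then a single weight update
```

Because learning only ever uses local errors and local activities, the same relaxation loop runs unchanged whether the graph is a feedforward chain, fully recurrent, or anything in between — which is the flexibility the abstract highlights.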