Paper Title
Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation?
Paper Authors
Paper Abstract
The backpropagation of error algorithm used to train deep neural networks has been fundamental to the successes of deep learning. However, it requires sequential backward updates and non-local computations, which make it challenging to parallelize at scale and are unlike how learning works in the brain. Neuroscience-inspired learning algorithms such as \emph{predictive coding}, which rely on local learning, have the potential to overcome these limitations and advance beyond current deep learning technologies. While predictive coding originated in theoretical neuroscience as a model of information processing in the cortex, recent work has developed the idea into a general-purpose algorithm able to train neural networks using only local computations. In this survey, we review works that have contributed to this perspective and demonstrate the close theoretical connections between predictive coding and backpropagation, as well as works that highlight the multiple advantages of using predictive coding models over backpropagation-trained neural networks. Specifically, we show the substantially greater flexibility of predictive coding networks compared with equivalent deep neural networks: they can function as classifiers, generators, and associative memories simultaneously, and can be defined on arbitrary graph topologies. Finally, we review direct benchmarks of predictive coding networks on machine learning classification tasks, as well as their close connections to control theory and applications in robotics.
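The central mechanism the abstract refers to, training a network with purely local computations, can be illustrated with a minimal sketch of a supervised predictive coding network. Everything below (layer sizes, the tanh nonlinearity, step sizes, and the toy regression task) is an illustrative assumption, not the survey's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 8, 1]                          # illustrative layer sizes
W = [rng.normal(0.0, 0.5, (sizes[l + 1], sizes[l])) for l in range(2)]

f = np.tanh
df = lambda v: 1.0 - np.tanh(v) ** 2

def infer_and_learn(x_in, y_target, n_steps=50, dt=0.1, lr=0.05):
    # Clamp input and output; relax the hidden value nodes to minimize
    # the energy F = sum_l ||x_l - W_{l-1} f(x_{l-1})||^2 / 2, then apply
    # the local, Hebbian-like weight update dW_l = e_{l+1} f(x_l)^T.
    x = [x_in]
    for l in range(2):                     # feedforward initialization
        x.append(W[l] @ f(x[l]))
    x[-1] = y_target                       # clamp output to the target
    for _ in range(n_steps):
        # Prediction errors: each layer vs. the prediction from below.
        e = [np.zeros_like(x_in)] + [
            x[l] - W[l - 1] @ f(x[l - 1]) for l in (1, 2)
        ]
        # Only the hidden layer is updated; each node needs just its own
        # error and the error one layer above -- a local computation.
        x[1] += dt * (-e[1] + df(x[1]) * (W[1].T @ e[2]))
    for l in range(2):                     # local weight updates
        W[l] += lr * np.outer(e[l + 1], f(x[l]))
    return float(np.sum(e[-1] ** 2))       # residual output error

# Toy regression task (hypothetical): y = 0.3*x0 - 0.2*x1.
X = rng.uniform(-1.0, 1.0, (8, 2))
Y = X @ np.array([0.3, -0.2])
epoch_losses = []
for _ in range(100):
    epoch_losses.append(sum(infer_and_learn(xi, np.array([yi]))
                            for xi, yi in zip(X, Y)))
print(epoch_losses[0], epoch_losses[-1])
```

Note the contrast with backpropagation: each weight update uses only the error at that weight's own layer and the activity of the layer below, so no sequential global backward pass is required.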