Paper Title
Neural Networks with Recurrent Generative Feedback
Paper Authors
Paper Abstract
Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment. Inspired by this hypothesis, we enforce self-consistency in neural networks by incorporating recurrent generative feedback. We instantiate this design on convolutional neural networks (CNNs). The proposed framework, termed Convolutional Neural Networks with Feedback (CNN-F), introduces generative feedback with latent variables into existing CNN architectures, where consistent predictions are made through alternating MAP inference under a Bayesian framework. In experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
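
To make the alternating-inference idea concrete, below is a minimal sketch of a CNN with a recurrent generative feedback loop: a bottom-up encoder produces a prediction, a top-down decoder reconstructs the input from the latent features, and the refined input is fed back for another pass. This is an illustrative stand-in for the paper's MAP updates, not the authors' implementation; the architecture, the `CNNF` class name, and the number of iterations `num_iters` are all assumptions for the sketch.

```python
# A toy sketch of a CNN-F-style feedback loop (illustrative, not the paper's code).
import torch
import torch.nn as nn

class CNNF(nn.Module):
    """Toy CNN with recurrent generative feedback."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Feedforward (recognition) path: image -> latent features -> logits.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16 * 28 * 28, num_classes)
        # Generative (feedback) path: latent features -> reconstructed input.
        self.decoder = nn.Sequential(
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, num_iters: int = 3):
        # Alternate between a bottom-up prediction and a top-down
        # reconstruction; the reconstruction replaces the input estimate
        # on the next pass, a crude proxy for the alternating MAP inference
        # described in the abstract.
        x_hat = x
        for _ in range(num_iters):
            h = self.encoder(x_hat)            # bottom-up pass
            logits = self.head(h.flatten(1))   # class prediction
            x_hat = self.decoder(h)            # top-down feedback
        return logits, x_hat

model = CNNF()
logits, recon = model(torch.randn(2, 1, 28, 28))
print(logits.shape, recon.shape)  # torch.Size([2, 10]) torch.Size([2, 1, 28, 28])
```

The intuition this captures is that the prediction is no longer a single feedforward pass: each iteration lets the generative pathway "explain" the input, so perturbations inconsistent with the internal model can be progressively washed out of the input estimate.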