Paper Title
Backprojection for Training Feedforward Neural Networks in the Input and Feature Spaces
Paper Authors
Paper Abstract
After the tremendous development of neural networks trained by backpropagation, it is a good time to develop other algorithms for training neural networks in order to gain more insight into these networks. In this paper, we propose a new algorithm for training feedforward neural networks that is considerably faster than backpropagation. The method is based on projection and reconstruction: at every layer, the projected data and the reconstructed labels are forced to be similar, and the weights are tuned accordingly, layer by layer. The proposed algorithm can be applied in both the input and feature spaces, where it is named backprojection and kernel backprojection, respectively. This algorithm offers insight into neural networks from a projection-based perspective. Experiments on synthetic datasets show the effectiveness of the proposed method.
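To make the projection/reconstruction idea in the abstract concrete, the following is a minimal sketch, not the authors' exact algorithm: it assumes a two-layer linear-plus-ReLU network on toy data, backprojects the labels through the second layer's pseudo-inverse to obtain a target in the hidden space, and tunes each layer's weights in closed form so that projected data and reconstructed labels match. All names, shapes, and update rules here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation(z):
    # ReLU, assumed for illustration; the paper's choice may differ.
    return np.maximum(z, 0.0)

# Toy synthetic data: n samples, d features, c one-hot labels.
n, d, c = 100, 5, 3
X = rng.normal(size=(n, d))
Y = np.eye(c)[rng.integers(0, c, size=n)]

# Two weight matrices: U1 (d x h) projects input to hidden space,
# U2 (h x c) projects hidden space to labels.
h = 4
U1 = rng.normal(size=(d, h)) * 0.1
U2 = rng.normal(size=(h, c)) * 0.1

for _ in range(50):
    # Layer 1: reconstruct a hidden-space target by backprojecting the
    # labels through the pseudo-inverse of U2, then solve least squares
    # so the projected data X @ U1 approximates that target.
    target_hidden = Y @ np.linalg.pinv(U2)
    U1, *_ = np.linalg.lstsq(X, target_hidden, rcond=None)
    # Forward-project the data through the tuned first layer.
    H = activation(X @ U1)
    # Layer 2: with the hidden representation fixed, solve for U2 so the
    # projected hidden data approximates the labels.
    U2, *_ = np.linalg.lstsq(H, Y, rcond=None)

# Training accuracy of the resulting layer-wise fit.
predictions = np.argmax(activation(X @ U1) @ U2, axis=1)
accuracy = (predictions == np.argmax(Y, axis=1)).mean()
```

Each layer is tuned with a closed-form least-squares solve rather than gradient descent, which is the sense in which a projection-based scheme can be faster than backpropagation per pass.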