Paper Title

Faster Biological Gradient Descent Learning

Paper Authors

Li, Ho Ling

Paper Abstract

Back-propagation is a popular machine learning algorithm that uses gradient descent to train neural networks for supervised learning, but it can be very slow. A number of algorithms have been developed to speed up convergence and improve the robustness of learning. However, they are complicated to implement biologically because they require information from previous updates. Inspired by synaptic competition in biology, we have devised a simple and local gradient descent optimization algorithm that can reduce training time, with no demand on past details. Our algorithm, named dynamic learning rate (DLR), works similarly to the traditional gradient descent used in back-propagation, except that instead of a uniform learning rate across all synapses, the learning rate depends on the current neuronal connection weights. Our algorithm is found to speed up learning, particularly for small networks.
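
For intuition, here is a minimal sketch of what such a weight-dependent update rule could look like. The specific dependence of the learning rate on the weights (here, scaling by `|w|`), the base rate `eta0`, and the function name `dlr_update` are illustrative assumptions; the abstract does not give the exact form used in the paper.

```python
import numpy as np

def dlr_update(w, grad, eta0=0.1):
    """One gradient descent step with a weight-dependent learning rate.

    Hypothetical sketch: the per-synapse rate is eta0 * |w|, so the step
    size depends only on the current connection weight (a purely local
    quantity) rather than being uniform across synapses, and no state
    from previous updates is needed.
    """
    eta = eta0 * np.abs(w)   # per-synapse learning rate from current weights
    return w - eta * grad    # standard gradient descent step, rescaled locally

# Example: one update on a toy weight vector and its gradient
w = np.array([0.5, -1.2, 0.05])
g = np.array([0.1, -0.3, 0.2])
w_new = dlr_update(w, g)
```

The key contrast with standard gradient descent (`w - eta0 * grad`) is that each synapse carries its own effective rate computed from its current weight, which is what makes the rule local and biologically plausible in the sense described above.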
