Paper Title

Desynchronous Learning in a Physics-Driven Learning Network

Authors

Jacob F. Wycoff, Sam Dillavou, Menachem Stern, Andrea J. Liu, Douglas J. Durian

Abstract

In a neuron network, synapses update individually using local information, allowing for entirely decentralized learning. In contrast, elements in an artificial neural network (ANN) are typically updated simultaneously using a central processor. Here we investigate the feasibility and effect of desynchronous learning in a recently introduced decentralized, physics-driven learning network. We show that desynchronizing the learning process does not degrade performance for a variety of tasks in an idealized simulation. In experiment, desynchronization actually improves performance by allowing the system to better explore the discretized state space of solutions. We draw an analogy between desynchronization and mini-batching in stochastic gradient descent, and show that they have similar effects on the learning process. Desynchronizing the learning process establishes physics-driven learning networks as truly fully distributed learning machines, promoting better performance and scalability in deployment.
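The core idea of desynchronous learning can be illustrated with a toy sketch. The code below is not the paper's physics-driven resistor network; it is a hypothetical stand-in that models desynchronization as each parameter applying its local gradient update only with some probability per step (`p_update` is an assumed knob, with `p_update=1.0` recovering synchronous learning), on a simple least-squares task.

```python
import numpy as np

# Illustrative sketch only (not the paper's network): compare synchronous
# gradient descent against a "desynchronous" variant in which each
# parameter updates independently with probability p_update per step,
# mimicking elements that learn on their own local clocks.

rng = np.random.default_rng(0)

# A simple quadratic task: fit w to minimize ||X w - y||^2.
X = rng.normal(size=(50, 10))
w_true = rng.normal(size=10)
y = X @ w_true

def train(p_update, steps=4000, lr=0.01):
    """p_update = 1.0 is fully synchronous; p_update < 1 desynchronizes."""
    w = np.zeros(10)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        # Each element applies its own local update only some of the time;
        # no central processor coordinates which elements move together.
        mask = rng.random(10) < p_update
        w -= lr * grad * mask
    return np.mean((X @ w - y) ** 2)

sync_loss = train(p_update=1.0)
desync_loss = train(p_update=0.3)
print(sync_loss, desync_loss)  # both losses should end up small
```

On this convex toy problem, the masked (desynchronous) updates behave like randomized block-coordinate descent and still converge, echoing the abstract's point that desynchronizing the learning process need not degrade performance; the randomness it injects is loosely analogous to the noise introduced by mini-batching in stochastic gradient descent.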
