Paper Title

Consensus Driven Learning

Authors

Kyle Crandall, Dustin Webb

Abstract

As the complexity of our neural network models grows, so too do the data and computation requirements for successful training. One proposed solution to this problem is training on a distributed network of computational devices, thus distributing the computational and data storage loads. This strategy has already seen some adoption by the likes of Google and other companies. In this paper we propose a new method of distributed, decentralized learning that allows a network of computation nodes to coordinate their training using asynchronous updates over an unreliable network while only having access to a local dataset. This is achieved by taking inspiration from Distributed Averaging Consensus algorithms to coordinate the various nodes. Sharing the internal model instead of the training data allows the original raw data to remain with the computation node. The asynchronous nature and lack of centralized coordination allow this paradigm to function with limited communication requirements. We demonstrate our method on the MNIST, Fashion MNIST, and CIFAR10 datasets. We show that our coordination method allows models to be learned on highly biased datasets, and in the presence of intermittent communication failure.
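To make the coordination idea concrete, below is a minimal, hypothetical sketch of distributed averaging consensus applied to model parameters: each node alternates local gradient steps on its private data with an averaging step whenever a neighbor's parameters arrive, so only models (never raw data) cross the network. The class name `ConsensusNode` and parameters such as `consensus_rate` are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a node that trains on local data and, asynchronously,
# pulls its parameters toward any neighbor model it happens to receive.
import numpy as np


class ConsensusNode:
    def __init__(self, model_params, consensus_rate=0.5, learning_rate=0.01):
        # Keep a private copy of the parameters; raw training data never leaves the node.
        self.params = {k: v.copy() for k, v in model_params.items()}
        self.consensus_rate = consensus_rate  # how far to move toward a neighbor's model
        self.learning_rate = learning_rate

    def local_step(self, grads):
        """Ordinary SGD update using gradients computed on the local dataset."""
        for k, g in grads.items():
            self.params[k] -= self.learning_rate * g

    def receive_neighbor_params(self, neighbor_params):
        """Asynchronous consensus update: move toward the received model.
        No central coordinator is needed, and missed messages simply skip this step."""
        for k, p in neighbor_params.items():
            self.params[k] += self.consensus_rate * (p - self.params[k])


# Usage example with two nodes sharing the same parameter shapes.
init = {"w": np.zeros(3)}
a, b = ConsensusNode(init), ConsensusNode(init)
a.local_step({"w": np.array([1.0, -2.0, 0.5])})  # a trains on its own data
b.receive_neighbor_params(a.params)              # b drifts halfway toward a's model
```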
