Paper Title

Pipelined Backpropagation at Scale: Training Large Models without Batches

Paper Authors

Kosson, Atli, Chiley, Vitaliy, Venigalla, Abhinav, Hestness, Joel, Köster, Urs

Paper Abstract

New hardware can substantially increase the speed and efficiency of deep neural network training. To guide the development of future hardware architectures, it is pertinent to explore the hardware and machine learning properties of alternative training algorithms. In this work we evaluate the use of small batch, fine-grained Pipelined Backpropagation, an asynchronous pipeline parallel training algorithm that has significant hardware advantages. We introduce two methods, Spike Compensation and Linear Weight Prediction, that effectively mitigate the downsides caused by the asynchronicity of Pipelined Backpropagation and outperform existing techniques in our setting. We show that appropriate normalization and small batch sizes can also aid training. With our methods, fine-grained Pipelined Backpropagation using a batch size of one can match the accuracy of SGD for multiple networks trained on CIFAR-10 and ImageNet. Simple scaling rules allow the use of existing hyperparameters for traditional training without additional tuning.
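To make the Linear Weight Prediction idea mentioned in the abstract more concrete, the sketch below extrapolates a parameter tensor forward by the pipeline delay using an SGD momentum buffer. This is a minimal illustration, not the authors' implementation: the function name `predict_weights`, the assumption that the momentum buffer decays geometrically with no new gradients, and the toy tensors are all assumptions made here; the exact scaling used in the paper may differ.

```python
import torch

def predict_weights(w, v, lr, m, D):
    """Sketch of linear weight prediction for SGD with momentum.

    Assumes updates of the form v <- m*v + g, w <- w - lr*v. If no new
    gradients arrived for the next D steps, the weights would move by
    lr * (m + m^2 + ... + m^D) * v, so we subtract that geometric sum
    to estimate the weights in place when the delayed gradient lands.
    (Illustrative only; the paper's exact horizon may differ.)
    """
    horizon = sum(m ** k for k in range(1, D + 1))
    return w - lr * horizon * v

# Toy usage: predict a parameter tensor D = 4 pipeline stages ahead.
w = torch.randn(10)          # current weights of one pipeline stage
v = torch.randn(10)          # momentum buffer from the optimizer state
w_hat = predict_weights(w, v, lr=0.1, m=0.9, D=4)
```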
