Paper Title

Modern Distributed Data-Parallel Large-Scale Pre-training Strategies For NLP models

Paper Authors

Bai, Hao

Abstract

Distributed deep learning is becoming increasingly popular due to the expanding demand for computing resources by deep learning models with ever larger numbers of parameters. Unlike traditional training approaches, data-parallel training allows multiple compute nodes to train large deep learning models simultaneously in order to boost training efficiency. In this paper, we present and compare, using a qualitative approach, six strategies for data-parallel training with PyTorch on the GPT-2 language model with 100M parameters. These strategies are Single GPU, Single Parameter Server, Distributed Parameter Server, Horovod, Distributed Parameter Server with the Apex mixed-precision strategy, and Horovod with the Apex mixed-precision strategy. We also analyze the quantitative experimental results of each strategy. In the end, we conclude that the Distributed Parameter Server with the Apex mixed-precision strategy has the best performance for single-node training, while Horovod with Apex is the most robust approach when using either a single node or multiple nodes.
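To make the compared setups concrete, the following is a minimal sketch of a distributed data-parallel training step in PyTorch (DistributedDataParallel) combined with Apex mixed precision. It is not the authors' training code: the GPT-2 implementation (HuggingFace transformers), the optimizer, the script name, and all hyperparameters shown here are illustrative assumptions.

```python
# Sketch: PyTorch DDP + NVIDIA Apex mixed precision on a GPT-2-sized model.
# NOT the paper's implementation; model source and hyperparameters are assumed.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from apex import amp                                   # NVIDIA Apex mixed-precision utilities
from transformers import GPT2Config, GPT2LMHeadModel   # assumed GPT-2 implementation

def main():
    # One process per GPU, e.g. launched with: torchrun --nproc_per_node=4 train.py
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = GPT2LMHeadModel(GPT2Config()).cuda()        # ~124M parameters by default,
                                                        # close to the 100M scale in the paper
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    # Apex AMP: "O1" keeps FP32 master weights while running most ops in FP16
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

    # Data-parallel replication: gradients are all-reduced across ranks after backward()
    model = DDP(model, device_ids=[local_rank])

    # Dummy batch for illustration (GPT-2 vocabulary size 50257)
    input_ids = torch.randint(0, 50257, (2, 128), device="cuda")
    loss = model(input_ids, labels=input_ids).loss

    # Apex scales the loss so FP16 gradients do not underflow
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
    optimizer.zero_grad()

if __name__ == "__main__":
    main()
```

Launched this way, each process owns one GPU and holds a full model replica; after every backward pass the gradients are averaged across processes, which is the data-parallel pattern the six compared strategies vary around.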
