Paper Title

Tunable Subnetwork Splitting for Model-parallelism of Neural Network Training

Paper Authors

Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao

Paper Abstract

Alternating minimization methods have recently been proposed as alternatives to gradient descent for deep neural network optimization. Alternating minimization methods can typically decompose a deep neural network into layerwise subproblems, which can then be optimized in parallel. Despite this significant parallelism, alternating minimization methods are rarely explored for training deep neural networks because of severe accuracy degradation. In this paper, we analyze the reason for this degradation and propose to achieve a compelling trade-off between parallelism and accuracy through a reformulation called the Tunable Subnetwork Splitting Method (TSSM), which can tune the decomposition granularity of deep neural networks. Two methods, gradient splitting Alternating Direction Method of Multipliers (gsADMM) and gradient splitting Alternating Minimization (gsAM), are proposed to solve the TSSM formulation. Experiments on five benchmark datasets show that our proposed TSSM can achieve significant speedup without observable loss of training accuracy. The code has been released at https://github.com/xianggebenben/TSSM.
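
To make the splitting idea concrete, the following is a minimal sketch of how a network might be partitioned into subnetworks of tunable size and trained through decoupled subproblems. It assumes a PyTorch-style model and a simple quadratic penalty coupling adjacent subnetworks; the split points, penalty term, and update order are illustrative only and are not the paper's actual gsADMM/gsAM algorithms (see the released code for those).

```python
# Illustrative sketch of tunable subnetwork splitting (not the paper's algorithm).
import torch
import torch.nn as nn

def split_into_subnetworks(layers, granularity):
    """Group consecutive layers into subnetworks of `granularity` layers each."""
    return [nn.Sequential(*layers[i:i + granularity])
            for i in range(0, len(layers), granularity)]

layers = [nn.Linear(784, 256), nn.ReLU(),
          nn.Linear(256, 256), nn.ReLU(),
          nn.Linear(256, 10)]

# granularity = 1 gives a fully layerwise decomposition (maximal parallelism);
# granularity = len(layers) recovers ordinary end-to-end training (no split).
subnets = split_into_subnetworks(layers, granularity=2)
optimizers = [torch.optim.SGD(net.parameters(), lr=0.1) for net in subnets]

x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))

# Auxiliary variables approximate each subnetwork's output so that the
# subnetwork subproblems decouple and can be optimized in parallel.
with torch.no_grad():
    aux, h = [], x
    for net in subnets:
        h = net(h)
        aux.append(h.clone())

rho = 1.0                      # penalty weight coupling adjacent subnetworks
loss_fn = nn.CrossEntropyLoss()

# One round of subnetwork updates, written sequentially here; in a
# model-parallel setting each subproblem could run on its own device.
# (The update of the auxiliary variables themselves is omitted in this sketch.)
for k, (net, opt) in enumerate(zip(subnets, optimizers)):
    opt.zero_grad()
    inp = x if k == 0 else aux[k - 1]
    out = net(inp)
    if k == len(subnets) - 1:
        loss = loss_fn(out, y)                     # last block fits the labels
    else:
        loss = rho * (out - aux[k]).pow(2).mean()  # match its auxiliary output
    loss.backward()
    opt.step()
```

Tuning the (hypothetical) `granularity` parameter between a single layer and the whole network is what trades parallelism against accuracy, which is the trade-off the abstract describes TSSM as controlling.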
