Paper Title
Exact Penalty Method for Federated Learning
Paper Authors
Abstract
Federated learning has burgeoned recently in machine learning, giving rise to a variety of research topics. Popular optimization algorithms are based on the frameworks of (stochastic) gradient descent methods or the alternating direction method of multipliers. In this paper, we deploy an exact penalty method for federated learning and propose an algorithm, FedEPM, that tackles four critical issues in federated learning: communication efficiency, computational complexity, the stragglers' effect, and data privacy. Moreover, the algorithm is proven to converge and is shown empirically to achieve strong numerical performance.
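As a minimal illustration of the exact penalty idea the abstract refers to (a generic textbook sketch, not the paper's actual FedEPM algorithm): a nonsmooth penalty such as an absolute-value term recovers a constrained minimizer exactly once the penalty weight exceeds a finite threshold, whereas a smooth quadratic penalty only approaches it as the weight grows. The toy problem, threshold, and grid search below are our own choices for illustration.

```python
import numpy as np

# Toy constrained problem:  min x^2  subject to  x = 1  (constrained optimum x* = 1).
# Compare an exact (L1) penalty with a quadratic penalty on a fine grid.
x = np.linspace(-2.0, 3.0, 500001)  # grid step 1e-5; x = 1.0 lies on the grid
f = x**2                            # objective
lam = 5.0                           # penalty weight; here any lam >= 2 is "large enough"

# Exact penalty: min  x^2 + lam * |x - 1|  -> minimizer hits x = 1 exactly
x_exact = x[np.argmin(f + lam * np.abs(x - 1.0))]

# Quadratic penalty: min  x^2 + (lam/2) * (x - 1)^2  -> minimizer lam/(2+lam) < 1
x_quad = x[np.argmin(f + 0.5 * lam * (x - 1.0) ** 2)]

print(round(x_exact, 3))  # -> 1.0   (constraint satisfied exactly at finite lam)
print(round(x_quad, 3))   # -> 0.714 (= 5/7, biased away from the constraint)
```

In a federated setting, the same device is typically applied to the consensus constraints linking each client's local model to the global one, turning the constrained problem into a single penalized objective; the exactness property is what allows a finite, fixed penalty weight rather than an unbounded schedule.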