Paper Title
Q-linear Convergence of Distributed Optimization with Barzilai-Borwein Step Sizes
Paper Authors
Paper Abstract
The growth in the sizes of large-scale systems and of data in machine learning has made distributed optimization a naturally appealing technique for solving decision problems in different contexts. In such methods, each agent iteratively carries out computations on its local objective using information received from its neighbors, and shares relevant information with neighboring agents. Though gradient-based methods are widely used because of their simplicity, they are known to have slow convergence rates. Newton-type methods, on the other hand, have better convergence properties but are less applicable because of their large computation and memory requirements. In this work, we introduce a distributed quasi-Newton method with Barzilai-Borwein step sizes. We prove Q-linear convergence to the optimal solution, present conditions under which the algorithm is superlinearly convergent, and validate our results via numerical simulations.
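To make the step-size rule concrete, the following is a minimal sketch of the classical (centralized) Barzilai-Borwein BB1 rule on a toy strongly convex quadratic. The matrix `A`, vector `b`, initial step size, and iteration budget are illustrative assumptions; this sketch is not the paper's distributed algorithm, only the underlying BB idea it builds on.

```python
import numpy as np

# Toy quadratic f(x) = 0.5 * x^T A x - b^T x with SPD matrix A,
# whose unique minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
g = grad(x)
alpha = 0.1  # initial step size, used before BB information exists
for _ in range(50):
    x_new = x - alpha * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g       # iterate and gradient differences
    if np.linalg.norm(g_new) < 1e-12:  # stop before s^T y degenerates
        x = x_new
        break
    alpha = (s @ s) / (s @ y)          # BB1 step size: s^T s / (s^T y)
    x, g = x_new, g_new

x_star = np.linalg.solve(A, b)
print(np.allclose(x, x_star))  # prints True
```

The BB1 step size approximates the inverse curvature along the most recent step, which is what gives BB-type methods their quasi-Newton flavor without forming or storing a Hessian.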