Paper Title
Privacy-preserving Incremental ADMM for Decentralized Consensus Optimization
Paper Authors
Paper Abstract
The alternating direction method of multipliers (ADMM) has recently been recognized as a promising optimizer for large-scale machine learning models. However, few studies have examined ADMM from the perspective of communication cost, especially jointly with privacy preservation, both of which are critical for distributed learning. We investigate the communication efficiency and privacy preservation of ADMM by solving the consensus optimization problem over decentralized networks. Since walk algorithms can reduce communication load, we first propose incremental ADMM (I-ADMM) based on the walk algorithm, whose updating order follows a Hamiltonian cycle. However, I-ADMM cannot guarantee agents' privacy against external eavesdroppers even when randomized initialization is applied. To protect agents' privacy, we then propose two privacy-preserving incremental ADMM algorithms, PI-ADMM1 and PI-ADMM2, which perturb the step sizes and the primal variables, respectively. Through theoretical analyses, we prove the convergence and privacy preservation of PI-ADMM1, which are further supported by numerical experiments. Moreover, simulations demonstrate that the proposed PI-ADMM1 and PI-ADMM2 algorithms are communication-efficient compared with state-of-the-art methods.
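To make the walk-based idea concrete, below is a minimal sketch of incremental ADMM for consensus, assuming simple scalar quadratic local objectives f_i(x) = 0.5(x − a_i)² and a ring visiting order 0 → 1 → … → n−1 as the Hamiltonian cycle. The update rules, variable names, and penalty parameter `beta` are illustrative assumptions, not the paper's exact I-ADMM (or its privacy-preserving variants); a token carrying the global variable z walks the cycle, and each visited agent refreshes its primal and dual variables against z.

```python
def i_admm_consensus(a, beta=1.0, passes=500):
    """Illustrative incremental ADMM for scalar consensus (not the paper's
    exact algorithm).  Local objectives are f_i(x) = 0.5 * (x - a[i])**2,
    so the consensus optimum is the mean of a.  Agents are visited along a
    Hamiltonian cycle (here simply a ring 0 -> 1 -> ... -> n-1)."""
    n = len(a)
    x = [0.0] * n   # primal variable of each agent
    y = [0.0] * n   # dual variable of each agent
    s = 0.0         # running sum of x_i + y_i / beta, maintained incrementally
    z = 0.0         # global consensus variable carried by the walking token
    for _ in range(passes):
        for i in range(n):              # one walk around the cycle
            old = x[i] + y[i] / beta
            # closed-form minimizer of f_i(x) + (beta/2) * (x - z + y_i/beta)^2
            x[i] = (a[i] + beta * z - y[i]) / (1.0 + beta)
            y[i] += beta * (x[i] - z)   # dual ascent step
            s += (x[i] + y[i] / beta) - old
            z = s / n                   # incremental update of the consensus variable
    return z
```

At the fixed point, x_i = z and y_i = a_i − z for every agent, which forces z = mean(a); the sketch only communicates the token (z and the bookkeeping sum) between cycle neighbors, which is the source of the communication savings the abstract refers to.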