Paper Title

SSGD: A safe and efficient method of gradient descent

Paper Authors

Jinhuan Duan, Xianxian Li, Shiqi Gao, Jinyan Wang, Zili Zhong

Paper Abstract

With the rapid development of artificial intelligence technology, numerous engineering applications have been deployed. The gradient descent method plays an important role in solving various optimization problems due to its simple structure, good stability, and ease of implementation. In multi-node machine learning systems, gradients usually need to be shared, but shared gradients are generally unsafe: an attacker can obtain training data simply by knowing the gradient information. In this paper, to prevent gradient leakage while preserving the accuracy of the model, we propose the super stochastic gradient descent (SSGD) approach, which updates parameters by concealing the modulus length of each gradient vector and converting it into a unit vector. Furthermore, we analyze the security of the super stochastic gradient descent approach; our algorithm can defend against attacks on the gradient. Experimental results show that our approach is clearly superior to prevalent gradient descent approaches in terms of accuracy, robustness, and adaptability to large-scale batches.
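The abstract describes the update rule only at a high level. Below is a minimal sketch in Python of the core idea as stated: apply a parameter update using only the direction of the gradient, concealing its modulus length by normalizing it to a unit vector. The function name ssgd_style_update, the learning rate, and the eps guard are illustrative assumptions; the paper's actual SSGD algorithm (for example, how gradients from multiple nodes are aggregated before or after normalization) may differ.

```python
import numpy as np

def ssgd_style_update(params, grad, lr=0.1, eps=1e-12):
    """One parameter update using only the gradient's direction.

    A sketch of the idea in the abstract: the modulus length of the
    gradient vector is concealed by converting it into a unit vector,
    so only directional information is applied (or would be shared).
    `eps` guards against division by zero for a near-zero gradient.
    """
    unit_grad = grad / (np.linalg.norm(grad) + eps)  # conceal the length
    return params - lr * unit_grad                   # step along the direction

# Toy usage: minimize f(w) = ||w||^2 with unit-gradient steps.
w = np.array([3.0, -4.0])
for _ in range(100):
    g = 2.0 * w                  # gradient of ||w||^2
    w = ssgd_style_update(w, g)
print(w)                         # ends up within one step of the origin
```

Because only the unit vector is used, a party observing the shared update sees the gradient's direction but not its magnitude, which is the quantity the abstract says is concealed to resist gradient-leakage attacks.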
