Paper Title
Distributed Additive Encryption and Quantization for Privacy Preserving Federated Deep Learning
Paper Authors
Paper Abstract
Homomorphic encryption is a widely used technique for protecting gradients in privacy-preserving federated learning. However, existing encrypted federated learning systems require a trusted third party to generate and distribute key pairs to the participants, making them ill-suited to federated settings and vulnerable to security risks. Moreover, encrypting all model parameters is computationally intensive, especially for large machine learning models such as deep neural networks. To mitigate these issues, we develop a practical, computationally efficient encryption-based protocol for federated deep learning, in which the key pairs are collaboratively generated without the help of a third party. By quantizing the model parameters on the clients and performing approximate aggregation on the server, the proposed method avoids encrypting and decrypting the entire model. In addition, a threshold-based secret sharing technique is designed so that no single party holds the global private key for decryption, while the aggregated ciphertexts can be successfully decrypted by a threshold number of clients even if some clients are offline. Our experimental results confirm that the proposed method significantly reduces communication costs and computational complexity compared to existing encrypted federated learning systems, without compromising performance or security.
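To make two of the abstract's building blocks concrete, below is a minimal, self-contained Python sketch of (i) fixed-point quantization, which maps float parameters to small integers so that only integer aggregation is needed on the server, and (ii) standard (t, n) Shamir secret sharing, a threshold primitive by which any t of n clients can jointly recover a secret (such as a decryption key) even when the remaining clients are offline. The field prime, scaling factor, and function names are illustrative assumptions rather than the paper's concrete parameters, and the homomorphic encryption step itself is deliberately omitted.

```python
# Sketch of (i) fixed-point quantization for integer aggregation and
# (ii) (t, n) Shamir secret sharing with threshold reconstruction.
# PRIME and SCALE are illustrative assumptions, not the paper's values.
import random

PRIME = 2**61 - 1          # prime field modulus for shares (assumption)
SCALE = 2**16              # fixed-point scaling factor (assumption)

def quantize(w, scale=SCALE):
    """Map a float model parameter to a fixed-point integer."""
    return int(round(w * scale))

def dequantize(q, n_clients, scale=SCALE):
    """Recover the averaged float parameter after integer summation."""
    return q / (scale * n_clients)

def share_secret(secret, t, n, prime=PRIME):
    """Split `secret` into n Shamir shares; any t of them suffice."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 from a threshold set of shares."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        total = (total + yi * num * pow(den, -1, prime)) % prime
    return total

if __name__ == "__main__":
    # Quantized aggregation: three clients average one parameter.
    weights = [0.123, -0.047, 0.301]
    agg = sum(quantize(w) for w in weights)   # server adds integers only
    print("averaged parameter:", dequantize(agg, len(weights)))

    # Threshold recovery: 3-of-5 shares suffice; two clients may drop out.
    key = 123456789
    shares = share_secret(key, t=3, n=5)
    assert reconstruct(shares[:3]) == key
    assert reconstruct([shares[0], shares[2], shares[4]]) == key
```

In the full protocol described by the abstract, the integer aggregation would be carried out over additively homomorphic ciphertexts rather than plaintexts, and the shared secret would correspond to the collaboratively generated private key, so that decryption succeeds with any threshold-sized subset of clients.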