Paper Title

Protecting Data from all Parties: Combining FHE and DP in Federated Learning

Paper Authors

Arnaud Grivet Sébert, Renaud Sirdey, Oana Stan, Cédric Gouy-Pailler

Abstract

This paper tackles the problem of ensuring training data privacy in a federated learning context. Relying on Homomorphic Encryption (HE) and Differential Privacy (DP), we propose a framework addressing threats on the privacy of the training data. Notably, the proposed framework ensures the privacy of the training data from all actors of the learning process, namely the data owners and the aggregating server. More precisely, while HE blinds a semi-honest server during the learning protocol, DP protects the data from semi-honest clients participating in the training process as well as end-users with black-box or white-box access to the trained model. In order to achieve this, we provide new theoretical and practical results to allow these techniques to be rigorously combined. In particular, by means of a novel stochastic quantisation operator, we prove DP guarantees in a context where the noise is quantised and bounded due to the use of HE. The paper is concluded by experiments which show the practicality of the entire framework in terms of both model quality (impacted by DP) and computational overhead (impacted by HE).
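To make the abstract's key technical point concrete: before noisy model updates can be aggregated under HE (which typically operates on bounded integers), they must be quantised, and the quantisation must be stochastic and unbiased so that the DP noise analysis still goes through. The sketch below illustrates unbiased randomized rounding onto a bounded integer grid; the function names, parameters, and grid layout are our own illustrative choices, not the paper's exact operator.

```python
import numpy as np

def stochastic_quantise(x, n_bits, bound, rng):
    """Unbiased randomized rounding of x (clipped to [-bound, bound]) onto a
    2**n_bits-level integer grid. Illustrative sketch only: rounding up with
    probability equal to the fractional part makes the quantiser unbiased,
    and the integer output is suitable for encrypted (homomorphic) summation."""
    levels = 2 ** n_bits - 1
    scale = levels / (2.0 * bound)
    y = (np.clip(x, -bound, bound) + bound) * scale  # map to [0, levels]
    low = np.floor(y)
    # round up with probability (y - low), so E[q] = y
    q = low + (rng.random(np.shape(x)) < (y - low))
    return q.astype(np.int64)

def dequantise(q, n_bits, bound):
    """Map grid indices back to real values in [-bound, bound]."""
    levels = 2 ** n_bits - 1
    return q * (2.0 * bound) / levels - bound
```

Averaging many independent quantisations of the same value recovers it, which is the unbiasedness property the DP guarantees rely on.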
