Paper Title


Training Differentially Private Models with Secure Multiparty Computation

Authors

Sikha Pentyala, Davis Railsback, Ricardo Maia, Rafael Dowsley, David Melanson, Anderson Nascimento, Martine De Cock

Abstract


We address the problem of learning a machine learning model from training data that originates at multiple data owners while providing formal privacy guarantees regarding the protection of each owner's data. Existing solutions based on Differential Privacy (DP) achieve this at the cost of a drop in accuracy. Solutions based on Secure Multiparty Computation (MPC) do not incur such accuracy loss but leak information when the trained model is made publicly available. We propose an MPC solution for training DP models. Our solution relies on an MPC protocol for model training, and an MPC protocol for perturbing the trained model coefficients with Laplace noise in a privacy-preserving manner. The resulting MPC+DP approach achieves higher accuracy than a pure DP approach while providing the same formal privacy guarantees. Our work obtained first place in the iDASH2021 Track III competition on confidential computing for secure genome analysis.
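In the paper's approach, the Laplace perturbation of the trained coefficients happens inside an MPC protocol, so no party ever sees the noiseless model. As a cleartext illustration of the underlying output-perturbation step only, the sketch below adds Laplace noise with scale sensitivity/epsilon to each coefficient; the function name, the example weights, and the sensitivity value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def perturb_coefficients(coeffs, sensitivity, epsilon, rng=None):
    """Laplace output perturbation (cleartext sketch, not the MPC version).

    Adds i.i.d. Laplace(0, sensitivity/epsilon) noise to each trained
    coefficient, which gives epsilon-DP for the stated sensitivity.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return coeffs + rng.laplace(loc=0.0, scale=scale, size=coeffs.shape)

# Hypothetical example: perturb three logistic-regression weights.
weights = np.array([0.5, -1.2, 0.8])
private_weights = perturb_coefficients(weights, sensitivity=0.1, epsilon=1.0)
```

Smaller epsilon (stronger privacy) means a larger noise scale and hence the accuracy drop the abstract refers to; the MPC+DP construction keeps this same guarantee while avoiding the extra noise that per-owner local DP would require.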
