Title
SPEED: Secure, PrivatE, and Efficient Deep learning
Authors
Abstract
We introduce a deep learning framework able to deal with strong privacy constraints. Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art of private deep learning against a wider range of threats, in particular beyond the honest-but-curious server assumption. We address threats from the aggregation server, the global model, and potentially colluding data holders. Building upon distributed differential privacy and a homomorphic argmax operator, our method is specifically designed to maintain low communication loads and efficiency. The proposed method is supported by carefully crafted theoretical results. We provide differential privacy guarantees from the point of view of any entity having access to the final model, including colluding data holders, as a function of the ratio of data holders who kept their noise secret. This makes our method practical in real-life scenarios where data holders trust neither any third party to process their datasets nor the other data holders. Crucially, the computational burden of the approach remains reasonable and, to the best of our knowledge, our framework is the first one efficient enough to investigate deep learning applications while addressing such a large scope of threats. To assess the practical usability of the framework, experiments have been carried out on image datasets in a classification context. The numerical results we present show that the learning procedure is both accurate and private.
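The distributed differentially private aggregation alluded to above can be pictured with a minimal plaintext sketch: each data holder perturbs its per-class vote counts with local Laplace noise before sending them, and the server only learns the argmax of the noisy sum. This is an illustrative simplification under assumed names and parameters (`noisy_argmax_aggregate`, the Laplace scale); in the actual framework the argmax is evaluated under homomorphic encryption so the server never sees the noisy sums in the clear.

```python
import numpy as np

def noisy_argmax_aggregate(votes, scale=1.0, rng=None):
    """Sketch of distributed-DP aggregation: each holder adds its own
    Laplace noise locally, the server sums the noisy vote vectors and
    releases only the winning class index."""
    rng = rng or np.random.default_rng(0)  # fixed seed for reproducibility
    # Local perturbation: one independent noise vector per data holder.
    noisy = [v + rng.laplace(0.0, scale, size=len(v)) for v in votes]
    # Server side: aggregate and reveal only the argmax.
    total = np.sum(noisy, axis=0)
    return int(np.argmax(total))

# Three hypothetical data holders voting over 4 classes; class 2 holds
# a clear majority, so moderate noise does not change the outcome.
votes = [np.array([0, 1, 5, 0]),
         np.array([1, 0, 4, 1]),
         np.array([0, 0, 6, 0])]
label = noisy_argmax_aggregate(votes, scale=0.5)
```

Because only the argmax leaves the aggregation step, the noise each holder keeps secret is what drives the differential privacy guarantee described in the abstract.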