Paper Title

Debugging Differential Privacy: A Case Study for Privacy Auditing

Paper Authors

Florian Tramer, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, Nicholas Carlini

Paper Abstract

Differential Privacy can provide provable privacy guarantees for training data in machine learning. However, the presence of proofs does not preclude the presence of errors. Inspired by recent advances in auditing which have been used for estimating lower bounds on differentially private algorithms, here we show that auditing can also be used to find flaws in (purportedly) differentially private schemes. In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
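
As context for the auditing approach the abstract refers to, the sketch below is a minimal illustration, not the authors' code: the attack counts, function names, and the 1e-10 failure probability are assumptions chosen for the example. It shows how the outcomes of a membership-inference distinguisher, run over many training executions with and without a canary example, can be turned into a statistically sound lower bound on epsilon using Clopper-Pearson confidence intervals.

```python
# Minimal sketch of an epsilon lower bound from privacy auditing (illustrative only).
# In a real audit, tp/fp come from many training runs with and without a canary example.
from scipy.stats import beta
import math

def clopper_pearson(k, n, alpha):
    """Two-sided (1 - alpha) Clopper-Pearson interval for a binomial proportion k/n."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

def empirical_epsilon_lower_bound(tp, fp, n_trials, alpha=1e-10):
    """Lower-bound epsilon from attack outcomes.

    tp: runs where the attack said "canary present" and it was present
    fp: runs where the attack said "canary present" but it was absent
    n_trials: number of training runs per world
    alpha: per-interval failure probability (a tiny alpha yields very
           high-confidence claims, in the spirit of the paper's 99.99999999%)
    """
    tpr_lo, _ = clopper_pearson(tp, n_trials, alpha)   # conservative true positive rate
    _, fpr_hi = clopper_pearson(fp, n_trials, alpha)   # conservative false positive rate
    if tpr_lo <= 0 or fpr_hi <= 0:
        return 0.0
    # For a pure (epsilon, 0)-DP mechanism, TPR <= e^epsilon * FPR,
    # so any distinguisher certifies epsilon >= ln(TPR / FPR).
    return max(0.0, math.log(tpr_lo / fpr_hi))

# Hypothetical attack results: 900/1000 correct detections with the canary,
# 50/1000 false alarms without it.
print(empirical_epsilon_lower_bound(tp=900, fp=50, n_trials=1000))
```

A statistical bound of this kind is what makes a confidence statement such as the abstract's 99.99999999% figure possible: if the certified lower bound on epsilon exceeds the epsilon claimed by the implementation, the claimed guarantee cannot hold.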
