Paper Title
Stability Enhanced Privacy and Applications in Private Stochastic Gradient Descent
Paper Authors
Abstract
Private machine learning involves adding noise during training, which lowers accuracy. Intuitively, greater stability can imply greater privacy and thus improve this privacy-utility tradeoff. We study the role of stability in private empirical risk minimization, where differential privacy is achieved by output perturbation, and establish a corresponding theoretical result: for strongly convex loss functions, an algorithm with uniform stability $\beta$ admits a bound of $O(\sqrt{\beta})$ on the scale of noise required for differential privacy. The result applies both to explicit regularization and to implicitly stabilized ERM, such as adaptations of Stochastic Gradient Descent that are known to be stable. It therefore generalizes recent results that improve privacy through modifications to SGD, and establishes stability as the unifying perspective. It also implies new privacy guarantees for optimization methods with uniform stability guarantees for which no corresponding differential privacy guarantee was previously known. Experimental results validate the utility of stability-enhanced privacy in several problems, including applications to elastic nets and feature selection.
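The output-perturbation mechanism described in the abstract can be sketched as follows: train a (non-private) ERM solution, then add noise whose scale is calibrated to the stability-derived sensitivity bound. This is a minimal illustrative sketch, not the paper's implementation; the function name `output_perturbation`, the Gaussian-mechanism calibration, and all constants are assumptions, with only the $O(\sqrt{\beta})$ noise scale taken from the abstract.

```python
import numpy as np

def output_perturbation(train_fn, data, beta, epsilon, delta, rng=None):
    """Sketch of output perturbation for private ERM.

    train_fn : callable returning the non-private ERM parameter vector.
    beta     : assumed uniform-stability bound of the training algorithm.
    epsilon, delta : differential privacy parameters.

    Noise std is taken proportional to sqrt(beta)/epsilon, following the
    abstract's O(sqrt(beta)) scale; the Gaussian-mechanism constant
    sqrt(2 ln(1.25/delta)) is a standard placeholder, not from the paper.
    """
    rng = rng or np.random.default_rng(0)
    theta = train_fn(data)  # non-private ERM solution
    sigma = np.sqrt(beta) * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return theta + rng.normal(0.0, sigma, size=theta.shape)
```

A more stable algorithm (smaller `beta`) yields a smaller `sigma`, so less noise is needed for the same privacy budget, which is the privacy-utility improvement the abstract refers to.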