Paper Title

When Does Differentially Private Learning Not Suffer in High Dimensions?

Paper Authors

Xuechen Li, Daogao Liu, Tatsunori Hashimoto, Huseyin A. Inan, Janardhan Kulkarni, Yin Tat Lee, Abhradeep Guha Thakurta

Paper Abstract

Large pretrained models can be privately fine-tuned to achieve performance approaching that of non-private models. A common theme in these results is the surprising observation that high-dimensional models can achieve favorable privacy-utility trade-offs. This seemingly contradicts known results on the model-size dependence of differentially private convex learning and raises the following research question: When does the performance of differentially private learning not degrade with increasing model size? We identify that the magnitudes of gradients projected onto subspaces are a key factor that determines performance. To precisely characterize this for private convex learning, we introduce a condition on the objective that we term \emph{restricted Lipschitz continuity} and derive improved bounds for the excess empirical and population risks that are dimension-independent under additional conditions. We empirically show that in private fine-tuning of large language models, gradients obtained during fine-tuning are mostly controlled by a few principal components. This behavior is similar to conditions under which we obtain dimension-independent bounds in convex settings. Our theoretical and empirical results together provide a possible explanation for recent successes in large-scale private fine-tuning. Code to reproduce our results can be found at \url{https://github.com/lxuechen/private-transformers/tree/main/examples/classification/spectral_analysis}.
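The spectral behavior the abstract describes (gradients concentrating in a few principal directions) can be probed with a short experiment: stack flattened gradients from successive training steps into a matrix, take its top-k right singular vectors as a principal subspace, and measure what fraction of each gradient's squared norm falls inside that subspace. The sketch below is a minimal illustration under assumed toy settings (a small linear model, synthetic data, an arbitrary k); it is not the authors' pipeline, which is available at the repository linked above.

```python
import torch

# Minimal sketch: estimate how much gradient mass lies in a
# low-dimensional principal subspace. The toy model, synthetic data,
# and choice of k are illustrative assumptions, not the paper's setup.

torch.manual_seed(0)
d, n_steps, k = 512, 100, 10

model = torch.nn.Linear(d, 1)
X = torch.randn(n_steps, 32, d)
y = torch.randn(n_steps, 32, 1)

grads = []
for t in range(n_steps):
    model.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X[t]), y[t])
    loss.backward()
    # Flatten all parameter gradients for this step into one vector.
    grads.append(torch.cat([p.grad.flatten() for p in model.parameters()]))

G = torch.stack(grads)                       # (n_steps, num_params)
# Top-k principal directions of the observed gradients.
U, S, Vh = torch.linalg.svd(G, full_matrices=False)
P = Vh[:k]                                   # (k, num_params), orthonormal rows

# Fraction of each gradient's squared norm captured by the subspace.
proj = G @ P.T                               # (n_steps, k)
frac = proj.norm(dim=1) ** 2 / G.norm(dim=1) ** 2
print(f"mean fraction of gradient mass in top-{k} subspace: {frac.mean():.3f}")
```

If the fraction stays close to 1 for small k, the gradients are effectively low-rank, which is the regime where the paper's dimension-independent bounds become relevant.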
