Paper Title
探索跨设置的DP-SGD的不公平性
Exploring the Unfairness of DP-SGD Across Settings
Paper Authors
Paper Abstract
End users and regulators require private and fair artificial intelligence models, but previous work suggests these objectives may be at odds. We use the CivilComments dataset to evaluate the impact of applying the {\em de facto} standard approach to privacy, DP-SGD, across several fairness metrics. We evaluate three implementations of DP-SGD: dimensionality reduction (PCA), linear classification (logistic regression), and robust deep learning (Group-DRO). We establish a negative, logarithmic correlation between privacy and fairness in the cases of linear classification and robust deep learning. DP-SGD had no significant impact on fairness for PCA but, upon inspection, also did not appear to yield private representations.
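For context, a single DP-SGD update clips each per-example gradient to a norm bound and adds calibrated Gaussian noise before averaging. The sketch below is a generic illustration of that mechanism, not the paper's implementation; the function name, parameters, and NumPy-based setup are all illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update (illustrative sketch, not the paper's code):
    clip each example's gradient to `clip_norm`, add Gaussian noise
    scaled by `noise_multiplier * clip_norm`, then average and step."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    noisy_mean = (total + noise) / len(per_example_grads)
    return params - lr * noisy_mean
```

Raising `noise_multiplier` strengthens the privacy guarantee (lower epsilon) at the cost of noisier updates, which is the privacy-utility axis along which the paper measures fairness effects.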