Paper Title


Improving Fairness and Privacy in Selection Problems

Authors

Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Somayeh Sojoudi

Abstract


Supervised learning models have been increasingly used for making decisions about individuals in applications such as hiring, lending, and college admission. These models may inherit pre-existing biases from training datasets and discriminate against protected attributes (e.g., race or gender). In addition to unfairness, privacy concerns also arise when the use of models reveals sensitive personal information. Among various privacy notions, differential privacy has become popular in recent years. In this work, we study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both fairness and privacy of supervised learning models. Unlike many existing works, we consider a scenario where a supervised model is used to select a limited number of applicants, as the number of available positions is limited. This assumption is well-suited for various scenarios, such as job applications and college admissions. We use "equal opportunity" as the fairness notion and show that the exponential mechanism can make the decision-making process perfectly fair. Moreover, experiments on real-world datasets show that the exponential mechanism can improve both privacy and fairness, with a slight decrease in accuracy compared to the model without post-processing.
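To make the post-processing idea concrete, here is a minimal, hypothetical sketch of the standard differentially private exponential mechanism applied to a selection problem: given model scores for each applicant, it selects k of them, with each draw favoring higher scores with probability proportional to exp(ε·score/(2·Δ)). The function name, the even split of the privacy budget across draws, and the example scores are illustrative assumptions; the paper's actual mechanism additionally calibrates selection across protected groups to achieve equal opportunity, which this sketch omits.

```python
import math
import random

def exponential_mechanism_select(scores, k, epsilon, sensitivity=1.0):
    """Privately select k applicant indices via the exponential mechanism.

    Each draw picks index i with probability proportional to
    exp(eps_per_draw * scores[i] / (2 * sensitivity)), sampling without
    replacement. `scores` are assumed to be model outputs (e.g.,
    qualification probabilities in [0, 1]), so sensitivity is 1.
    """
    remaining = list(range(len(scores)))
    selected = []
    # Naive budget split: each of the k draws uses epsilon / k.
    eps_per_draw = epsilon / k
    for _ in range(k):
        weights = [math.exp(eps_per_draw * scores[i] / (2 * sensitivity))
                   for i in remaining]
        total = sum(weights)
        r = random.random() * total
        acc = 0.0
        for idx, w in zip(remaining, weights):
            acc += w
            if r <= acc:
                selected.append(idx)
                remaining.remove(idx)
                break
    return selected

# Illustrative scores for 5 applicants; select 2 positions.
scores = [0.9, 0.1, 0.8, 0.2, 0.95]
picked = exponential_mechanism_select(scores, k=2, epsilon=5.0)
```

Higher ε concentrates the selection on the top-scoring applicants (better accuracy, weaker privacy); lower ε flattens the distribution toward uniform random selection.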
