Paper Title
Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
Paper Authors
Abstract
In recent years, a growing body of work has emerged on how to learn machine learning models under fairness constraints, often expressed with respect to some sensitive attributes. In this work, we consider the setting in which an adversary has black-box access to a target model, and we show that information about this model's fairness can be exploited by the adversary to enhance its reconstruction of the sensitive attributes of the training data. More precisely, we propose a generic reconstruction correction method that takes as input an initial guess made by the adversary and corrects it to comply with some user-defined constraints (such as the fairness information) while minimizing the changes to the adversary's guess. The proposed method is agnostic to the type of target model, the fairness-aware learning method, and the auxiliary knowledge of the adversary. To assess the applicability of our approach, we conducted a thorough experimental evaluation on two state-of-the-art fair learning methods, using four different fairness metrics with a wide range of tolerances, on three datasets of diverse sizes and sensitive attributes. The experimental results demonstrate the effectiveness of the proposed approach in improving the reconstruction of the sensitive attributes of the training set.
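To make the correction step concrete, here is a minimal sketch of the idea, not the paper's actual algorithm (which minimizes the changes to the guess exactly under user-defined constraints). It assumes a simplified setting: the adversary knows the model's predictions on the training set and knows the model satisfies statistical parity within a tolerance `eps`, i.e. the positive-prediction rates of the two sensitive groups differ by at most `eps`. The function name and greedy strategy below are illustrative assumptions:

```python
import numpy as np

def correct_guess_statistical_parity(y_pred, s_guess, eps):
    """Greedily flip entries of the adversary's initial sensitive-attribute
    guess until it is consistent with the released fairness guarantee:
    |P(y=1 | s=1) - P(y=1 | s=0)| <= eps on the training set.
    Illustrative heuristic only; flips as few entries as the greedy
    search finds, not a provably minimal correction."""
    y = np.asarray(y_pred, dtype=int)
    s = np.array(s_guess, dtype=int)

    def gap(s):
        # Statistical parity gap between the two guessed groups.
        return abs(y[s == 1].mean() - y[s == 0].mean())

    while gap(s) > eps:
        best_i, best_gap = None, gap(s)
        for i in range(len(s)):
            s[i] ^= 1                    # tentatively flip one entry
            if 0 < s.sum() < len(s):     # keep both groups non-empty
                g = gap(s)
                if g < best_gap:
                    best_i, best_gap = i, g
            s[i] ^= 1                    # undo the tentative flip
        if best_i is None:               # no single flip improves the gap
            break
        s[best_i] ^= 1                   # commit the best flip
    return s
```

The correction only makes the guess *consistent* with the fairness information; the paper's premise is that, because the true sensitive attributes also satisfy the constraint, nudging the guess toward feasibility tends to move it closer to the truth.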