Paper Title

Robust Fairness under Covariate Shift

Paper Authors

Ashkan Rezaei, Anqi Liu, Omid Memarrast, Brian Ziebart

Paper Abstract

Making predictions that are fair with regard to protected group membership (race, gender, age, etc.) has become an important requirement for classification algorithms. Existing techniques derive a fair model from sampled labeled data relying on the assumption that training and testing data are identically and independently drawn (iid) from the same distribution. In practice, distribution shift can and does occur between training and testing datasets as the characteristics of individuals interacting with the machine learning system change. We investigate fairness under covariate shift, a relaxation of the iid assumption in which the inputs or covariates change while the conditional label distribution remains the same. We seek fair decisions under these assumptions on target data with unknown labels. We propose an approach that obtains the predictor that is robust to the worst-case in terms of target performance while satisfying target fairness requirements and matching statistical properties of the source data. We demonstrate the benefits of our approach on benchmark prediction tasks.
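
Stated in symbols, the covariate shift assumption and the robust estimation idea described above can be sketched as follows; the notation (P_s, P_t, f_θ, Δ) is introduced here for illustration only, and the paper's exact formulation may differ:

\[
P_s(x) \neq P_t(x), \qquad P_s(y \mid x) = P_t(y \mid x),
\]

i.e., the source and target input distributions differ while the conditional label distribution is shared. The "worst-case" predictor can then be read as a minimax problem of roughly the form

\[
\min_{\theta} \; \max_{\tilde{P} \in \Delta} \; \mathbb{E}_{x \sim P_t(x),\; y \sim \tilde{P}(y \mid x)} \left[ \operatorname{loss}\!\left(f_\theta(x),\, y\right) \right],
\]

where Δ denotes the set of conditional label distributions consistent with the statistics of the labeled source data, and f_θ is additionally constrained to satisfy the chosen fairness criterion on the target inputs.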
