Paper Title

Biased Models Have Biased Explanations

Paper Authors

Aditya Jain, Manish Ravula, Joydeep Ghosh

Paper Abstract

We study fairness in Machine Learning (FairML) through the lens of attribute-based explanations generated for machine learning models. Our hypothesis is: Biased Models have Biased Explanations. To establish that, we first translate existing statistical notions of group fairness and define these notions in terms of explanations given by the model. Then, we propose a novel way of detecting (un)fairness for any black box model. We further look at post-processing techniques for fairness and reason how explanations can be used to make a bias mitigation technique more individually fair. We also introduce a novel post-processing mitigation technique which increases individual fairness in recourse while maintaining group level fairness.
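The abstract does not spell out the formal definitions, but the core idea of stating group fairness in terms of explanations rather than predictions can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: it assumes per-sample feature attributions (e.g., from SHAP or LIME) are already available as a matrix, and the function name `explanation_parity_gap` and its comparison rule are hypothetical choices made here for illustration.

```python
import numpy as np


def explanation_parity_gap(attributions, group, feature_idx):
    """Illustrative analogue of statistical parity computed on explanations.

    attributions : (n_samples, n_features) array of per-sample feature
                   attributions produced by any explainer (e.g., SHAP, LIME).
    group        : (n_samples,) boolean array, True for the protected group.
    feature_idx  : index of the feature whose attribution is compared.

    Returns the absolute difference in mean attribution the feature receives
    across the two groups; a large gap suggests the black-box model relies on
    this feature differently for the two groups.
    """
    attributions = np.asarray(attributions, dtype=float)
    group = np.asarray(group, dtype=bool)

    mean_protected = attributions[group, feature_idx].mean()
    mean_other = attributions[~group, feature_idx].mean()
    return abs(mean_protected - mean_other)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy attributions: 200 samples, 5 features; feature 2 is made to matter
    # more for the protected group to simulate a biased model's explanations.
    attrs = rng.normal(size=(200, 5))
    grp = rng.random(200) < 0.5
    attrs[grp, 2] += 0.8

    gap = explanation_parity_gap(attrs, grp, feature_idx=2)
    print(f"explanation parity gap on feature 2: {gap:.3f}")
```

A gap near zero would play the role of explanation-level parity here; how the paper actually aggregates attributions and sets thresholds is not specified in the abstract.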
