Paper Title

Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning Fairness?

Authors

Khaled Badran, Pierre-Olivier Côté, Amanda Kolopanis, Rached Bouchoucha, Antonio Collante, Diego Elias Costa, Emad Shihab, Foutse Khomh

Abstract

As machine learning (ML) systems get adopted in more critical areas, it has become increasingly crucial to address the bias that could occur in these systems. Several fairness pre-processing algorithms are available to alleviate implicit biases during model training. These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy. In this work, we evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble. We report on lessons learned that can help practitioners better select fairness algorithms for their models.
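The abstract does not name the three algorithms here, but Reweighing (Kamiran & Calders) is one of the best-known fairness pre-processing algorithms of the kind described. As an illustration only, a minimal sketch of its weight computation on hypothetical toy data:

```python
from collections import Counter

def reweighing_weights(sensitive, labels):
    """Kamiran-Calders reweighing: each (group, label) cell gets weight
    P(group) * P(label) / P(group, label), so that group membership and
    label are statistically independent in the weighted training data."""
    n = len(labels)
    group_counts = Counter(sensitive)          # counts per protected group
    label_counts = Counter(labels)             # counts per class label
    cell_counts = Counter(zip(sensitive, labels))  # counts per (group, label)
    return [
        (group_counts[s] * label_counts[y]) / (n * cell_counts[(s, y)])
        for s, y in zip(sensitive, labels)
    ]

# Hypothetical toy data: group 0 rarely receives the positive label,
# group 1 receives it often.
sensitive = [0, 0, 0, 1, 1, 1]
labels    = [1, 0, 0, 1, 1, 0]
weights = reweighing_weights(sensitive, labels)
# Under-represented cells such as (group 0, label 1) are up-weighted,
# over-represented cells such as (group 0, label 0) are down-weighted.
```

The resulting weights would typically be passed to a learner's `sample_weight` parameter during training; the other pre-processing algorithms evaluated in the paper follow different fairness notions, which is what motivates combining them into an ensemble.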
