Title
Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints
Authors
Abstract
As machine learning is increasingly used to make high-stakes decisions, an emerging challenge is to avoid unfair AI systems that lead to discriminatory decisions for protected populations. A direct approach for obtaining a fair predictive model is to train the model by optimizing its prediction performance subject to fairness constraints, which achieves Pareto efficiency when trading off performance against fairness. Among various fairness metrics, those based on the area under the ROC curve (AUC) have recently emerged because they are threshold-agnostic and effective for imbalanced data. In this work, we formulate the training of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints. This problem can be reformulated as a min-max optimization problem with min-max constraints, which we solve by stochastic first-order methods based on a new Bregman divergence designed for the special structure of the problem. We numerically demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
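To make the abstract's objects concrete: the empirical AUC is the fraction of positive-negative score pairs ranked correctly, and one common AUC-based fairness gap (used here purely as an illustration; the paper's exact constraint class and any variable names below are not taken from the abstract) compares within-group AUCs across two protected groups. A minimal sketch on synthetic scores:

```python
import numpy as np

def pairwise_auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of (positive, negative) pairs the model
    ranks correctly, with ties counted as half-correct."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

# Toy scores for two protected groups a and b (synthetic, for illustration).
rng = np.random.default_rng(0)
pos_a, neg_a = rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 50)
pos_b, neg_b = rng.normal(0.5, 1.0, 50), rng.normal(0.0, 1.0, 50)

# Overall AUC: the quantity being maximized.
auc_overall = pairwise_auc(np.concatenate([pos_a, pos_b]),
                           np.concatenate([neg_a, neg_b]))

# One AUC-based fairness gap: difference of within-group AUCs.
# A constrained formulation would require fairness_gap <= kappa
# for some tolerance kappa while maximizing auc_overall.
fairness_gap = abs(pairwise_auc(pos_a, neg_a) - pairwise_auc(pos_b, neg_b))
```

Because the AUC is an average over pairs, both the objective and the constraint are pairwise statistics; this is what makes the min-max reformulation and stochastic first-order updates mentioned in the abstract natural, since mini-batches of pairs give unbiased gradient estimates.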