Paper Title

Games for Fairness and Interpretability

Authors

Eric Chu, Nabeel Gillani, Sneha Priscilla Makini

Abstract

As Machine Learning (ML) systems become more ubiquitous, ensuring the fair and equitable application of their underlying algorithms is of paramount importance. We argue that one way to achieve this is to proactively cultivate public pressure for ML developers to design and develop fairer algorithms -- and that one way to cultivate public pressure while simultaneously serving the interests and objectives of algorithm developers is through gameplay. We propose a new class of games -- ``games for fairness and interpretability'' -- as one example of an incentive-aligned approach for producing fairer and more equitable algorithms. Games for fairness and interpretability are carefully designed games with mass appeal. They are inherently engaging, provide insights into how machine learning models work, and ultimately produce data that helps researchers and developers improve their algorithms. We highlight several possible examples of games, their implications for fairness and interpretability, how their proliferation could create positive public pressure by narrowing the gap between algorithm developers and the general public, and why the machine learning community could benefit from them.
