Paper Title
LiFT: A Scalable Framework for Measuring Fairness in ML Applications
Paper Authors
Paper Abstract
Many internet applications are powered by machine learned models, which are usually trained on labeled datasets obtained through either implicit / explicit user feedback signals or human judgments. Since societal biases may be present in the generation of such datasets, it is possible for the trained models to be biased, thereby resulting in potential discrimination and harms for disadvantaged groups. Motivated by the need for understanding and addressing algorithmic bias in web-scale ML systems and the limitations of existing fairness toolkits, we present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems. We highlight the key requirements in deployed settings, and present the design of our fairness measurement system. We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn. Finally, we provide open problems based on practical experience.
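To make the abstract's "fairness metrics" concrete, below is a minimal sketch of one such metric, the demographic parity difference (the gap in positive-prediction rates between two groups). This is an illustration only, not the LiFT API: the function name and toy data are hypothetical, and LiFT itself computes such metrics at scale on Spark rather than over in-memory lists.

```python
# Hypothetical sketch: demographic parity difference between two groups.
# A large gap suggests the model favors one group's members for positive
# predictions, one of the bias signals a toolkit like LiFT measures.
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between the two groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    a, b = rates.values()
    return abs(a - b)

# Toy data: group A gets a positive prediction 3/4 of the time, group B 1/4.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value near 0 indicates parity; the abstract's point is that computing such metrics reliably inside web-scale ML pipelines requires dedicated, scalable infrastructure rather than ad hoc scripts like this one.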