Paper Title

Data, Power and Bias in Artificial Intelligence

Authors

Leavy, Susan; O'Sullivan, Barry; Siapera, Eugenia

Abstract

Artificial Intelligence has the potential to exacerbate societal bias and set back decades of advances in equal rights and civil liberties. Data used to train machine learning algorithms may capture social injustices, inequality or discriminatory attitudes that may be learned and perpetuated in society. Attempts to address this issue are rapidly emerging from different perspectives involving technical solutions, social justice and data governance measures. While each of these approaches is essential to the development of a comprehensive solution, the discourse associated with each often seems disparate. This paper reviews ongoing work to ensure data justice, fairness and bias mitigation in AI systems across different domains, exploring the interrelated dynamics of each and examining whether the inevitability of bias in AI training data may in fact be used for social good. We highlight the complexity associated with defining policies for dealing with bias. We also consider technical challenges in addressing issues of societal bias.
