Paper Title

Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms

Authors

Terrence Neumann, Maria De-Arteaga, Sina Fazelpour

Abstract

Faced with the scale and surge of misinformation on social media, many platforms and fact-checking organizations have turned to algorithms for automating key parts of misinformation detection pipelines. While offering a promising solution to the challenge of scale, the ethical and societal risks associated with algorithmic misinformation detection are not well-understood. In this paper, we employ and extend upon the notion of informational justice to develop a framework for explicating issues of justice relating to representation, participation, distribution of benefits and burdens, and credibility in the misinformation detection pipeline. Drawing on the framework: (1) we show how injustices materialize for stakeholders across three algorithmic stages in the pipeline; (2) we suggest empirical measures for assessing these injustices; and (3) we identify potential sources of these harms. This framework should help researchers, policymakers, and practitioners reason about potential harms or risks associated with these algorithms and provide conceptual guidance for the design of algorithmic fairness audits in this domain.
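
To make the idea of "empirical measures" concrete, here is a minimal, hypothetical Python sketch of one kind of group-conditional audit measure an analyst might compute: the rate at which claims concerning different communities are selected as check-worthy at a given algorithmic stage, and the gap between the most- and least-served groups. The data and variable names are invented for illustration; this is not the paper's own metric.

    from collections import defaultdict

    # Hypothetical audit records: (group the claim concerns, whether the
    # check-worthiness model selected the claim for fact-checking).
    records = [
        ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 0), ("B", 1), ("B", 0),
        ("C", 1), ("C", 1),
    ]

    # Count claims and selections per group.
    totals, selected = defaultdict(int), defaultdict(int)
    for group, flag in records:
        totals[group] += 1
        selected[group] += flag

    # Group-conditional selection rates and a simple disparity summary:
    # a large gap may signal representational or distributive harm.
    rates = {g: selected[g] / totals[g] for g in totals}
    disparity = max(rates.values()) - min(rates.values())

    print(rates)
    print(f"selection-rate disparity: {disparity:.2f}")

Analogous rates could, under the same assumptions, be computed for other stages of the pipeline (e.g., claim matching or veracity prediction) to compare how benefits and burdens fall across stakeholder groups.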
