Paper Title
Effects of algorithmic flagging on fairness: quasi-experimental evidence from Wikipedia
Paper Authors
Paper Abstract
Online community moderators often rely on social signals such as whether or not a user has an account or a profile page as clues that users may cause problems. Reliance on these clues can lead to overprofiling bias when moderators focus on these signals but overlook the misbehavior of others. We propose that algorithmic flagging systems deployed to improve the efficiency of moderation work can also make moderation actions more fair to these users by reducing reliance on social signals and making norm violations by everyone else more visible. We analyze moderator behavior in Wikipedia as mediated by RCFilters, a system which displays social signals and algorithmic flags, and estimate the causal effect of being flagged on moderator actions. We show that algorithmically flagged edits are reverted more often, especially those by established editors with positive social signals, and that flagging decreases the likelihood that moderation actions will be undone. Our results suggest that algorithmic flagging systems can lead to increased fairness in some contexts but that the relationship is complex and contingent.
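The abstract describes estimating the causal effect of algorithmic flagging on moderator actions using a quasi-experimental design. As a purely illustrative aid, the sketch below shows one common way such an effect can be estimated: a regression-discontinuity-style comparison of edits just below and just above a flagging threshold. The variable names (score, flagged, reverted), the threshold value, and the simulated data are hypothetical assumptions for illustration and are not taken from the paper itself.

```python
# Illustrative sketch only: a regression-discontinuity-style estimate of the effect
# of crossing an algorithmic flagging threshold on the probability an edit is reverted.
# All variable names, the cutoff, and the data are hypothetical, not from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
threshold = 0.5                                  # hypothetical flagging cutoff
score = rng.uniform(0, 1, n)                     # hypothetical model score per edit
flagged = (score >= threshold).astype(int)       # edits at/above the cutoff are flagged
# Simulated outcome: revert probability rises with score and jumps when flagged.
reverted = rng.binomial(1, np.clip(0.1 + 0.3 * score + 0.15 * flagged, 0, 1))

df = pd.DataFrame({"score": score, "flagged": flagged, "reverted": reverted})
df["centered"] = df["score"] - threshold

# Local linear regression on each side of the cutoff; the coefficient on `flagged`
# approximates the effect of being flagged for edits near the threshold.
bandwidth = 0.1
local = df[df["centered"].abs() <= bandwidth]
model = smf.ols("reverted ~ flagged + centered + flagged:centered", data=local).fit()
print(model.params["flagged"])                   # estimated jump at the threshold
```

In this toy setup the printed coefficient recovers the simulated jump (about 0.15) in revert probability at the cutoff; the paper's actual identification strategy and estimates should be consulted in the full text.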