Paper Title
The Algorithmic Imprint
Paper Authors
Paper Abstract
When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE). However, just because an algorithm is removed does not imply its FATE-related issues cease to exist. In this paper, we introduce the notion of the "algorithmic imprint" to illustrate how merely removing an algorithm does not necessarily undo or mitigate its consequences. We operationalize this concept and its implications through the 2020 events surrounding the algorithmic grading of the General Certificate of Education (GCE) Advanced (A) Level exams, an internationally recognized UK-based high school diploma exam administered in over 160 countries. While the algorithmic standardization was ultimately removed due to global protests, we show how the removal failed to undo the algorithmic imprint on the sociotechnical infrastructures that shape students', teachers', and parents' lives. These events provide a rare chance to analyze the state of the world both with and without algorithmic mediation. We situate our case study in Bangladesh to illustrate how algorithms made in the Global North disproportionately impact stakeholders in the Global South. Chronicling more than a year-long community engagement consisting of 47 interviews, we present the first coherent timeline of "what" happened in Bangladesh, contextualizing "why" and "how" they happened through the lenses of the algorithmic imprint and situated algorithmic fairness. Analyzing these events, we highlight how the contours of the algorithmic imprints can be inferred at the infrastructural, social, and individual levels. We share conceptual and practical implications around how imprint-awareness can (a) broaden the boundaries of how we think about algorithmic impact, (b) inform how we design algorithms, and (c) guide us in AI governance.