Title
Mediating Community-AI Interaction through Situated Explanation: The Case of AI-Led Moderation
Authors
Abstract
Artificial intelligence (AI) has become prevalent in our everyday technologies and impacts both individuals and communities. Explainable AI (XAI) scholarship has explored the philosophical nature of explanation and produced technical explanations, which are usually developed by experts in lab settings and can be challenging for laypersons to understand. In addition, existing XAI research tends to focus on the individual level; little is known about how people understand and explain AI-led decisions in a community context. Drawing from XAI and activity theory, a foundational HCI theory, we theorize how explanation is situated in a community's shared values, norms, knowledge, and practices, and how situated explanation mediates community-AI interaction. We then present a case study of AI-led moderation, where community members collectively develop explanations of AI-led decisions, most of which are automated punishments. Lastly, we discuss the implications of this framework at the intersection of CSCW, HCI, and XAI.