Paper Title
Human heuristics for AI-generated language are flawed
Paper Authors
Paper Abstract
Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, or produce entire conversations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics such as associating first-person pronouns, use of contractions, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as "more human than human." We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI, limiting the subversion of human intuition.