Paper Title

Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

Paper Authors

Harini Suresh, Natalie Lao, Ilaria Liccardi

Paper Abstract

ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: they should use the system to fill in gaps in their own ability, but recognize signals that the system might be incorrect. We measured how people's trust in ML recommendations varies with their expertise and with additional system information, through a task-based study of 175 adults. We used two tasks that are difficult for humans: comparing large crowd sizes and identifying similar-looking animals. Our results provide three key insights: (1) people trust incorrect ML recommendations even for tasks that they perform correctly the majority of the time, even if they have high prior knowledge about ML or are given information indicating that the system is not confident in its prediction; (2) four different types of system information all increased people's trust in recommendations; and (3) math and logic skills may be as important as ML expertise for decision-makers working with ML recommendations.
