Paper Title
Does Explainable Artificial Intelligence Improve Human Decision-Making?
Paper Authors
Paper Abstract
Explainable AI provides insight into the "why" behind model predictions, offering the potential for users to better understand and trust a model, and to recognize and correct incorrect AI predictions. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. Whether explainable AI can improve actual human decision-making, and whether it can help users identify problems with the underlying model, remain open questions. Using real datasets, we compare and evaluate objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct versus incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the "why" information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
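The three-condition comparison described in the abstract (control vs. AI prediction vs. AI prediction with explanation) can be illustrated with a small analysis sketch. The Python snippet below is not the authors' analysis code: the simulated data, column names (condition, ai_correct, human_correct), and effect sizes are hypothetical assumptions, used only to show how per-condition decision accuracy and the role of AI correctness as a predictor could be examined.

```python
# Hypothetical analysis sketch: simulate trial-level data for the three conditions
# described in the abstract and examine (a) decision accuracy per condition and
# (b) whether AI correctness predicts human correctness, with/without explanation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_condition = 300  # assumed number of trials per condition (hypothetical)

def simulate(condition, base_acc, ai_acc=None, follow_rate=0.0):
    """Simulate per-trial human correctness for one experimental condition."""
    if ai_acc is not None:
        ai_correct = (rng.random(n_per_condition) < ai_acc).astype(float)
    else:
        ai_correct = np.full(n_per_condition, np.nan)  # control: no AI shown
    # Users sometimes follow the AI; otherwise they answer at their own base accuracy.
    follow = rng.random(n_per_condition) < follow_rate
    own_correct = rng.random(n_per_condition) < base_acc
    human_correct = np.where(follow & ~np.isnan(ai_correct), ai_correct, own_correct)
    return pd.DataFrame({"condition": condition,
                         "ai_correct": ai_correct,
                         "human_correct": human_correct.astype(float)})

df = pd.concat([
    simulate("control", base_acc=0.55),
    simulate("ai_prediction", base_acc=0.55, ai_acc=0.75, follow_rate=0.6),
    simulate("ai_explanation", base_acc=0.55, ai_acc=0.75, follow_rate=0.6),
], ignore_index=True)

# Objective decision accuracy by condition (control vs. AI vs. AI + explanation).
print(df.groupby("condition")["human_correct"].mean())

# Within the AI conditions: does AI correctness predict human correctness, and
# does adding an explanation change that relationship?
ai_trials = df[df["condition"] != "control"]
model = smf.logit("human_correct ~ ai_correct * C(condition)", data=ai_trials).fit(disp=False)
print(model.summary())
```

Under these assumed parameters, both AI conditions show higher simulated accuracy than control, and the logistic regression recovers AI correctness as the dominant predictor, mirroring the pattern of results the abstract reports.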