Paper Title

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study

Paper Authors

Ahmed Alqaraawi, Martin Schuessler, Philipp Weiß, Enrico Costanza, Nadia Berthouze

Paper Abstract

Convolutional neural networks (CNNs) offer great machine learning performance over a range of applications, but their operation is hard to interpret, even for experts. Various explanation algorithms have been proposed to address this issue, yet limited research effort has been reported concerning their user evaluation. In this paper, we report on an online between-group user study designed to evaluate the performance of "saliency maps" - a popular explanation algorithm for image classification applications of CNNs. Our results indicate that saliency maps produced by the LRP algorithm helped participants to learn about some specific image features the system is sensitive to. However, the maps seem to provide very limited help for participants to anticipate the network's output for new images. Drawing on our findings, we highlight implications for design and further research on explainable AI. In particular, we argue the HCI and AI communities should look beyond instance-level explanations.
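The explanations evaluated in the study are saliency maps produced by the LRP (Layer-wise Relevance Propagation) algorithm. As a rough, non-authoritative illustration of how such a map can be generated (this is not the authors' code), the sketch below uses Captum's LRP implementation with a pretrained torchvision CNN; the choice of VGG16, the file name "example.jpg", and the preprocessing pipeline are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming Captum (https://captum.ai) and torchvision
# are available; not the authors' implementation.
import torch
from captum.attr import LRP
from PIL import Image
from torchvision import models, transforms

# Illustrative model choice; the paper does not prescribe VGG16.
model = models.vgg16(pretrained=True).eval()

# Standard ImageNet preprocessing (an assumption, not from the paper).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

# Propagate relevance for the predicted class back to the input pixels.
pred_class = model(img).argmax(dim=1).item()
attribution = LRP(model).attribute(img, target=pred_class)

# Sum over colour channels to obtain a single-channel saliency map,
# which can then be overlaid on the input image for inspection.
saliency_map = attribution.squeeze(0).sum(dim=0).detach()
print(saliency_map.shape)  # torch.Size([224, 224])
```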
