Paper Title


Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional neural networks

Paper Authors

Rachel Lea Draelos, Lawrence Carin

Paper Abstract


Explanation methods facilitate the development of models that learn meaningful concepts and avoid exploiting spurious correlations. We illustrate a previously unrecognized limitation of the popular neural network explanation method Grad-CAM: as a side effect of the gradient averaging step, Grad-CAM sometimes highlights locations the model did not actually use. To solve this problem, we propose HiResCAM, a novel class-specific explanation method that is guaranteed to highlight only the locations the model used to make each prediction. We prove that HiResCAM is a generalization of CAM and explore the relationships between HiResCAM and other gradient-based explanation methods. Experiments on PASCAL VOC 2012, including crowd-sourced evaluations, illustrate that while HiResCAM's explanations faithfully reflect the model, Grad-CAM often expands the attention to create bigger and smoother visualizations. Overall, this work advances convolutional neural network explanation approaches and may aid in the development of trustworthy models for sensitive applications.
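The core difference the abstract describes can be sketched in a few lines: Grad-CAM global-average-pools the gradients into per-channel weights before combining them with the feature maps, while HiResCAM multiplies gradients and activations elementwise and sums over channels, so no spatial gradient information is averaged away. The sketch below is a minimal NumPy illustration of that contrast, not the paper's reference implementation; a ReLU is applied to both maps here for comparability, and the array layout (`channels, H, W`) is an assumption.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM-style map: global-average-pool gradients into
    per-channel weights, then take a weighted sum of feature maps.
    activations, gradients: arrays of shape (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # (channels,)
    cam = np.einsum("c,chw->hw", weights, activations)
    return np.maximum(cam, 0)                         # ReLU

def hirescam(activations, gradients):
    """HiResCAM-style map: elementwise gradient-activation product,
    summed over channels -- the gradient averaging step is skipped,
    so spatial gradient structure is preserved."""
    cam = (gradients * activations).sum(axis=0)       # (H, W)
    return np.maximum(cam, 0)                         # ReLU
```

When the gradient is spatially constant within each channel, averaging loses nothing and the two maps coincide; when it varies across locations, Grad-CAM's pooled weights can highlight regions the elementwise product does not, which is the divergence the paper analyzes.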
