Paper Title
This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
Paper Authors
Paper Abstract
Image recognition with prototypes is considered an interpretable alternative for black box deep learning models. Classification depends on the extent to which a test image "looks like" a prototype. However, perceptual similarity for humans can be different from the similarity learned by the classification model. Hence, only visualising prototypes can be insufficient for a user to understand what a prototype exactly represents, and why the model considers a prototype and an image to be similar. We address this ambiguity and argue that prototypes should be explained. We improve interpretability by automatically enhancing visual prototypes with textual quantitative information about visual characteristics deemed important by the classification model. Specifically, our method clarifies the meaning of a prototype by quantifying the influence of colour hue, shape, texture, contrast and saturation and can generate both global and local explanations. Because of the generality of our approach, it can improve the interpretability of any similarity-based method for prototypical image recognition. In our experiments, we apply our method to the existing Prototypical Part Network (ProtoPNet). Our analysis confirms that the global explanations are generalisable, and often correspond to the visually perceptible properties of a prototype. Our explanations are especially relevant for prototypes which might have been interpreted incorrectly otherwise. By explaining such 'misleading' prototypes, we improve the interpretability and simulatability of a prototype-based classification model. We also use our method to check whether visually similar prototypes have similar explanations, and are able to discover redundancy. Code is available at https://github.com/M-Nauta/Explaining_Prototypes .
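The core idea described in the abstract, quantifying how much visual characteristics such as hue, texture, contrast and saturation influence a prototype's similarity score, can be sketched with a simple perturbation analysis. The snippet below is a minimal illustration, not the authors' implementation: `similarity_fn` (a callable returning the model's similarity score between an image and a fixed prototype, e.g. from ProtoPNet), the specific perturbations, and their strengths are all assumptions; shape is omitted because a faithful shape perturbation is less straightforward.

```python
# Minimal sketch (assumptions noted above): estimate the importance of a visual
# characteristic for a prototype by perturbing that characteristic in an image
# and measuring how much the model's similarity score to the prototype drops.

from PIL import Image, ImageEnhance, ImageFilter


def perturb(img: Image.Image, characteristic: str) -> Image.Image:
    """Return a copy of `img` with one visual characteristic altered."""
    if characteristic == "hue":
        h, s, v = img.convert("HSV").split()
        h = h.point(lambda x: (x + 64) % 256)        # shift the hue channel
        return Image.merge("HSV", (h, s, v)).convert("RGB")
    if characteristic == "saturation":
        return ImageEnhance.Color(img).enhance(0.0)   # remove colour saturation
    if characteristic == "contrast":
        return ImageEnhance.Contrast(img).enhance(0.3)  # reduce contrast
    if characteristic == "texture":
        return img.filter(ImageFilter.GaussianBlur(radius=3))  # blur fine texture
    raise ValueError(f"unknown characteristic: {characteristic}")


def explain_prototype(img: Image.Image, similarity_fn) -> dict:
    """Relative importance of each characteristic for the prototype similarity."""
    base = similarity_fn(img)
    importance = {}
    for c in ("hue", "saturation", "contrast", "texture"):
        perturbed_score = similarity_fn(perturb(img, c))
        # a larger drop means the model relies more on this characteristic
        importance[c] = (base - perturbed_score) / (abs(base) + 1e-8)
    return importance
```

In this sketch, the returned dictionary for a single image corresponds to a local explanation; a global explanation for a prototype would be obtained by averaging these importance scores over many training images in which the prototype is detected.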