Paper Title

Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI

Authors

Erico Tjoa, Hong Jing Khok, Tushar Chouhan, Cuntai Guan

Abstract

This paper quantifies the quality of heatmap-based eXplainable AI (XAI) methods with respect to the image classification problem. Here, a heatmap is considered desirable if it improves the probability of predicting the correct class. Different heatmap-based XAI methods are empirically shown to improve classification confidence to different extents depending on the dataset, e.g. Saliency works best on ImageNet and Deconvolution on the Chest X-Ray Pneumonia dataset. The novelty includes a new gap distribution that shows a stark difference between correct and wrong predictions. Finally, generative augmentative explanation is introduced, a method to generate heatmaps capable of improving predictive confidence to a high level.
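The abstract's core criterion — a heatmap is "desirable" if augmenting the input with it raises the predicted probability of the ground-truth class — can be illustrated with a minimal sketch. This is not the paper's exact procedure: the additive augmentation `x + scale * heatmap`, the names `model`, `x`, `label`, and `scale`, and the use of Captum's `Saliency` as a stand-in attribution method are all assumptions made for illustration.

```python
# Minimal sketch (assumed setup, not the paper's exact method): measure whether
# adding a heatmap-based explanation to an input raises the model's confidence
# in the ground-truth class.
import torch
import torch.nn.functional as F
from captum.attr import Saliency  # stand-in for the compared attribution methods


def confidence(model, x, label):
    """Softmax probability the model assigns to the ground-truth class."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)  # x is a single-image batch [1, C, H, W]
    return probs[0, label].item()


def confidence_gain(model, x, label, scale=1.0):
    """Confidence change after augmenting the input with its own heatmap."""
    model.eval()
    heatmap = Saliency(model).attribute(x, target=label)  # per-pixel attribution
    p_before = confidence(model, x, label)
    p_after = confidence(model, x + scale * heatmap, label)  # heatmap-augmented input
    return p_after - p_before  # positive gain => the heatmap is "desirable"
```

Averaging such a gain over a validation set, separately for each attribution method, would give the kind of per-method, per-dataset comparison the abstract describes (e.g. Saliency on ImageNet vs. Deconvolution on the Chest X-Ray Pneumonia dataset).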
