Paper Title

GRAPHSHAP: Explaining Identity-Aware Graph Classifiers Through the Language of Motifs

Paper Authors

Perotti, Alan, Bajardi, Paolo, Bonchi, Francesco, Panisson, André

Paper Abstract

Most methods for explaining black-box classifiers (e.g. on tabular data, images, or time series) rely on measuring the impact that removing/perturbing features has on the model output. This forces the explanation language to match the classifier's feature space. However, when dealing with graph data, in which the basic features correspond to the edges describing the graph structure, this matching between feature space and explanation language might not be appropriate. Decoupling the feature space (edges) from a desired high-level explanation language (such as motifs) is thus a major challenge towards developing actionable explanations for graph classification tasks. In this paper we introduce GRAPHSHAP, a Shapley-based approach able to provide motif-based explanations for identity-aware graph classifiers, assuming no knowledge whatsoever about the model or its training data: the only requirement is that the classifier can be queried as a black box at will. For the sake of computational efficiency we explore a progressive approximation strategy and show how a simple kernel can efficiently approximate explanation scores, thus allowing GRAPHSHAP to scale to scenarios with a large explanation space (i.e. a large number of motifs). We showcase GRAPHSHAP on a real-world brain-network dataset consisting of patients affected by Autism Spectrum Disorder and a control group. Our experiments highlight how the classification provided by a black-box model can be effectively explained by a few connectomics patterns.
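To make the Shapley-based idea concrete, the following is a minimal illustrative sketch (not the paper's actual GRAPHSHAP algorithm or its kernel approximation): a Monte Carlo permutation estimate of Shapley values over a small set of motifs, where the classifier is queried purely as a black box on perturbed inputs. The names `classify`, `shapley_values`, and the toy motif set are hypothetical stand-ins.

```python
import random

def classify(active_motifs):
    # Hypothetical stand-in black-box classifier: its score depends only on
    # which motifs are present in the (perturbed) graph.
    return 0.9 if "triangle" in active_motifs else 0.2 + 0.1 * len(active_motifs)

def shapley_values(motifs, model, n_samples=2000, seed=0):
    # Monte Carlo Shapley estimation: average each motif's marginal
    # contribution over random orderings of motif insertions.
    rng = random.Random(seed)
    phi = {m: 0.0 for m in motifs}
    for _ in range(n_samples):
        perm = motifs[:]
        rng.shuffle(perm)
        present = set()
        prev = model(present)           # black-box query on the "empty" graph
        for m in perm:
            present.add(m)
            curr = model(present)       # black-box query with motif m added
            phi[m] += curr - prev       # marginal contribution of motif m
            prev = curr
    return {m: v / n_samples for m, v in phi.items()}

motifs = ["triangle", "star", "clique"]
scores = shapley_values(motifs, classify)
```

By the efficiency property of Shapley values, the scores sum to the difference between the model's output with all motifs present and with none, so the most influential motif (here, the triangle) receives the largest share. The exponential cost of exact Shapley computation is what motivates approximation strategies such as the kernel the paper explores.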
