Paper Title
Parameterized Explainer for Graph Neural Network
Paper Authors
Paper Abstract
Despite recent progress in Graph Neural Networks (GNNs), explaining predictions made by GNNs remains a challenging open problem. The leading method independently addresses the local explanations (i.e., important subgraph structures and node features) to interpret why a GNN model makes a prediction for a single instance, e.g., a node or a graph. As a result, the generated explanation is painstakingly customized for each instance. Interpreting each instance with its own independent explanation is not sufficient to provide a global understanding of the learned GNN model, leading to a lack of generalizability and hindering use in the inductive setting. Moreover, because it is designed to explain a single instance, it cannot naturally explain a set of instances (e.g., the graphs of a given class). In this study, we address these key challenges and propose PGExplainer, a parameterized explainer for GNNs. PGExplainer adopts a deep neural network to parameterize the generation process of explanations, which makes PGExplainer a natural approach to explaining multiple instances collectively. Compared to existing work, PGExplainer has better generalization ability and can easily be used in an inductive setting. Experiments on both synthetic and real-life datasets show highly competitive performance, with up to a 24.7% relative improvement in AUC over the leading baseline on explaining graph classification.
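To make the core idea concrete, below is a minimal sketch of what "parameterizing the explanation generation with a shared deep network" could look like. The abstract does not specify the architecture, so the input features (pairs of node embeddings from the trained GNN), the MLP layer sizes, and the class/function names here are illustrative assumptions, not the authors' implementation. The key point the sketch captures is that a single network is shared across all instances, so it can score edges of unseen graphs or nodes without per-instance re-optimization.

```python
import torch
import torch.nn as nn


class ParameterizedEdgeMaskNet(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): a shared MLP
    that scores every edge of a graph, producing a soft edge mask whose
    high-scoring edges form the explanatory subgraph."""

    def __init__(self, node_emb_dim: int, hidden_dim: int = 64):
        super().__init__()
        # One network shared by all instances, so explanations for new
        # graphs/nodes can be generated inductively.
        self.mlp = nn.Sequential(
            nn.Linear(2 * node_emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, node_emb: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # node_emb: [num_nodes, node_emb_dim] embeddings from the trained GNN
        # edge_index: [2, num_edges] source/target node indices
        src, dst = edge_index
        edge_feat = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        logits = self.mlp(edge_feat).squeeze(-1)  # one score per edge
        return torch.sigmoid(logits)              # soft edge mask in [0, 1]


if __name__ == "__main__":
    # Toy usage: 5 nodes with 16-dim embeddings and 6 directed edges.
    node_emb = torch.randn(5, 16)
    edge_index = torch.tensor([[0, 1, 1, 2, 3, 4],
                               [1, 0, 2, 3, 4, 0]])
    explainer = ParameterizedEdgeMaskNet(node_emb_dim=16)
    edge_mask = explainer(node_emb, edge_index)
    print(edge_mask)  # importance score per edge
```

Because the mask network is trained once over many instances rather than optimized separately per instance, explaining a set of instances (e.g., all graphs of a class) amounts to a single forward pass per graph, which is what gives the approach its collective and inductive character described in the abstract.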