Paper Title
GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks
Paper Authors
Paper Abstract
Recently, Graph Neural Networks (GNNs) have significantly advanced the performance of machine learning tasks on graphs. However, this technological breakthrough raises a natural question: how does a GNN make its decisions, and can we trust its predictions with high confidence? In critical fields such as biomedicine, where a wrong decision can have severe consequences, it is crucial to interpret the inner working mechanism of GNNs before applying them. In this paper, we propose GNNInterpreter, a model-agnostic, model-level explanation method for GNNs that follow the message-passing scheme, to explain the high-level decision-making process of a GNN model. More specifically, GNNInterpreter learns a probabilistic generative graph distribution that produces the most discriminative graph pattern the GNN tries to detect when making a certain prediction, by optimizing a novel objective function designed specifically for model-level explanation of GNNs. Compared with existing works, GNNInterpreter is more flexible and computationally efficient in generating explanation graphs with different types of node and edge features, without introducing another black box or requiring manually specified domain-specific rules. In addition, experimental studies on four different datasets demonstrate that the explanation graphs generated by GNNInterpreter match the desired graph pattern when the model is ideal; otherwise, the explanations can reveal potential model pitfalls. The official implementation can be found at https://github.com/yolandalalala/GNNInterpreter.
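To make the core idea concrete, below is a minimal sketch of model-level explanation by optimizing a learned graph distribution against a frozen GNN classifier. The toy `DenseGCN`, the concrete (Gumbel) relaxation of discrete edges and node features, and the simple sparsity penalty are illustrative assumptions standing in for the paper's architecture-agnostic setup and its full objective, which contains additional regularization terms; consult the official repository for the actual method.

```python
# Hedged sketch: model-level explanation via a learned graph distribution.
# All names (DenseGCN, explain_class, sparsity_weight, ...) are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCN(nn.Module):
    """Toy message-passing classifier over a dense adjacency matrix."""

    def __init__(self, num_features, num_classes, hidden=32):
        super().__init__()
        self.lin1 = nn.Linear(num_features, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, adj):
        # adj may hold relaxed (continuous) edge weights in [0, 1],
        # keeping the whole explanation pipeline differentiable.
        h = F.relu(self.lin1(adj @ x))
        h = F.relu(self.lin2(adj @ h))
        return self.head(h.mean(dim=0))  # graph-level logits


def sample_logistic(shape):
    # Logistic noise for the binary concrete (relaxed Bernoulli) trick.
    u = torch.rand(shape).clamp(1e-6, 1 - 1e-6)
    return torch.log(u) - torch.log1p(-u)


def explain_class(model, target_class, num_nodes=10, num_features=7,
                  steps=500, temperature=0.2, sparsity_weight=0.01):
    """Learn edge/node-feature distributions whose samples maximize
    the logit of `target_class` under a trained, frozen GNN."""
    edge_logits = torch.zeros(num_nodes, num_nodes, requires_grad=True)
    feat_logits = torch.zeros(num_nodes, num_features, requires_grad=True)
    opt = torch.optim.Adam([edge_logits, feat_logits], lr=0.05)
    for _ in range(steps):
        # Differentiable "samples" of a discrete graph from the learned
        # Bernoulli (edges) and categorical (node features) parameters.
        adj = torch.sigmoid(
            (edge_logits + sample_logistic(edge_logits.shape)) / temperature)
        adj = (adj + adj.T) / 2  # symmetrize: undirected explanation graph
        x = F.gumbel_softmax(feat_logits, tau=temperature, hard=False)
        logits = model(x, adj)
        # Maximize the target-class logit; the sparsity penalty keeps
        # the explanation graph small and readable.
        loss = -logits[target_class] + sparsity_weight * adj.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Discretize the learned distribution into one explanation graph.
    return torch.sigmoid(edge_logits) > 0.5, feat_logits.argmax(dim=1)


# Usage: explain class 0 of a (here untrained) toy model.
model = DenseGCN(num_features=7, num_classes=2)
for p in model.parameters():
    p.requires_grad_(False)  # the GNN under explanation stays frozen
adj, node_labels = explain_class(model, target_class=0)
```

The design choice mirrored here is that the explanation is a distribution over graphs rather than a single perturbed input instance: gradients flow through relaxed samples into `edge_logits` and `feat_logits`, and the final explanation graph is read off by discretizing the learned parameters, without any second black-box model or hand-written domain rules.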