Paper Title
Graph Structure Learning for Robust Graph Neural Networks
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs. However, recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks. Adversarial attacks can easily fool GNNs into making wrong predictions for downstream tasks. This vulnerability to adversarial attacks has raised increasing concerns about applying GNNs in safety-critical applications. Therefore, developing robust algorithms to defend against adversarial attacks is of great significance. A natural idea for defending against adversarial attacks is to clean the perturbed graph. It is evident that real-world graphs share some intrinsic properties. For example, many real-world graphs are low-rank and sparse, and the features of two adjacent nodes tend to be similar. In fact, we find that adversarial attacks are likely to violate these graph properties. Therefore, in this paper, we exploit these properties to defend against adversarial attacks on graphs. In particular, we propose a general framework, Pro-GNN, which can jointly learn a clean graph structure and a robust graph neural network model from the perturbed graph, guided by these properties. Extensive experiments on real-world graphs demonstrate that the proposed framework achieves significantly better performance than state-of-the-art defense methods, even when the graph is heavily perturbed. We release the implementation of Pro-GNN in our DeepRobust repository for adversarial attacks and defenses (https://github.com/DSE-MSU/DeepRobust). The specific experimental settings to reproduce our results can be found at https://github.com/ChandlerBang/Pro-GNN.
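To make the joint-learning idea in the abstract concrete, below is a minimal PyTorch sketch of an objective that combines the GNN's task loss with the three structural priors named above (sparsity, low rank, and feature smoothness). The function name pro_gnn_style_loss, the gnn(X, S) interface, and the weights alpha, beta, and lam are illustrative assumptions for this sketch, not the authors' released implementation; see the DeepRobust repository above for the official code.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a Pro-GNN-style joint objective (not the authors' code).
# `gnn` is any differentiable GNN taking features X and a dense adjacency S;
# S is a learnable matrix initialized from the perturbed graph. The weights
# alpha, beta, and lam are hypothetical placeholders.
def pro_gnn_style_loss(gnn, S, X, y, train_mask, alpha=5e-4, beta=1.5, lam=1.0):
    # Task loss: train the GNN on the current structure estimate S.
    logits = gnn(X, S)
    task = F.cross_entropy(logits[train_mask], y[train_mask])

    # Sparsity prior: an l1 penalty keeps the learned structure sparse.
    sparsity = S.abs().sum()

    # Low-rank prior: the nuclear norm (sum of singular values) is a
    # convex surrogate for rank(S).
    low_rank = torch.linalg.svdvals(S).sum()

    # Feature-smoothness prior: adjacent nodes should have similar
    # features, measured by tr(X^T L X) with L the Laplacian of S.
    deg = S.sum(dim=1)
    laplacian = torch.diag(deg) - S
    smooth = torch.trace(X.t() @ laplacian @ X)

    return task + alpha * sparsity + beta * low_rank + lam * smooth
```

In this formulation, minimizing the loss over both S and the GNN parameters pushes the learned structure back toward the low-rank, sparse, feature-smooth regime that adversarial perturbations tend to violate, while simultaneously training the classifier on that cleaned structure.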