Paper Title
Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
Paper Authors
Paper Abstract
Graph neural networks (GNNs) have recently gained much attention for node and graph classification tasks on graph-structured data. However, several recent works have shown that an attacker can easily make GNNs predict incorrectly by perturbing the graph structure, i.e., adding or deleting edges in the graph. We aim to defend against such attacks by developing certifiably robust GNNs. Specifically, we prove a certified robustness guarantee of any GNN for both node and graph classification against structural perturbations. Moreover, we show that our certified robustness guarantee is tight. Our results are based on a recently proposed technique called randomized smoothing, which we extend to graph data. We also empirically evaluate our method for both node and graph classification on multiple GNNs and multiple benchmark datasets. For instance, on the Cora dataset, a Graph Convolutional Network with our randomized smoothing can achieve a certified accuracy of 0.49 when the attacker can arbitrarily add/delete at most 15 edges in the graph.
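As a rough illustration of the idea, the following is a minimal sketch of randomized smoothing applied to graph structure: each potential edge is independently flipped with some probability, and the smoothed classifier returns the majority-vote label of a base GNN over many noisy graphs. This is only a sketch under that assumed noise model; the paper's exact noise distribution, certificate derivation, and parameter names (e.g., `beta` below) are not specified in the abstract.

```python
# Hypothetical sketch of randomized smoothing over a binary adjacency matrix.
# The flip probability `beta` and the certificate computation are assumptions,
# not the paper's exact construction.
import numpy as np

def smoothed_predict(base_classifier, adj, beta=0.1, n_samples=1000, seed=0):
    """Majority-vote prediction of the smoothed classifier.

    base_classifier: function mapping a binary (n, n) adjacency matrix to a label
    adj: binary (n, n) adjacency matrix of an undirected graph
    beta: probability of flipping each potential edge (assumed noise model)
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        # Flip each upper-triangular entry independently with probability beta,
        # i.e., randomly add or delete edges.
        flips = np.triu(rng.random(adj.shape) < beta, k=1)
        flips = flips | flips.T  # mirror flips to keep the graph undirected
        noisy_adj = np.where(flips, 1 - adj, adj)
        label = base_classifier(noisy_adj)
        votes[label] = votes.get(label, 0) + 1
    # The most frequent label is the smoothed prediction; in the randomized-
    # smoothing framework, a certified radius (here, the maximum number of
    # edge additions/deletions tolerated) is then derived from the vote counts.
    return max(votes, key=votes.get)
```

In practice, the vote counts would also be used to compute a confidence lower bound on the top label's probability, from which the certified number of tolerable edge perturbations follows.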