Paper Title
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs), a generalization of neural networks to graph-structured data, are often implemented using message passing between the entities of a graph. While GNNs are effective for node classification, link prediction, and graph classification, they are vulnerable to adversarial attacks, i.e., small perturbations to the graph structure can lead to non-trivial performance degradation. In this work, we propose the Uncertainty-Matching GNN (UM-GNN), aimed at improving the robustness of GNN models, particularly against poisoning attacks on the graph structure, by leveraging epistemic uncertainties from the message-passing framework. More specifically, we propose to build a surrogate predictor that does not directly access the graph structure, but instead systematically extracts reliable knowledge from a standard GNN through a novel uncertainty-matching strategy. Interestingly, this decoupling makes UM-GNN immune to evasion attacks by design and significantly improves its robustness against poisoning attacks. Through empirical studies with standard benchmarks and a suite of global and targeted attacks, we demonstrate the effectiveness of UM-GNN compared to existing baselines, including the state-of-the-art robust GCN.
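To make the abstract's core idea concrete, the following is a minimal NumPy sketch of an uncertainty-weighted knowledge-transfer loss in the spirit of the uncertainty-matching strategy described above. It is an illustrative assumption, not the paper's actual formulation: the entropy-based uncertainty proxy, the `exp(-u)` confidence weighting, and the function names (`epistemic_uncertainty`, `uncertainty_matching_loss`) are all hypothetical choices made here for clarity; the paper's exact estimator and objective may differ.

```python
import numpy as np

def epistemic_uncertainty(mc_probs):
    """Uncertainty proxy for the GNN 'teacher': entropy of the mean
    predictive distribution over T stochastic (e.g., MC-dropout)
    forward passes. mc_probs has shape (T, N, C) for N nodes and
    C classes. Returns (per-node uncertainty, mean distribution).
    NOTE: this is a simple proxy; the paper's estimator may differ."""
    mean_p = mc_probs.mean(axis=0)                          # (N, C)
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)  # (N,)
    return entropy, mean_p

def uncertainty_matching_loss(surrogate_probs, mc_probs):
    """Per-node KL(teacher || surrogate), down-weighted where the
    teacher's uncertainty is high, so the structure-free surrogate
    absorbs less knowledge from unreliable (possibly poisoned)
    regions of the graph. surrogate_probs has shape (N, C)."""
    u, mean_p = epistemic_uncertainty(mc_probs)
    kl = (mean_p * (np.log(mean_p + 1e-12)
                    - np.log(surrogate_probs + 1e-12))).sum(axis=-1)
    w = np.exp(-u)                  # hypothetical confidence weighting
    return (w * kl).sum() / w.sum()
```

In this sketch, the surrogate sees only node features (never the adjacency structure), which is what makes the design immune to structure-based evasion attacks: test-time edge perturbations cannot reach its inputs.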