Paper Title

Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies

Paper Authors

Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal, Jiliang Tang

Paper Abstract

Deep neural networks (DNNs) have achieved significant performance in various tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations to the input, known as adversarial attacks. As extensions of DNNs to graphs, Graph Neural Networks (GNNs) have been shown to inherit this vulnerability. An adversary can mislead GNNs into giving wrong predictions by modifying the graph structure, for example by manipulating a few edges. This vulnerability has raised tremendous concern about adopting GNNs in safety-critical applications and has attracted increasing research attention in recent years. Thus, it is both necessary and timely to provide a comprehensive overview of existing graph adversarial attacks and their countermeasures. In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods. Furthermore, we have developed a repository of representative algorithms (https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph). The repository enables us to conduct empirical studies that deepen our understanding of attacks and defenses on graphs.
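
As a concrete illustration of how such a structure attack can be reproduced with the repository, below is a minimal sketch based on DeepRobust's documented usage pattern: it loads the Cora citation graph, trains a surrogate GCN, and runs Metattack to poison the graph by flipping a small budget of edges. The exact class names, constructor arguments, and defaults are assumptions that may vary across library versions.

```python
import numpy as np
import torch
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load a standard citation graph; adj is the adjacency matrix to be perturbed.
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)

# Train a surrogate GCN that the attacker uses to estimate gradients.
surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item() + 1,
                nhid=16, dropout=0, with_relu=False, with_bias=False,
                device=device).to(device)
surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)

# Metattack: a global poisoning attack that flips edges under a small budget
# (here, 10 edge perturbations) to degrade overall GNN performance.
attacker = Metattack(model=surrogate, nnodes=adj.shape[0],
                     feature_shape=features.shape, device=device).to(device)
attacker.attack(features, adj, labels, idx_train, idx_unlabeled,
                n_perturbations=10, ll_constraint=False)
modified_adj = attacker.modified_adj  # the poisoned graph structure
```

Training a fresh GCN on `modified_adj` instead of `adj` and comparing test accuracy against the clean graph yields the kind of empirical study the abstract refers to.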
