Paper Title

Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation

Paper Authors

Sixiao Zhang, Hongxu Chen, Xiangguo Sun, Yicong Li, Guandong Xu

Paper Abstract

Graph contrastive learning is the state-of-the-art unsupervised graph representation learning framework and has shown performance comparable to supervised approaches. However, whether graph contrastive learning is robust to adversarial attacks remains an open problem, because most existing graph adversarial attacks are supervised models: they rely heavily on labels and can therefore evaluate graph contrastive learning only in specific scenarios. For unsupervised graph representation methods such as graph contrastive learning, labels are difficult to acquire in real-world scenarios, making traditional supervised graph attack methods hard to apply when testing their robustness. In this paper, we propose a novel unsupervised gradient-based adversarial attack on graph contrastive learning that does not rely on labels. We compute the gradients of the adjacency matrices of the two views and flip edges by gradient ascent to maximize the contrastive loss. In this way, we can fully exploit the multiple views generated by graph contrastive learning models and pick the most informative edges without knowing any labels, so the attack promisingly extends to more kinds of downstream tasks. Extensive experiments show that our attack outperforms unsupervised baseline attacks and achieves performance comparable to supervised attacks on multiple downstream tasks, including node classification and link prediction. We further show that our attack transfers to other graph representation models as well.
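
The attack's core loop, back-propagating the contrastive loss to the adjacency matrix and greedily flipping the edge with the largest gradient-ascent score, can be sketched in PyTorch as below. This is a minimal illustration under stated assumptions, not the authors' implementation: `poison_graph`, `make_views`, and `encoder` are hypothetical names, the InfoNCE-style loss is a common graph-contrastive objective that may differ from the paper's, and the sketch treats view generation as differentiable with respect to the input adjacency matrix.

```python
import torch
import torch.nn.functional as F


def contrastive_loss(z1, z2, tau=0.5):
    # InfoNCE-style loss between matching nodes of the two views
    # (a common GCL objective; the paper's exact loss may differ).
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / tau)   # cross-view similarity matrix
    pos = sim.diagonal()                 # node i in view 1 vs. node i in view 2
    return -torch.log(pos / sim.sum(dim=1)).mean()


def poison_graph(encoder, features, adj, make_views, budget):
    # Greedily flip `budget` edges, one per step, in the direction of
    # gradient ascent on the contrastive loss.
    adj = adj.clone().float()
    n = adj.size(0)
    for _ in range(budget):
        adj.requires_grad_(True)
        adj1, adj2 = make_views(adj)     # two augmented views (assumed
                                         # differentiable w.r.t. `adj`)
        loss = contrastive_loss(encoder(features, adj1),
                                encoder(features, adj2))
        grad, = torch.autograd.grad(loss, adj)
        # A flip increases the loss when grad > 0 for a missing edge
        # (add it) or grad < 0 for an existing edge (remove it).
        score = grad * (1 - 2 * adj.detach())
        score.fill_diagonal_(float('-inf'))    # never flip self-loops
        i, j = divmod(score.argmax().item(), n)
        adj = adj.detach()
        adj[i, j] = adj[j, i] = 1 - adj[i, j]  # symmetric edge flip
    return adj
```

Recomputing the gradient after every single flip, as this greedy loop does, is one common design choice for budget-constrained poisoning; a cheaper variant flips the top-k scored edges from a single gradient pass.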
