Paper Title

Coarse-to-Fine Contrastive Learning on Graphs

Paper Authors

Peiyao Zhao, Yuangang Pan, Xin Li, Xu Chen, Ivor W. Tsang, Lejian Liao

Paper Abstract

Inspired by the impressive success of contrastive learning (CL), a variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner. Existing methods construct contrastive samples by adding perturbations to the graph structure or node attributes. Although impressive results have been achieved, these methods remain blind to a wealth of assumed prior information: as the degree of perturbation applied to the original graph increases, 1) the similarity between the original graph and the generated augmented graph gradually decreases; 2) the discrimination among all nodes within each augmented view gradually increases. In this paper, we argue that both kinds of prior information can be incorporated (differently) into the contrastive learning paradigm under our general ranking framework. In particular, we first interpret CL as a special case of learning to rank (L2R), which inspires us to leverage the ranking order among positive augmented views. Meanwhile, we introduce a self-ranking paradigm to ensure that the discriminative information among different nodes is maintained and less affected by perturbations of different degrees. Experimental results on various benchmark datasets verify the effectiveness of our algorithm compared with supervised and unsupervised models.
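
To make the ranking prior concrete, here is a minimal PyTorch sketch (not the authors' implementation; the function name `coarse_to_fine_ranking_loss`, the margin hinge, and the use of cosine similarity are illustrative assumptions): it penalizes any pair of consecutive augmented views whose similarity to the original embedding does not decrease with the perturbation degree.

```python
# Minimal sketch of the coarse-to-fine ranking prior described in the
# abstract, assuming node embeddings as PyTorch tensors. All names and
# the margin formulation are hypothetical, not the paper's actual loss.
import torch
import torch.nn.functional as F

def coarse_to_fine_ranking_loss(anchor, views, margin=0.1):
    """anchor: (N, d) node embeddings of the original graph.
    views: list of (N, d) embeddings of augmented graphs, ordered from
    mild to heavy perturbation. Encourages sim(anchor, view_k) to exceed
    sim(anchor, view_{k+1}) by a margin, i.e. similarity decreases as
    the perturbation degree increases."""
    sims = [F.cosine_similarity(anchor, v, dim=-1) for v in views]  # each (N,)
    loss = anchor.new_zeros(())
    for k in range(len(sims) - 1):
        # hinge penalty whenever a more-perturbed view is not less similar
        loss = loss + F.relu(margin - (sims[k] - sims[k + 1])).mean()
    return loss / (len(sims) - 1)

# Toy usage: three views with increasing noise standing in for
# increasing graph perturbation degrees.
anchor = torch.randn(8, 64)
views = [anchor + 0.1 * k * torch.randn(8, 64) for k in range(1, 4)]
print(coarse_to_fine_ranking_loss(anchor, views))
```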
