Paper Title
Graph Masked Autoencoders with Transformers
Paper Authors
Paper Abstract
Recently, transformers have shown promising performance in learning graph representations. However, applying transformers to real-world graphs remains challenging because deep transformers are hard to train from scratch and their memory consumption grows quadratically with the number of nodes. In this paper, we propose Graph Masked Autoencoders (GMAEs), a self-supervised transformer-based model for learning graph representations. To address these two challenges, we adopt a masking mechanism and an asymmetric encoder-decoder design. Specifically, GMAE takes a partially masked graph as input and reconstructs the features of the masked nodes. The encoder and decoder are asymmetric: the encoder is a deep transformer, while the decoder is a shallow transformer. The masking mechanism and the asymmetric design make GMAE more memory-efficient than conventional transformers. We show that, when serving as a conventional self-supervised graph representation model, GMAE achieves state-of-the-art performance on both graph classification and node classification under common downstream evaluation protocols. We also show that, compared with training end-to-end from scratch, pre-training and fine-tuning with GMAE achieves comparable performance while simplifying the training process.
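To make the masking mechanism and the asymmetric encoder-decoder design concrete, below is a minimal PyTorch sketch of the idea described in the abstract. The class name, layer counts, dimensions, and the mask ratio are illustrative assumptions, not details taken from the paper, and graph-structure information (adjacency, attention biases, positional/structural encodings) is omitted for brevity.

```python
# Minimal sketch of a graph masked autoencoder, assuming dense node-feature
# batches of shape (batch, num_nodes, feat_dim). All hyperparameters below
# (mask ratio, depths, dimensions) are illustrative, not from the paper.
import torch
import torch.nn as nn


class GraphMaskedAutoencoderSketch(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=256, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(feat_dim, hidden_dim)
        # Asymmetric design: deep transformer encoder, shallow transformer decoder.
        enc_layer = nn.TransformerEncoderLayer(hidden_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=12)
        dec_layer = nn.TransformerEncoderLayer(hidden_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, hidden_dim))
        self.reconstruct = nn.Linear(hidden_dim, feat_dim)

    def forward(self, node_feats):
        # node_feats: (batch, num_nodes, feat_dim)
        b, n, _ = node_feats.shape
        num_keep = max(1, int(n * (1 - self.mask_ratio)))
        # Randomly choose which nodes stay visible; the rest are masked out.
        perm = torch.rand(b, n, device=node_feats.device).argsort(dim=1)
        keep_idx, mask_idx = perm[:, :num_keep], perm[:, num_keep:]

        x = self.embed(node_feats)
        h = x.size(-1)
        visible = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, h))
        # Only the visible nodes pass through the deep encoder (memory saving).
        encoded = self.encoder(visible)

        # Re-insert mask tokens at the masked positions before the shallow decoder.
        full = self.mask_token.expand(b, n, -1).scatter(
            1, keep_idx.unsqueeze(-1).expand(-1, -1, h), encoded)
        decoded = self.decoder(full)

        # Reconstruction loss on the masked nodes only.
        pred = self.reconstruct(decoded)
        f = pred.size(-1)
        masked_pred = torch.gather(pred, 1, mask_idx.unsqueeze(-1).expand(-1, -1, f))
        masked_true = torch.gather(node_feats, 1, mask_idx.unsqueeze(-1).expand(-1, -1, f))
        return nn.functional.mse_loss(masked_pred, masked_true)


if __name__ == "__main__":
    model = GraphMaskedAutoencoderSketch()
    loss = model(torch.randn(4, 50, 128))  # 4 graphs, 50 nodes each
    loss.backward()
```

The sketch reflects the memory argument in the abstract: because only the unmasked fraction of nodes is processed by the deep encoder, the cost of its quadratic attention is paid over a much shorter sequence, while the shallow decoder handles the full node set only briefly for reconstruction.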