Paper Title
Segmented Graph-Bert for Graph Instance Modeling
Paper Authors
Paper Abstract
In graph instance representation learning, both the diverse graph instance sizes and the orderless property of graph nodes have been major obstacles that prevent existing representation learning models from working properly. In this paper, we examine the effectiveness of GRAPH-BERT, which was originally designed for node representation learning tasks, on graph instance representation learning. To adapt GRAPH-BERT to the new problem setting, we redesign it with a segmented architecture, which we name SEG-BERT (Segmented GRAPH-BERT) for simplicity of reference in this paper. SEG-BERT no longer involves any node-order-variant inputs or functional components, so it handles the orderless property of graph nodes naturally. Moreover, SEG-BERT has a segmented architecture and introduces three different strategies to unify graph instance sizes, namely full-input, padding/pruning, and segment shifting, respectively. SEG-BERT can be pre-trained in an unsupervised manner and then transferred to new tasks directly or with necessary fine-tuning. We have tested the effectiveness of SEG-BERT with experiments on seven graph instance benchmark datasets, and SEG-BERT outperforms the comparison methods on six of them with significant performance advantages.
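The abstract names the three size-unification strategies but gives no implementation detail. The following minimal Python/NumPy sketch illustrates only the general padding/pruning and segment-shifting ideas; the function names, the dense-matrix representation, and the keep-the-first-k node selection are illustrative assumptions, not the paper's actual method.

import numpy as np

def pad_or_prune(X, A, k):
    # Illustrative sketch (not the paper's implementation): unify a graph
    # instance to exactly k nodes by zero-padding or pruning.
    # X: (n, d) node feature matrix; A: (n, n) adjacency matrix.
    n = X.shape[0]
    if n >= k:
        # Prune: keep the first k nodes (a real system would likely use a
        # more principled node-selection criterion).
        return X[:k], A[:k, :k]
    # Pad: append zero rows/columns for (k - n) dummy nodes.
    X_pad = np.zeros((k, X.shape[1]), dtype=X.dtype)
    X_pad[:n] = X
    A_pad = np.zeros((k, k), dtype=A.dtype)
    A_pad[:n, :n] = A
    return X_pad, A_pad

def shifted_segments(X, A, k):
    # Illustrative segment-shifting sketch: slide over the node set in
    # chunks of k nodes, padding the final (possibly smaller) segment.
    for s in range(0, X.shape[0], k):
        yield pad_or_prune(X[s:s + k], A[s:s + k, s:s + k], k)

Under this reading, full-input corresponds to choosing k at least as large as the biggest graph in the dataset, so every instance is only padded and no node information is discarded.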