Paper Title
GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation
Paper Authors
Paper Abstract
Recent improvements in KG-to-text generation are due to additional auxiliary pre-training tasks designed to boost the performance of the fine-tuning task. These tasks require extensive computational resources while yielding only marginal improvements. Here, we demonstrate that by fusing graph-aware elements into existing pre-trained language models, we are able to outperform state-of-the-art models and close the gap imposed by additional pre-training tasks. We do so by proposing a mask structure to capture neighborhood information and a novel type encoder that adds a bias to the graph-attention weights depending on the connection type. Experiments on two KG-to-text benchmark datasets show our models are competitive while involving fewer parameters and no additional pre-training tasks. By formulating the problem as a framework, we can interchange the various proposed components and begin interpreting KG-to-text generative models based on the topological and type information found in a graph.
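To make the two proposed components concrete, below is a minimal PyTorch sketch of graph-aware attention combining a neighborhood mask with a type-dependent bias on the attention logits, in the spirit the abstract describes. All names (`graph_aware_attention`, `adj_mask`, `type_ids`, `type_bias`) and the exact formulation are illustrative assumptions, not the paper's actual architecture, which is built on top of a pre-trained language model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def graph_aware_attention(q, k, v, adj_mask, type_ids, type_bias):
    """Scaled dot-product attention restricted to graph neighbors, with a
    learned per-connection-type bias added to the attention logits.
    This is a hypothetical sketch of the idea, not the paper's exact model."""
    d = q.size(-1)
    scores = (q @ k.transpose(-2, -1)) / d ** 0.5          # (B, N, N) raw logits
    scores = scores + type_bias(type_ids).squeeze(-1)      # bias each logit by edge type
    scores = scores.masked_fill(~adj_mask, float("-inf"))  # keep only neighborhood edges
    return F.softmax(scores, dim=-1) @ v                   # (B, N, d) aggregated values

# Toy usage: one graph, 4 entity nodes, 3 hypothetical connection types.
B, N, d, num_types = 1, 4, 8, 3
q = k = v = torch.randn(B, N, d)
# Self-loops keep every row of the mask non-empty, so softmax stays finite.
adj_mask = torch.eye(N, dtype=torch.bool).unsqueeze(0) | (torch.rand(B, N, N) > 0.5)
type_ids = torch.randint(num_types, (B, N, N))
type_bias = nn.Embedding(num_types, 1)                     # one scalar bias per type
out = graph_aware_attention(q, k, v, adj_mask, type_ids, type_bias)
print(out.shape)  # torch.Size([1, 4, 8])
```

Because the mask and the type bias enter the computation as separate terms, either component can be swapped out or ablated independently, which is what the framework formulation enables.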