Paper Title
Promoting Graph Awareness in Linearized Graph-to-Text Generation
Paper Authors
Paper Abstract
Generating text from structured inputs, such as meaning representations or RDF triples, has often involved the use of specialized graph-encoding neural networks. However, recent applications of pretrained transformers to linearizations of graph inputs have yielded state-of-the-art generation results on graph-to-text tasks. Here, we explore the ability of these linearized models to encode local graph structures, in particular their invariance to the graph linearization strategy and their ability to reconstruct corrupted inputs. Our findings motivate solutions to enrich the quality of models' implicit graph encodings via scaffolding. Namely, we use graph-denoising objectives implemented in a multi-task text-to-text framework. We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
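To make the linearization and denoising-scaffold ideas concrete, the following is a minimal sketch, not the authors' code: it shows one way RDF triples might be flattened into a token sequence and how a (corrupted input, clean target) pair for a denoising objective could be constructed. The <S>/<P>/<O> markers, the linearize and corrupt helpers, the masking rate, and the example triples are illustrative assumptions, not the paper's actual format.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# linearize RDF triples and build a corrupted/clean pair for a denoising task.
import random

def linearize(triples):
    """Flatten (subject, predicate, object) triples into one string
    using special markers -- a common linearization style."""
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

def corrupt(triples, drop_prob=0.3, seed=0):
    """Randomly mask object slots; reconstructing the clean linearization
    forces the model to recover local graph structure."""
    rng = random.Random(seed)
    corrupted = [
        (s, p, "<MASK>" if rng.random() < drop_prob else o)
        for s, p, o in triples
    ]
    return linearize(corrupted)

triples = [
    ("Alan_Bean", "occupation", "Test_pilot"),
    ("Alan_Bean", "mission", "Apollo_12"),
]

source = corrupt(triples)    # noisy linearization fed to the encoder
target = linearize(triples)  # clean linearization as the denoising target
print(source)
print(target)
```

In a multi-task text-to-text setup of the kind the abstract describes, such (corrupted, clean) pairs would be mixed with the ordinary (graph, text) generation pairs, so the model learns to reconstruct local graph structure alongside generating fluent output.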