Paper Title
Cross-Domain Few-Shot Relation Extraction via Representation Learning and Domain Adaptation
Paper Authors
Paper Abstract
Few-shot relation extraction aims to recognize novel relations with only a few labeled sentences per relation. Previous metric-based few-shot relation extraction algorithms identify relations by comparing prototypes, generated from the embeddings of the few labeled sentences, with the embeddings of the query sentences using a trained metric function. However, because target domains often differ considerably from the domain of the training dataset, the generalization ability of these approaches to unseen relations in many domains is limited. Since the prototype is essential for capturing relationships between entities in the latent space, we propose learning more interpretable and efficient prototypes from prior knowledge and the intrinsic semantics of relations, in order to extract novel relations in various domains more effectively. By exploring the relationships between relations using prior information, we effectively improve the prototype representations of relations. By using contrastive learning to make the classification margins between sentence embeddings more distinct, the geometric interpretability of the prototypes is enhanced. Additionally, applying a transfer learning approach to the cross-domain problem allows the prototype generation process to account for the gap between domains, making the prototypes more robust and enabling better extraction of associations across multiple domains. Experimental results on the benchmark FewRel dataset demonstrate the advantages of the proposed method over several state-of-the-art approaches.
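The metric-based prototype comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and squared Euclidean distance stands in for the trained metric function.

```python
def prototypes(support, labels):
    """Average the support-sentence embeddings of each relation into a prototype.

    support: list of embedding vectors (lists of floats)
    labels:  parallel list of relation ids
    """
    by_rel = {}
    for vec, rel in zip(support, labels):
        by_rel.setdefault(rel, []).append(vec)
    # Component-wise mean of each relation's support embeddings.
    return {rel: [sum(xs) / len(vecs) for xs in zip(*vecs)]
            for rel, vecs in by_rel.items()}

def sq_dist(a, b):
    """Squared Euclidean distance (placeholder for a learned metric)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, protos):
    """Assign the query embedding to the relation with the nearest prototype."""
    return min(protos, key=lambda rel: sq_dist(query, protos[rel]))
```

For example, with two relations and two support sentences each, a query embedding close to one relation's support set is assigned that relation's label; the methods in the abstract aim to shape these embeddings and prototypes so that this assignment remains reliable across domains.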