Paper Title
DWIE: an entity-centric dataset for multi-task document-level information extraction
Paper Authors
Klim Zaporojets, Johannes Deleu, Chris Develder, Thomas Demeester
Paper Abstract
This paper presents DWIE, the 'Deutsche Welle corpus for Information Extraction', a newly created multi-task dataset that combines four main Information Extraction (IE) annotation subtasks: (i) Named Entity Recognition (NER), (ii) Coreference Resolution, (iii) Relation Extraction (RE), and (iv) Entity Linking. DWIE is conceived as an entity-centric dataset that describes the interactions and properties of conceptual entities on the level of the complete document. This contrasts with currently dominant mention-driven approaches that start from the detection and classification of named entity mentions in individual sentences. Further, DWIE presents two main challenges when building and evaluating IE models for it. First, the use of traditional mention-level evaluation metrics for NER and RE tasks on the entity-centric DWIE dataset can result in measurements dominated by predictions on more frequently mentioned entities. We tackle this issue by proposing a new entity-driven metric that takes into account the number of mentions that compose each of the predicted and ground-truth entities. Second, the document-level multi-task annotations require the models to transfer information between entity mentions located in different parts of the document, as well as between different tasks, in a joint learning setting. To realize this, we propose to use graph-based neural message passing techniques between document-level mention spans. Our experiments show an improvement of up to 5.5 F1 percentage points when incorporating neural graph propagation into our joint model. This demonstrates DWIE's potential to stimulate further research in graph neural networks for representation learning in multi-task IE. We make DWIE publicly available at https://github.com/klimzaporojets/DWIE.
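To make the entity-driven evaluation idea concrete, below is a minimal sketch of an entity-level F1 in Python. It assumes each entity is represented as the set of its mention spans and counts every entity once, so that frequently mentioned entities cannot dominate the score; this is an illustration of the general idea, not the paper's exact metric definition.

```python
from typing import FrozenSet, Set, Tuple

# A mention span: (start offset, end offset, entity type label).
Mention = Tuple[int, int, str]
# An entity is the (hashable) set of mentions that refer to it.
Entity = FrozenSet[Mention]

def entity_f1(pred: Set[Entity], gold: Set[Entity]) -> float:
    """Entity-level F1: each entity contributes once, regardless of how
    many mentions it has, unlike mention-level F1, where entities with
    many mentions dominate the measurement."""
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)  # predicted entities whose mention sets match exactly
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: a frequently mentioned entity (3 mentions) and a rare one
# (1 mention) weigh equally at the entity level.
gold = {frozenset({(0, 5, "person"), (40, 43, "person"), (90, 95, "person")}),
        frozenset({(120, 128, "location")})}
pred = {frozenset({(0, 5, "person"), (40, 43, "person"), (90, 95, "person")})}
print(entity_f1(pred, gold))  # 0.667: 1 of 1 predicted, 1 of 2 gold entities
```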
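The graph-based neural message passing between document-level mention spans can likewise be sketched in a few lines. The PyTorch snippet below shows one generic, hypothetical propagation step in which an adjacency matrix over spans, e.g. built from coreference or relation candidates, mixes span representations; the joint model in the paper is more elaborate than this sketch.

```python
import torch
import torch.nn as nn

class SpanMessagePassing(nn.Module):
    """One generic round of message passing over document-level mention
    spans: aggregate transformed neighbor states, then apply a gated
    update. A hypothetical sketch, not DWIE's exact propagation layer."""

    def __init__(self, dim: int):
        super().__init__()
        self.message = nn.Linear(dim, dim)  # transform outgoing messages
        self.update = nn.GRUCell(dim, dim)  # gated state update per span

    def forward(self, spans: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # spans: (num_spans, dim) span representations
        # adj:   (num_spans, num_spans) row-normalized span graph
        messages = adj @ self.message(spans)  # aggregate neighbor messages
        return self.update(messages, spans)   # updated span representations

# Usage: 12 mention spans with 128-dim representations, linked by a
# (hypothetical) normalized adjacency built from coreference candidates.
layer = SpanMessagePassing(128)
spans = torch.randn(12, 128)
adj = torch.softmax(torch.randn(12, 12), dim=-1)
out = layer(spans, adj)  # shape: (12, 128)
```

Stacking several such rounds lets information from a mention in one part of the document reach distant mentions of the same or related entities, which is the behavior the joint model needs for document-level NER, coreference, and relation extraction.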