Paper Title

Reference Knowledgeable Network for Machine Reading Comprehension

Paper Authors

Yilin Zhao, Zhuosheng Zhang, Hai Zhao

Paper Abstract

Multi-choice Machine Reading Comprehension (MRC) is a challenging task that requires models to select the most appropriate answer from a set of candidates, given a passage and a question. Most existing research focuses on modeling specific tasks or complex networks, without explicitly referring to relevant and credible external knowledge sources, which could greatly compensate for the deficiencies of the given passage. Thus we propose a novel reference-based knowledge enhancement model called Reference Knowledgeable Network (RekNet), which simulates human reading strategies to refine critical information from the passage and quote explicit knowledge when necessary. In detail, RekNet refines fine-grained critical information and defines it as the Reference Span, then quotes explicit knowledge quadruples based on the co-occurrence information of the Reference Span and the candidates. The proposed RekNet is evaluated on three multi-choice MRC benchmarks: RACE, DREAM and Cosmos QA, obtaining consistent and remarkable performance improvements over strong baselines, with an observable statistical significance level. Our code is available at https://github.com/Yilin1111/RekNet.
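To make the co-occurrence idea in the abstract concrete, here is a minimal, illustrative sketch rather than the authors' actual implementation: it keeps only those knowledge quadruples whose head entity appears in the refined Reference Span and whose tail entity appears in one of the answer candidates. The quadruple format (head, relation, tail, confidence), the `knowledge_base` list, and the token-level matching are all assumptions made for illustration.

```python
# Hypothetical sketch of co-occurrence-based knowledge quoting; not the paper's code.
from typing import List, Tuple

Quadruple = Tuple[str, str, str, float]  # assumed format: (head, relation, tail, confidence)

def quote_knowledge(reference_span: str,
                    candidates: List[str],
                    knowledge_base: List[Quadruple]) -> List[Quadruple]:
    """Select quadruples linking the Reference Span to any candidate answer."""
    span_tokens = set(reference_span.lower().split())
    quoted = []
    for head, relation, tail, weight in knowledge_base:
        head_in_span = head.lower() in span_tokens
        tail_in_candidate = any(tail.lower() in c.lower() for c in candidates)
        if head_in_span and tail_in_candidate:
            quoted.append((head, relation, tail, weight))
    # Return higher-confidence quadruples first.
    return sorted(quoted, key=lambda q: q[3], reverse=True)

# Toy usage with a hand-made knowledge base.
kb = [("doctor", "works_in", "hospital", 0.9),
      ("doctor", "related_to", "medicine", 0.7)]
print(quote_knowledge("the doctor hurried back",
                      ["a hospital", "a school"], kb))
```

In the actual model, the quoted quadruples would then be fused with the passage, question and candidates by the network; this sketch only illustrates the retrieval step described in the abstract.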
