Paper Title

Teaching Machine Comprehension with Compositional Explanations

Authors

Qinyuan Ye, Xiao Huang, Elizabeth Boschee, Xiang Ren

Abstract

Advances in machine reading comprehension (MRC) rely heavily on the collection of large scale human-annotated examples in the form of (question, paragraph, answer) triples. In contrast, humans are typically able to generalize with only a few examples, relying on deeper underlying world knowledge, linguistic sophistication, and/or simply superior deductive powers. In this paper, we focus on "teaching" machines reading comprehension, using a small number of semi-structured explanations that explicitly inform machines why answer spans are correct. We extract structured variables and rules from explanations and compose neural module teachers that annotate instances for training downstream MRC models. We use learnable neural modules and soft logic to handle linguistic variation and overcome sparse coverage; the modules are jointly optimized with the MRC model to improve final performance. On the SQuAD dataset, our proposed method achieves 70.14% F1 score with supervision from 26 explanations, comparable to plain supervised learning using 1,100 labeled instances, yielding a 12x speed up.
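
To make the approach described in the abstract concrete, below is a minimal, hypothetical sketch of how a single rule extracted from a semi-structured explanation (e.g., "the answer comes directly after the phrase X") could act as a teacher that pseudo-labels unlabeled passages. The function names (soft_match, fill_after) and the use of plain string similarity in place of the paper's learnable neural soft-matching modules are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; names and the string-similarity stand-in are assumptions.
from difflib import SequenceMatcher

def soft_match(a: str, b: str) -> float:
    """Stand-in for a learnable soft-matching module: score in [0, 1].
    The paper jointly trains neural modules for this; plain string similarity
    is used here only so the sketch runs without any model weights."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def fill_after(paragraph: str, anchor: str, threshold: float = 0.7):
    """Rule primitive for an explanation of the form
    'the answer comes directly after ANCHOR': softly locate the anchor phrase
    and return the token that follows it, or None (abstain) if no good match."""
    tokens = paragraph.split()
    best_score, best_end = 0.0, None
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 6, len(tokens)) + 1):
            score = soft_match(" ".join(tokens[i:j]), anchor)
            if score > best_score:
                best_score, best_end = score, j
    if best_score >= threshold and best_end is not None and best_end < len(tokens):
        return tokens[best_end].strip(".,;")  # candidate answer span
    return None  # the rule abstains; the instance stays unlabeled

# A "teacher" composed from one explanation pseudo-labels unlabeled passages;
# the resulting (question, paragraph, answer) triples would then train the
# downstream MRC model (training loop omitted here).
paragraph = "The stadium was opened in 1923 by the mayor of the city."
print(fill_after(paragraph, anchor="opened in"))  # -> "1923"
```

The abstain case in this sketch illustrates the sparse-coverage problem that, per the abstract, the paper's soft logic and jointly optimized neural modules are meant to mitigate.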
