Paper Title
FaiRR: Faithful and Robust Deductive Reasoning over Natural Language
Paper Authors
Paper Abstract
Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in natural language. Recent works show that such models can also produce the reasoning steps (i.e., the proof graph) that emulate the model's logical reasoning process. Currently, these black-box models generate both the proof graph and intermediate inferences within the same model and thus may be unfaithful. In this work, we frame the deductive logical reasoning task by defining three modular components: rule selection, fact selection, and knowledge composition. The rule and fact selection steps select the candidate rule and facts to be used, and then the knowledge composition step combines them to generate new inferences. This ensures model faithfulness by guaranteeing a causal relation from each proof step to the inference it produces. To test our framework, we propose FaiRR (Faithful and Robust Reasoner), where the above three components are independently modeled by transformers. We observe that FaiRR is robust to novel language perturbations, and is faster at inference than previous works on existing reasoning datasets. Additionally, in contrast to black-box generative models, the errors made by FaiRR are more interpretable due to the modular approach.
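The three-component framing in the abstract (rule selection, fact selection, knowledge composition) can be illustrated with a minimal, non-neural sketch. All function and field names below are hypothetical stand-ins for what the paper implements with separate transformers; rules are simplified here to single-antecedent if/then pairs over exact string matches. The key property the abstract describes is preserved: every new inference is produced from an explicitly selected rule and facts, so each proof step is causally tied to its conclusion.

```python
from dataclasses import dataclass

@dataclass
class RuleBase:
    rules: list  # e.g. [{"if": "cat is red", "then": "cat is round"}]
    facts: list  # natural-language facts, e.g. ["cat is red"]

def select_rule(rules, facts):
    # Stand-in for the transformer rule selector: pick a rule whose
    # antecedent is already known and whose conclusion is still new.
    for rule in rules:
        if rule["if"] in facts and rule["then"] not in facts:
            return rule
    return None

def select_facts(rule, facts):
    # Stand-in for the fact selector: facts supporting the chosen rule.
    return [f for f in facts if f == rule["if"]]

def compose(rule, selected_facts):
    # Stand-in for knowledge composition: derive the new inference.
    return rule["then"]

def reason(rulebase, max_steps=10):
    """Iteratively derive inferences. Each proof step records the
    (rule, facts, inference) triple, making the proof graph faithful
    by construction: the inference is generated *from* the selection."""
    proof = []
    for _ in range(max_steps):
        rule = select_rule(rulebase.rules, rulebase.facts)
        if rule is None:
            break
        facts = select_facts(rule, rulebase.facts)
        inference = compose(rule, facts)
        rulebase.facts.append(inference)
        proof.append((rule, facts, inference))
    return proof

rb = RuleBase(
    rules=[{"if": "cat is red", "then": "cat is round"},
           {"if": "cat is round", "then": "cat is soft"}],
    facts=["cat is red"],
)
steps = reason(rb)
# steps now holds two proof steps, each linking a rule and its
# supporting facts to the inference they jointly produced.
```

Because each module is independent, an error in the final answer can be localized to a wrong rule choice, a wrong fact choice, or a wrong composition, which is the interpretability argument the abstract makes against single-model black-box reasoners.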