Paper Title

Analyzing Semantic Faithfulness of Language Models via Input Intervention on Question Answering

Authors

Chaturvedi, Akshay, Bhar, Swarnadeep, Saha, Soumadeep, Garain, Utpal, Asher, Nicholas

Abstract

Transformer-based language models have been shown to be highly effective for several NLP tasks. In this paper, we consider three transformer models, BERT, RoBERTa, and XLNet, in both small and large versions, and investigate how faithful their representations are with respect to the semantic content of texts. We formalize a notion of semantic faithfulness, in which the semantic content of a text should causally figure in a model's inferences in question answering. We then test this notion by observing a model's behavior on answering questions about a story after performing two novel semantic interventions: deletion intervention and negation intervention. While transformer models achieve high performance on standard question answering tasks, we show that they fail to be semantically faithful once we perform these interventions for a significant number of cases (~50% for deletion intervention, and ~20% drop in accuracy for negation intervention). We then propose an intervention-based training regime that can mitigate the undesirable effects of deletion intervention by a significant margin (from ~50% to ~6%). We analyze the inner workings of the models to better understand the effectiveness of intervention-based training for deletion intervention. But we show that this training does not attenuate other aspects of semantic unfaithfulness, such as the models' inability to deal with negation intervention or to capture the predicate-argument structure of texts. We also test InstructGPT, via prompting, for its ability to handle the two interventions and to capture predicate-argument structure. While InstructGPT models do achieve very high performance on the predicate-argument structure task, they fail to respond adequately to our deletion and negation interventions.
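The two interventions described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the toy story, question, and helper names below are hypothetical, and the interventions are reduced to simple string edits for illustration.

```python
# Toy illustration of the two semantic interventions on a story-QA input.
# The story, question, and function names are hypothetical examples.

story = "John put the apple on the table. Mary took the apple."
question = "Who took the apple?"  # supported by the second sentence

def deletion_intervention(story: str, supporting_sentence: str) -> str:
    """Remove the sentence that supports the answer. A semantically
    faithful model should no longer answer the question from the story."""
    sentences = [s.strip() for s in story.split(".") if s.strip()]
    kept = [s for s in sentences if s != supporting_sentence]
    return ". ".join(kept) + "."

def negation_intervention(story: str, phrase: str, negated: str) -> str:
    """Negate the supporting phrase. A faithful model should change
    (or withhold) its original answer."""
    return story.replace(phrase, negated)

deleted = deletion_intervention(story, "Mary took the apple")
negated = negation_intervention(story, "took the apple",
                                "did not take the apple")
print(deleted)  # "John put the apple on the table."
print(negated)  # "... Mary did not take the apple."
```

Under semantic faithfulness, a QA model's answer to `question` should change under either intervention; the abstract reports that standard transformer models often retain their original answer in a large fraction of such cases.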
