Paper Title
CNRL at SemEval-2020 Task 5: Modelling Causal Reasoning in Language with Multi-Head Self-Attention Weights based Counterfactual Detection
Paper Authors
Paper Abstract
In this paper, we describe an approach for modelling causal reasoning in natural language by detecting counterfactuals in text using multi-head self-attention weights. We use pre-trained transformer models to extract contextual embeddings and self-attention weights from the text. We demonstrate the use of convolutional layers to extract task-specific features from these self-attention weights. Further, we describe a fine-tuning approach with a common base model for knowledge sharing between the two closely related sub-tasks of counterfactual detection. We analyze and compare the performance of various transformer models in our experiments. Finally, we perform a qualitative analysis of the multi-head self-attention weights to interpret our models' dynamics.
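A minimal sketch of the architecture the abstract outlines, not the authors' released code: a pre-trained transformer is loaded with attention outputs enabled, the per-head self-attention maps of the final layer are treated as image-like channels, and a small convolutional stack extracts task-specific features for counterfactual classification. The model name, convolution sizes, and classifier head below are illustrative assumptions, not values from the paper.

import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class AttentionConvClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        # output_attentions=True makes the encoder return per-layer
        # attention tensors of shape (batch, heads, seq_len, seq_len).
        self.encoder = AutoModel.from_pretrained(model_name, output_attentions=True)
        num_heads = self.encoder.config.num_attention_heads
        # Treat the attention heads of the last layer as input channels
        # and extract features with a small conv stack (sizes are assumed).
        self.conv = nn.Sequential(
            nn.Conv2d(num_heads, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool2d((8, 8)),  # fixed-size features for any seq_len
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        attn = out.attentions[-1]            # (batch, heads, seq_len, seq_len)
        feats = self.conv(attn).flatten(1)   # task-specific attention features
        return self.classifier(feats)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AttentionConvClassifier()
batch = tokenizer(["If I had left earlier, I would not have missed the train."],
                  return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])

Because the adaptive pooling fixes the feature size regardless of input length, the same convolutional head can sit on top of a shared base encoder, which is one plausible way to realize the knowledge sharing between the two sub-tasks that the abstract mentions.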