Paper title
Telling BERT's full story: from Local Attention to Global Aggregation
Paper authors
Paper abstract
We take a deep look into the behavior of self-attention heads in the transformer architecture. In light of recent work discouraging the use of attention distributions for explaining a model's behavior, we show that attention distributions can nevertheless provide insights into the local behavior of attention heads. This way, we propose a distinction between local patterns revealed by attention and global patterns that refer back to the input, and analyze BERT from both angles. We use gradient attribution to analyze how the output of an attention head depends on the input tokens, effectively extending the local attention-based analysis to account for the mixing of information throughout the transformer layers. We find that there is a significant discrepancy between attention and attribution distributions, caused by the mixing of context inside the model. We quantify this discrepancy and observe that interestingly, there are some patterns that persist across all layers despite the mixing.
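To give a concrete feel for gradient attribution over a transformer, the sketch below computes per-token attribution scores for BERT using PyTorch and the HuggingFace Transformers library. This is a minimal illustration, not the paper's exact method: it attributes the hidden state of a whole layer at one position (rather than the output of an individual attention head), and the choice of model, layer, position, and the gradient-norm score are all assumptions made here for demonstration.

```python
# Minimal sketch: gradient attribution of a BERT layer output w.r.t. input tokens.
# Assumes: pip install torch transformers; model, layer, and scoring choices are illustrative.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(sentence, return_tensors="pt")

# Feed word embeddings explicitly so we can take gradients with respect to them.
# BERT adds position and token-type embeddings internally when given inputs_embeds.
word_embeds = model.embeddings.word_embeddings(inputs["input_ids"])  # (1, seq_len, hidden)
word_embeds.retain_grad()

outputs = model(
    inputs_embeds=word_embeds,
    attention_mask=inputs["attention_mask"],
    output_hidden_states=True,
)

# Explain the representation of one token position at one layer (both chosen arbitrarily).
layer, position = 6, 3
target = outputs.hidden_states[layer][0, position]  # (hidden,)
target.norm().backward()

# Per-token attribution: L2 norm of the gradient w.r.t. each input word embedding,
# normalized to sum to 1 so it is comparable to an attention distribution.
attribution = word_embeds.grad[0].norm(dim=-1)
attribution = attribution / attribution.sum()

for token, score in zip(
    tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist()),
    attribution.tolist(),
):
    print(f"{token:>12s}  {score:.3f}")
```

Comparing such an attribution distribution with the attention weights of heads in the same layer is one simple way to surface the attention-versus-attribution discrepancy the abstract describes.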