Paper Title

Analysis of Self-Attention Head Diversity for Conformer-based Automatic Speech Recognition

Paper Authors

Kartik Audhkhasi, Yinghui Huang, Bhuvana Ramabhadran, Pedro J. Moreno

Paper Abstract

Attention layers are an integral part of modern end-to-end automatic speech recognition systems, for instance as part of the Transformer or Conformer architecture. Attention is typically multi-headed, where each head has an independent set of learned parameters and operates on the same input feature sequence. The output of multi-headed attention is a fusion of the outputs from the individual heads. We empirically analyze the diversity between representations produced by the different attention heads and demonstrate that the heads become highly correlated during the course of training. We investigate a few approaches to increasing attention head diversity, including using different attention mechanisms for each head and auxiliary training loss functions to promote head diversity. We show that introducing diversity-promoting auxiliary loss functions during training is a more effective approach, and obtain WER improvements of up to 6% relative on the LibriSpeech corpus. Finally, we draw a connection between the diversity of attention heads and the similarity of the gradients of head parameters.
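The abstract does not spell out the exact form of the diversity-promoting auxiliary loss, so the following is a minimal PyTorch sketch of one plausible instantiation: penalizing the average pairwise cosine similarity between the representations produced by the individual attention heads. The function name head_diversity_loss, the tensor layout, and the weight lambda_div are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def head_diversity_loss(head_outputs: torch.Tensor) -> torch.Tensor:
    # head_outputs: (num_heads, batch, time, dim), the per-head
    # representations before they are fused by the output projection.
    num_heads = head_outputs.size(0)
    # Flatten each head's output into one long vector so heads can be compared.
    flat = head_outputs.flatten(start_dim=1)  # (num_heads, batch * time * dim)
    sims = []
    for i in range(num_heads):
        for j in range(i + 1, num_heads):
            # Cosine similarity between two heads; 1.0 means fully redundant.
            sims.append(F.cosine_similarity(flat[i], flat[j], dim=0))
    # A high value indicates highly correlated heads, so adding this term to
    # the training loss penalizes redundancy and promotes head diversity.
    return torch.stack(sims).mean()

# Illustrative usage inside a training step (lambda_div is a tuning knob,
# not a value from the paper):
#   loss = asr_loss + lambda_div * head_diversity_loss(head_outputs)
```

The same similarity statistic, logged without gradients, can also be used to track how correlated the heads become over the course of training, mirroring the empirical analysis the abstract describes.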
