Paper Title


Multitask Learning and Joint Optimization for Transformer-RNN-Transducer Speech Recognition

Paper Authors

Jae-Jin Jeon, Eesung Kim

Paper Abstract


Recently, several types of end-to-end speech recognition methods named transformer-transducer were introduced. In these methods, the transcription network is generally modeled by a transformer-based neural network, while the prediction network can be modeled by either a transformer or a recurrent neural network (RNN). This paper explores multitask learning, joint optimization, and joint decoding methods for transformer-RNN-transducer systems. The main advantage of our proposed methods is that the model can maintain information learned from a large text corpus. We demonstrate their effectiveness through experiments with the well-known ESPnet toolkit on the widely used LibriSpeech datasets. We also show that the proposed methods can reduce word error rate (WER) by 16.6% and 13.3% for the test-clean and test-other datasets, respectively, without changing the overall model structure or exploiting an external LM.
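The abstract names three ingredients: multitask learning, joint optimization, and joint decoding. Below is a minimal sketch of the multitask idea, assuming the auxiliary task is a language-model (cross-entropy) loss on the prediction network added to the standard RNN-T loss. The function name, the weighting scheme (`lm_weight`), and the use of `torchaudio.functional.rnnt_loss` are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a multitask RNN-T objective: transducer loss plus an
# auxiliary LM loss on the prediction network. Shapes and weighting are
# assumptions for illustration, not the paper's exact recipe.
import torch
import torch.nn.functional as F
import torchaudio


def multitask_loss(joint_logits, pred_lm_logits, targets,
                   logit_lengths, target_lengths,
                   lm_weight=0.5, blank=0):
    """Combine the RNN-T loss with a prediction-network LM loss.

    joint_logits:   (B, T, U+1, V) outputs of the joint network
    pred_lm_logits: (B, U, V) prediction-network outputs projected to
                    the vocabulary (here assumed aligned so that step u
                    predicts target token u)
    targets:        (B, U) int32 label sequences (no blanks)
    """
    # Standard transducer loss over the joint network outputs.
    rnnt = torchaudio.functional.rnnt_loss(
        joint_logits, targets, logit_lengths, target_lengths, blank=blank)

    # Auxiliary LM loss: the prediction network is trained to predict
    # the next label, letting it absorb text-corpus statistics.
    lm = F.cross_entropy(
        pred_lm_logits.reshape(-1, pred_lm_logits.size(-1)),
        targets.reshape(-1).long())

    return rnnt + lm_weight * lm


if __name__ == "__main__":
    # Tiny random-tensor demo: B=2 utterances, T=10 frames,
    # U=4 target tokens, V=6 output classes (index 0 = blank).
    B, T, U, V = 2, 10, 4, 6
    joint_logits = torch.randn(B, T, U + 1, V)
    pred_lm_logits = torch.randn(B, U, V)
    targets = torch.randint(1, V, (B, U), dtype=torch.int32)
    logit_lengths = torch.full((B,), T, dtype=torch.int32)
    target_lengths = torch.full((B,), U, dtype=torch.int32)
    print(multitask_loss(joint_logits, pred_lm_logits, targets,
                         logit_lengths, target_lengths))
```

Under the same assumptions, joint decoding would combine the transducer score with the score of the prediction network acting as an internal LM, which is consistent with the abstract's claim that no external LM is needed.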
