Paper Title
Benchmarking LF-MMI, CTC and RNN-T Criteria for Streaming ASR
Paper Authors
Paper Abstract
In this work, to measure the accuracy and efficiency of latency-controlled streaming automatic speech recognition (ASR) applications, we perform comprehensive evaluations of three popular training criteria: LF-MMI, CTC, and RNN-T. Transcribing social media videos in 7 languages with 3K-14K hours of training data per language, we conduct large-scale controlled experiments across the criteria using identical datasets and encoder model architectures. We find that RNN-T consistently wins in ASR accuracy, while CTC models excel in inference efficiency. Moreover, we selectively examine various modeling strategies for the different training criteria, including modeling units, encoder architectures, and pre-training. To the best of our knowledge, this is the first comprehensive benchmark of these three widely used training criteria on a large-scale real-world streaming ASR application across a great many languages.
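To make the compared criteria concrete, the following is a minimal pure-Python sketch of the CTC forward (alpha) recursion, which computes the negative log-likelihood of a label sequence summed over all valid blank-augmented alignments. This is an illustrative toy implementation under stated assumptions (unbatched input, probabilities rather than log-probabilities, no numerical-stability tricks), not the paper's implementation; production systems use vectorized log-space versions such as `torch.nn.CTCLoss`.

```python
import math

def ctc_loss(probs, target, blank=0):
    """CTC negative log-likelihood via the standard forward recursion.

    probs:  list of T per-frame probability distributions over symbols
            (index `blank` is the CTC blank).
    target: list of label indices, blanks excluded.
    """
    # Blank-augmented label sequence: blank, y1, blank, y2, ..., yL, blank
    ext = [blank]
    for y in target:
        ext += [y, blank]
    S, T = len(ext), len(probs)

    # alpha[s] = total probability of all alignment prefixes that end
    # at position s of `ext` after consuming the current frame.
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]          # start with a blank
    if S > 1:
        alpha[1] = probs[0][ext[1]]      # or with the first real label

    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                 # stay on the same position
            if s > 0:
                a += alpha[s - 1]        # advance by one position
            # Skip over a blank, allowed only between distinct labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]
            new[s] = a * probs[t][ext[s]]
        alpha = new

    # Valid alignments end on the last label or the trailing blank.
    total = alpha[S - 1] + (alpha[S - 2] if S > 1 else 0.0)
    return -math.log(total)

# Toy example: 2 frames, symbols {0: blank, 1: 'a'}, target "a".
# Alignments yielding "a": aa, a-, -a  ->  0.36 + 0.24 + 0.24 = 0.84
loss = ctc_loss([[0.4, 0.6], [0.4, 0.6]], [1])
print(loss)  # -log(0.84) ~= 0.174
```

RNN-T replaces this recursion with a lattice over (frame, emitted-label) pairs scored by a prediction network, which is why it can model label dependencies but is costlier at inference, consistent with the accuracy/efficiency trade-off reported above.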