Paper Title

End-to-End Bengali Speech Recognition

Authors

Sayan Mandal, Sarthak Yadav, Atul Rai

Abstract

Bengali is a prominent language of the Indian subcontinent. However, while many state-of-the-art acoustic models exist for prominent languages spoken in the region, research and resources for Bengali are few and far between. In this work, we apply CTC-based CNN-RNN networks, a prominent deep-learning-based end-to-end automatic speech recognition technique, to the Bengali ASR task. We also propose and evaluate the applicability and efficacy of small 7x3 and 3x3 convolution kernels, which are used prominently in the computer vision domain primarily because of their FLOP- and parameter-efficient nature. We propose two CNN blocks: a 2-layer Block A and a 4-layer Block B, with the first layer comprising a 7x3 kernel and the subsequent layers comprising solely 3x3 kernels. Using the publicly available Large Bengali ASR Training dataset, we benchmark and evaluate the performance of seven deep neural network configurations of varying complexity and depth on the Bengali ASR task. Our best model, with Block B, has a WER of 13.67, an absolute reduction of 1.39% over a comparable model with larger convolution kernels of sizes 41x11 and 21x11.
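The abstract's parameter-efficiency argument for small kernels can be illustrated with a quick back-of-the-envelope count. The sketch below compares the weight count of a 4-layer Block B (one 7x3 layer followed by three 3x3 layers, as described above) against a two-layer baseline with 41x11 and 21x11 kernels. The channel widths are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: per-block weight counts for the small kernels proposed in
# the paper (7x3 then 3x3) versus a larger-kernel baseline (41x11, 21x11).
# Channel widths below are assumed for illustration; the paper does not
# specify them in the abstract.

def conv_params(kernel_h: int, kernel_w: int, in_ch: int, out_ch: int) -> int:
    """Weight count of one 2-D convolution layer (bias terms omitted)."""
    return kernel_h * kernel_w * in_ch * out_ch

in_ch, out_ch = 32, 32  # assumed channel widths

# Proposed 4-layer Block B: one 7x3 layer, then three 3x3 layers.
block_b = conv_params(7, 3, in_ch, out_ch) + 3 * conv_params(3, 3, in_ch, out_ch)

# Larger-kernel baseline: one 41x11 layer and one 21x11 layer.
baseline = conv_params(41, 11, in_ch, out_ch) + conv_params(21, 11, in_ch, out_ch)

print(f"Block B weights:  {block_b:,}")    # 49,152
print(f"Baseline weights: {baseline:,}")   # 698,368
print(f"ratio: {baseline / block_b:.1f}x") # 14.2x
```

Under these assumed channel widths, the small-kernel block uses roughly an order of magnitude fewer convolution weights than the large-kernel baseline, which is the efficiency property the abstract attributes to 7x3 and 3x3 kernels.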
