Title
DeviceTTS: A Small-Footprint, Fast, Stable Network for On-Device Text-to-Speech
Authors
Abstract
With the number of smart devices increasing, the demand for on-device text-to-speech (TTS) is growing rapidly. In recent years, many prominent end-to-end TTS methods have been proposed and have greatly improved the quality of synthesized speech. However, to ensure high-quality speech, most TTS systems depend on large and complex neural network models, which are hard to deploy on-device. In this paper, a small-footprint, fast, stable network for on-device TTS, named DeviceTTS, is proposed. DeviceTTS uses a duration predictor as a bridge between the encoder and the decoder to avoid the word-skipping and word-repetition problems seen in Tacotron. Since model size is a key factor for on-device TTS, DeviceTTS adopts the Deep Feedforward Sequential Memory Network (DFSMN) as its basic component. Moreover, to speed up inference, a mix-resolution decoder is proposed to balance inference speed and speech quality. Experiments are conducted with the WORLD and LPCNet vocoders. Finally, with only 1.4 million model parameters and 0.099 GFLOPS, DeviceTTS achieves performance comparable to Tacotron and FastSpeech. To the best of our knowledge, DeviceTTS can meet the needs of most devices in practical applications.
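The duration predictor described above plays the same bridging role as a FastSpeech-style length regulator: each encoder output frame is expanded by its predicted duration, so the decoder never has to learn an attention alignment that can skip or repeat words. The sketch below illustrates that expansion step only; the function name and the use of integer frame counts are assumptions, not details from the paper.

```python
import numpy as np

def length_regulate(encoder_out, durations):
    """Expand each encoder frame by its predicted duration (in frames).

    Illustrative sketch of duration-based length regulation; DeviceTTS's
    exact upsampling mechanism is not specified in the abstract.
    encoder_out: (T_in, D) array; durations: (T_in,) integer frame counts.
    """
    return np.repeat(encoder_out, durations, axis=0)

# 3 phoneme-level encoder frames of dimension 2, with predicted durations
enc = np.arange(6, dtype=np.float32).reshape(3, 2)
dur = np.array([2, 1, 3])
out = length_regulate(enc, dur)
# out has 2 + 1 + 3 = 6 decoder-rate frames
```

Because the alignment is given explicitly by the durations, decoding becomes a deterministic, monotonic pass over the expanded sequence, which is what makes the synthesis stable.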
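DFSMN, the basic component mentioned above, augments a feed-forward layer with a learnable "memory" over neighboring frames instead of recurrence, which keeps the parameter count and FLOPs small. A minimal sketch of one memory block follows; the tap layout (per-dimension coefficients, symmetric zero padding) follows the published DFSMN formulation, but the specific sizes and strides used in DeviceTTS are assumptions.

```python
import numpy as np

def dfsmn_memory_block(h, a, c, stride=1):
    """One DFSMN-style memory block (sketch).

    h: (T, D) hidden activations from the feed-forward layer.
    a: (N1, D) look-back taps; c: (N2, D) look-ahead taps.
    Computes m_t = h_t + sum_i a_i * h_{t - i*stride}
                       + sum_j c_j * h_{t + j*stride},
    treating out-of-range frames as zero (an identity skip plus a
    learnable FIR filter over past and future context).
    """
    T, _ = h.shape
    m = h.copy()
    for i, tap in enumerate(a, start=1):          # past context
        m[i * stride:] += tap * h[:T - i * stride]
    for j, tap in enumerate(c, start=1):          # future context
        m[:T - j * stride] += tap * h[j * stride:]
    return m
```

Since the block is just element-wise products and shifted additions, it runs in O(T * D * (N1 + N2)) with no sequential dependency across time, unlike an LSTM, which is why DFSMN suits on-device inference budgets like 0.099 GFLOPS.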