Paper Title
Vector-quantized neural networks for acoustic unit discovery in the ZeroSpeech 2020 challenge
Paper Authors
Paper Abstract
In this paper, we explore vector quantization for acoustic unit discovery. Leveraging unlabelled data, we aim to learn discrete representations of speech that separate phonetic content from speaker-specific details. We propose two neural models to tackle this challenge: both use vector quantization to map continuous features to a finite set of codes. The first model is a type of vector-quantized variational autoencoder (VQ-VAE). The VQ-VAE encodes speech into a sequence of discrete units before reconstructing the audio waveform. Our second model combines vector quantization with contrastive predictive coding (VQ-CPC). The idea is to learn a representation of speech by predicting future acoustic units. We evaluate the models on English and Indonesian data for the ZeroSpeech 2020 challenge. In ABX phone discrimination tests, both models outperform all submissions to the 2019 and 2020 challenges, with a relative improvement of more than 30%. The models also perform competitively on a downstream voice conversion task. Of the two, VQ-CPC performs slightly better in general and is simpler and faster to train. Finally, probing experiments show that vector quantization is an effective bottleneck, forcing the models to discard speaker information.
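To make the shared mechanism concrete, below is a minimal sketch of a vector-quantization bottleneck of the kind the abstract describes: each continuous encoder feature is snapped to its nearest entry in a finite codebook, and gradients are copied through the non-differentiable lookup with the standard straight-through estimator. This is an illustrative PyTorch sketch, not the authors' implementation; the class name VQBottleneck and the values num_codes=512 and dim=64 are assumptions for illustration, and the training losses (reconstruction, contrastive, or commitment terms) are omitted.

```python
import torch
import torch.nn as nn


class VQBottleneck(nn.Module):
    """Minimal vector-quantization bottleneck: maps each continuous
    feature vector to its nearest entry in a finite codebook.
    Hypothetical sketch; sizes are illustrative, not the paper's config."""

    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                            # z: (batch, time, dim)
        flat = z.reshape(-1, z.size(-1))             # (batch*time, dim)
        # Squared Euclidean distance from every feature to every code.
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(1)                            # nearest-code indices
        q = self.codebook(idx).view_as(z)            # quantized features
        # Straight-through estimator: forward pass uses the quantized
        # codes, backward pass copies gradients to the encoder output.
        q = z + (q - z).detach()
        return q, idx.view(z.shape[:-1])


# Usage: quantize a batch of encoder outputs into discrete units.
vq = VQBottleneck(num_codes=512, dim=64)
z = torch.randn(2, 100, 64)      # e.g. 100 frames of encoder features
q, codes = vq(z)                 # continuous-valued q, discrete code ids
```

In a VQ-VAE the quantized features q would feed a decoder that reconstructs the waveform, while in VQ-CPC the discrete units are the targets of a contrastive future-prediction loss; this bottleneck is the component the abstract credits with discarding speaker information.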