Title
Introducing ECAPA-TDNN and Wav2Vec2.0 Embeddings to Stuttering Detection
Authors
Abstract
The adoption of advanced deep learning (DL) architectures in stuttering detection (SD) tasks is challenging due to the limited size of the available datasets. To this end, this work introduces the application of speech embeddings extracted with deep models pre-trained on massive audio datasets for different tasks. In particular, we explore audio representations obtained using the emphasized channel attention, propagation and aggregation time-delay neural network (ECAPA-TDNN) and Wav2Vec2.0 models, trained on the VoxCeleb and LibriSpeech datasets, respectively. After extracting the embeddings, we benchmark several traditional classifiers, such as k-nearest neighbors, Gaussian naive Bayes, and a neural network, on the stuttering detection task. Compared to a standard SD system trained only on the limited SEP-28k dataset, we obtain a relative improvement of 16.74% in overall accuracy over the baseline. Finally, we show that combining the two embeddings and concatenating multiple layers of Wav2Vec2.0 further improves SD performance by up to 1% and 2.64%, respectively.
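The pipeline the abstract describes (extract embeddings with pre-trained models, concatenate them, then benchmark simple classifiers) can be sketched as follows. This is a minimal illustration, not the authors' exact setup: the embedding dimensions (192 for ECAPA-TDNN, 768 for Wav2Vec2.0) and the placeholder random features standing in for real model outputs are assumptions for demonstration only.

```python
# Sketch of the embedding-fusion + classical-classifier benchmark described
# in the abstract. Real features would come from pre-trained ECAPA-TDNN and
# Wav2Vec2.0 models; here random arrays stand in for those embeddings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
ecapa = rng.normal(size=(n, 192))    # placeholder ECAPA-TDNN embeddings (dim assumed)
w2v = rng.normal(size=(n, 768))      # placeholder Wav2Vec2.0 embeddings (dim assumed)
y = rng.integers(0, 2, size=n)       # binary labels: stuttered vs. fluent speech

# Fuse the two embeddings by concatenation, as in the abstract.
X = np.concatenate([ecapa, w2v], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Benchmark two of the traditional classifiers mentioned in the abstract.
results = {}
for clf in (KNeighborsClassifier(), GaussianNB()):
    clf.fit(X_tr, y_tr)
    results[type(clf).__name__] = accuracy_score(y_te, clf.predict(X_te))
print(results)
```

In the actual system, the placeholder arrays would be replaced by per-utterance embeddings from the pre-trained networks, and accuracy would be reported on the SEP-28k splits.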