Paper Title

Uniphore's submission to Fearless Steps Challenge Phase-2

Paper Authors

Karthik Pandia D. S., Cosimo Spera

Abstract

We propose supervised systems for the speech activity detection (SAD) and speaker identification (SID) tasks in Fearless Steps Challenge Phase-2. The proposed systems for both tasks share a common convolutional neural network (CNN) architecture. Mel spectrograms are used as features. For speech activity detection, the spectrogram is divided into smaller overlapping chunks. The network is trained to recognize the chunks. The network architecture and the training steps used for the SID task are similar to those of the SAD task, except that longer spectrogram chunks are used. We propose a two-level identification method for the SID task. First, for each chunk, a set of speakers is hypothesized based on the neural network posterior probabilities. Then, the speaker identity of the utterance is determined from the chunk-level hypotheses by applying a voting rule. On the SAD task, detection cost function scores of 5.96% and 5.33% are obtained on the dev and eval sets, respectively. Top-5 retrieval accuracies of 82.07% and 82.42% are obtained on the dev and eval sets for the SID task. A brief analysis of the results provides insights into the misclassified cases in both tasks.
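The two mechanical pieces of the pipeline described above — splitting a spectrogram into overlapping chunks and combining chunk-level speaker hypotheses with a voting rule — can be sketched as follows. This is a minimal illustration, not the authors' code: the chunk length, hop size, and the exact voting rule (here a simple plurality vote over hypothesized speaker sets) are assumptions.

```python
# Hedged sketch of the chunking and voting steps from the abstract.
# Chunk length, hop, and the plurality-vote rule are illustrative assumptions.
from collections import Counter

def chunk_frames(num_frames, chunk_len, hop):
    """Return (start, end) frame indices of overlapping spectrogram chunks."""
    chunks = []
    start = 0
    while start + chunk_len <= num_frames:
        chunks.append((start, start + chunk_len))
        start += hop
    return chunks

def vote_speaker(chunk_hypotheses):
    """Plurality vote: each chunk contributes its hypothesized speaker set;
    the utterance-level identity is the most frequently hypothesized speaker."""
    counts = Counter(spk for hyp in chunk_hypotheses for spk in hyp)
    return counts.most_common(1)[0][0]

# Example: a 100-frame spectrogram with 30-frame chunks and a 10-frame hop
print(len(chunk_frames(100, 30, 10)))                    # 8 chunks
print(vote_speaker([{"A", "B"}, {"A"}, {"C", "A"}]))     # "A"
```

In the SID system, `chunk_hypotheses` would come from thresholding the CNN's posterior probabilities per chunk; the voting step then aggregates them into one utterance-level identity.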
