Paper Title
Covariance Regularization for Probabilistic Linear Discriminant Analysis
Paper Authors
Paper Abstract
Probabilistic linear discriminant analysis (PLDA) is commonly used in speaker verification systems to score the similarity of speaker embeddings. Recent studies improved the performance of PLDA in domain-matched conditions by diagonalizing its covariance. We suspect that such an aggressive pruning approach could eliminate the model's capacity to capture dimension correlations of speaker embeddings, leading to inadequate performance under domain adaptation. This paper explores two alternative covariance regularization approaches, namely interpolated PLDA and sparse PLDA, to tackle the problem. Interpolated PLDA incorporates prior knowledge from cosine scoring to interpolate the covariance of PLDA. Sparse PLDA introduces a sparsity penalty to update the covariance. Experimental results demonstrate that both approaches noticeably outperform diagonal regularization under domain adaptation. In addition, the amount of in-domain data can be significantly reduced when training sparse PLDA for domain adaptation.
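The two regularization schemes named in the abstract can be illustrated with a minimal sketch. The exact formulations are not given here, so the following assumes a common reading: cosine scoring on length-normalized embeddings corresponds to an identity-like covariance prior (so interpolation blends the PLDA covariance toward the identity), and the sparsity penalty is realized as soft-thresholding of off-diagonal covariance entries. The interpolation weight `alpha` and penalty `lam` are hypothetical hyperparameters.

```python
import numpy as np

def interpolate_covariance(sigma, alpha):
    """Interpolated-PLDA-style regularization (sketch): blend the
    estimated covariance with the identity matrix, taken here as the
    covariance prior implied by cosine scoring. alpha in [0, 1] is an
    assumed interpolation weight (1.0 keeps the original covariance)."""
    d = sigma.shape[0]
    return alpha * sigma + (1.0 - alpha) * np.eye(d)

def sparsify_covariance(sigma, lam):
    """Sparse-PLDA-style regularization (sketch): soft-threshold the
    off-diagonal entries with penalty lam, zeroing weak cross-dimension
    correlations while keeping the diagonal (variances) intact."""
    out = np.sign(sigma) * np.maximum(np.abs(sigma) - lam, 0.0)
    np.fill_diagonal(out, np.diag(sigma))
    return out

sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
print(interpolate_covariance(sigma, 0.5))  # diagonal pulled toward 1
print(sparsify_covariance(sigma, 0.5))     # off-diagonals zeroed out
```

Unlike full diagonalization, both operations keep a controllable amount of off-diagonal structure, which matches the abstract's point that removing all dimension correlations is too drastic for domain adaptation.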