Paper Title
Efficient Trainable Front-Ends for Neural Speech Enhancement
Paper Authors
Paper Abstract
Many neural speech enhancement and source separation systems operate in the time-frequency domain. Such models often benefit from making their Short-Time Fourier Transform (STFT) front-ends trainable. In the current literature, these are implemented as large Discrete Fourier Transform matrices, which are prohibitively inefficient for low-compute systems. We present an efficient, trainable front-end based on the butterfly mechanism for computing the Fast Fourier Transform, and show its accuracy and efficiency benefits for low-compute neural speech enhancement models. We also explore the effects of making the STFT window trainable.
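To illustrate the efficiency argument, the sketch below contrasts a dense trainable DFT front-end (an N x N weight matrix, O(N^2) multiplies per frame) with a butterfly-factorized one (N - 1 complex twiddle weights across log2(N) sparse stages, O(N log N) multiplies). This is a minimal PyTorch sketch under stated assumptions, not the paper's implementation: the `ButterflyDFT` module, its per-stage twiddle parameterization, and all names are hypothetical illustrations; the twiddles are initialized to the standard radix-2 FFT values so training starts from an exact DFT.

```python
# Minimal sketch (not the paper's code): a trainable butterfly DFT front-end.
# A dense trainable DFT needs an N x N weight matrix; the radix-2 butterfly
# factorization below needs only N - 1 complex twiddle weights spread over
# log2(N) sparse stages, initialised so the module starts as an exact DFT.
import math
import torch
import torch.nn as nn


def bit_reverse_indices(n: int) -> torch.Tensor:
    """Permutation that puts a length-n signal into bit-reversed order."""
    bits = int(math.log2(n))
    idx = torch.arange(n)
    rev = torch.zeros_like(idx)
    for _ in range(bits):
        rev = (rev << 1) | (idx & 1)
        idx = idx >> 1
    return rev


class ButterflyDFT(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, n: int):
        super().__init__()
        assert n > 1 and n & (n - 1) == 0, "n must be a power of two"
        self.n = n
        self.register_buffer("perm", bit_reverse_indices(n))
        # One trainable complex twiddle vector per butterfly stage,
        # initialised to the standard FFT twiddles exp(-2j*pi*k / (2m)).
        self.twiddles = nn.ParameterList()
        m = 1
        while m < n:
            angles = -2.0 * math.pi * torch.arange(m) / (2 * m)
            self.twiddles.append(nn.Parameter(torch.polar(torch.ones(m), angles)))
            m *= 2

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        """frames: (..., n) real windowed frames -> (..., n) complex spectra."""
        x = frames.to(torch.cfloat)[..., self.perm]
        m = 1
        for w in self.twiddles:
            # Group into blocks of 2m and apply the radix-2 butterfly:
            # each block [u, t] maps to [u + w*t, u - w*t].
            blocks = x.reshape(*x.shape[:-1], self.n // (2 * m), 2, m)
            u, t = blocks[..., 0, :], w * blocks[..., 1, :]
            x = torch.cat([u + t, u - t], dim=-1).flatten(-2)
            m *= 2
        return x


if __name__ == "__main__":
    n = 512  # a typical STFT frame length for 16 kHz speech
    fe = ButterflyDFT(n)
    frames = torch.randn(8, n)  # a batch of windowed frames
    assert torch.allclose(fe(frames), torch.fft.fft(frames, dim=-1), atol=1e-3)
    dense = n * n  # complex weights in a dense trainable DFT matrix
    sparse = sum(p.numel() for p in fe.parameters())  # = n - 1
    print(f"dense DFT weights: {dense}, butterfly weights: {sparse}")
```

Under this reading, the abstract's other variant, a trainable STFT window, would add only one more length-N real parameter vector multiplied into each frame before the transform, leaving the overall cost O(N log N) per frame.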