Paper Title

Resource-Efficient Separation Transformer

Paper Authors

Luca Della Libera, Cem Subakan, Mirco Ravanelli, Samuele Cornell, Frédéric Lepoutre, François Grondin

Paper Abstract

Transformers have recently achieved state-of-the-art performance in speech separation. These models, however, are computationally demanding and require a large number of learnable parameters. This paper explores Transformer-based speech separation with a reduced computational cost. Our main contribution is the development of the Resource-Efficient Separation Transformer (RE-SepFormer), a self-attention-based architecture that reduces the computational burden in two ways. First, it uses non-overlapping blocks in the latent space. Second, it operates on compact latent summaries calculated from each chunk. The RE-SepFormer reaches competitive performance on the popular WSJ0-2Mix and WHAM! datasets in both causal and non-causal settings. Remarkably, it scales significantly better than previous Transformer-based architectures in terms of memory and inference time, making it more suitable for processing long mixtures.
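To make the two mechanisms described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of the general idea: split a latent sequence into non-overlapping blocks, compute one compact summary per block (mean pooling is assumed here; the abstract does not specify the summary operator), and run self-attention over the summaries rather than over every frame. The class name `ChunkSummaryAttention` and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): non-overlapping chunking,
# per-chunk summaries, and self-attention over the summaries only.
import torch
import torch.nn as nn


class ChunkSummaryAttention(nn.Module):
    """Toy illustration of attention over per-chunk latent summaries."""

    def __init__(self, dim: int, chunk_size: int, num_heads: int = 4):
        super().__init__()
        self.chunk_size = chunk_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) latent representation of the mixture
        b, t, d = x.shape
        pad = (-t) % self.chunk_size                    # pad so time divides evenly
        x = nn.functional.pad(x, (0, 0, 0, pad))
        chunks = x.view(b, -1, self.chunk_size, d)      # (batch, n_chunks, chunk, dim)
        summaries = chunks.mean(dim=2)                  # assumed summary operator (mean pooling)
        mixed, _ = self.attn(summaries, summaries, summaries)  # attention across chunks only
        # broadcast the chunk-level context back to every frame of its chunk
        out = chunks + mixed.unsqueeze(2)
        return out.view(b, -1, d)[:, :t]


x = torch.randn(2, 1000, 64)                            # e.g. 2 mixtures, 1000 latent frames
print(ChunkSummaryAttention(dim=64, chunk_size=50)(x).shape)  # torch.Size([2, 1000, 64])
```

Because attention is computed over the chunk summaries instead of all latent frames, its cost grows with the number of chunks rather than the full sequence length, which is the memory and inference-time scaling advantage the abstract highlights.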
