Paper Title
CLAR: Contrastive Learning of Auditory Representations
Paper Authors
Paper Abstract
Learning rich visual representations using contrastive self-supervised learning has been extremely successful. However, it remains an open question whether a similar approach can be used to learn superior auditory representations. In this paper, we expand on prior work (SimCLR) to learn better auditory representations. We (1) introduce various data augmentations suitable for auditory data and evaluate their impact on predictive performance, (2) show that training with time-frequency audio features substantially improves the quality of the learned representations compared to raw signals, and (3) demonstrate that training with both supervised and contrastive losses simultaneously improves the learned representations compared to self-supervised pre-training followed by supervised fine-tuning. We illustrate that by combining all these methods, and with substantially less labeled data, our framework (CLAR) achieves a significant improvement in predictive performance compared to a supervised approach. Moreover, compared to a self-supervised approach, our framework converges faster and yields significantly better representations.
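To make the third point concrete, below is a minimal sketch (not the authors' implementation) of how a SimCLR-style contrastive loss on two augmented views can be combined with a supervised cross-entropy loss in a single training step; function names, the weighting factor alpha, and the temperature value are illustrative assumptions. Time-frequency inputs could be obtained beforehand with, e.g., a mel-spectrogram transform, which is not shown here.

    # Hypothetical sketch: joint contrastive + supervised objective.
    # Names (nt_xent_loss, clar_style_loss, alpha) are illustrative, not from the CLAR codebase.
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        """SimCLR-style NT-Xent loss over a batch of paired embeddings (N, d) each."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d), unit norm
        sim = z @ z.t() / temperature                              # pairwise cosine similarities
        n = z1.size(0)
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(mask, float('-inf'))                 # exclude self-similarity
        # positive for sample i is its other augmented view (i + n, or i - n)
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    def clar_style_loss(z1, z2, logits, labels, alpha=1.0):
        """Apply the contrastive and supervised losses jointly in the same step."""
        return nt_xent_loss(z1, z2) + alpha * F.cross_entropy(logits, labels)

In this sketch, z1 and z2 are projection-head embeddings of two augmentations of the same audio clips, while logits and labels drive the supervised term; when labels are scarce, the supervised term could simply be computed on the labeled subset of each batch.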