Paper Title
Audio-Visual Instance Discrimination with Cross-Modal Agreement
Paper Authors
Paper Abstract
We present a self-supervised learning approach to learn audio-visual representations from video and audio. Our method uses contrastive learning for cross-modal discrimination of video from audio and vice versa. We show that optimizing for cross-modal discrimination, rather than within-modal discrimination, is important to learn good representations from video and audio. With this simple but powerful insight, our method achieves highly competitive performance when fine-tuned on action recognition tasks. Furthermore, while recent work in contrastive learning defines positive and negative samples as individual instances, we generalize this definition by exploring cross-modal agreement. We group together multiple instances as positives by measuring their similarity in both the video and audio feature spaces. Cross-modal agreement creates better positive and negative sets, which allows us to calibrate visual similarities by seeking within-modal discrimination of positive instances, and achieve significant gains on downstream tasks.
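
To make the cross-modal objective concrete, below is a minimal PyTorch sketch of a symmetric cross-modal contrastive (InfoNCE-style) loss: each video embedding must identify its own audio clip among the other clips, and vice versa. The function name cross_modal_nce, the use of in-batch negatives, and the temperature value are illustrative assumptions for this sketch, not the authors' released implementation (which may, for example, draw negatives from a memory bank of instance representations).

import torch
import torch.nn.functional as F

def cross_modal_nce(video_emb, audio_emb, temperature=0.07):
    # video_emb, audio_emb: (N, D) embeddings of N clips from the two encoders.
    v = F.normalize(video_emb, dim=1)
    a = F.normalize(audio_emb, dim=1)
    # (N, N) cross-modal similarity matrix; row i compares video i to every audio clip.
    logits = (v @ a.t()) / temperature
    targets = torch.arange(v.size(0), device=v.device)
    # The matching pair (the diagonal) is the positive; all other clips are negatives.
    loss_v2a = F.cross_entropy(logits, targets)      # video identifies its own audio
    loss_a2v = F.cross_entropy(logits.t(), targets)  # audio identifies its own video
    return 0.5 * (loss_v2a + loss_a2v)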
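The cross-modal agreement step can be read as a neighbor-consistency test: another instance is promoted from negative to positive only if it is similar in both feature spaces. The sketch below is one plausible rendering of that idea under a top-k reading; agreement_positives and top_k are hypothetical names and parameters, not taken from the paper.

import torch
import torch.nn.functional as F

def agreement_positives(video_emb, audio_emb, top_k=32):
    # Returns an (N, N) boolean mask: mask[i, j] is True when j is among the
    # top_k nearest neighbors of i in BOTH the video and the audio space.
    # Assumes top_k <= N.
    v = F.normalize(video_emb, dim=1)
    a = F.normalize(audio_emb, dim=1)
    sim_v = v @ v.t()  # within-modal similarities in the video space
    sim_a = a @ a.t()  # within-modal similarities in the audio space

    def topk_mask(sim):
        idx = sim.topk(top_k, dim=1).indices
        mask = torch.zeros_like(sim, dtype=torch.bool)
        mask.scatter_(1, idx, True)
        return mask

    # Agreement = intersection of the two neighborhoods. Each instance keeps
    # itself as a positive, since self-similarity is maximal in both spaces.
    return topk_mask(sim_v) & topk_mask(sim_a)

Under this reading, the resulting mask could replace the diagonal-only targets of the contrastive loss above, so that agreed neighbors also count as positives when calibrating visual similarities within a modality.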