Paper Title

Supervision Accelerates Pre-training in Contrastive Semi-Supervised Learning of Visual Representations

Paper Authors

Mahmoud Assran, Nicolas Ballas, Lluis Castrejon, Michael Rabbat

Paper Abstract

We investigate a strategy for improving the efficiency of contrastive learning of visual representations by leveraging a small amount of supervised information during pre-training. We propose a semi-supervised loss, SuNCEt, based on noise-contrastive estimation and neighbourhood component analysis, that aims to distinguish examples of different classes in addition to the self-supervised instance-wise pretext tasks. On ImageNet, we find that SuNCEt can be used to match the semi-supervised learning accuracy of previous contrastive approaches while using less than half the amount of pre-training and compute. Our main insight is that leveraging even a small amount of labeled data during pre-training, and not only during fine-tuning, provides an important signal that can significantly accelerate contrastive learning of visual representations. Our code is available online at github.com/facebookresearch/suncet.
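
To make the abstract's description concrete, below is a minimal PyTorch sketch of an NCA-style supervised contrastive loss of the kind SuNCEt describes: for each labelled anchor, same-class examples in the batch are pulled together against all other examples under a noise-contrastive softmax. The function name, signature, and temperature default here are illustrative assumptions, not the paper's exact formulation; the authoritative implementation is at github.com/facebookresearch/suncet.

```python
import torch
import torch.nn.functional as F


def suncet_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """NCA-style supervised contrastive loss over a labelled batch (sketch).

    embeddings: (N, D) representations of labelled examples
    labels:     (N,) integer class labels
    """
    z = F.normalize(embeddings, dim=1)               # unit norm: dot product = cosine
    sim = z @ z.t() / temperature                    # (N, N) similarity logits
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-similarity
    # Positives are the other batch examples that share the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    exp_sim = sim.exp()                              # diagonal becomes exp(-inf) = 0
    pos = (exp_sim * pos_mask).sum(dim=1)            # sum over same-class examples
    denom = exp_sim.sum(dim=1)                       # sum over all other examples
    # Anchors with no same-class partner in the batch are skipped.
    valid = pos_mask.any(dim=1)
    return -torch.log(pos[valid] / denom[valid]).mean()


# Toy usage: 8 labelled embeddings drawn from 4 classes.
z = torch.randn(8, 128)
y = torch.randint(0, 4, (8,))
print(suncet_loss(z, y))
```

In the setting the abstract describes, a loss of this form would be computed only on the small labelled subset and combined with the self-supervised instance-wise pretext loss during pre-training, rather than being deferred to a fine-tuning stage.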
