Paper Title
Can Semantic Labels Assist Self-Supervised Visual Representation Learning?
Paper Authors
Paper Abstract
Recently, contrastive learning has largely advanced the progress of unsupervised visual representation learning. Pre-trained on ImageNet, some self-supervised algorithms have reported higher transfer learning performance than fully-supervised methods, seeming to deliver the message that human labels hardly contribute to learning transferable visual features. In this paper, we defend the usefulness of semantic labels, but point out that fully-supervised and self-supervised methods pursue different kinds of features. To reconcile this discrepancy, we present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN), which maximally prevents the semantic guidance from damaging the appearance feature embedding. On a series of downstream tasks, SCAN outperforms previous fully-supervised and self-supervised methods, sometimes by a significant margin. More importantly, our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
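The abstract builds on the contrastive learning framework without spelling it out. As background, the following is a minimal NumPy sketch of the standard InfoNCE-style contrastive objective used by self-supervised methods such as SimCLR and MoCo; it is illustrative only and is not the paper's SCAN loss (the function name and temperature value are assumptions, not from the paper).

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss (not the SCAN objective).

    Row i of `positives` is the positive view for row i of `anchors`;
    all other rows in the batch act as negatives.
    """
    # L2-normalize embeddings so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the matching (diagonal) pair as the target class.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
# Correctly matched views yield a lower loss than mismatched ones.
aligned = info_nce_loss(x, x)
shuffled = info_nce_loss(x, x[::-1])
```

The objective pulls two views of the same image together while pushing apart all other images in the batch; supervised variants additionally treat same-class images as positives, which is the kind of semantic guidance the paper argues must be applied carefully.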