Paper Title
Unsupervised Representation Learning by Invariance Propagation
Paper Authors
Paper Abstract
Unsupervised learning methods based on contrastive learning have drawn increasing attention and achieved promising results. Most of them aim to learn representations invariant to instance-level variations, which are provided by different views of the same instance. In this paper, we propose Invariance Propagation to focus on learning representations invariant to category-level variations, which are provided by different instances from the same category. Our method recursively discovers semantically consistent samples residing in the same high-density regions in representation space. We demonstrate a hard sampling strategy to concentrate on maximizing the agreement between the anchor sample and its hard positive samples, which provide more intra-class variations to help capture more abstract invariance. As a result, with a ResNet-50 as the backbone, our method achieves 71.3% top-1 accuracy on ImageNet linear classification and 78.2% top-5 accuracy when fine-tuning on only 1% of the labels, surpassing previous results. We also achieve state-of-the-art performance on other downstream tasks, including linear classification on Places205 and Pascal VOC, and transfer learning on small-scale datasets.
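The abstract compresses two ideas: recursively growing a positive set for each anchor in representation space, and contrasting the anchor against its hardest positives. The sketch below is a hypothetical, minimal rendering of that pipeline, not the paper's released implementation; the function names (`propagate_positives`, `hard_positive_nce`), the k-NN hop expansion as a proxy for "same high-density region", the memory-bank setup, and all hyperparameters (`k`, `hops`, `n_hard`, temperature) are illustrative assumptions.

```python
import torch

def knn(bank, idx, k):
    """k nearest neighbours of sample `idx` in an L2-normalised memory bank."""
    sims = bank @ bank[idx]
    sims[idx] = float('-inf')              # exclude the anchor itself
    return torch.topk(sims, k).indices

def propagate_positives(bank, anchor, k=4, hops=2):
    """Recursively expand the anchor's positive set through k-NN hops,
    a rough proxy for samples residing in the same high-density region."""
    positives, frontier = set(), {anchor}
    for _ in range(hops):
        nxt = set()
        for i in frontier:
            nxt.update(knn(bank, i, k).tolist())
        nxt.discard(anchor)
        frontier = nxt - positives
        positives |= nxt
    return sorted(positives)

def hard_positive_nce(bank, anchor, pos_idx, neg_idx, n_hard=8, t=0.07):
    """InfoNCE-style loss that keeps only the positives least similar to the
    anchor (the 'hard' positives), contrasted against sampled negatives."""
    a = bank[anchor]
    pos_sim = bank[pos_idx] @ a / t
    neg_sim = bank[neg_idx] @ a / t
    hard = torch.topk(pos_sim, min(n_hard, pos_sim.numel()), largest=False).values
    denom = torch.logsumexp(torch.cat([hard, neg_sim]), dim=0)
    return (denom - hard).mean()

# toy usage with a random, L2-normalised memory bank (hypothetical sizes)
bank = torch.nn.functional.normalize(torch.randn(1000, 128), dim=1)
pos = propagate_positives(bank, anchor=0, k=4, hops=2)
neg = [i for i in torch.randint(0, 1000, (64,)).tolist() if i != 0 and i not in pos]
loss = hard_positive_nce(bank, 0, torch.tensor(pos), torch.tensor(neg))
```

In this reading, the k-NN hop expansion stands in for the paper's recursive discovery of semantically consistent samples, and restricting the loss to the lowest-similarity positives mirrors the hard sampling strategy described above; the exact propagation criterion and loss form in the paper may differ.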