Paper Title

Visual Representation Learning with Self-Supervised Attention for Low-Label High-data Regime

Paper Authors

Prarthana Bhattacharyya, Chenge Li, Xiaonan Zhao, István Fehérvári, Jason Sun

Paper Abstract

Self-supervision has shown outstanding results for natural language processing, and more recently, for image recognition. Simultaneously, vision transformers and their variants have emerged as a promising and scalable alternative to convolutions for various computer vision tasks. In this paper, we are the first to question whether self-supervised vision transformers (SSL-ViTs) can be adapted to two important computer vision tasks in the low-label, high-data regime: few-shot image classification and zero-shot image retrieval. The motivation is to reduce the number of manual annotations required to train a visual embedder, and to produce generalizable and semantically meaningful embeddings. For few-shot image classification, we train SSL-ViTs without any supervision on external data, and use this trained embedder to adapt quickly to novel classes with a limited number of labels. For zero-shot image retrieval, we use SSL-ViTs pre-trained on a large dataset without any labels and fine-tune them with several metric learning objectives. Our self-supervised attention representations outperform the state-of-the-art on several public benchmarks for both tasks, namely miniImageNet and CUB200 for few-shot image classification by up to 6%-10%, and Stanford Online Products, Cars196 and CUB200 for zero-shot image retrieval by up to 4%-11%. Code is available at \url{https://github.com/AutoVision-cloud/SSL-ViT-lowlabel-highdata}.
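To make the few-shot adaptation step concrete, the sketch below shows one plausible reading of the pipeline: a self-supervised ViT is used as a frozen embedder and novel classes are predicted by nearest-centroid (prototype) matching over its features. This is a minimal illustration only, assuming a DINO-pretrained ViT-S/16 backbone and a prototype classifier; the authors' repository should be consulted for the actual checkpoints, image preprocessing, and adaptation method, and the helper names `embed` and `few_shot_predict` are hypothetical.

```python
# Minimal sketch: few-shot classification on top of a frozen self-supervised ViT.
# Assumption: the SSL-ViT is a DINO-pretrained ViT-S/16; the paper may use a
# different checkpoint, resolution, or classifier head.
import torch
import torch.nn.functional as F

# Load a ViT pre-trained with self-supervision only (no labels used).
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
backbone.eval()


@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of images (B, 3, 224, 224) to L2-normalized embeddings."""
    feats = backbone(images)  # CLS-token features, (B, 384) for ViT-S/16
    return F.normalize(feats, dim=-1)


@torch.no_grad()
def few_shot_predict(support_x, support_y, query_x, n_way: int):
    """Nearest-centroid (prototype) classification over frozen SSL-ViT features."""
    support_z, query_z = embed(support_x), embed(query_x)
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = torch.stack(
        [support_z[support_y == c].mean(dim=0) for c in range(n_way)]
    )
    prototypes = F.normalize(prototypes, dim=-1)
    # Assign each query image to the class with the most similar prototype.
    return (query_z @ prototypes.T).argmax(dim=-1)


# Example: a 5-way 1-shot episode (random tensors stand in for real images).
support_x = torch.randn(5, 3, 224, 224)
support_y = torch.arange(5)
query_x = torch.randn(10, 3, 224, 224)
print(few_shot_predict(support_x, support_y, query_x, n_way=5))
```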
