Paper Title
RePre: Improving Self-Supervised Vision Transformer with Reconstructive Pre-training
Paper Authors
Paper Abstract
Recently, self-supervised vision transformers have attracted unprecedented attention for their impressive representation learning ability. However, the dominant method, contrastive learning, mainly relies on an instance discrimination pretext task, which learns a global understanding of the image. This paper incorporates local feature learning into self-supervised vision transformers via Reconstructive Pre-training (RePre). Our RePre extends contrastive frameworks by adding a branch that reconstructs raw image pixels in parallel with the existing contrastive objective. RePre is equipped with a lightweight convolution-based decoder that fuses multi-hierarchy features from the transformer encoder. These multi-hierarchy features provide rich supervision ranging from low- to high-level semantic information, which is crucial for RePre. RePre brings decent improvements across various contrastive frameworks with different vision transformer architectures. Transfer performance on downstream tasks surpasses that of supervised pre-training and state-of-the-art (SOTA) self-supervised counterparts.
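The abstract only sketches the mechanism at a high level, so below is a minimal PyTorch-style illustration of the idea: token features from several encoder levels are fused by a lightweight convolutional decoder into a pixel reconstruction, whose loss is added to an externally supplied contrastive loss. This is a sketch under stated assumptions, not the paper's implementation; the names (`ConvDecoder`, `repre_loss`), the feature shapes, and the choice of an L1 pixel loss are all illustrative.

```python
# Hypothetical sketch of the RePre idea from the abstract (not the authors'
# code): a reconstruction branch whose lightweight conv decoder fuses
# multi-level transformer features, trained alongside a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvDecoder(nn.Module):
    """Fuses multi-level token features and decodes them to raw pixels."""

    def __init__(self, dims, img_size=224, patch_size=16, out_ch=3):
        super().__init__()
        self.grid = img_size // patch_size  # tokens per image side
        # 1x1 convs project each feature level to a common width
        self.proj = nn.ModuleList(nn.Conv2d(d, 64, 1) for d in dims)
        self.decode = nn.Sequential(
            nn.Conv2d(64 * len(dims), 64, 3, padding=1),
            nn.GELU(),
            # predict all pixels of each patch, then pixel-shuffle them
            # back up to full image resolution
            nn.Conv2d(64, out_ch * patch_size ** 2, 1),
            nn.PixelShuffle(patch_size),
        )

    def forward(self, feats):
        maps = []
        for f, proj in zip(feats, self.proj):
            # (B, N, C) token sequence -> (B, C, H, W) feature map
            B, N, C = f.shape
            f = f.transpose(1, 2).reshape(B, C, self.grid, self.grid)
            maps.append(proj(f))
        return self.decode(torch.cat(maps, dim=1))


def repre_loss(images, multi_level_feats, contrastive_loss, decoder, w=1.0):
    """Total loss = contrastive loss + weighted pixel reconstruction loss."""
    recon = decoder(multi_level_feats)
    return contrastive_loss + w * F.l1_loss(recon, images)


# Toy usage: random tensors stand in for encoder outputs and the
# contrastive loss produced by the host framework.
if __name__ == "__main__":
    B, grid, dim = 2, 14, 384
    images = torch.randn(B, 3, 224, 224)
    feats = [torch.randn(B, grid * grid, dim) for _ in range(4)]  # 4 levels
    decoder = ConvDecoder(dims=[dim] * 4)
    loss = repre_loss(images, feats, torch.tensor(0.5), decoder)
    loss.backward()
```

The pixel-shuffle head is just one cheap way to map patch tokens back to image resolution; the paper's actual decoder design and reconstruction target may differ.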