Paper Title
Extreme Masking for Learning Instance and Distributed Visual Representations
Paper Authors
Paper Abstract
The paper presents a scalable approach for learning spatially distributed visual representations over individual tokens and a holistic instance representation simultaneously. We use self-attention blocks to represent spatially distributed tokens, followed by cross-attention blocks to aggregate the holistic image instance. The core of the approach is the use of extremely large token masking (75\%-90\%) as the data augmentation for supervision. Our model, named ExtreMA, follows the plain BYOL approach where the instance representation from the unmasked subset is trained to predict that from the intact input. Instead of encouraging invariance across inputs, the model is required to capture informative variations in an image. The paper makes three contributions: 1) It presents random masking as a strong and computationally efficient data augmentation for siamese representation learning. 2) With multiple sampling per instance, extreme masking greatly speeds up learning and improves performance with more data. 3) ExtreMA obtains stronger linear probing performance than masked modeling methods, and better transfer performance than prior contrastive models.
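The core data augmentation described above is straightforward to sketch: from a grid of patch tokens, randomly keep only a small visible subset (10%-25%) and discard the rest. The following is a minimal, hypothetical NumPy illustration of that sampling step, not the authors' implementation; the function name `extreme_mask` and its signature are assumptions for illustration.

```python
import numpy as np

def extreme_mask(tokens, mask_ratio=0.9, rng=None):
    """Randomly drop a large fraction of patch tokens.

    tokens: array of shape (num_tokens, dim), e.g. ViT patch embeddings.
    mask_ratio: fraction of tokens to mask out (the paper uses 0.75-0.9).
    Returns the visible subset and the sorted indices that were kept.
    """
    rng = rng or np.random.default_rng()
    n = tokens.shape[0]
    n_keep = max(1, int(round(n * (1.0 - mask_ratio))))
    keep_idx = rng.choice(n, size=n_keep, replace=False)
    keep_idx = np.sort(keep_idx)
    return tokens[keep_idx], keep_idx

# Example: a 14x14 grid of patch tokens (196 tokens, dim 768),
# as produced by a ViT-style patchifier on a 224x224 image.
tokens = np.zeros((196, 768))
visible, idx = extreme_mask(tokens, mask_ratio=0.9)
# With 90% masking, only ~20 of 196 tokens remain visible.
```

In the BYOL-style setup the paper describes, the instance representation computed from `visible` would be trained to predict the representation of the full, unmasked token set; multiple masked samples can be drawn per image, which is what makes the augmentation cheap and effective.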