Paper Title

Distilling Visual Priors from Self-Supervised Learning

Paper Authors

Bingchen Zhao, Xin Wen

Paper Abstract

Convolutional Neural Networks (CNNs) are prone to overfitting small training datasets. We present a novel two-phase pipeline that leverages self-supervised learning and knowledge distillation to improve the generalization ability of CNN models for image classification under data-deficient settings. In the first phase, a teacher model learns rich and generalizable visual representations via self-supervised learning; in the second phase, those representations are distilled into a student model in a self-distillation manner while the student is simultaneously fine-tuned for the image classification task. We also propose a novel margin loss for the self-supervised contrastive learning proxy task to better learn representations in the data-deficient scenario. Together with other tricks, we achieve competitive performance in the VIPriors image classification challenge.
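As a concrete picture of the second phase, the sketch below combines a supervised cross-entropy term with a feature-distillation term that pulls the student's features toward those of the frozen self-supervised teacher. The `(logits, features)` model interface, the cosine-matching distillation term, and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def phase2_loss(student, teacher, images, labels, alpha=1.0):
    """Phase-two objective: supervised cross-entropy plus a term that
    distills the frozen teacher's self-supervised features into the
    student. Both models are assumed to return (logits, features);
    this interface is a hypothetical stand-in for the paper's code.
    """
    with torch.no_grad():
        _, t_feat = teacher(images)        # teacher stays frozen
    logits, s_feat = student(images)

    ce = F.cross_entropy(logits, labels)   # fine-tune for classification
    s = F.normalize(s_feat, dim=1)
    t = F.normalize(t_feat, dim=1)
    # Cosine distance between normalized student and teacher features.
    distill = (1.0 - (s * t).sum(dim=1)).mean()
    return ce + alpha * distill
```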
