Paper Title

DSPNet: Towards Slimmable Pretrained Networks based on Discriminative Self-supervised Learning

Authors

Shaoru Wang, Zeming Li, Jin Gao, Liang Li, Weiming Hu

Abstract

Self-supervised learning (SSL) has achieved promising downstream performance. However, when facing various resource budgets in real-world applications, it costs a huge computation burden to pretrain multiple networks of various sizes one by one. In this paper, we propose Discriminative-SSL-based Slimmable Pretrained Networks (DSPNet), which can be trained once and then slimmed to multiple sub-networks of various sizes, each of which faithfully learns a good representation and can serve as good initialization for downstream tasks with various resource budgets. Specifically, we extend the idea of slimmable networks to the discriminative SSL paradigm by gracefully integrating SSL and knowledge distillation. Under the linear evaluation and semi-supervised evaluation protocols on ImageNet, DSPNet achieves comparable or improved performance relative to networks pretrained individually one by one, while greatly reducing the training cost. The pretrained models also generalize well to downstream detection and segmentation tasks. Code will be made public.
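The abstract describes training one full network and distilling its output into sampled sub-networks of various widths. As a rough illustration of that idea (not the paper's exact method), the numpy sketch below computes a KL-divergence distillation loss between a full-width "teacher" output and outputs of several narrower sub-networks; the width list, the synthetic logits, and the loss form are all assumptions for illustration.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, t=1.0):
    """Mean KL(teacher || student): the distillation objective that
    makes a slimmed sub-network mimic the full network's output."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))

# One hypothetical training step: the full-width network acts as teacher,
# and sub-networks at assumed width ratios are trained to match it.
widths = [0.25, 0.5, 0.75]          # assumed width multipliers
rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(4, 8))   # stand-in for full network projections
total_loss = 0.0
for w in widths:
    # Stand-in for a sub-network's output; narrower widths deviate more.
    sub_logits = teacher_logits + rng.normal(scale=1.0 - w, size=(4, 8))
    total_loss += kd_loss(sub_logits, teacher_logits)
```

In actual slimmable training, each sampled sub-network shares weights with the full network (narrower widths use a prefix of each layer's channels), so a single backward pass through the summed loss updates all of them at once.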
