Paper Title

What makes instance discrimination good for transfer learning?

Paper Authors

Nanxuan Zhao, Zhirong Wu, Rynson W. H. Lau, Stephen Lin

Paper Abstract

Contrastive visual pretraining based on the instance discrimination pretext task has made significant progress. Notably, recent work on unsupervised pretraining has been shown to surpass its supervised counterpart when finetuning on downstream applications such as object detection and segmentation. It comes as a surprise that image annotations would be better left unused for transfer learning. In this work, we investigate the following problems: What makes instance discrimination pretraining good for transfer learning? What knowledge is actually learned and transferred from these models? From this understanding of instance discrimination, how can we better exploit human annotation labels for pretraining? Our findings are threefold. First, what truly matters for transfer is low-level and mid-level representations, not high-level representations. Second, the intra-category invariance enforced by the traditional supervised model weakens transferability by increasing task misalignment. Finally, supervised pretraining can be strengthened by following an exemplar-based approach without explicit constraints among the instances within the same category.
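To make the two ideas in the abstract concrete, below is a minimal PyTorch-style sketch, not the paper's actual implementation: an InfoNCE loss for the instance discrimination pretext task, plus an optional label mask that drops same-category images from the negative set, which is one way to read the exemplar-based finding (labels are used without forcing intra-category invariance). The function name, signature, and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def instance_discrimination_loss(q, k, labels=None, temperature=0.07):
    """InfoNCE loss for instance discrimination (illustrative sketch).

    q, k: (N, D) embeddings of two augmented views of the same N images;
    (q[i], k[i]) is the positive pair, and the other images in the batch
    act as negatives. If `labels` is given, same-category images are
    masked out of the negative set, so annotations help select negatives
    without explicitly pulling same-class instances together.
    """
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / temperature              # (N, N) cosine similarities
    n = q.size(0)
    targets = torch.arange(n, device=q.device)    # positives lie on the diagonal
    if labels is not None:
        same_class = labels.view(-1, 1) == labels.view(1, -1)
        off_diag = ~torch.eye(n, dtype=torch.bool, device=q.device)
        # Exclude same-class off-diagonal entries from the negatives.
        logits = logits.masked_fill(same_class & off_diag, float("-inf"))
    return F.cross_entropy(logits, targets)
```

In practice, q and k would come from an encoder applied to two augmentations of each image (with a momentum encoder and memory bank in MoCo-style setups), which this in-batch sketch omits.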
