Paper Title
How Well Do Self-Supervised Methods Perform in Cross-Domain Few-Shot Learning?
Paper Authors
Paper Abstract
Cross-domain few-shot learning (CDFSL) remains a largely unsolved problem in computer vision, and self-supervised learning presents a promising solution. Both learning paradigms attempt to alleviate deep networks' dependency on large-scale labeled data. Although self-supervised methods have recently advanced dramatically, their utility for CDFSL remains relatively unexplored. In this paper, we investigate the role of self-supervised representation learning in the context of CDFSL via a thorough evaluation of existing methods. Surprisingly, even with shallow architectures or small training datasets, self-supervised methods can perform favorably compared to existing SOTA methods. Nevertheless, no single self-supervised approach dominates all datasets, indicating that existing self-supervised methods are not universally applicable. In addition, we find that representations extracted by self-supervised methods exhibit stronger robustness than those from supervised methods. Intriguingly, how well self-supervised representations perform on the source domain has little correlation with their applicability to the target domain. As part of our study, we conduct an objective measurement of the performance of six representative classifiers. The results suggest the Prototypical Classifier as the standard evaluation recipe for CDFSL.
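The Prototypical Classifier recommended above can be sketched as follows. This is a minimal NumPy illustration of the general technique (class prototypes as mean support embeddings, nearest-prototype assignment of queries), not the paper's exact evaluation code; the toy embeddings below are invented for demonstration.

```python
# Sketch of a Prototypical Classifier for a few-shot episode:
# each class prototype is the mean of that class's support embeddings,
# and each query is assigned to its nearest prototype.
import numpy as np

def prototypical_classify(support, support_labels, queries):
    """support: (n_support, d) embeddings; support_labels: (n_support,);
    queries: (n_query, d). Returns predicted class labels for the queries."""
    classes = np.unique(support_labels)
    # One prototype per class: mean of that class's support embeddings.
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in classes])
    # Squared Euclidean distance from every query to every prototype.
    dists = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode in a 2-D embedding space (illustrative values,
# standing in for features from a pretrained backbone).
support = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.1], [1.1, 0.9]])
print(prototypical_classify(support, labels, queries))  # → [0 1]
```

In practice the embeddings would come from the frozen pretrained encoder under evaluation; the classifier itself has no trainable parameters, which is part of why it serves well as a standard evaluation recipe.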