Paper Title

Revisiting Mid-Level Patterns for Cross-Domain Few-Shot Recognition

Paper Authors

Zou, Yixiong, Zhang, Shanghang, Yu, JianPeng, Tian, Yonghong, Moura, José M. F.

Paper Abstract

Existing few-shot learning (FSL) methods usually assume that base classes and novel classes come from the same domain (the in-domain setting). In practice, however, it may be infeasible to collect sufficient training samples in some special domains to construct base classes. To solve this problem, cross-domain FSL (CDFSL) has recently been proposed to transfer knowledge from general-domain base classes to special-domain novel classes. Existing CDFSL works mostly focus on transferring between near domains, while rarely considering transfer between distant domains, which is of practical importance since any novel class could appear in real-world applications, and is even more challenging. In this paper, we study a challenging subset of CDFSL in which the novel classes lie in domains distant from the base classes, by revisiting mid-level features, which are more transferable yet under-explored in mainstream FSL works. To boost the discriminability of mid-level features, we propose a residual-prediction task that encourages mid-level features to learn the discriminative information of each sample. Notably, such a mechanism also benefits in-domain FSL and CDFSL between near domains. Therefore, under the same training framework, we provide two types of features for cross-domain and in-domain FSL respectively. Experiments under both settings on six public datasets, including two challenging medical datasets, validate our rationale and demonstrate state-of-the-art performance. Code will be released.
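To make the core idea concrete, here is a minimal NumPy sketch of a residual-prediction auxiliary objective on mid-level features. The abstract does not specify the exact formulation, so every detail below (the linear predictor, using the batch mean as the shared component, the MSE loss) is an illustrative assumption, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mid-level feature batch: 8 samples, 64 dims (shapes assumed).
f_mid = rng.normal(size=(8, 64))

# Stand-in for the class-shared component: the batch mean. The residual is
# then the sample-specific part, i.e. each sample's discriminative information.
shared = f_mid.mean(axis=0, keepdims=True)
residual = f_mid - shared

# A toy linear "residual predictor" head (randomly initialised, untrained).
W = rng.normal(scale=0.1, size=(64, 64))
pred = f_mid @ W

# Auxiliary MSE loss: training this objective would push mid-level features
# to encode per-sample residual information rather than only shared patterns.
aux_loss = float(np.mean((pred - residual) ** 2))
print(aux_loss > 0.0)
```

In an actual training pipeline this auxiliary loss would be added to the main classification loss, so that mid-level layers retain sample-discriminative detail that transfers better across distant domains.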
