Paper Title

Meta-Learning across Meta-Tasks for Few-Shot Learning

Paper Authors

Nanyi Fei, Zhiwu Lu, Yizhao Gao, Jia Tian, Tao Xiang, Ji-Rong Wen

Paper Abstract

Existing meta-learning based few-shot learning (FSL) methods typically adopt an episodic training strategy whereby each episode contains a meta-task. Across episodes, these tasks are sampled randomly and their relationships are ignored. In this paper, we argue that the inter-meta-task relationships should be exploited and that these tasks should be sampled strategically to assist meta-learning. Specifically, we consider the relationships defined over two types of meta-task pairs and propose different strategies to exploit them. (1) Two meta-tasks with disjoint sets of classes: this pair is interesting because it is reminiscent of the relationship between the source seen classes and the target unseen classes, which features a domain gap caused by class differences. A novel learning objective termed meta-domain adaptation (MDA) is proposed to make the meta-learned model more robust to this domain gap. (2) Two meta-tasks with an identical set of classes: this pair is useful because it can be employed to learn models that are robust against poorly sampled few-shot examples. To that end, a novel meta-knowledge distillation (MKD) objective is formulated. There are some mistakes in the experiments; we thus choose to withdraw this paper.
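
The abstract only names the two pairing strategies (disjoint-class pairs for MDA, identical-class pairs for MKD) without implementation details. The sketch below is a minimal, assumption-based illustration of how such meta-task pairs could be sampled from a labelled dataset under a standard N-way K-shot setting; it is not the authors' code. The function names (group_by_class, sample_episode, sample_meta_task_pair) and the 5-way 1-shot configuration are hypothetical, and the MDA/MKD losses themselves are not shown.

```python
# Minimal sketch (assumptions only, not the paper's implementation) of sampling
# meta-task pairs with identical or disjoint class sets for episodic training.
import random
from collections import defaultdict

N_WAY, K_SHOT, Q_QUERY = 5, 1, 15  # assumed few-shot setting


def group_by_class(dataset):
    """dataset: iterable of (feature, label) pairs -> {label: [features]}."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    return by_class


def sample_episode(by_class, classes):
    """Build one N-way K-shot meta-task (support/query split) over the given classes."""
    support, query = [], []
    for c in classes:
        items = random.sample(by_class[c], K_SHOT + Q_QUERY)
        support += [(x, c) for x in items[:K_SHOT]]
        query += [(x, c) for x in items[K_SHOT:]]
    return support, query


def sample_meta_task_pair(by_class, identical_classes):
    """Sample a pair of meta-tasks.

    identical_classes=True : same class set, different few-shot samples
                             (the kind of pair an MKD-style objective could use).
    identical_classes=False: disjoint class sets, mimicking the seen/unseen
                             class gap (the kind of pair an MDA-style objective could use).
    """
    all_classes = list(by_class)
    if identical_classes:
        classes_a = classes_b = random.sample(all_classes, N_WAY)
    else:
        chosen = random.sample(all_classes, 2 * N_WAY)
        classes_a, classes_b = chosen[:N_WAY], chosen[N_WAY:]
    return sample_episode(by_class, classes_a), sample_episode(by_class, classes_b)
```

In this reading, the strategic sampling the abstract argues for amounts to drawing pairs of episodes with a controlled class overlap rather than drawing each episode independently at random.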
