Paper Title

Revisiting Meta-Learning as Supervised Learning

Authors

Chao, Wei-Lun, Ye, Han-Jia, Zhan, De-Chuan, Campbell, Mark, Weinberger, Kilian Q.

Abstract

Recent years have witnessed an abundance of new publications and approaches on meta-learning. This community-wide enthusiasm has sparked great insights but has also created a plethora of seemingly different frameworks, which can be hard to compare and evaluate. In this paper, we aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning. By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning. This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning. For example, we obtain a better understanding of generalization properties, and we can readily transfer well-understood techniques, such as model ensemble, pre-training, joint training, data augmentation, and even nearest neighbor based methods. We provide an intuitive analogy of these methods in the context of meta-learning and show that they give rise to significant improvements in model performance on few-shot learning.
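To make the reduction concrete, here is a minimal NumPy sketch (not the authors' code) of the core idea stated in the abstract: each task contributes one (feature, label) sample whose "feature" is an embedding of the task's support set and whose "label" is a target model for that task, after which meta-learning becomes ordinary supervised regression. The helpers `make_task` and `embed_support_set`, and the least-squares choices throughout, are illustrative assumptions rather than the paper's actual construction.

```python
# Sketch of "meta-learning as supervised learning": treat (task data set, target model)
# pairs as (feature, label) samples and fit a plain supervised regressor over tasks.
# All names and modeling choices here are hypothetical, for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def make_task(n_support=5, dim=4):
    """Sample a toy binary task: a small support set plus the weight vector
    of a target linear model fitted to that task (the supervised 'label')."""
    w_true = rng.normal(size=dim)                      # task-defining direction
    X = rng.normal(size=(n_support, dim))              # support features
    y = (X @ w_true > 0).astype(float)                 # support labels
    # A least-squares fit stands in for the task's target model.
    w_target, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)
    return (X, y), w_target

def embed_support_set(X, y):
    """Turn a variable-size support set into a fixed-length feature vector
    (here: class-conditional means), so whole tasks become ordinary inputs."""
    pos = X[y == 1].mean(axis=0) if (y == 1).any() else np.zeros(X.shape[1])
    neg = X[y == 0].mean(axis=0) if (y == 0).any() else np.zeros(X.shape[1])
    return np.concatenate([pos, neg])

# Meta-training set: (feature, label) = (embedded task data set, target model).
tasks = [make_task() for _ in range(200)]
F = np.stack([embed_support_set(X, y) for (X, y), _ in tasks])
W = np.stack([w for _, w in tasks])

# "Meta-learning" is now supervised regression from task embeddings to models.
A, *_ = np.linalg.lstsq(F, W, rcond=None)

# Meta-test time: embed a new task's support set and map it to a model.
(new_X, new_y), new_w_target = make_task()
w_pred = embed_support_set(new_X, new_y) @ A
cos = w_pred @ new_w_target / (np.linalg.norm(w_pred) * np.linalg.norm(new_w_target) + 1e-12)
print("cosine(predicted model, target model):", float(cos))
```

The generalization claim in the abstract then follows the usual supervised-learning argument: the meta-learner is trained on i.i.d. (task, model) samples, so standard tools such as ensembling, pre-training, or data augmentation can be applied at the level of tasks.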
