Paper Title
Local Nonparametric Meta-Learning
Paper Authors
Paper Abstract
A central goal of meta-learning is to find a learning rule that enables fast adaptation across a set of tasks, by learning the appropriate inductive bias for that set. Most meta-learning algorithms try to find a \textit{global} learning rule that encodes this inductive bias. However, a global learning rule represented by a fixed-size representation is prone to meta-underfitting or meta-overfitting, since the right representational power for a task set is difficult to choose a priori. Even when it is chosen correctly, we show that global, fixed-size representations often fail when confronted with certain types of out-of-distribution tasks, even when the same inductive bias is appropriate. To address these problems, we propose a novel nonparametric meta-learning algorithm that utilizes a meta-trained local learning rule, building on recent ideas in attention-based and functional gradient-based meta-learning. On several meta-regression problems, we show improved meta-generalization results using our local, nonparametric approach, and we achieve state-of-the-art results on the robotics benchmark Omnipush.
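To make the contrast with a global, fixed-size learning rule concrete, here is a minimal, hypothetical sketch (in Python/NumPy) of an attention-based nonparametric predictor in the spirit the abstract describes: each query point forms its own locally weighted estimate from the task's support set, so capacity grows with the data rather than being chosen a priori. The RBF kernel, its lengthscale, and all names below are illustrative assumptions; the paper's actual method relies on a meta-trained local learning rule, which this sketch does not implement.

```python
import numpy as np

def rbf_attention(query_x, support_x, lengthscale=0.5):
    """Attention weights from an RBF kernel over inputs.

    This is a fixed, hand-chosen stand-in for a meta-trained attention
    module; the kernel form and lengthscale are assumptions.
    """
    # Pairwise squared distances: (n_query, n_support)
    sq_dists = np.sum((query_x[:, None, :] - support_x[None, :, :]) ** 2, axis=-1)
    logits = -sq_dists / (2.0 * lengthscale ** 2)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum(axis=1, keepdims=True)  # rows sum to 1

def local_predict(query_x, support_x, support_y, lengthscale=0.5):
    """Nadaraya-Watson-style prediction: each query gets its own locally
    weighted estimate from the support set, with no global fixed-size
    parameter vector to under- or over-fit the task set."""
    w = rbf_attention(query_x, support_x, lengthscale)   # (n_query, n_support)
    return w @ support_y                                 # (n_query, dim_y)

# Toy meta-regression episode: adapt to a sine task from a few support points.
rng = np.random.default_rng(0)
support_x = rng.uniform(-3.0, 3.0, size=(10, 1))
support_y = np.sin(support_x)
query_x = np.linspace(-3.0, 3.0, 5).reshape(-1, 1)
print(local_predict(query_x, support_x, support_y))
```

The design point this toy example illustrates is that adaptation happens per query through the support set itself, rather than through a single fixed-size representation shared across all tasks.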