Paper Title
Model-Agnostic Learning to Meta-Learn
Paper Authors
Paper Abstract
In this paper, we propose a learning algorithm that enables a model to quickly exploit commonalities among related tasks from an unseen task distribution and then rapidly adapt to specific tasks from that same distribution. We investigate how learning with different task distributions can first improve adaptability, by meta-finetuning on related tasks, and then improve goal-task generalization through finetuning. Synthetic regression experiments validate the intuition that learning to meta-learn improves adaptability and, subsequently, generalization. Experiments on more complex image classification, continual regression, and reinforcement learning tasks demonstrate that learning to meta-learn generally improves task-specific adaptation. The methodology, setup, and hypotheses in this proposal were positively evaluated by peer review before the conclusive experiments were carried out.
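To make the two-stage procedure described above concrete, the following is a minimal, self-contained sketch of one possible realization on synthetic sine regression: a Reptile-style first-order outer loop stands in for the meta-finetuning stage, followed by ordinary gradient finetuning on a single target task. This is an illustrative approximation under stated assumptions, not the paper's algorithm; all function names (`sample_sine_task`, `inner_adapt`, `meta_finetune`) and hyperparameters are hypothetical.

```python
# Minimal first-order meta-learning sketch on synthetic sine regression,
# followed by finetuning on a held-out target task. Illustrative only:
# a Reptile-style outer loop, not the paper's exact method.
import torch
import torch.nn as nn


def sample_sine_task(amplitude_range=(0.1, 5.0), phase_range=(0.0, 3.14)):
    """Sample one regression task y = A * sin(x + phi)."""
    A = torch.empty(1).uniform_(*amplitude_range)
    phi = torch.empty(1).uniform_(*phase_range)

    def sample_batch(n=10):
        x = torch.empty(n, 1).uniform_(-5.0, 5.0)
        return x, A * torch.sin(x + phi)

    return sample_batch


def make_model():
    return nn.Sequential(nn.Linear(1, 40), nn.ReLU(),
                         nn.Linear(40, 40), nn.ReLU(),
                         nn.Linear(40, 1))


def inner_adapt(model, sample_batch, steps=5, lr=0.01):
    """Task-specific adaptation on a copy of the model (inner loop / finetuning)."""
    adapted = make_model()
    adapted.load_state_dict(model.state_dict())
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        x, y = sample_batch()
        loss = nn.functional.mse_loss(adapted(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted


def meta_finetune(model, meta_steps=1000, meta_lr=0.1):
    """Outer loop: repeatedly adapt to sampled tasks and move the
    meta-parameters toward the adapted weights (Reptile-style update)."""
    for _ in range(meta_steps):
        task = sample_sine_task()
        adapted = inner_adapt(model, task)
        with torch.no_grad():
            for p, q in zip(model.parameters(), adapted.parameters()):
                p += meta_lr * (q - p)  # interpolate toward adapted weights
    return model


if __name__ == "__main__":
    # Stage 1: meta-finetune on a distribution of related sine tasks.
    meta_model = meta_finetune(make_model())
    # Stage 2: finetune on one specific target task from the same distribution.
    target_task = sample_sine_task()
    final_model = inner_adapt(meta_model, target_task, steps=20)
    x, y = target_task(n=100)
    print("target-task MSE after finetuning:",
          nn.functional.mse_loss(final_model(x), y).item())
```

The first-order update is chosen here only to keep the sketch short; a second-order, MAML-style outer loop that backpropagates through the inner adaptation steps would follow the same two-stage structure.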