Paper Title
Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters
Paper Authors
Paper Abstract
Although model-agnostic meta-learning (MAML) is a very successful algorithm in meta-learning practice, it can incur a high computational cost because it updates all model parameters in both the inner loop of task-specific adaptation and the outer loop of meta-initialization training. A more efficient algorithm, ANIL (almost no inner loop), was recently proposed by Raghu et al. (2019); it adapts only a small subset of parameters in the inner loop and thus has a substantially lower computational cost than MAML, as demonstrated by extensive experiments. However, the theoretical convergence of ANIL has not yet been studied. In this paper, we characterize the convergence rate and the computational complexity of ANIL under two representative inner-loop loss geometries, i.e., strong convexity and nonconvexity. Our results show that such geometric properties can significantly affect the overall convergence performance of ANIL. For example, ANIL achieves a faster convergence rate for a strongly-convex inner-loop loss as the number $N$ of inner-loop gradient descent steps increases, but a slower convergence rate for a nonconvex inner-loop loss. Moreover, our complexity analysis provides a theoretical quantification of the efficiency improvement of ANIL over MAML. Experiments on standard few-shot meta-learning benchmarks validate our theoretical findings.
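To make the MAML/ANIL distinction in the abstract concrete, here is a minimal sketch (not the authors' implementation) of the two inner loops on a toy two-layer model, assuming a "body"/"head" parameter split and using JAX for differentiation; all names (predict, task_loss, anil_inner_loop, etc.) and the toy data are illustrative assumptions.

```python
# Minimal sketch: ANIL adapts only the last-layer "head" in the inner loop,
# whereas MAML adapts all parameters. The outer loop updates the full
# meta-initialization in both cases.
import jax
import jax.numpy as jnp

def predict(params, x):
    # Toy two-layer model: nonlinear "body" features followed by a linear "head".
    features = jnp.tanh(x @ params['body'])
    return features @ params['head']

def task_loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def anil_inner_loop(params, x, y, alpha=0.1, n_steps=5):
    # ANIL: only the head is adapted; the body stays at its meta-initialization.
    head = params['head']
    for _ in range(n_steps):
        grad_head = jax.grad(
            lambda h: task_loss({'body': params['body'], 'head': h}, x, y))(head)
        head = head - alpha * grad_head
    return {'body': params['body'], 'head': head}

def maml_inner_loop(params, x, y, alpha=0.1, n_steps=5):
    # MAML: all parameters are adapted in the inner loop.
    for _ in range(n_steps):
        grads = jax.grad(task_loss)(params, x, y)
        params = jax.tree_util.tree_map(lambda p, g: p - alpha * g, params, grads)
    return params

def meta_loss(params, support, query, inner_loop):
    # Outer-loop objective: query loss after inner-loop adaptation on support data.
    adapted = inner_loop(params, *support)
    return task_loss(adapted, *query)

# Toy usage: one meta-gradient step on a single synthetic task.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = {'body': jax.random.normal(k1, (3, 4)) * 0.1, 'head': jnp.zeros(4)}
x_s, y_s = jax.random.normal(k1, (10, 3)), jnp.ones(10)
x_q, y_q = jax.random.normal(k2, (10, 3)), jnp.ones(10)
meta_grads = jax.grad(meta_loss)(params, (x_s, y_s), (x_q, y_q), anil_inner_loop)
params = jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, meta_grads)
```

In this sketch, the saving described in the abstract comes from differentiating the inner loop only with respect to the head parameters; swapping anil_inner_loop for maml_inner_loop recovers the full-parameter (MAML-style) adaptation.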