Paper Title
Multi-step Estimation for Gradient-based Meta-learning
Paper Authors
Paper Abstract
Gradient-based meta-learning approaches have been successful in few-shot learning, transfer learning, and a wide range of other domains. Despite their efficacy and simplicity, the burden of computing the Hessian matrix, with its large memory footprint, is a critical challenge in large-scale applications. To tackle this issue, we propose a simple and straightforward method that reduces the cost by reusing the same gradient across a window of inner steps. We describe the dynamics of the multi-step estimation in the Lagrangian formalism and discuss how to reduce the evaluation of second-order derivatives when estimating the dynamics. To validate our method, we run experiments on meta-transfer learning and few-shot learning tasks under multiple settings. The meta-transfer experiment highlights the applicability of training meta-networks, where other approximations are limited. For few-shot learning, we evaluate time and memory complexity against popular baselines. We show that our method significantly reduces training time and memory usage while maintaining competitive accuracy, and even outperforms the baselines in some cases.
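The abstract's core idea is to recompute the inner-loop gradient only once per window of steps and reuse it for the remaining steps. The following is a minimal sketch, not the authors' code, assuming a MAML-style inner loop in plain PyTorch; the names `inner_adapt`, `inner_loss`, `window`, and `inner_lr` are illustrative.

```python
# Minimal sketch of gradient reuse across a window of inner steps (assumed
# MAML-style setup); `inner_loss`, `window`, `inner_lr` are hypothetical names.
import torch

def inner_adapt(params, inner_loss, n_steps=5, window=2, inner_lr=0.01):
    """Adapt `params` (a list of tensors) with `n_steps` SGD updates,
    recomputing the gradient only at the start of each window and
    reusing it for the remaining steps in that window."""
    grads = None
    for step in range(n_steps):
        if step % window == 0:
            # Recompute the gradient; create_graph=True keeps the graph
            # needed for the outer (meta) gradient, which is where the
            # second-order (Hessian-vector) cost normally accumulates.
            loss = inner_loss(params)
            grads = torch.autograd.grad(loss, params, create_graph=True)
        # Reuse the cached gradient for every step inside the window.
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    return params
```

Under these assumptions, the number of differentiable backward passes drops from `n_steps` to roughly `n_steps / window`, which is the source of the time and memory savings the abstract claims.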