Title
Meta-Learning in Neural Networks: A Survey
Authors
Abstract
The field of meta-learning, or learning-to-learn, has seen a dramatic rise in interest in recent years. Contrary to conventional approaches to AI where tasks are solved from scratch using a fixed learning algorithm, meta-learning aims to improve the learning algorithm itself, given the experience of multiple learning episodes. This paradigm provides an opportunity to tackle many conventional challenges of deep learning, including data and computation bottlenecks, as well as generalization. This survey describes the contemporary meta-learning landscape. We first discuss definitions of meta-learning and position it with respect to related fields, such as transfer learning and hyperparameter optimization. We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods today. We survey promising applications and successes of meta-learning such as few-shot learning and reinforcement learning. Finally, we discuss outstanding challenges and promising areas for future research.
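The episodic "learning-to-learn" setup described above can be sketched concretely: an inner loop adapts to each sampled task, and an outer loop updates the initialization so that adaptation works well. The following is a minimal first-order MAML-style illustration on toy scalar regression tasks, not the survey's own method; the task distribution, model, and learning rates are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    # MSE loss for the scalar linear model f(x) = w * x, and its gradient in w
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(x * err)

def sample_task():
    # each "task" is regression onto y = a * x with a task-specific slope a
    a = rng.uniform(0.5, 2.0)
    def batch(n=10):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, a * x
    return batch

w = 0.0                  # meta-learned initialization (a single scalar weight)
alpha, beta = 0.1, 0.01  # inner-loop and outer-loop learning rates (assumed)

for step in range(2000):
    batch = sample_task()
    x_s, y_s = batch()    # support set: used for task adaptation
    x_q, y_q = batch()    # query set: evaluates the adapted model
    _, g_s = loss_and_grad(w, x_s, y_s)
    w_adapted = w - alpha * g_s                    # inner-loop gradient step
    _, g_q = loss_and_grad(w_adapted, x_q, y_q)
    w = w - beta * g_q                             # first-order outer update

print(f"meta-learned initialization: w = {w:.3f}")
```

The outer update here uses the first-order approximation (ignoring the gradient through the inner step), which keeps the sketch short; the survey's taxonomy covers both this and exact second-order variants, among many other meta-learning strategies.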