Paper Title
Uniform Priors for Data-Efficient Transfer
Paper Authors
Paper Abstract
Deep Neural Networks have shown great promise on a variety of downstream applications, but their ability to adapt and generalize to new data and tasks remains a challenge. However, the ability to perform few- or zero-shot adaptation to novel tasks is important for the scalability and deployment of machine learning models. It is therefore crucial to understand what makes for good, transferable features in deep networks that best allow for such adaptation. In this paper, we shed light on this by showing that the features that are most transferable have high uniformity in the embedding space, and we propose a uniformity regularization scheme that encourages better transfer and feature reuse. We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data, for which we conduct a thorough experimental study covering four relevant and distinct domains: few-shot Meta-Learning, Deep Metric Learning, Zero-Shot Domain Adaptation, and Out-of-Distribution classification. Across all experiments, we show that uniformity regularization consistently offers benefits over baseline methods and is able to achieve state-of-the-art performance in Deep Metric Learning and Meta-Learning.
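The abstract does not specify the exact form of the uniformity regularizer. A minimal sketch of one common formulation, assuming PyTorch and the Gaussian-potential uniformity loss of Wang & Isola (the function name `uniformity_loss`, the temperature `t`, and the weight `lam` are illustrative, not taken from the paper):

```python
import torch

def uniformity_loss(embeddings: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Illustrative uniformity regularizer: log of the mean Gaussian
    potential over all pairs of L2-normalized embeddings. Lower values
    indicate features spread more uniformly on the unit hypersphere."""
    z = torch.nn.functional.normalize(embeddings, dim=1)  # project onto unit sphere
    sq_dists = torch.pdist(z, p=2).pow(2)                 # pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()

# Hypothetical usage: add the term to the task loss with some weight `lam`.
# total_loss = task_loss + lam * uniformity_loss(features)
```

Under this sketch, embeddings that collapse to a single point attain the maximum value (0), while embeddings spread over the sphere drive the loss negative, so minimizing it encourages uniformity.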