Paper Title

XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning

Authors

Sung Whan Yoon, Do-Yeon Kim, Jun Seo, Jaekyun Moon

Abstract

Learning novel concepts while preserving prior knowledge is a long-standing challenge in machine learning. The challenge gets greater when a novel task is given with only a few labeled examples, a problem known as incremental few-shot learning. We propose XtarNet, which learns to extract task-adaptive representation (TAR) for facilitating incremental few-shot learning. The method utilizes a backbone network pretrained on a set of base categories while also employing additional modules that are meta-trained across episodes. Given a new task, the novel feature extracted from the meta-trained modules is mixed with the base feature obtained from the pretrained model. The process of combining two different features provides TAR and is also controlled by meta-trained modules. The TAR contains effective information for classifying both novel and base categories. The base and novel classifiers quickly adapt to a given task by utilizing the TAR. Experiments on standard image datasets indicate that XtarNet achieves state-of-the-art incremental few-shot learning performance. The concept of TAR can also be used in conjunction with existing incremental few-shot learning methods; extensive simulation results in fact show that applying TAR enhances the known methods significantly.
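The abstract describes mixing a base feature (from the pretrained backbone) with a novel feature (from meta-trained modules), with the combination itself controlled by meta-trained modules. A minimal NumPy sketch of that idea follows; all dimensions, weight matrices, and the sigmoid-gated mixer are hypothetical stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT = 32, 64  # hypothetical input and feature dimensions

# Stand-ins for the paper's components (weights are random here; in
# XtarNet the backbone is pretrained on base classes and the extra
# modules are meta-trained across episodes).
W_base = rng.standard_normal((D_IN, D_FEAT)) * 0.1        # pretrained backbone
W_novel = rng.standard_normal((D_IN, D_FEAT)) * 0.1       # meta-trained module
W_gate = rng.standard_normal((2 * D_FEAT, D_FEAT)) * 0.1  # meta-trained mixer

def task_adaptive_representation(x):
    """Mix base and novel features into a task-adaptive representation (TAR).

    The mixing weights come from a small gating network conditioned on
    both features, mirroring the abstract's statement that the combining
    process is also controlled by meta-trained modules.
    """
    f_base = x @ W_base    # base feature from the pretrained model
    f_novel = x @ W_novel  # novel feature from the meta-trained module
    gate_in = np.concatenate([f_base, f_novel], axis=-1)
    g = 1.0 / (1.0 + np.exp(-(gate_in @ W_gate)))  # sigmoid gate in (0, 1)
    return g * f_base + (1.0 - g) * f_novel        # elementwise mixture

# Example: a support set of 5 labeled examples from a new episode.
support = rng.standard_normal((5, D_IN))
tar = task_adaptive_representation(support)
print(tar.shape)  # → (5, 64)
```

In this sketch the TAR is a per-dimension convex combination of the two features, so base-class information is preserved wherever the gate keeps it; both base and novel classifiers could then be built on top of `tar`.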
