Paper Title
MetaConcept: Learn to Abstract via Concept Graph for Weakly-Supervised Few-Shot Learning
Paper Authors
Paper Abstract
Meta-learning has proven to be an effective framework for addressing few-shot learning problems. The key challenge is how to minimize the generalization error of the base learner across tasks. In this paper, we explore concept hierarchy knowledge by leveraging a concept graph, and take the concept graph as explicit meta-knowledge for the base learner, instead of learning implicit meta-knowledge, so as to boost the classification performance of meta-learning on weakly-supervised few-shot learning problems. To this end, we propose a novel meta-learning framework, called MetaConcept, which learns to abstract concepts via the concept graph. Specifically, we first propose a novel regularization with multi-level conceptual abstraction that constrains the meta-learner to learn to abstract concepts via the concept graph (i.e., identifying concepts from low to high levels). Then, we propose a meta concept inference network as the meta-learner for the base learner, which aims to quickly adapt to a novel task through the joint inference of the abstract concepts and a few annotated samples. We have conducted extensive experiments on two weakly-supervised few-shot learning benchmarks, namely WS-ImageNet-Pure and WS-ImageNet-Mix. The experimental results show that 1) the proposed MetaConcept outperforms state-of-the-art methods, with an improvement of 2% to 6% in classification accuracy; and 2) the proposed MetaConcept can yield good performance even when trained only on weakly-labeled datasets.
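To make the two components named in the abstract more concrete, here is a minimal PyTorch sketch of (a) a multi-level conceptual abstraction regularizer and (b) a meta concept inference network that infers task-specific classifier weights from concept embeddings plus a few support samples. All class names, layer shapes, the per-level classifier heads, and the toy episode setup are illustrative assumptions, not the authors' released architecture.

```python
# Hedged sketch of the two MetaConcept components described in the abstract.
# Names, shapes, and the toy concept-graph levels below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelConceptLoss(nn.Module):
    """Sketch of the multi-level conceptual abstraction regularization:
    the same embedding is asked to classify each sample at every level of
    the concept graph, from fine (low-level) to coarse (high-level)."""
    def __init__(self, feat_dim, num_concepts_per_level):
        super().__init__()
        # One linear concept classifier per abstraction level (an assumption).
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, n) for n in num_concepts_per_level
        )

    def forward(self, features, level_labels):
        # level_labels[l] holds each sample's concept id at graph level l.
        return sum(
            F.cross_entropy(head(features), level_labels[l])
            for l, head in enumerate(self.heads)
        )

class MetaConceptInference(nn.Module):
    """Sketch of the meta concept inference network: classifier weights for
    a novel task are inferred jointly from the task's abstract-concept
    embeddings and prototypes of its few annotated support samples."""
    def __init__(self, feat_dim, concept_dim):
        super().__init__()
        self.infer = nn.Sequential(
            nn.Linear(feat_dim + concept_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, support_feats, support_labels, concept_embs, query_feats):
        n_way = concept_embs.size(0)
        # One prototype per class from the few annotated support samples.
        protos = torch.stack([
            support_feats[support_labels == c].mean(dim=0)
            for c in range(n_way)
        ])
        # Joint inference: fuse each prototype with its concept embedding.
        weights = self.infer(torch.cat([protos, concept_embs], dim=1))
        return query_feats @ weights.t()  # class logits for the query set

# Toy 5-way 5-shot episode with random tensors, purely for shape-checking.
feat_dim, concept_dim, n_way = 64, 32, 5
reg = MultiLevelConceptLoss(feat_dim, num_concepts_per_level=[20, 10, 5])
meta = MetaConceptInference(feat_dim, concept_dim)
support = torch.randn(25, feat_dim)                 # 5 shots per class
labels = torch.arange(n_way).repeat_interleave(5)   # support class ids
level_labels = [torch.randint(0, n, (25,)) for n in (20, 10, 5)]
logits = meta(support, labels, torch.randn(n_way, concept_dim),
              torch.randn(15, feat_dim))            # 15 query samples
loss = reg(support, level_labels) \
     + F.cross_entropy(logits, torch.randint(0, n_way, (15,)))
```

In this reading, the regularizer supplies the "learn to abstract" signal from weak (coarse-level) labels, while the inference network is what lets the base learner adapt to a novel task from only a few annotated samples; how the two are trained jointly is left to the paper itself.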