Paper Title

Self-Augmentation: Generalizing Deep Networks to Unseen Classes for Few-Shot Learning

Paper Authors

Jin-Woo Seo, Hong-Gyu Jung, Seong-Whan Lee

Paper Abstract

Few-shot learning aims to classify unseen classes with a few training examples. While recent works have shown that standard mini-batch training with a carefully designed training strategy can improve generalization to unseen classes, well-known problems in deep networks, such as memorizing training statistics, have been less explored for few-shot learning. To tackle this issue, we propose self-augmentation, which consolidates self-mix and self-distillation. Specifically, we exploit a regional dropout technique called self-mix, in which a patch of an image is replaced with other values from the same image. Then, we employ a backbone network with auxiliary branches, each with its own classifier, to enforce knowledge sharing. Lastly, we present a local representation learner to further exploit the few training examples available for unseen classes. Experimental results show that the proposed method outperforms state-of-the-art methods on prevalent few-shot benchmarks and improves generalization ability.
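To make the self-mix idea concrete, below is a minimal NumPy sketch of a self-mix-style regional dropout, assuming the dropped patch is filled with a same-size patch copied from another random location in the same image; the paper's exact patch-selection rule may differ, and the names `self_mix` and `patch_size` are illustrative only.

```python
import numpy as np

def self_mix(image, patch_size=16, rng=None):
    """Regional dropout within a single image: a random patch is
    overwritten with a same-size patch copied from another random
    location in the same image (no external image is mixed in)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    if h < patch_size or w < patch_size:
        raise ValueError("image is smaller than patch_size")
    # Random top-left corners for the source and destination patches.
    ys, xs = rng.integers(0, h - patch_size + 1), rng.integers(0, w - patch_size + 1)
    yd, xd = rng.integers(0, h - patch_size + 1), rng.integers(0, w - patch_size + 1)
    mixed = image.copy()
    mixed[yd:yd + patch_size, xd:xd + patch_size] = \
        image[ys:ys + patch_size, xs:xs + patch_size]
    return mixed

# Example: augment a dummy 84x84 RGB image (a typical few-shot input size).
augmented = self_mix(np.zeros((84, 84, 3), dtype=np.float32))
```

Because the replacement values come from the image itself, the augmentation suppresses local evidence without introducing out-of-distribution pixels, which is the property the abstract contrasts with ordinary regional dropout.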
