Paper Title
'Less Than One'-Shot Learning: Learning N Classes From M<N Samples
Paper Authors
Ilia Sucholutsky, Matthias Schonlau
Paper Abstract
Deep neural networks require large training sets but suffer from high computational cost and long training times. Training on much smaller training sets while maintaining nearly the same accuracy would be very beneficial. In the few-shot learning setting, a model must learn a new class given only a small number of samples from that class. One-shot learning is an extreme form of few-shot learning where the model must learn a new class from a single example. We propose the `less than one'-shot learning task, where models must learn $N$ new classes given only $M<N$ examples, and we show that this is achievable with the help of soft labels. We use a soft-label generalization of the k-Nearest Neighbors classifier to explore the intricate decision landscapes that can be created in the `less than one'-shot learning setting. We analyze these decision landscapes to derive theoretical lower bounds for separating $N$ classes using $M<N$ soft-label samples and investigate the robustness of the resulting systems.
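To make the soft-label mechanism concrete, here is a minimal sketch of a soft-label generalization of kNN in the spirit of the classifier the abstract describes. The function name `soft_label_knn`, the inverse-distance weighting, and the toy prototypes are illustrative assumptions rather than the paper's exact formulation; the sketch only demonstrates how $M=2$ soft-label samples can induce $N=3$ decision regions.

```python
import numpy as np

def soft_label_knn(X_proto, Y_soft, X_query, k=2, eps=1e-12):
    """Distance-weighted soft-label kNN (illustrative sketch, not the
    paper's exact rule): sum the k nearest prototypes' soft-label
    vectors, weighted by inverse distance, and predict the class with
    the largest total mass.

    X_proto: (M, d) prototype locations
    Y_soft:  (M, N) soft labels; each row is a distribution over N classes
    X_query: (Q, d) points to classify
    Returns hard predictions of shape (Q,).
    """
    preds = []
    for x in X_query:
        d = np.linalg.norm(X_proto - x, axis=1)      # distances to prototypes
        nn = np.argsort(d)[:k]                       # k nearest prototypes
        w = 1.0 / (d[nn] + eps)                      # inverse-distance weights
        mass = (w[:, None] * Y_soft[nn]).sum(axis=0) # weighted label mass per class
        preds.append(int(np.argmax(mass)))
    return np.array(preds)

# Toy 'less than one'-shot instance: M=2 prototypes, N=3 classes.
# The middle class owns no prototype of its own; it exists only through
# the probability mass the two soft labels share.
X_proto = np.array([[0.0], [1.0]])
Y_soft  = np.array([[0.6, 0.4, 0.0],   # leans class 0, partial class 1
                    [0.0, 0.4, 0.6]])  # leans class 2, partial class 1
queries = np.array([[-0.2], [0.5], [1.2]])
print(soft_label_knn(X_proto, Y_soft, queries))  # -> [0 1 2]
```

In this sketch the inverse-distance weighting supplies the needed distance sensitivity: an unweighted sum over all $M$ prototypes would be constant everywhere, and the prototype-free middle class could never win a region of its own.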