Paper Title
Sample-based Regularization: A Transfer Learning Strategy Toward Better Generalization
Paper Authors
Paper Abstract
Training a deep neural network with a small amount of data is a challenging problem, as the network is vulnerable to overfitting. However, collecting a large number of samples is one of the practical difficulties we often face. Transfer learning is a cost-effective solution to this problem. By using a source model trained on a large-scale dataset, the target model can alleviate the overfitting that originates from the lack of training data. Relying on the generalization ability of the source model, several methods have been proposed that use the source knowledge during the whole training procedure. However, this is likely to restrict the potential of the target model, and some of the knowledge transferred from the source can interfere with the training procedure. To improve the generalization performance of the target model with a few training samples, we propose a regularization method called sample-based regularization (SBR), which does not rely on the source's knowledge during training. With SBR, we suggest a new training framework for transfer learning. Experimental results show that our framework outperforms existing methods in various configurations.
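The abstract contrasts SBR with methods that keep the source knowledge active throughout training. The following PyTorch sketch is a hedged illustration of that baseline family, not of SBR itself, whose mechanism is not described here: it fine-tunes a pretrained ResNet-18 while penalizing the distance of the shared weights from the source weights at every step, in the spirit of L2-SP-style regularization. The model choice, the 10-class target task, and the reg_strength value are illustrative assumptions.

# Illustrative baseline: keep source knowledge active for the whole training run
# by pulling shared weights back toward the pretrained source weights.
# This is NOT the paper's SBR method; it is the kind of approach the abstract
# argues can restrict the target model's potential.
import copy
import torch
import torch.nn as nn
import torchvision.models as models

# Source model: pretrained on a large-scale dataset (ImageNet).
source_model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Target model starts from the source weights and is fine-tuned on a small dataset.
target_model = copy.deepcopy(source_model)
target_model.fc = nn.Linear(target_model.fc.in_features, 10)  # hypothetical 10-class target task

# Frozen snapshot of the source weights (excluding the replaced classifier head).
source_params = {name: p.clone().detach()
                 for name, p in source_model.named_parameters()
                 if not name.startswith("fc")}

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(target_model.parameters(), lr=1e-3, momentum=0.9)
reg_strength = 0.01  # illustrative value

def training_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(target_model(images), labels)
    # Source knowledge is used at every step: penalize deviation of the shared
    # weights from their pretrained values.
    sp_penalty = sum(((p - source_params[name]) ** 2).sum()
                     for name, p in target_model.named_parameters()
                     if name in source_params)
    (loss + reg_strength * sp_penalty).backward()
    optimizer.step()
    return loss.item()

SBR, by contrast, is described as not relying on the source's knowledge during training, so the per-step penalty above is exactly the kind of coupling it avoids.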