Paper Title
Self-Challenging Improves Cross-Domain Generalization
Paper Authors
Paper Abstract
Convolutional Neural Networks (CNNs) conduct image classification by activating dominant features that correlate with labels. When the training and testing data follow similar distributions, their dominant features are similar, which usually yields decent performance on the testing data. This performance nonetheless degrades when the model is tested on samples from different distributions, leading to the challenge of cross-domain image classification. We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data. RSC iteratively challenges (discards) the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels. This process appears to activate feature representations applicable to out-of-domain data, without prior knowledge of the new domain and without learning extra network parameters. We present theoretical properties and conditions of RSC for improving cross-domain generalization. The experiments endorse the simple, effective, and architecture-agnostic nature of our RSC method.
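The abstract describes RSC at a high level: rank feature units by how strongly they drive the ground-truth prediction, mute the top-ranked ones, and train the network on what remains. Below is a minimal PyTorch-style sketch of this idea, assuming pooled (B, C) backbone features and a linear classifier head; the function name `rsc_loss` and the `drop_frac` parameter are illustrative, and the paper's full procedure (e.g., spatial/channel-wise muting variants and batch-level selection) is not reproduced here.

```python
import torch
import torch.nn.functional as F

def rsc_loss(features, labels, classifier, drop_frac=1/3):
    """Sketch of an RSC-style training loss (assumptions noted above).

    features:   (B, C) pooled CNN features, requires_grad during training.
    labels:     (B,)   ground-truth class indices.
    classifier: the final linear layer mapping features to logits.
    """
    logits = classifier(features)

    # Sensitivity of each sample's ground-truth logit to each feature unit;
    # retain_graph keeps the backbone graph alive for the final backward pass.
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, features, retain_graph=True)[0]

    with torch.no_grad():
        # "Dominant" units: the top `drop_frac` fraction by gradient, per sample.
        thresh = torch.quantile(grads, 1.0 - drop_frac, dim=1, keepdim=True)
        mask = (grads < thresh).float()  # 1 = keep, 0 = challenge (mute)

    # Force the network to classify from the remaining, non-dominant features.
    return F.cross_entropy(classifier(features * mask), labels)
```

Because the mask is computed under `no_grad` and applied multiplicatively, gradients from the final loss still flow through the unmuted features into the backbone, which is what lets the network learn to rely on the remaining, less dominant features.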