Paper Title

Learning to Learn Single Domain Generalization

Paper Authors

Fengchun Qiao, Long Zhao, Xi Peng

Paper Abstract

We are concerned with a worst-case scenario in model generalization, in the sense that a model aims to perform well on many unseen domains while only one single domain is available for training. We propose a new method named adversarial domain augmentation to solve this Out-of-Distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. Detailed theoretical analysis is provided to justify our formulation, while extensive experiments on multiple benchmark datasets indicate its superior performance in tackling single domain generalization.
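The core idea of adversarial domain augmentation — perturbing training samples by gradient ascent on the task loss to create "fictitious" yet "challenging" inputs — can be illustrated with a minimal NumPy sketch. This is a hedged illustration only, not the authors' implementation: it uses a hypothetical logistic-regression model with a fixed-step FGSM-style ascent, and omits the paper's meta-learning scheme and WAE relaxation.

```python
import numpy as np

def adversarial_augment(x, y, w, b, lr=0.5, steps=5):
    """Create a 'fictitious' sample by gradient ascent on the
    binary cross-entropy loss of a logistic-regression model
    (a hypothetical stand-in for the task model in the paper).

    x: input features, y: scalar label in {0, 1},
    w, b: fixed model parameters, lr/steps: ascent schedule.
    """
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
        grad_x = (p - y) * w           # d(BCE)/dx for this model
        x_adv += lr * grad_x           # ascend the loss -> harder sample
    return x_adv
```

Each step moves the sample in the direction that increases the model's loss, so the augmented point is, by construction, harder for the current model than the original; the full method additionally constrains how far the fictitious domain may drift from the source domain.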
