Paper Title

GCISG: Guided Causal Invariant Learning for Improved Syn-to-real Generalization

Authors

Gilhyun Nam, Gyeongjae Choi, Kyungmin Lee

Abstract

Training a deep learning model with artificially generated data can be an alternative when training data are scarce, yet it suffers from poor generalization performance due to a large domain gap. In this paper, we characterize the domain gap by using a causal framework for data generation. We assume that the real and synthetic data share common content variables but differ in style variables. Thus, a model trained on a synthetic dataset may generalize poorly because it learns the nuisance style variables. To address this, we propose causal invariance learning, which encourages the model to learn a style-invariant representation that enhances syn-to-real generalization. Furthermore, we propose a simple yet effective feature distillation method that prevents catastrophic forgetting of semantic knowledge of the real domain. In sum, we refer to our method as Guided Causal Invariant Syn-to-real Generalization (GCISG), which effectively improves the performance of syn-to-real generalization. We empirically verify the validity of the proposed methods; in particular, our method achieves state-of-the-art results on visual syn-to-real domain generalization tasks such as image classification and semantic segmentation.
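The abstract describes two training signals on top of the usual task loss: a causal invariance term that makes features insensitive to style perturbations, and a feature distillation term that keeps features close to a real-domain-pretrained teacher so real-domain semantics are not forgotten. The following is a minimal PyTorch sketch of how such an objective could be wired together; the backbone choice (ResNet-18), the use of MSE for both the invariance and distillation terms, and the names `encoder`, `guide`, `training_step`, `lam_inv`, and `lam_distill` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
import torchvision

def make_encoder():
    # ImageNet-pretrained ResNet-18 trunk; the classification head is dropped
    # so the module returns a 512-d feature vector.
    backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    backbone.fc = torch.nn.Identity()
    return backbone

encoder = make_encoder()            # trainable encoder, trained on synthetic data
guide = make_encoder().eval()       # frozen real-domain (ImageNet) teacher
for p in guide.parameters():
    p.requires_grad_(False)

classifier = torch.nn.Linear(512, 10)   # assume a 10-class task for illustration
opt = torch.optim.SGD(
    list(encoder.parameters()) + list(classifier.parameters()), lr=0.01
)

def training_step(x, x_styled, labels, lam_inv=1.0, lam_distill=1.0):
    """x_styled is a style-perturbed view of x (e.g. color jitter or stylization)
    that shares content with x but differs in style, per the causal assumption."""
    z = encoder(x)
    z_styled = encoder(x_styled)

    # Task loss on the synthetic images.
    loss_task = F.cross_entropy(classifier(z), labels)

    # Causal invariance loss (illustrative choice): pull the two views' features
    # together so the representation ignores the nuisance style variable.
    loss_inv = F.mse_loss(z, z_styled)

    # Guided feature distillation (illustrative choice): keep features close to
    # the frozen real-domain teacher to avoid forgetting real-domain semantics.
    with torch.no_grad():
        z_guide = guide(x)
    loss_distill = F.mse_loss(z, z_guide)

    loss = loss_task + lam_inv * loss_inv + lam_distill * loss_distill
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The intended division of labor in this sketch: the invariance term removes style information from the learned representation, while the distillation term anchors the representation to a teacher trained on real images, which is one plausible reading of the "guided" part of the method's name.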
