Paper Title

Class Distribution Alignment for Adversarial Domain Adaptation

Authors

Wanqi Yang, Tong Ling, Chengmei Yang, Lei Wang, Yinghuan Shi, Luping Zhou, Ming Yang

Abstract

Most existing unsupervised domain adaptation methods mainly focus on aligning the marginal distributions of samples between the source and target domains. This setting does not sufficiently consider the class distribution information between the two domains, which could adversely affect the reduction of the domain gap. To address this issue, we propose a novel approach called Conditional ADversarial Image Translation (CADIT) to explicitly align the class distributions given samples between the two domains. It integrates a discriminative structure-preserving loss and a joint adversarial generation loss. The former effectively prevents undesired label-flipping during the whole process of image translation, while the latter maintains the joint distribution alignment of images and labels. Furthermore, our approach enforces the classification consistency of target-domain images before and after adaptation to aid the classifier training in both domains. Extensive experiments were conducted on multiple benchmark datasets, including Digits, Faces, Scenes and Office31, showing that our approach achieves superior classification performance in the target domain compared to state-of-the-art methods. Both qualitative and quantitative results also support our motivation that aligning the class distributions can indeed improve domain adaptation.
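The abstract only names the loss terms, so as a rough illustration here is a minimal PyTorch-style sketch of two of the ideas it describes: a joint adversarial loss in which a discriminator scores (image, label) pairs, and a classification-consistency term on target images before and after translation. The module names (`disc`, `gen`, `clf`) and the exact formulation are assumptions made for this sketch, not CADIT's published implementation.

```python
import torch
import torch.nn.functional as F

def conditional_adversarial_losses(disc, gen, clf, x_src, y_src, x_tgt, num_classes):
    """Sketch of a class-conditional adversarial objective plus a
    classification-consistency term. `disc` scores (image, label) pairs,
    `gen` is a label-conditioned image translator, and `clf` is the
    classifier. All interfaces here are illustrative, not CADIT's API."""
    # Translate source images conditioned on their (one-hot) labels.
    y_onehot = F.one_hot(y_src, num_classes).float()
    x_fake = gen(x_src, y_onehot)

    # Joint adversarial generation loss: the discriminator judges
    # (image, label) pairs, pushing the joint distribution of translated
    # images and labels toward that of the target domain.
    logits_fake = disc(x_fake, y_onehot)
    adv_loss = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))

    # Consistency term (assumed form): target images should receive the
    # same class predictions before and after passing through the
    # generator, here conditioned on their own soft predictions.
    y_tgt_pred = clf(x_tgt).softmax(dim=1)
    x_tgt_trans = gen(x_tgt, y_tgt_pred)
    consist_loss = F.kl_div(clf(x_tgt_trans).log_softmax(dim=1),
                            y_tgt_pred, reduction="batchmean")

    return adv_loss, consist_loss
```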
