Paper Title

Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation

Paper Authors

JoonHo Lee, Gyemin Lee

Paper Abstract

Most unsupervised domain adaptation (UDA) methods assume that labeled source images are available during model adaptation. However, this assumption is often infeasible owing to confidentiality issues or memory constraints on mobile devices. Some recently developed approaches do not require source images during adaptation, but they show limited performance on perturbed images. To address these problems, we propose a novel source-free UDA method that uses only a pre-trained source model and unlabeled target images. Our method captures the aleatoric uncertainty by incorporating data augmentation and trains the feature generator with two consistency objectives. The feature generator is encouraged to learn consistent visual features away from the decision boundaries of the head classifier. Thus, the adapted model becomes more robust to image perturbations. Inspired by self-supervised learning, our method promotes inter-space alignment between the prediction space and the feature space while incorporating intra-space consistency within the feature space to reduce the domain gap between the source and target domains. We also consider epistemic uncertainty to boost the model adaptation performance. Extensive experiments on popular UDA benchmark datasets demonstrate that the proposed source-free method is comparable or even superior to vanilla UDA methods. Moreover, the adapted models show more robust results when input images are perturbed.
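
Since the abstract does not spell out the exact objectives, the following is only a minimal PyTorch-style sketch of the general idea it describes: two consistency objectives computed on augmented views of an unlabeled target image, one in the prediction space and one in the feature space. The names `feature_generator` and `classifier`, the weak/strong augmentation pairing, and the specific loss forms (MSE and cosine similarity) are assumptions for illustration, not the paper's formulation.

```python
import torch.nn.functional as F

def consistency_objectives(feature_generator, classifier, x_weak, x_strong):
    """Pull two augmented views of the same unlabeled target image together,
    both in the prediction space and in the feature space.

    Illustrative sketch only; names and loss forms are assumptions, not the
    paper's exact method.
    """
    z_weak = feature_generator(x_weak)      # features of the weakly augmented view
    z_strong = feature_generator(x_strong)  # features of the strongly augmented view

    p_weak = F.softmax(classifier(z_weak), dim=1)
    p_strong = F.softmax(classifier(z_strong), dim=1)

    # Prediction-space consistency: the strong view is pushed toward the
    # (detached) prediction of the weak view, encouraging features that
    # stay away from the head classifier's decision boundaries.
    pred_consistency = F.mse_loss(p_strong, p_weak.detach())

    # Feature-space (intra-space) consistency via cosine similarity
    # between the two views' feature vectors.
    feat_consistency = 1.0 - F.cosine_similarity(z_weak, z_strong, dim=1).mean()

    return pred_consistency + feat_consistency
```

During adaptation, only `feature_generator` would be updated with this loss while the pre-trained head `classifier` stays fixed, matching the abstract's source-free setting in which no source images are available.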
