Paper Title
Source-Free Progressive Graph Learning for Open-Set Domain Adaptation
Paper Authors
Paper Abstract
Open-set domain adaptation (OSDA) has gained considerable attention in many visual recognition tasks. However, most existing OSDA approaches are limited for three main reasons: (1) the lack of essential theoretical analysis of the generalization bound, (2) the reliance on the coexistence of source and target data during adaptation, and (3) the failure to accurately estimate the uncertainty of model predictions. We propose a Progressive Graph Learning (PGL) framework that decomposes the target hypothesis space into shared and unknown subspaces, and then progressively pseudo-labels the most confident known samples from the target domain for hypothesis adaptation. Moreover, we tackle a more realistic source-free open-set domain adaptation (SF-OSDA) setting that makes no assumption about the coexistence of source and target domains, and introduce a balanced pseudo-labeling (BP-L) strategy in a two-stage framework, namely SF-PGL. Unlike PGL, which applies a class-agnostic constant threshold to all target samples for pseudo-labeling, the SF-PGL model uniformly selects the most confident target instances from each category at a fixed ratio. The confidence thresholds in each class are regarded as the 'uncertainty' of learning the semantic information, which are then used to weight the classification loss in the adaptation step. We conducted unsupervised and semi-supervised OSDA and SF-OSDA experiments on benchmark image classification and action recognition datasets. Additionally, we find that balanced pseudo-labeling plays a significant role in improving calibration, which makes the trained model less prone to over-confident or under-confident predictions on the target data. Source code is available at https://github.com/Luoyadan/SF-PGL.
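
To make the balanced pseudo-labeling (BP-L) step described in the abstract concrete, the following is a minimal, hypothetical Python/PyTorch sketch. It assumes softmax probabilities from a source-pretrained classifier over the known classes; the function name balanced_pseudo_label, the ratio parameter, and the (1 - threshold) loss-weighting rule at the end are illustrative assumptions, not the authors' exact implementation (see the linked repository for that), and unknown-class rejection is omitted for brevity.

# Minimal sketch of balanced pseudo-labeling (BP-L): select the most
# confident target instances per class at a fixed ratio, and treat each
# per-class confidence threshold as that class's 'uncertainty'.
import torch

def balanced_pseudo_label(probs: torch.Tensor, ratio: float):
    """probs: (N, C) softmax outputs on target data; ratio: fraction of
    predicted samples kept per class. Returns selected indices, their
    pseudo-labels, and per-class confidence thresholds."""
    conf, pred = probs.max(dim=1)                 # confidence and predicted class
    num_classes = probs.size(1)
    sel_idx, sel_lbl = [], []
    thresholds = torch.ones(num_classes)          # default: no sample accepted
    for c in range(num_classes):
        cls_idx = (pred == c).nonzero(as_tuple=True)[0]
        if cls_idx.numel() == 0:
            continue
        k = max(1, int(ratio * cls_idx.numel()))  # fixed ratio per class
        top_conf, order = conf[cls_idx].sort(descending=True)
        sel_idx.append(cls_idx[order[:k]])
        sel_lbl.append(torch.full((k,), c, dtype=torch.long))
        thresholds[c] = top_conf[k - 1]           # lowest accepted confidence
    return torch.cat(sel_idx), torch.cat(sel_lbl), thresholds

# Usage: fake target predictions; weight the adaptation loss per class.
probs = torch.softmax(torch.randn(512, 10), dim=1)
idx, pseudo, thr = balanced_pseudo_label(probs, ratio=0.2)
class_weights = 1.0 - thr      # one plausible uncertainty-based weighting, not the paper's exact rule
print(idx.shape, pseudo.shape, class_weights)

Because the selection is balanced across classes rather than governed by a single global threshold, confident but rare classes are not crowded out, which is one plausible reason the abstract reports improved calibration on target data.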