Paper Title
Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID
Paper Authors
Paper Abstract
Domain adaptive object re-ID aims to transfer learned knowledge from a labeled source domain to an unlabeled target domain to tackle open-class re-identification problems. Although state-of-the-art pseudo-label-based methods have achieved great success, they do not make full use of all valuable information because of the domain gap and unsatisfactory clustering performance. To solve these problems, we propose a novel self-paced contrastive learning framework with hybrid memory. The hybrid memory dynamically generates source-domain class-level, target-domain cluster-level, and un-clustered instance-level supervisory signals for learning feature representations. Different from the conventional contrastive learning strategy, the proposed framework jointly distinguishes source-domain classes, target-domain clusters, and un-clustered instances. Most importantly, the proposed self-paced method gradually creates more reliable clusters to refine the hybrid memory and the learning targets, and is shown to be the key to our outstanding performance. Our method outperforms state-of-the-art methods on multiple domain adaptation tasks of object re-ID and even boosts the performance on the source domain without any extra annotations. Our generalized version for unsupervised object re-ID surpasses state-of-the-art algorithms by considerable margins of 16.7% and 7.9% on the Market-1501 and MSMT17 benchmarks.
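To make the abstract's description concrete, below is a minimal, hypothetical sketch of how a unified contrastive objective over such a hybrid memory could be written. It assumes L2-normalized features, a temperature hyperparameter, and a memory bank that concatenates source-class centroids, target-cluster centroids, and un-clustered instance features; all function and variable names (and the temperature value) are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch only: a unified contrastive loss that jointly contrasts each
# query against source-domain class centroids, target-domain cluster centroids,
# and un-clustered instance features stored in a hybrid memory.
import torch
import torch.nn.functional as F

def unified_contrastive_loss(features, positive_idx, class_centroids,
                             cluster_centroids, instance_feats, temperature=0.05):
    """features: (B, D) query features from the encoder.
    positive_idx: (B,) index of each query's positive prototype within the
    concatenated memory (its source class, its target cluster, or its own
    instance entry, depending on which kind of sample it is)."""
    # Build one bank of prototypes from all three kinds of memory entries.
    memory = torch.cat([class_centroids, cluster_centroids, instance_feats], dim=0)
    memory = F.normalize(memory, dim=1)
    features = F.normalize(features, dim=1)

    # Cosine similarities between queries and every prototype, scaled by temperature.
    logits = features @ memory.t() / temperature  # (B, N_total)

    # Softmax cross-entropy against the positive prototype: each query is pulled
    # toward its own prototype and pushed away from all other classes, clusters,
    # and instances at once.
    return F.cross_entropy(logits, positive_idx)
```

In this reading, the "self-paced" part of the framework would sit outside this loss: clustering is re-run periodically with reliability criteria so that only trustworthy clusters contribute cluster-level prototypes, while the remaining samples stay as instance-level entries.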