Paper Title
Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification
Paper Authors
Paper Abstract
Recent advances in person re-identification (ReID) have achieved impressive accuracy in both supervised and unsupervised learning settings. However, most existing methods need to train a new model for each new domain by accessing its data. Due to privacy concerns, data from a new domain are not always accessible, which limits the applicability of these methods. In this paper, we study the problem of multi-source domain generalization in ReID, which aims to learn a model that performs well on unseen domains given only several labeled source domains. To address this problem, we propose the Memory-based Multi-Source Meta-Learning (M$^3$L) framework to train a generalizable model for unseen domains. Specifically, a meta-learning strategy is introduced to simulate the train-test process of domain generalization, encouraging the model to learn more generalizable representations. To overcome the unstable meta-optimization caused by a parametric classifier, we propose a memory-based identification loss that is non-parametric and harmonizes with meta-learning. We also present a meta batch normalization layer (MetaBN) to diversify meta-test features, further strengthening the advantage of meta-learning. Experiments demonstrate that our M$^3$L effectively enhances the generalization ability of the model on unseen domains and outperforms state-of-the-art methods on four large-scale ReID datasets.
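The abstract contrasts a parametric classifier with a non-parametric, memory-based identification loss. The sketch below illustrates the general idea behind such a loss under our own assumptions (the paper's exact formulation, temperature, and update rule may differ): class logits are similarities between a feature and a memory bank of identity representations, and the memory is refreshed with a momentum update instead of learned classifier weights. All names (`MemoryIdentificationLoss`, `temperature`, `momentum`) are hypothetical.

```python
import numpy as np

def normalize(x, axis=-1):
    """L2-normalize along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

class MemoryIdentificationLoss:
    """Hypothetical sketch of a memory-based, non-parametric identification
    loss: logits come from cosine similarities to per-identity memory slots
    rather than from a learned classifier layer."""

    def __init__(self, num_ids, feat_dim, temperature=0.05, momentum=0.2):
        rng = np.random.default_rng(0)
        self.memory = normalize(rng.standard_normal((num_ids, feat_dim)))
        self.t = temperature   # sharpens the similarity distribution
        self.m = momentum      # how much of the old memory slot is kept

    def __call__(self, feats, labels):
        feats = normalize(feats)
        # similarity-based "classifier": one logit per identity slot
        logits = feats @ self.memory.T / self.t
        # numerically stable softmax cross-entropy
        logits -= logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        loss = -log_probs[np.arange(len(labels)), labels].mean()
        # momentum update of the memory slots for the seen identities
        for f, y in zip(feats, labels):
            self.memory[y] = normalize(self.m * self.memory[y] + (1 - self.m) * f)
        return loss
```

Because the "classifier" here is just the memory bank, there are no classifier parameters to meta-optimize, which is the property the abstract credits for stabilizing meta-learning.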