Paper Title
Cross-Modality Paired-Images Generation for RGB-Infrared Person Re-Identification
Paper Authors
Paper Abstract
RGB-Infrared (IR) person re-identification is very challenging due to the large cross-modality variations between RGB and IR images. The key solution is to learn aligned features that bridge the RGB and IR modalities. However, due to the lack of correspondence labels between every pair of RGB and IR images, most methods try to alleviate the variations with set-level alignment by reducing the distance between the entire RGB and IR sets. This set-level alignment may cause the misalignment of some instances, which limits performance on RGB-IR Re-ID. Different from existing methods, in this paper we propose to generate cross-modality paired images and perform both global set-level and fine-grained instance-level alignment. Our proposed method enjoys several merits. First, it can perform set-level alignment by disentangling modality-specific and modality-invariant features. Compared with conventional methods, ours explicitly removes the modality-specific features, so the modality variation can be better reduced. Second, given cross-modality unpaired images of a person, our method can generate cross-modality paired images from exchanged images. With them, we can directly perform instance-level alignment by minimizing the distance between every pair of images. Extensive experimental results on two standard benchmarks demonstrate that the proposed model performs favourably against state-of-the-art methods. In particular, on the SYSU-MM01 dataset, our model achieves gains of 9.2% and 7.7% in terms of Rank-1 and mAP, respectively. Code is available at https://github.com/wangguanan/JSIA-ReID.
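The instance-level alignment described above can be illustrated with a minimal sketch: once the model produces features for a generated cross-modality image pair of the same person, the alignment objective is simply the distance between the two embeddings, averaged over the batch. The function name, feature shapes, and use of plain NumPy arrays here are illustrative assumptions, not the paper's actual implementation (which operates on learned network features).

```python
import numpy as np

def instance_alignment_loss(rgb_feats, ir_feats):
    """Mean L2 distance between embeddings of generated cross-modality
    image pairs. Shapes: (batch, dim). Minimizing this pulls each RGB/IR
    pair of the same identity together (hypothetical sketch)."""
    diffs = rgb_feats - ir_feats
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Toy example: 4 identities with 8-dim embeddings (assumed sizes).
rng = np.random.default_rng(0)
rgb = rng.normal(size=(4, 8))
ir = rgb + 0.1 * rng.normal(size=(4, 8))   # nearly aligned pairs

print(instance_alignment_loss(rgb, rgb))   # identical pairs -> 0.0
print(instance_alignment_loss(rgb, ir))    # small positive distance
```

In training, such a loss would be added to the set-level (disentanglement) and re-identification objectives; it only becomes available because the generated images provide exact cross-modality correspondences.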