Paper Title
Attribute-aware Identity-hard Triplet Loss for Video-based Person Re-identification
Paper Authors
Paper Abstract
Video-based person re-identification (Re-ID) is an important computer vision task. The batch-hard triplet loss frequently used in video-based person Re-ID suffers from the Distance Variance among Different Positives (DVDP) problem. In this paper, we address this issue by introducing a new metric learning method called Attribute-aware Identity-hard Triplet Loss (AITL), which reduces the intra-class variation among positive samples by calculating attribute distance. To achieve a complete model of video-based person Re-ID, a multi-task framework with an Attribute-driven Spatio-Temporal Attention (ASTA) mechanism is also proposed. Extensive experiments on the MARS and DukeMTMC-VID datasets show that both AITL and ASTA are very effective. Enhanced by them, even a simple lightweight video-based person Re-ID baseline can outperform existing state-of-the-art approaches. The code has been published at https://github.com/yuange250/Video-based-person-ReID-with-Attribute-information.
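For context, a minimal Python (PyTorch) sketch of the ideas above follows, assuming clip-level features feats (B x D), identity labels ids (B), and attribute vectors attrs (B x A). The batch_hard_triplet_loss function is the standard batch-hard formulation (Hermans et al.) that the abstract refers to; attribute_alignment_term and its weighting are purely illustrative of how an attribute distance could shrink the distance variance among different positives (DVDP), not the authors' exact AITL definition, which is given in the paper and the linked repository.

    import torch
    import torch.nn.functional as F

    def batch_hard_triplet_loss(feats, ids, margin=0.3):
        # Batch-hard triplet loss (Hermans et al.): for each anchor, take the
        # hardest positive (farthest same-identity sample) and the hardest
        # negative (closest different-identity sample) in the mini-batch.
        dist = torch.cdist(feats, feats, p=2)                      # (B, B) pairwise L2
        pos = ids.unsqueeze(0) == ids.unsqueeze(1)                 # same-identity mask
        d_ap = dist.masked_fill(~pos, 0.0).max(1).values           # hardest positive
        d_an = dist.masked_fill(pos, float('inf')).min(1).values   # hardest negative
        # Positives of one identity can sit at very different distances from
        # the anchor -- the DVDP problem the abstract describes.
        return F.relu(d_ap - d_an + margin).mean()

    def attribute_alignment_term(feats, ids, attrs):
        # Hypothetical attribute-aware regularizer (NOT the paper's exact AITL):
        # among same-identity pairs, pull samples with similar attributes closer,
        # reducing intra-class variation among positives.
        feat_dist = torch.cdist(feats, feats, p=2)
        attr_dist = torch.cdist(attrs, attrs, p=2)                 # attribute distance
        pos = (ids.unsqueeze(0) == ids.unsqueeze(1)).float()
        pos = pos - torch.eye(len(ids), device=ids.device)         # drop self-pairs
        weight = pos * torch.exp(-attr_dist)                       # closer attributes -> stronger pull
        return (weight * feat_dist).sum() / weight.sum().clamp(min=1e-6)

A combined objective would then weight the two terms, e.g. loss = batch_hard_triplet_loss(feats, ids) + 0.1 * attribute_alignment_term(feats, ids, attrs), where the 0.1 weight is an arbitrary placeholder rather than a value from the paper.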