Paper Title
Combining the Silhouette and Skeleton Data for Gait Recognition
Paper Authors
Paper Abstract
Gait recognition, a long-distance biometric technology, has attracted intense interest recently. Currently, the two dominant lines of gait recognition work are appearance-based and model-based methods, which extract features from silhouettes and skeletons, respectively. However, appearance-based methods are greatly affected by clothing and carrying conditions, while model-based methods are limited by the accuracy of pose estimation. To tackle this challenge, this paper proposes a simple yet effective two-branch network that contains a CNN-based branch taking silhouettes as input and a GCN-based branch taking skeletons as input. In addition, to obtain better gait representations in the GCN-based branch, we present a fully connected graph convolution operator that integrates multi-scale graph convolutions and alleviates the dependence on natural joint connections. We also deploy a multi-dimensional attention module named STC-Att to learn spatial, temporal, and channel-wise attention simultaneously. The experimental results on CASIA-B and OUMVLP show that our method achieves state-of-the-art performance under various conditions.
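Below is a minimal PyTorch sketch of the two-branch idea described in the abstract: a CNN branch over silhouette frames, a GCN branch over skeleton joints using a learnable fully connected adjacency matrix (so message passing is not restricted to natural joint connections), and fusion by concatenation. The class names `TwoBranchGaitNet` and `FullyConnectedGraphConv`, all layer sizes, the pooling/fusion strategy, and the omission of the STC-Att attention module and multi-scale aggregation are assumptions made for illustration only, not the paper's actual implementation.

```python
# Hypothetical sketch of a two-branch (silhouette + skeleton) gait network.
# Architecture details are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class FullyConnectedGraphConv(nn.Module):
    """Graph convolution with a learnable, fully connected adjacency matrix,
    so feature propagation is not limited to natural joint connections."""

    def __init__(self, in_channels, out_channels, num_joints):
        super().__init__()
        # Learnable adjacency over all joint pairs (assumption: uniform init).
        self.adjacency = nn.Parameter(
            torch.full((num_joints, num_joints), 1.0 / num_joints))
        self.linear = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # x: (batch, frames, joints, channels)
        x = torch.einsum("btjc,jk->btkc", x, self.adjacency)
        return torch.relu(self.linear(x))


class TwoBranchGaitNet(nn.Module):
    """CNN branch for silhouettes + GCN branch for skeletons, fused by concatenation."""

    def __init__(self, num_joints=17, embed_dim=128):
        super().__init__()
        # CNN branch over 1-channel silhouette frames.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.cnn_fc = nn.Linear(64, embed_dim)
        # GCN branch over 2D joint coordinates.
        self.gcn = nn.Sequential(
            FullyConnectedGraphConv(2, 64, num_joints),
            FullyConnectedGraphConv(64, 128, num_joints),
        )
        self.gcn_fc = nn.Linear(128, embed_dim)

    def forward(self, silhouettes, skeletons):
        # silhouettes: (batch, frames, 1, H, W); skeletons: (batch, frames, joints, 2)
        b, t = silhouettes.shape[:2]
        sil = self.cnn(silhouettes.flatten(0, 1)).flatten(1)    # per-frame features
        sil = self.cnn_fc(sil).view(b, t, -1).mean(dim=1)       # temporal pooling
        ske = self.gcn(skeletons).mean(dim=(1, 2))              # pool frames and joints
        ske = self.gcn_fc(ske)
        return torch.cat([sil, ske], dim=1)                     # fused gait embedding


# Example usage with toy input shapes.
model = TwoBranchGaitNet()
sil = torch.rand(2, 30, 1, 64, 44)   # 30-frame silhouette sequences
ske = torch.rand(2, 30, 17, 2)       # 17 joints with (x, y) coordinates
print(model(sil, ske).shape)         # torch.Size([2, 256])
```

In this sketch the fused embedding would be trained with a standard metric-learning objective (e.g. a triplet loss); the paper's actual training setup, attention module, and fusion scheme may differ.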