Paper Title
Infrared and 3D skeleton feature fusion for RGB-D action recognition
Paper Authors
Abstract
A challenge of skeleton-based action recognition is the difficulty of classifying actions with similar motions, as well as object-related actions. Visual cues from other streams help in that regard. RGB data are sensitive to illumination conditions and are thus unusable in the dark. To alleviate this issue while still benefiting from a visual stream, we propose a modular network (FUSION) combining skeleton and infrared data. A 2D convolutional neural network (CNN) is used as a pose module to extract features from skeleton data. A 3D CNN is used as an infrared module to extract visual cues from videos. The two feature vectors are then concatenated and jointly exploited by a multilayer perceptron (MLP). The skeleton data also condition the infrared videos, providing a crop around the performing subjects and thus virtually focusing the attention of the infrared module. Ablation studies show that using networks pre-trained on other large-scale datasets as our modules, together with data augmentation, yields considerable improvements in action classification accuracy. The strong contribution of our cropping strategy is also demonstrated. We evaluate our method on the NTU RGB+D dataset, the largest dataset for human action recognition from depth cameras, and report state-of-the-art performance.
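The late-fusion step the abstract describes (concatenating the pose-module and infrared-module feature vectors, then classifying with an MLP) can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the feature dimensions, hidden size, and random stand-in features are assumptions, and the number of classes (60) reflects the NTU RGB+D cross-subject benchmark.

```python
import numpy as np

# Illustrative dimensions (assumptions, not the paper's actual values).
POSE_DIM, IR_DIM, HIDDEN, NUM_CLASSES = 512, 512, 256, 60

rng = np.random.default_rng(0)

def mlp_fusion(pose_feat, ir_feat, w1, b1, w2, b2):
    """Concatenate the two stream features and classify with a 2-layer MLP."""
    x = np.concatenate([pose_feat, ir_feat])   # joint feature vector
    h = np.maximum(w1 @ x + b1, 0.0)           # hidden layer + ReLU
    logits = w2 @ h + b2
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()

# Random stand-ins for the 2D-CNN (pose) and 3D-CNN (infrared) outputs.
pose_feat = rng.standard_normal(POSE_DIM)
ir_feat = rng.standard_normal(IR_DIM)

# Randomly initialized MLP parameters for the sketch.
w1 = rng.standard_normal((HIDDEN, POSE_DIM + IR_DIM)) * 0.01
b1 = np.zeros(HIDDEN)
w2 = rng.standard_normal((NUM_CLASSES, HIDDEN)) * 0.01
b2 = np.zeros(NUM_CLASSES)

probs = mlp_fusion(pose_feat, ir_feat, w1, b1, w2, b2)
print(probs.shape)  # one probability per action class
```

In the actual model, the two feature extractors and the MLP are trained jointly; here the features are random placeholders only to show the shapes flowing through the fusion head.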