Paper Title
IGFormer: Interaction Graph Transformer for Skeleton-based Human Interaction Recognition
Paper Authors
Paper Abstract
Human interaction recognition is important in many applications. One crucial cue for recognizing an interaction is the set of interactive body parts. In this work, we propose a novel Interaction Graph Transformer (IGFormer) network for skeleton-based interaction recognition that models the interactive body parts as graphs. More specifically, the proposed IGFormer constructs interaction graphs according to the semantic and distance correlations between the interactive body parts, and enhances the representation of each person by aggregating the information of the interactive body parts based on the learned graphs. Furthermore, we propose a Semantic Partition Module that transforms each human skeleton sequence into a Body-Part-Time sequence to better capture the spatial and temporal information of the skeleton sequence for learning the graphs. Extensive experiments on three benchmark datasets demonstrate that our model outperforms the state-of-the-art by a significant margin.
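To make the graph-construction idea concrete, below is a minimal sketch of how an interaction graph between two persons' body-part features might be built from a semantic term and a distance term, and then used to enhance one person's representation. The function names, the feature shapes, and the way the two terms are fused (a simple softmax over their difference) are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def interaction_graph(parts_a, parts_b, tau=1.0):
    """Build a soft interaction graph between two persons' body parts.

    parts_a, parts_b: (P, C) tensors, one feature vector per body part.
    The adjacency combines a semantic-correlation term (feature dot products)
    with a distance term (pairwise L2 distances), then row-normalises.
    The exact fusion below is an assumption for illustration only.
    """
    sem = parts_a @ parts_b.t()                  # semantic correlation, shape (P, P)
    dist = torch.cdist(parts_a, parts_b)         # pairwise distances,   shape (P, P)
    return F.softmax((sem - dist) / tau, dim=-1) # fuse and normalise each row

def enhance(parts_a, parts_b, adj):
    """Enhance person A's part features by aggregating person B's parts,
    weighted by the learned interaction graph (residual aggregation)."""
    return parts_a + adj @ parts_b

# Toy usage: 10 body parts with 64-dimensional features per part.
a, b = torch.randn(10, 64), torch.randn(10, 64)
adj = interaction_graph(a, b)
a_enhanced = enhance(a, b, adj)
print(a_enhanced.shape)  # torch.Size([10, 64])
```

In this sketch the graph is recomputed from the current part features, so more strongly correlated and spatially closer parts of the other person contribute more to the aggregated representation; the actual IGFormer learns these graphs end to end within its transformer blocks.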