Paper Title

GCNNMatch: Graph Convolutional Neural Networks for Multi-Object Tracking via Sinkhorn Normalization

Paper Authors

Ioannis Papakis, Abhijit Sarkar, Anuj Karpatne

Paper Abstract

This paper proposes a novel method for online Multi-Object Tracking (MOT) using Graph Convolutional Neural Network (GCNN) based feature extraction and end-to-end feature matching for object association. The graph-based approach incorporates both the appearance and geometry of objects at past frames, as well as the current frame, into the task of feature learning. This new paradigm enables the network to leverage the "context" information of the geometry of objects and allows us to model interactions among the features of multiple objects. Another central innovation of the proposed framework is the use of the Sinkhorn algorithm for end-to-end learning of the associations among objects during model training. The network is trained to predict object associations while taking into account constraints specific to the MOT task. Experimental results demonstrate the efficacy of the proposed approach, which achieves top performance on the MOT '15, '16, '17 and '20 Challenges among state-of-the-art online approaches. The code is available at https://github.com/IPapakis/GCNNMatch.
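To make the association step concrete, the following is a minimal sketch of Sinkhorn normalization as commonly used to turn a non-negative affinity matrix between tracked objects and new detections into an approximately doubly stochastic association matrix. The function name, iteration count, and toy affinity values are illustrative assumptions; this is not the paper's exact formulation, which learns the affinities end-to-end with a GCNN and additionally handles unmatched objects.

```python
import numpy as np

def sinkhorn_normalize(affinity: np.ndarray, num_iters: int = 10, eps: float = 1e-8) -> np.ndarray:
    """Alternately normalize rows and columns of a non-negative affinity
    matrix so it approaches a doubly stochastic association matrix.

    Generic Sinkhorn sketch (hypothetical helper, not the paper's code)."""
    s = affinity.astype(np.float64) + eps          # keep entries strictly positive
    for _ in range(num_iters):
        s = s / s.sum(axis=1, keepdims=True)       # row normalization
        s = s / s.sum(axis=0, keepdims=True)       # column normalization
    return s

# Toy example: affinity between 3 tracked objects and 3 new detections.
affinity = np.array([[0.9, 0.1, 0.2],
                     [0.2, 0.8, 0.1],
                     [0.1, 0.3, 0.7]])
assoc = sinkhorn_normalize(affinity)
print(assoc.round(3))          # rows and columns each sum to ~1
print(assoc.argmax(axis=1))    # per-track best match: [0 1 2]
```

Because every row/column operation above is differentiable, the same normalization can be placed at the end of a network and trained end-to-end, which is the role the Sinkhorn step plays in the described framework.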
