Paper Title

Simple Unsupervised Multi-Object Tracking

Authors

Shyamgopal Karthik, Ameya Prabhu, Vineet Gandhi

Abstract

Multi-object tracking has seen a lot of progress recently, albeit with substantial annotation costs for developing better and larger labeled datasets. In this work, we remove the need for annotated datasets by proposing an unsupervised re-identification network, thus entirely sidestepping the labeling costs required for training. Given unlabeled videos, our proposed method (SimpleReID) first generates tracking labels using SORT and trains a ReID network to predict the generated labels using cross-entropy loss. We demonstrate that SimpleReID performs substantially better than simpler alternatives, and we recover the full performance of its supervised counterpart consistently across diverse tracking frameworks. The observations are unusual because unsupervised ReID is not expected to excel in crowded scenarios with occlusions and drastic viewpoint changes. By incorporating our unsupervised SimpleReID with CenterTrack trained on augmented still images, we establish a new state-of-the-art performance on popular datasets like MOT16/17 without using tracking supervision, beating the current best (CenterTrack) by 0.2-0.3 MOTA and 4.4-4.8 IDF1 scores. We further provide evidence for limited scope for improvement in IDF1 scores beyond our unsupervised ReID in the studied settings. Our investigation suggests reconsidering more sophisticated, supervised, end-to-end trackers by showing promise in simpler unsupervised alternatives.
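The pipeline the abstract describes has two stages: a motion-based tracker (SORT in the paper) assigns pseudo-identity labels to detections in unlabeled video, and a ReID network is then trained to predict those labels with cross-entropy loss. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: it replaces SORT with a toy greedy IoU matcher and computes the cross-entropy term for a single detection's logits in pure Python.

```python
import math

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def generate_pseudo_labels(frames, iou_thresh=0.3):
    """Toy greedy IoU tracker (a simplified stand-in for SORT):
    each detection inherits the track ID of the best-overlapping
    box from the previous frame, or starts a fresh identity."""
    next_id = 0
    prev = []      # (box, track_id) pairs from the previous frame
    labels = []    # per-frame lists of pseudo-identity labels
    for boxes in frames:
        cur, ids = [], []
        unmatched = list(prev)
        for box in boxes:
            best, best_iou = None, iou_thresh
            for j, (pbox, _) in enumerate(unmatched):
                v = iou(box, pbox)
                if v > best_iou:
                    best, best_iou = j, v
            if best is not None:
                _, tid = unmatched.pop(best)
            else:
                tid, next_id = next_id, next_id + 1
            cur.append((box, tid))
            ids.append(tid)
        prev = cur
        labels.append(ids)
    return labels

def cross_entropy(logits, label):
    """Softmax cross-entropy for one detection's identity logits."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[label]

# Two frames of detections; the box barely moves, so both detections
# should receive the same pseudo-identity label.
frames = [[[10, 10, 50, 80]], [[12, 11, 52, 81]]]
labels = generate_pseudo_labels(frames)

# A ReID head would then be trained to predict these pseudo-labels;
# here we just evaluate the loss for one confident prediction.
loss = cross_entropy([2.0, -1.0], labels[1][0])
```

In the actual method, the pseudo-labels come from running SORT over whole unlabeled videos, and the classifier is a deep ReID network whose learned embeddings are reused for association at tracking time.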
