Paper Title


Amodal Cityscapes: A New Dataset, its Generation, and an Amodal Semantic Segmentation Challenge Baseline

Paper Authors

Jasmin Breitenstein, Tim Fingscheidt

Abstract


Amodal perception refers to the ability of humans to imagine the entire shapes of occluded objects. This gives humans an advantage in keeping track of everything that is going on, especially in crowded situations. Typical perception functions, however, lack amodal perception abilities and are therefore at a disadvantage in situations with occlusions. Complex urban driving scenarios often involve many different types of occlusions, so amodal perception for automated vehicles is an important task to investigate. In this paper, we consider the task of amodal semantic segmentation and propose a generic way to generate datasets for training amodal semantic segmentation methods. We use this approach to generate an amodal Cityscapes dataset. Moreover, we propose and evaluate a method as a baseline on Amodal Cityscapes, showing its applicability for amodal semantic segmentation in automotive environment perception. We provide the means to re-generate this dataset on GitHub.
