Paper Title

Label Efficient Visual Abstractions for Autonomous Driving

Paper Authors

Behl, Aseem, Chitta, Kashyap, Prakash, Aditya, Ohn-Bar, Eshed, Geiger, Andreas

Abstract

It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
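The two-stage pipeline the abstract describes can be sketched as follows: a segmentation model maps camera images to a compact visual abstraction over a reduced label set, and a driving policy is then learned by behavior cloning on that abstraction instead of raw pixels. This is a minimal illustrative sketch, not the paper's implementation; the class list, array shapes, and the `to_visual_abstraction` / `behavior_cloning_loss` helpers are all hypothetical.

```python
import numpy as np

# Hypothetical reduced label set: the paper studies how few classes,
# and how few labeled images, suffice for good driving performance.
CLASSES = ["road", "lane_marking", "vehicle", "pedestrian", "traffic_light", "other"]


def to_visual_abstraction(seg_map: np.ndarray) -> np.ndarray:
    """One-hot encode a per-pixel class-id map (H x W) into a
    C x H x W visual abstraction that a policy network consumes."""
    h, w = seg_map.shape
    abstraction = np.zeros((len(CLASSES), h, w), dtype=np.float32)
    # Set abstraction[class_id, i, j] = 1 for every pixel (i, j).
    abstraction[seg_map, np.arange(h)[:, None], np.arange(w)] = 1.0
    return abstraction


def behavior_cloning_loss(predicted: np.ndarray, expert: np.ndarray) -> float:
    """L2 imitation loss between the policy's predicted controls and the
    expert's controls (e.g. steering, throttle, brake)."""
    return float(np.mean((predicted - expert) ** 2))


# Toy example: a 4x4 "segmentation" containing road (0) and vehicle (2).
seg = np.array([[0, 0, 2, 2],
                [0, 0, 2, 2],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])
x = to_visual_abstraction(seg)
print(x.shape)  # (6, 4, 4): one binary channel per class
```

In the paper's setting the segmentation model producing `seg` is trained separately (on a small or coarsely annotated dataset), and only the abstraction is passed to the behavior-cloning policy, which decouples annotation cost from policy learning.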
