Paper Title

Class-Incremental Learning for Semantic Segmentation Re-Using Neither Old Data Nor Old Labels

Paper Authors

Marvin Klingner, Andreas Bär, Philipp Donn, Tim Fingscheidt

Abstract

While neural networks trained for semantic segmentation are essential for perception in autonomous driving, most current algorithms assume a fixed number of classes, presenting a major limitation when developing new autonomous driving systems with the need for additional classes. In this paper, we present a technique implementing class-incremental learning for semantic segmentation without using the labeled data the model was initially trained on. Previous approaches still either rely on labels for both old and new classes, or fail to properly distinguish between them. We show how to overcome these problems with a novel class-incremental learning technique, which nonetheless requires labels only for the new classes. Specifically, (i) we introduce a new loss function that neither relies on old data nor on old labels, (ii) we show how new classes can be integrated in a modular fashion into pretrained semantic segmentation models, and finally (iii) we re-implement previous approaches in a unified setting to compare them to ours. We evaluate our method on the Cityscapes dataset, where we exceed the mIoU performance of all baselines by 3.5% absolute, reaching a result that is only 2.2% absolute below the upper performance limit of single-stage training, which relies on all data and labels simultaneously.
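To make the core idea concrete, the sketch below illustrates one common way such a loss can be built: pixels carrying a new-class label are trained with standard cross-entropy, while all other pixels are supervised by the frozen old model's soft outputs, so neither old data nor old labels are needed. This is a minimal NumPy sketch of the general concept, not the paper's actual loss; the function name, array shapes, and the label convention (`-1` for unlabeled pixels) are illustrative assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax along the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def incremental_seg_loss(new_logits, old_probs, new_labels, n_old):
    """Sketch of a class-incremental segmentation loss (illustrative, not the paper's exact formulation).

    new_logits: (H, W, n_old + n_new) outputs of the extended model
    old_probs:  (H, W, n_old) soft outputs of the frozen old model,
                acting as pseudo-supervision for the old classes
    new_labels: (H, W) int map of new-class indices; -1 marks pixels
                without a new-class label
    """
    probs = softmax(new_logits)
    new_mask = new_labels >= 0  # pixels annotated with a new class

    # Cross-entropy on new-class pixels (only new labels are required).
    ce = 0.0
    if new_mask.any():
        p = probs[new_mask]                      # (P, n_old + n_new)
        idx = n_old + new_labels[new_mask]       # shift into new-class slots
        ce = -np.log(p[np.arange(len(idx)), idx] + 1e-12).mean()

    # Distillation on remaining pixels: match the old model's soft output
    # over the old-class slots, so no old data or old labels are touched.
    kd = 0.0
    old_mask = ~new_mask
    if old_mask.any():
        p_old_slots = probs[old_mask][:, :n_old]
        kd = -(old_probs[old_mask] * np.log(p_old_slots + 1e-12)).sum(axis=1).mean()

    return ce + kd
```

In a real training loop both terms would be weighted and backpropagated through the extended model only, with the old model kept frozen as a teacher.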
