Paper Title

GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation

Authors

Yifan Zhang, Qijian Zhang, Zhiyu Zhu, Junhui Hou, Yixuan Yuan

Abstract

The inherent ambiguity in ground-truth annotations of 3D bounding boxes, caused by occlusions, missing signals, or manual annotation errors, can confuse deep 3D object detectors during training and thus degrade detection accuracy. However, existing methods largely overlook this issue and treat the labels as deterministic. In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of an object. We then propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables. The label uncertainty generated by GLENet is a plug-and-play module that can be conveniently integrated into existing deep 3D detectors to build probabilistic detectors and supervise the learning of localization uncertainty. In addition, we propose an uncertainty-aware quality estimator architecture for probabilistic detectors to guide the training of the IoU branch with the predicted localization uncertainty. We incorporate the proposed methods into various popular base 3D detectors and demonstrate significant and consistent performance gains on both the KITTI and Waymo benchmark datasets. Notably, the proposed GLENet-VR outperforms all published LiDAR-based approaches by a large margin and achieves the top rank among single-modal methods on the challenging KITTI test set. The source code and pre-trained models are publicly available at \url{https://github.com/Eaphan/GLENet}.
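The abstract's core idea, quantifying label uncertainty as the spread of plausible boxes and using it to supervise a probabilistic detector, can be illustrated with a minimal sketch. This is not GLENet's actual architecture: the random sampler below stands in for a trained CVAE decoder, all function names are hypothetical, and the KL term is the standard closed-form divergence between two univariate Gaussians often used to train variance-predicting regression heads.

```python
import numpy as np

def label_uncertainty(sampled_boxes):
    """Per-dimension variance over plausible boxes; (N, 7) -> (7,)."""
    return np.var(sampled_boxes, axis=0)

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """Elementwise KL( N(mu_p, var_p) || N(mu_q, var_q) ), always >= 0."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Toy stand-in for CVAE samples: 5 plausible boxes for one object,
# parameterized as (x, y, z, l, w, h, yaw).
rng = np.random.default_rng(0)
boxes = rng.normal(loc=[0, 0, 0, 4.0, 1.8, 1.5, 0.0], scale=0.1, size=(5, 7))

mu_label = boxes.mean(axis=0)           # "soft" label center
var_label = label_uncertainty(boxes)    # supervision target for predicted variance

# A probabilistic detector would predict (mu_pred, var_pred) per box dimension;
# here we perturb the label statistics to simulate such a prediction.
mu_pred = mu_label + 0.05
var_pred = var_label * 2.0

# Distribution-matching loss between predicted and label Gaussians.
loss = gaussian_kl(mu_label, var_label, mu_pred, var_pred).mean()
```

The design point is that the regression target becomes a distribution rather than a single box: dimensions with high label variance (e.g. the length of a heavily occluded car) contribute a softer penalty than well-constrained ones.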
