Paper Title

Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion

Authors

Yan, Xu, Gao, Jiantao, Li, Jie, Zhang, Ruimao, Li, Zhen, Huang, Rui, Cui, Shuguang

Abstract

LiDAR point cloud analysis is a core task for 3D computer vision, especially for autonomous driving. However, due to the severe sparsity and noise interference in the single sweep LiDAR point cloud, accurate semantic segmentation is non-trivial to achieve. In this paper, we propose a novel sparse LiDAR point cloud semantic segmentation framework assisted by learned contextual shape priors. In practice, an initial semantic segmentation (SS) of a single sweep point cloud can be achieved by any appealing network and then flows into the semantic scene completion (SSC) module as the input. By merging multiple frames in the LiDAR sequence as supervision, the optimized SSC module learns contextual shape priors from sequential LiDAR data, completing the sparse single-sweep point cloud into a dense one. Thus, it inherently improves SS optimization through fully end-to-end training. Besides, a Point-Voxel Interaction (PVI) module is proposed to further enhance the knowledge fusion between the SS and SSC tasks, i.e., promoting the interaction between the incomplete local geometry of the point cloud and the complete voxel-wise global structure. Furthermore, the auxiliary SSC and PVI modules can be discarded during inference without extra burden for SS. Extensive experiments confirm that our JS3C-Net achieves superior performance on both the SemanticKITTI and SemanticPOSS benchmarks, i.e., improvements of 4% and 3% respectively.
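The abstract's key data-preparation step, merging multiple sweeps of a LiDAR sequence into a dense scene that supervises the SSC module, can be sketched as follows. This is a minimal, hypothetical NumPy illustration (not the paper's implementation): each sweep is mapped into a common world frame with its sensor pose, the points are concatenated, and the dense cloud is voxelized into an occupancy grid of the kind an SSC target would be built from. The function names, grid parameters, and toy data are all assumptions for illustration.

```python
import numpy as np

def aggregate_sweeps(sweeps, poses):
    """Transform each single sweep into the world frame and concatenate.

    sweeps: list of (N_i, 3) point arrays, each in its own sensor frame.
    poses:  list of (4, 4) sensor-to-world homogeneous transforms.
    """
    world_points = []
    for pts, T in zip(sweeps, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # (N_i, 4)
        world_points.append((homo @ T.T)[:, :3])         # apply pose
    return np.vstack(world_points)

def voxelize(points, voxel_size, grid_min, grid_shape):
    """Occupancy grid: a voxel is 1 if any aggregated point falls inside it."""
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=np.uint8)
    grid[tuple(idx[keep].T)] = 1
    return grid

# Two toy "sweeps"; the second sweep's pose translates it 2 m along x,
# so together they cover more of the scene than either sweep alone.
rng = np.random.default_rng(0)
sweep_a = rng.uniform(0.0, 1.0, (100, 3))
sweep_b = rng.uniform(0.0, 1.0, (100, 3))
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[0, 3] = 2.0

dense = aggregate_sweeps([sweep_a, sweep_b], [pose_a, pose_b])
grid = voxelize(dense, voxel_size=0.5,
                grid_min=np.zeros(3), grid_shape=(8, 2, 2))
print(dense.shape)  # (200, 3)
```

In the paper's setting the aggregated dense scene additionally carries per-voxel semantic labels, so the SSC target is a labeled grid rather than plain occupancy; this sketch only shows the geometric merging.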
