Title
Fully Sparse 3D Object Detection
Authors
Abstract
As the perception range of LiDAR sensors increases, LiDAR-based 3D object detection becomes a dominant task in long-range perception for autonomous driving. Mainstream 3D object detectors usually build dense feature maps in the network backbone and prediction head. However, the computational and spatial costs of these dense feature maps are quadratic in the perception range, which makes such detectors hard to scale up to the long-range setting. To enable efficient long-range LiDAR-based object detection, we build a fully sparse 3D object detector (FSD). The computational and spatial cost of FSD is roughly linear in the number of points and independent of the perception range. FSD is built upon a general sparse voxel encoder and a novel sparse instance recognition (SIR) module. SIR first groups points into instances and then applies instance-wise feature extraction and prediction. In this way, SIR resolves the issue of center feature missing, which hinders the design of fully sparse architectures for all center-based or anchor-based detectors. Moreover, by grouping points into instances, SIR avoids the time-consuming neighbor queries of previous point-based methods. We conduct extensive experiments on the large-scale Waymo Open Dataset to reveal the working mechanism of FSD, and report state-of-the-art performance. To demonstrate the superiority of FSD in long-range detection, we also conduct experiments on the Argoverse 2 dataset, which has a much larger perception range ($200m$) than the Waymo Open Dataset ($75m$). On such a large perception range, FSD achieves state-of-the-art performance and is 2.4$\times$ faster than the dense counterpart. Code will be released at https://github.com/TuSimple/SST.
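The quadratic-versus-linear cost claim can be made concrete with a toy calculation (my illustration, not from the paper): a dense bird's-eye-view grid must cover the full square around the ego vehicle, so its cell count grows quadratically with the perception range, whereas a fully sparse detector processes only the LiDAR points, whose number is bounded by the sensor regardless of range. The 0.2 m voxel size below is an assumed typical value.

```python
def dense_cells(range_m: float, voxel: float = 0.2) -> int:
    """Number of cells in a dense BEV grid covering [-range_m, range_m]
    in both x and y, at an assumed voxel size (default 0.2 m)."""
    side = round(2 * range_m / voxel)  # cells along one axis
    return side * side

# Grid size at Waymo-scale (75 m) vs. Argoverse 2-scale (200 m) range:
waymo_cells = dense_cells(75)
argo_cells = dense_cells(200)
# Range grew ~2.7x, but the dense grid grew ~7.1x -- quadratic scaling.
print(waymo_cells, argo_cells, argo_cells / waymo_cells)
```

A sparse detector's workload, by contrast, tracks the point count, which stays in the low hundreds of thousands per sweep regardless of how far the range extends.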
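The group-then-pool idea behind SIR can be sketched as follows. This is a minimal NumPy illustration of the general pattern, not the authors' implementation: given per-point features and an instance assignment (e.g. from clustering), features are pooled once per instance and broadcast back to the points, so no per-point neighbor query is ever issued. The function name `sir_pool` and max pooling are my assumptions for illustration.

```python
import numpy as np

def sir_pool(point_feats: np.ndarray, instance_ids: np.ndarray):
    """Instance-wise pooling sketch.

    point_feats:  (N, C) per-point features from a sparse voxel encoder
    instance_ids: (N,)   instance assignment per point
    Returns a dict {instance_id: pooled (C,) feature} and the pooled
    feature broadcast back to each point, shape (N, C).
    """
    pooled = {
        i: point_feats[instance_ids == i].max(axis=0)  # one vector per instance
        for i in np.unique(instance_ids)
    }
    # Point -> instance -> point flow: every point receives its
    # instance's pooled feature, with no neighbor search involved.
    fused = np.stack([pooled[i] for i in instance_ids])
    return pooled, fused

feats = np.random.rand(6, 4).astype(np.float32)
ids = np.array([0, 0, 1, 1, 1, 2])  # three toy instances
pooled, fused = sir_pool(feats, ids)
```

The key cost property: the pooling above touches each point a constant number of times, so the work is linear in the number of points, independent of how large the scene is.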