Paper Title
Temporal-Channel Transformer for 3D Lidar-Based Video Object Detection in Autonomous Driving
Paper Authors
Paper Abstract
The strong industrial demand for autonomous driving has led to strong interest in 3D object detection and has produced many excellent 3D object detection algorithms. However, the vast majority of these algorithms model only single-frame data, ignoring the temporal information in the data sequence. In this work, we propose a new transformer, called the Temporal-Channel Transformer, to model spatial-temporal and channel-wise relationships for video object detection from Lidar data. A distinctive design of this transformer is that the information handled by the encoder differs from that handled by the decoder: the encoder encodes temporal-channel information across multiple frames, while the decoder decodes spatial-channel information for the current frame in a voxel-wise manner. Specifically, the temporal-channel encoder is designed to encode information across different channels and frames by exploiting the correlations among their features. The spatial decoder then decodes the information at each location of the current frame. Before object detection with the detection head, a gate mechanism is deployed to re-calibrate the features of the current frame; it filters out object-irrelevant information by repeatedly refining the representation of the target frame along the up-sampling process. Experimental results show that our method achieves state-of-the-art performance among grid voxel-based 3D object detectors on the nuScenes benchmark.
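The sketch below is a minimal, hypothetical illustration of the two ideas named in the abstract, not the paper's actual architecture: attention over per-frame channel descriptors pooled from multi-frame BEV feature maps (the "temporal-channel" idea), and a sigmoid gate that re-calibrates the current frame's features against the aggregated temporal context. All module names, shapes, and hyperparameters here are assumptions made for illustration.

```python
# Illustrative sketch only (assumed shapes: BEV features of size [B, T, C, H, W]).
import torch
import torch.nn as nn


class TemporalChannelEncoder(nn.Module):
    """Self-attention over per-frame channel descriptors pooled from each frame."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: [B, T, C, H, W] -> one C-dim descriptor per frame via global pooling.
        b, t, c, h, w = feats.shape
        tokens = feats.mean(dim=(-2, -1))                 # [B, T, C]
        attn_out, _ = self.attn(tokens, tokens, tokens)   # mix information across frames
        tokens = self.norm(tokens + attn_out)             # residual + layer norm
        # Broadcast the refined descriptors back onto the spatial maps as channel weights.
        return feats * tokens.sigmoid().unsqueeze(-1).unsqueeze(-1)


class GatedRecalibration(nn.Module):
    """Sigmoid gate that filters object-irrelevant information from the current frame."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, current: torch.Tensor, aggregated: torch.Tensor) -> torch.Tensor:
        # current, aggregated: [B, C, H, W]; the gate decides, per location and channel,
        # how much of the aggregated temporal context to add back to the current frame.
        g = self.gate(torch.cat([current, aggregated], dim=1))
        return current + g * aggregated


if __name__ == "__main__":
    feats = torch.randn(2, 3, 64, 32, 32)           # 3 frames of 64-channel BEV features
    enc = TemporalChannelEncoder(channels=64)
    refined = enc(feats)                             # [2, 3, 64, 32, 32]
    gate = GatedRecalibration(channels=64)
    out = gate(feats[:, -1], refined.mean(dim=1))    # re-calibrated current frame
    print(out.shape)                                 # torch.Size([2, 64, 32, 32])
```

In this toy version the gate is applied once; the abstract describes repeated refinement along the up-sampling path, which would correspond to applying such a gate at each decoder resolution.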