Paper Title
A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection
Paper Authors
Abstract
Object detection in camera images using deep learning has proven successful in recent years. Rising detection rates and computationally efficient network structures are pushing this technique towards application in production vehicles. Nevertheless, the sensor quality of the camera is limited in severe weather conditions and by increased sensor noise in sparsely lit areas and at night. Our approach enhances current 2D object detection networks by fusing camera data and projected sparse radar data in the network layers. The proposed CameraRadarFusionNet (CRF-Net) automatically learns at which level the fusion of the sensor data is most beneficial for the detection result. Additionally, we introduce BlackIn, a training strategy inspired by Dropout, which focuses the learning on a specific sensor type. We show that the fusion network is able to outperform a state-of-the-art image-only network on two different datasets. The code for this research will be made available to the public at: https://github.com/TUMFTM/CameraRadarFusionNet.
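The two core ideas of the abstract, projecting sparse radar data into the image plane as extra input channels and the BlackIn training strategy, can be illustrated with a minimal sketch. Note this is not the authors' implementation: the channel counts, image size, and blackout probability below are illustrative assumptions, and the radar channels are represented simply as a dense array with mostly zero entries.

```python
import numpy as np

def blackin(camera, p, rng):
    """BlackIn-style input blackout (sketch): with probability p, zero out
    the entire camera input for a training sample, so the network is forced
    to learn from the sparse radar channels alone. The probability p is an
    assumption, not a value taken from the paper."""
    if rng.random() < p:
        return np.zeros_like(camera)
    return camera

rng = np.random.default_rng(0)

# Illustrative network input: an RGB camera image (H, W, 3) fused with
# radar data projected into the image plane as two extra channels (H, W, 2),
# e.g. distance and radar cross section; most radar pixels stay zero (sparse).
H, W = 360, 640
camera = np.ones((H, W, 3), dtype=np.float32)
radar = np.zeros((H, W, 2), dtype=np.float32)
radar[100, 320] = (12.5, 0.8)  # one hypothetical projected radar detection

cam_out = blackin(camera, p=1.0, rng=rng)      # p=1.0 forces blackout for the demo
fused = np.concatenate([cam_out, radar], axis=-1)  # (H, W, 5) fused input tensor
```

In CRF-Net the fusion point itself is learned, i.e. radar channels can be concatenated again at deeper feature-map resolutions; the early concatenation shown here is only the simplest instance of that idea.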