Paper Title

DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction

Authors

Hu, Yuhuang, Binas, Jonathan, Neil, Daniel, Liu, Shih-Chii, Delbruck, Tobi

Abstract

Neuromorphic event cameras are useful for dynamic vision problems under difficult lighting conditions. To enable studies of using event cameras in automobile driving applications, this paper reports a new end-to-end driving dataset called DDD20. The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames. DDD20 is the longest event camera end-to-end driving dataset to date with 51h of DAVIS event+frame camera and vehicle human control data collected from 4000km of highway and urban driving under a variety of lighting conditions. Using DDD20, we report the first study of fusing brightness change events and intensity frame data using a deep learning approach to predict the instantaneous human steering wheel angle. Over all day and night conditions, the explained variance for human steering prediction from a Resnet-32 is significantly better from the fused DVS+APS frames (0.88) than using either DVS (0.67) or APS (0.77) data alone.
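The abstract's headline numbers (0.88 for fused DVS+APS versus 0.67 and 0.77 alone) are explained-variance scores for the predicted steering angle. A minimal sketch of that metric, using made-up steering angles rather than DDD20 data:

```python
# Illustrative only: the explained-variance (EV) metric reported in the
# abstract, EV = 1 - Var(y_true - y_pred) / Var(y_true).
# EV = 1.0 means a perfect fit; 0 means no better than predicting the mean.

def explained_variance(y_true, y_pred):
    """Fraction of the target's variance accounted for by the prediction."""
    n = len(y_true)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    mean_r = sum(residuals) / n
    var_r = sum((r - mean_r) ** 2 for r in residuals) / n
    mean_t = sum(y_true) / n
    var_t = sum((t - mean_t) ** 2 for t in y_true) / n
    return 1.0 - var_r / var_t

# Hypothetical steering-wheel angles in degrees (not from the dataset):
true_angles = [0.0, 5.0, -3.0, 10.0, -7.0]
pred_angles = [0.5, 4.0, -2.5, 9.0, -6.0]
print(round(explained_variance(true_angles, pred_angles), 3))  # → 0.98
```

Unlike mean squared error, explained variance is scale-free, which makes it convenient for comparing the DVS-only, APS-only, and fused models on the same footing.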
