Paper Title

D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry

Authors

Yang, Nan, von Stumberg, Lukas, Wang, Rui, Cremers, Daniel

Abstract

We propose D3VO as a novel framework for monocular visual odometry that exploits deep networks on three levels -- deep depth, pose and uncertainty estimation. We first propose a novel self-supervised monocular depth estimation network trained on stereo videos without any external supervision. In particular, it aligns the training image pairs into similar lighting conditions with predictive brightness transformation parameters. In addition, we model the photometric uncertainties of pixels on the input images, which improves the depth estimation accuracy and provides a learned weighting function for the photometric residuals in direct (feature-less) visual odometry. Evaluation results show that the proposed network outperforms state-of-the-art self-supervised depth estimation networks. D3VO tightly incorporates the predicted depth, pose and uncertainty into a direct visual odometry method to boost both the front-end tracking as well as the back-end non-linear optimization. We evaluate D3VO in terms of monocular visual odometry on both the KITTI odometry benchmark and the EuRoC MAV dataset. The results show that D3VO outperforms state-of-the-art traditional monocular VO methods by a large margin. It also achieves comparable results to state-of-the-art stereo/LiDAR odometry on KITTI and to the state-of-the-art visual-inertial odometry on EuRoC MAV, while using only a single camera.
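
The abstract mentions two concrete mechanisms: predictive brightness transformation parameters that bring a training image pair into similar lighting conditions, and a learned per-pixel photometric uncertainty that weights the photometric residuals. Below is a minimal PyTorch-style sketch of how such an uncertainty-weighted, brightness-aligned photometric loss can be written. The tensor shapes, the plain L1 residual (the paper combines L1 with SSIM), and the function name photometric_loss are my own illustrative assumptions, not the authors' implementation.

```python
import torch

def photometric_loss(target, reconstructed, a, b, log_sigma):
    """Uncertainty-weighted photometric loss with affine brightness alignment.

    target, reconstructed: (B, 3, H, W) image tensors
    a, b:                  predicted brightness transformation parameters, (B, 1, 1, 1)
    log_sigma:             predicted per-pixel log uncertainty, (B, 1, H, W)
    (Shapes and the plain L1 residual are assumptions for illustration only.)
    """
    # Align the warped/reconstructed image to the target's lighting condition.
    aligned = a * reconstructed + b

    # Per-pixel photometric residual, averaged over color channels.
    residual = (target - aligned).abs().mean(dim=1, keepdim=True)

    # Heteroscedastic weighting: pixels with large predicted uncertainty are
    # down-weighted, while the +log_sigma term keeps the network from
    # inflating the uncertainty everywhere.
    weighted = residual / log_sigma.exp() + log_sigma
    return weighted.mean()


# Example with random tensors, just to show the expected shapes.
B, H, W = 2, 192, 640
target = torch.rand(B, 3, H, W)
reconstructed = torch.rand(B, 3, H, W)
a = torch.ones(B, 1, 1, 1)
b = torch.zeros(B, 1, 1, 1)
log_sigma = torch.zeros(B, 1, H, W)
print(photometric_loss(target, reconstructed, a, b, log_sigma))
```

The +log_sigma term acts as a regularizer: without it, the network could drive the predicted uncertainty arbitrarily high and zero out every residual, so the loss trades off down-weighting unreliable pixels against an explicit penalty on large uncertainties.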
