Paper Title
DevNet: Self-supervised Monocular Depth Learning via Density Volume Construction
Paper Authors
Paper Abstract
Self-supervised depth learning from monocular images normally relies on 2D pixel-wise photometric relations between temporally adjacent image frames. However, such methods neither fully exploit 3D point-wise geometric correspondences nor effectively resolve the ambiguities in photometric warping caused by occlusions or illumination inconsistency. To address these problems, this work proposes the Density Volume Construction Network (DevNet), a novel self-supervised monocular depth learning framework that incorporates 3D spatial information and exploits stronger geometric constraints among adjacent camera frustums. Instead of directly regressing per-pixel depth values from a single image, DevNet divides the camera frustum into multiple parallel planes and predicts the point-wise occlusion probability density on each plane. The final depth map is generated by integrating the density along the corresponding rays. During training, novel regularization strategies and loss functions are introduced to mitigate photometric ambiguities and overfitting. Without noticeably enlarging the model's parameter count or running time, DevNet outperforms several representative baselines on both the KITTI-2015 outdoor dataset and the NYU-V2 indoor dataset. In particular, DevNet reduces the root-mean-square deviation of depth estimation by around 4% on both KITTI-2015 and NYU-V2. Code is available at https://github.com/gitkaichenzhou/DevNet.
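The abstract's density-to-depth step can be illustrated with a minimal sketch. This assumes NeRF-style volume rendering weights (transmittance times per-plane occlusion probability), which is a common way to integrate density along a ray; the function name and symbols (`sigma`, `deltas`) are ours, not from the paper:

```python
import numpy as np

def depth_from_density(sigma, plane_depths):
    """Integrate per-plane occlusion densities along one ray into a depth value.

    sigma:        (N,) non-negative densities predicted on N parallel frustum
                  planes, ordered near to far.
    plane_depths: (N,) depth of each plane along the ray.
    """
    # Spacing between consecutive planes; repeat the last gap for the final plane.
    deltas = np.diff(plane_depths,
                     append=plane_depths[-1] + (plane_depths[-1] - plane_depths[-2]))
    # Probability that the ray is occluded within each plane's slab.
    alpha = 1.0 - np.exp(-sigma * deltas)
    # Transmittance: probability the ray reaches plane i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha
    # Expected depth along the ray.
    return float(np.sum(weights * plane_depths))
```

In practice this integration would run per pixel over the whole frustum; concentrating all density on one plane makes the expected depth collapse to that plane's depth, which matches the intuition of an occlusion probability density.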