Paper Title
Fast Depth Estimation for View Synthesis
Paper Authors
Paper Abstract
Disparity/depth estimation from sequences of stereo images is an important element of 3D vision. Owing to occlusions, imperfect settings, and homogeneous luminance, accurate estimation of depth remains a challenging problem. Targeting view synthesis, we propose a novel learning-based framework that makes use of dilated convolutions, densely connected convolutional modules, a compact decoder, and skip connections. The network is shallow but dense, so it is both fast and accurate. Two additional contributions, a non-linear adjustment of the depth resolution and the introduction of a projection loss, reduce the estimation error by up to 20% and 25%, respectively. The results show that our network outperforms state-of-the-art methods, improving the accuracy of depth estimation and view synthesis by approximately 45% and 34% on average, respectively. Where our method produces estimated depth of comparable quality, it runs 10 times faster than those methods.
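The abstract gives no code, but the architectural ingredients it names (dilated convolutions, dense connectivity, a compact decoder, and skip connections) can be made concrete. Below is a minimal PyTorch sketch of such a shallow-but-dense network; the class names, channel widths, growth rate, and dilation rates are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseDilatedBlock(nn.Module):
    """Densely connected block of dilated 3x3 convolutions: every layer
    receives the concatenation of all earlier feature maps, and increasing
    dilation grows the receptive field without extra depth or pooling."""
    def __init__(self, in_ch, growth=16, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True)))
            ch += growth  # dense connectivity: each layer sees all earlier features
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class FastDepthNet(nn.Module):
    """Shallow encoder (a single downsampling step), one dense dilated block,
    and a compact decoder joined to the input resolution by a skip path."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.skip = nn.Conv2d(in_ch, 16, 3, padding=1)          # full-resolution skip features
        self.stem = nn.Conv2d(16, 32, 3, stride=2, padding=1)   # encode at 1/2 resolution
        self.block = DenseDilatedBlock(32)
        self.reduce = nn.Conv2d(self.block.out_ch, 32, 1)       # compact 1x1 bottleneck decoder
        self.head = nn.Conv2d(32 + 16, 1, 3, padding=1)

    def forward(self, x):
        s = F.relu(self.skip(x))
        f = self.block(F.relu(self.stem(s)))
        f = F.interpolate(F.relu(self.reduce(f)), size=x.shape[2:],
                          mode="bilinear", align_corners=False)
        d = self.head(torch.cat([f, s], dim=1))                 # skip connection
        return torch.sigmoid(d)                                 # normalized disparity in (0, 1)
```

Keeping the encoder this shallow is what the speed claim rests on: dilation and dense reuse of features supply a large receptive field that deeper networks usually buy with many strided stages.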
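The projection loss mentioned in the abstract is, in most stereo pipelines, a photometric consistency term: the predicted disparity warps one view into the other, and the warp is compared against the real image. The sketch below shows that general idea only; the paper's exact formulation may differ, and it assumes a rectified stereo pair with disparity normalized by image width.

```python
def projection_loss(left, right, disparity):
    """Warp the right view into the left view using the predicted disparity
    and penalize the photometric (L1) difference against the real left image.
    `disparity` is (B, 1, H, W), assumed normalized by image width."""
    _, _, h, w = left.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=left.device),
        torch.linspace(-1.0, 1.0, w, device=left.device),
        indexing="ij")
    # For rectified stereo, x_right = x_left - d; grid units span 2, so scale by 2.
    xs = xs.unsqueeze(0) - 2.0 * disparity.squeeze(1)
    grid = torch.stack([xs, ys.unsqueeze(0).expand_as(xs)], dim=-1)
    warped = F.grid_sample(right, grid, align_corners=True)
    return F.l1_loss(warped, left)
```

In training, such a term would be added to the supervised depth loss, which matches the reported effect of the projection loss reducing the estimation error rather than replacing direct supervision.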