Paper Title

MonoNeuralFusion: Online Monocular Neural 3D Reconstruction with Geometric Priors

Paper Authors

Zou, Zi-Xin, Huang, Shi-Sheng, Cao, Yan-Pei, Mu, Tai-Jiang, Shan, Ying, Fu, Hongbo

Paper Abstract

High-fidelity 3D scene reconstruction from monocular videos continues to be challenging, especially for complete and fine-grained geometry reconstruction. The previous 3D reconstruction approaches with neural implicit representations have shown a promising ability for complete scene reconstruction, while their results are often over-smooth and lack enough geometric details. This paper introduces a novel neural implicit scene representation with volume rendering for high-fidelity online 3D scene reconstruction from monocular videos. For fine-grained reconstruction, our key insight is to incorporate geometric priors into both the neural implicit scene representation and neural volume rendering, thus leading to an effective geometry learning mechanism based on volume rendering optimization. Benefiting from this, we present MonoNeuralFusion to perform the online neural 3D reconstruction from monocular videos, by which the 3D scene geometry is efficiently generated and optimized during the on-the-fly 3D monocular scanning. The extensive comparisons with state-of-the-art approaches show that our MonoNeuralFusion consistently generates much better complete and fine-grained reconstruction results, both quantitatively and qualitatively.
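The abstract's core idea, learning scene geometry through volume rendering of a neural implicit (SDF) representation while constraining it with geometric priors, can be illustrated with a minimal sketch. This is a hedged NumPy illustration, not the paper's implementation: it assumes a NeuS-style SDF-to-opacity conversion and a normal-consistency prior term; the function names, the `inv_s` sharpness parameter, and the exact form of the prior loss are all assumptions for illustration only.

```python
import numpy as np

def sdf_to_alpha(sdf, inv_s=64.0):
    """Convert consecutive SDF samples along a ray into per-interval opacity.

    Assumed NeuS-style rule:
        alpha_i = max((sigmoid(s*f_i) - sigmoid(s*f_{i+1})) / sigmoid(s*f_i), 0)
    so opacity concentrates where the SDF crosses zero (the surface).
    """
    sig = 1.0 / (1.0 + np.exp(-inv_s * sdf))
    return np.clip((sig[:-1] - sig[1:]) / np.maximum(sig[:-1], 1e-8), 0.0, 1.0)

def volume_render(values, alpha):
    """Alpha-composite per-sample values with weights w_i = T_i * alpha_i,
    where T_i is the accumulated transmittance up to sample i."""
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return float((weights * values).sum()), weights

def normal_prior_loss(rendered_normal, prior_normal):
    """Hypothetical geometric-prior term: penalize angular deviation of the
    rendered surface normal from a monocular normal estimate (1 - cosine)."""
    n1 = rendered_normal / np.linalg.norm(rendered_normal)
    n2 = prior_normal / np.linalg.norm(prior_normal)
    return 1.0 - float(n1 @ n2)
```

During optimization, the rendered color/depth loss and a prior term like `normal_prior_loss` would be summed and backpropagated through the SDF network; here plain NumPy stands in for that differentiable pipeline.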
