Paper Title

DARF: Depth-Aware Generalizable Neural Radiance Field

Paper Authors

Yue Shi, Dingyi Rong, Chang Chen, Chaofan Ma, Bingbing Ni, Wenjun Zhang

Paper Abstract

Neural Radiance Field (NeRF) has revolutionized novel-view rendering tasks and achieved impressive results. However, inefficient sampling and per-scene optimization hinder its wide application. Though some generalizable NeRFs have been proposed, their rendering quality is unsatisfactory due to the lack of geometry and scene uniqueness. To address these issues, we propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy to perform efficient novel-view rendering and unsupervised depth estimation on unseen scenes without per-scene optimization. Distinct from most existing generalizable NeRFs, our framework infers unseen scenes at both the pixel level and the geometry level from only a few input images. By introducing a pre-trained depth estimation module to derive a depth prior, narrowing the ray sampling interval to the proximity of the estimated surface, and sampling at the position of the expected maximum, we preserve scene characteristics while learning attributes common to novel-view synthesis. Moreover, we introduce a Multi-level Semantic Consistency loss (MSC) to assist with more informative representation learning. Extensive experiments on indoor and outdoor datasets show that, compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50% while improving rendering quality and depth estimation. Our code is available at https://github.com/shiyue001/GARF.git.
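To make the sampling idea in the abstract concrete, below is a minimal, hypothetical sketch of the interval-narrowing step behind Depth-Aware Dynamic Sampling: instead of sampling the full near/far range of each ray, points are drawn only in a band around a per-ray depth prior from a pre-trained depth estimator. The function name, the `margin` hyper-parameter, and the uniform in-band spacing are assumptions for illustration, not details taken from the paper, and the sketch omits the subsequent expected-maximum resampling step.

```python
import torch

def depth_aware_sample(rays_o, rays_d, depth_prior, n_samples=32, margin=0.1):
    """Hypothetical sketch of depth-guided ray sampling (not the official DADS code).

    rays_o:      [N_rays, 3] ray origins
    rays_d:      [N_rays, 3] ray directions
    depth_prior: [N_rays]    per-ray depth from a pre-trained depth estimator
    margin:      assumed fractional width of the sampling band around the prior
    """
    # Narrow the sampling interval to the proximity of the estimated surface.
    near = depth_prior * (1.0 - margin)                                   # [N_rays]
    far = depth_prior * (1.0 + margin)                                    # [N_rays]

    # Uniformly place samples inside the narrowed per-ray interval.
    t_vals = torch.linspace(0.0, 1.0, n_samples, device=rays_o.device)   # [n_samples]
    z_vals = near[:, None] + (far - near)[:, None] * t_vals[None, :]     # [N_rays, n_samples]

    # Lift sample depths to 3D points along each ray.
    pts = rays_o[:, None, :] + rays_d[:, None, :] * z_vals[..., None]    # [N_rays, n_samples, 3]
    return pts, z_vals
```

Because the band is much shorter than the full near/far range, the same rendering quality can be targeted with far fewer samples per ray, which is the mechanism behind the reported 50% sample reduction.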
