Paper Title

PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo

Paper Authors

Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong

Paper Abstract

Traditional multi-view photometric stereo (MVPS) methods are often composed of multiple disjoint stages, resulting in noticeable accumulated errors. In this paper, we present a neural inverse rendering method for MVPS based on implicit representation. Given multi-view images of a non-Lambertian object illuminated by multiple unknown directional lights, our method jointly estimates the geometry, materials, and lights. Our method first employs multi-light images to estimate per-view surface normal maps, which are used to regularize the normals derived from the neural radiance field. It then jointly optimizes the surface normals, spatially-varying BRDFs, and lights based on a shadow-aware differentiable rendering layer. After optimization, the reconstructed object can be used for novel-view rendering, relighting, and material editing. Experiments on both synthetic and real datasets demonstrate that our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods. Our code and model can be found at https://ywq.github.io/psnerf.
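The abstract describes two key ingredients: a differentiable rendering loss that accounts for shadows, and a regularization term that aligns normals derived from the neural radiance field with the per-view normals estimated by photometric stereo. The sketch below illustrates these two losses in a simplified form; the function names and the use of a plain Lambertian shading model are illustrative assumptions, not the paper's exact formulation (which uses spatially-varying BRDFs).

```python
import numpy as np

def shaded_render(normals, light_dir, albedo, shadow):
    """Render per-pixel intensity as albedo * max(0, n.l) * shadow visibility.

    A simplified stand-in for the paper's shadow-aware differentiable
    rendering layer, using Lambertian shading for illustration.
    """
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * np.clip(n @ l, 0.0, None) * shadow

def normal_consistency_loss(n_field, n_ps):
    """1 - mean cosine similarity between radiance-field normals and
    photometric-stereo normals (the regularization idea in the abstract)."""
    a = n_field / np.linalg.norm(n_field, axis=-1, keepdims=True)
    b = n_ps / np.linalg.norm(n_ps, axis=-1, keepdims=True)
    return 1.0 - np.mean(np.sum(a * b, axis=-1))
```

For example, a pixel whose normal points straight at the light with unit albedo and no shadow renders to intensity 1, and identical normal maps yield zero consistency loss.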
