Paper Title


Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images

Authors

Sai Bi, Zexiang Xu, Kalyan Sunkavalli, David Kriegman, Ravi Ramamoorthi

Abstract


We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object from a sparse set of only six images captured by wide-baseline cameras under collocated point lighting. We first estimate per-view depth maps using a deep multi-view stereo network; these depth maps are used to coarsely align the different views. We propose a novel multi-view reflectance estimation network architecture that is trained to pool features from these coarsely aligned images and predict per-view spatially-varying diffuse albedo, surface normals, specular roughness and specular albedo. We do this by jointly optimizing the latent space of our multi-view reflectance network to minimize the photometric error between images rendered with our predictions and the input images. While previous state-of-the-art methods fail on such sparse acquisition setups, we demonstrate, via extensive experiments on synthetic and real data, that our method produces high-quality reconstructions that can be used to render photorealistic images.
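The final step of the pipeline described in the abstract is a test-time refinement: the latent space of the multi-view reflectance network is optimized so that images re-rendered from the predicted SVBRDF maps match the captured inputs. The sketch below illustrates this idea only; `reflectance_decoder` and `render_collocated` are hypothetical stand-ins for the paper's decoder and its differentiable collocated point-light renderer, and the loop is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def refine_latent(latent, reflectance_decoder, render_collocated,
                  input_images, cameras, depths, steps=200, lr=1e-2):
    """Minimal sketch of latent-space photometric refinement.

    `reflectance_decoder` and `render_collocated` are assumed callables
    standing in for the paper's network decoder and renderer.
    """
    latent = latent.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Decode per-view diffuse albedo, surface normals, specular
        # roughness and specular albedo from the shared latent code.
        albedo, normals, roughness, specular = reflectance_decoder(latent)
        # Re-render each view under its collocated point light using the
        # coarse per-view depth for geometry.
        rendered = render_collocated(albedo, normals, roughness, specular,
                                     cameras, depths)
        # Photometric error between the renderings and the captured images.
        loss = F.mse_loss(rendered, input_images)
        loss.backward()
        optimizer.step()
    return latent.detach()
```

Optimizing the latent code rather than the BRDF maps directly keeps the refined estimates on the manifold learned by the reflectance network, which is what allows the method to remain stable with only six wide-baseline input views.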
