Paper Title

Differentiable Rendering of Neural SDFs through Reparameterization

Paper Authors

Sai Praveen Bangaru, Michaël Gharbi, Tzu-Mao Li, Fujun Luan, Kalyan Sunkavalli, Miloš Hašan, Sai Bi, Zexiang Xu, Gilbert Bernstein, Frédo Durand

Paper Abstract

We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDF renderers. Recent physically-based differentiable rendering techniques for meshes have used edge-sampling to handle discontinuities, particularly at object silhouettes, but SDFs do not have a simple parametric form amenable to sampling. Instead, our approach builds on area-sampling techniques and develops a continuous warping function for SDFs to account for these discontinuities. Our method leverages the distance to surface encoded in an SDF and uses quadrature on sphere tracer points to compute this warping function. We further show that this can be done by subsampling the points to make the method tractable for neural SDFs. Our differentiable renderer can be used to optimize neural shapes from multi-view images and produces comparable 3D reconstructions to recent SDF-based inverse rendering methods, without the need for 2D segmentation masks to guide the geometry optimization and without volumetric approximations to the geometry.
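To make the two ingredients mentioned in the abstract concrete, below is a minimal, self-contained Python/NumPy sketch of sphere tracing an SDF and of a distance-weighted quadrature over the visited points to estimate a warp value along one ray. This is an illustrative sketch only, not the paper's implementation: the analytic `sphere_sdf`, the `1/|d|^power` weighting in `warp_quadrature`, and the constant `velocity` field are placeholder assumptions standing in for the neural SDF, the paper's harmonic weights, and the boundary velocity of the geometry.

```python
import numpy as np

def sphere_sdf(p, center=np.array([0.0, 0.0, 3.0]), radius=1.0):
    """Analytic SDF of a sphere; a stand-in for a neural SDF network."""
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, n_steps=64, eps=1e-4):
    """March along the ray, stepping by the SDF value; record every visited point."""
    t, points = 0.0, []
    for _ in range(n_steps):
        p = origin + t * direction
        d = sdf(p)
        points.append((p, d))
        if d < eps:          # close enough to the surface: stop
            break
        t += d               # safe step: the SDF bounds the distance to geometry
    return points

def warp_quadrature(points, velocity, power=4.0):
    """Distance-weighted quadrature over sphere-tracer points.

    `velocity(p)` is assumed to give the motion of nearby geometry with respect
    to a scene parameter (a constant placeholder here).  Points with small |d|
    (near the surface, hence near silhouettes) dominate the average.  The
    1/|d|^power weighting is an illustrative choice, not the paper's exact
    formulation.
    """
    weights = np.array([1.0 / (abs(d) ** power + 1e-8) for _, d in points])
    vels = np.array([velocity(p) for p, _ in points])
    return (weights[:, None] * vels).sum(axis=0) / weights.sum()

# Example: a ray that grazes the sphere, with a dummy unit velocity field.
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.32, 0.0, 0.95])
direction /= np.linalg.norm(direction)
pts = sphere_trace(origin, direction, sphere_sdf)
warp = warp_quadrature(pts, velocity=lambda p: np.array([1.0, 0.0, 0.0]))
print("visited", len(pts), "points; warp estimate:", warp)
```

In the actual method, this kind of warp reparameterizes the pixel integral so that automatic differentiation of the rendered image with respect to SDF parameters correctly accounts for silhouette discontinuities; the sketch only shows the forward quadrature over sphere-tracer points.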
