Paper Title
NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors
Paper Authors
Paper Abstract
Though Neural Radiance Field (NeRF) demonstrates compelling novel view synthesis results, it is still unintuitive to edit a pre-trained NeRF because the neural network's parameters and the scene geometry/appearance are often not explicitly associated. In this paper, we introduce the first framework that enables users to remove unwanted objects or retouch undesired regions in a 3D scene represented by a pre-trained NeRF without any category-specific data or training. The user first draws a free-form mask over a rendered view from the pre-trained NeRF to specify a region containing unwanted objects. Our framework then transfers the user-provided mask to other rendered views and estimates guiding color and depth images within these transferred masked regions. Next, we formulate an optimization problem that jointly inpaints the image content in all masked regions across multiple views by updating the NeRF model's parameters. We demonstrate our framework on diverse scenes and show that it obtains visually plausible and structurally consistent results across multiple views while requiring less time and manual user effort.
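The optimization step described in the abstract, fitting the NeRF's rendered output to the guiding color and depth images inside the masked regions of each view, can be sketched as a per-view guidance loss. This is a minimal illustrative sketch, not the paper's actual objective: the function name, argument layout, and loss weights are all assumptions, and the real method operates on a differentiable NeRF renderer rather than raw pixel arrays.

```python
def masked_guidance_loss(rendered_rgb, guide_rgb, rendered_depth, guide_depth,
                         mask_pixels, w_rgb=1.0, w_depth=1.0):
    """Hypothetical loss for one view: penalize deviation of the NeRF's
    rendered colors/depths from the guiding images, but only inside the
    transferred mask. `mask_pixels` is a list of (row, col) coordinates;
    the weights w_rgb/w_depth are illustrative, not from the paper."""
    n = len(mask_pixels)
    # Mean squared color error over masked pixels (3 channels each).
    l_rgb = sum(sum((rendered_rgb[r][c][k] - guide_rgb[r][c][k]) ** 2
                    for k in range(3))
                for r, c in mask_pixels) / n
    # Mean squared depth error over the same masked pixels.
    l_depth = sum((rendered_depth[r][c] - guide_depth[r][c]) ** 2
                  for r, c in mask_pixels) / n
    return w_rgb * l_rgb + w_depth * l_depth

# Toy example: a 2x2 view where one masked pixel disagrees with the guide.
rendered_rgb = [[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]],
                [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]]
guide_rgb = [[[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]],
             [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]]
rendered_depth = [[1.0, 1.0], [1.0, 1.0]]
guide_depth = [[2.0, 1.0], [1.0, 1.0]]
loss = masked_guidance_loss(rendered_rgb, guide_rgb,
                            rendered_depth, guide_depth,
                            mask_pixels=[(0, 0)])
# loss = 1.0 * 1.0 + 1.0 * 1.0 = 2.0
```

In the paper's setting, this loss would be summed over all views with transferred masks and minimized with respect to the NeRF model's parameters via gradient descent, which is what enforces multi-view consistency of the inpainted content.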