Paper Title

wildNeRF: Complete view synthesis of in-the-wild dynamic scenes captured using sparse monocular data

Paper Authors

Shuja Khalid, Frank Rudzicz

Paper Abstract

We present a novel neural radiance model that is trainable in a self-supervised manner for novel-view synthesis of dynamic unstructured scenes. Our end-to-end trainable algorithm learns highly complex, real-world static scenes within seconds and dynamic scenes with both rigid and non-rigid motion within minutes. By differentiating between static and motion-centric pixels, we create high-quality representations from a sparse set of images. We perform extensive qualitative and quantitative evaluation on existing benchmarks and set the state-of-the-art on performance measures on the challenging NVIDIA Dynamic Scenes Dataset. Additionally, we evaluate our model performance on challenging real-world datasets such as Cholec80 and SurgicalActions160.
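
The abstract describes separating static from motion-centric pixels before building the radiance representation. Below is a minimal conceptual sketch of one way such a separation is commonly used: a soft per-pixel motion mask blends the colour predicted by a static radiance field with that of a time-conditioned dynamic field, and the blended render is compared to the observed frame for self-supervision. All array names, the blending scheme, and the loss are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical per-pixel quantities for a single training view; names are
# illustrative only and do not come from the paper.
H, W = 4, 6                                  # tiny image just for the sketch
rgb_static = np.random.rand(H, W, 3)         # colour from a static radiance field
rgb_dynamic = np.random.rand(H, W, 3)        # colour from a dynamic (time-conditioned) field
motion_mask = np.random.rand(H, W, 1)        # soft per-pixel motion score in [0, 1]
rgb_observed = np.random.rand(H, W, 3)       # captured frame used for self-supervision

# Blend the two branches: motion-centric pixels lean on the dynamic field,
# static pixels on the static field.
rgb_pred = motion_mask * rgb_dynamic + (1.0 - motion_mask) * rgb_static

# Self-supervised photometric loss against the captured frame.
photometric_loss = np.mean((rgb_pred - rgb_observed) ** 2)
print(f"photometric loss: {photometric_loss:.4f}")
```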
