Paper Title

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans

Paper Authors

Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou

Paper Abstract

This paper addresses the challenge of novel view synthesis for a human performer from a very sparse set of camera views. Some recent works have shown that learning implicit neural representations of 3D scenes achieves remarkable view synthesis quality given dense input views. However, the representation learning will be ill-posed if the views are highly sparse. To solve this ill-posed problem, our key idea is to integrate observations over video frames. To this end, we propose Neural Body, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated. The deformable mesh also provides geometric guidance for the network to learn 3D representations more efficiently. To evaluate our approach, we create a multi-view dataset named ZJU-MoCap that captures performers with complex motions. Experiments on ZJU-MoCap show that our approach outperforms prior works by a large margin in terms of novel view synthesis quality. We also demonstrate the capability of our approach to reconstruct a moving person from a monocular video on the People-Snapshot dataset. The code and dataset are available at https://zju3dv.github.io/neuralbody/.
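
The abstract's key idea, a single set of latent codes anchored to the vertices of a deformable body mesh and shared across all frames, can be illustrated with a small, hypothetical PyTorch sketch. This is not the authors' implementation (which diffuses the codes with a sparse 3D convolutional network before volume rendering); the nearest-vertex lookup, code dimension, and toy MLP below are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' implementation): per-vertex latent codes shared
# across frames, attached to a posed mesh, and used to condition a NeRF-style MLP.
# The paper diffuses the codes with a sparse 3D CNN; here a nearest-vertex lookup
# is used only for illustration. Code size and MLP width are assumptions.
import torch
import torch.nn as nn

N_VERTS, CODE_DIM = 6890, 16  # SMPL has 6890 vertices; the code dimension is assumed


class NeuralBodySketch(nn.Module):
    def __init__(self):
        super().__init__()
        # One set of latent codes, shared by every frame of the video.
        self.latent_codes = nn.Embedding(N_VERTS, CODE_DIM)
        # Toy MLP mapping (query point, interpolated code) -> (density, rgb).
        self.mlp = nn.Sequential(
            nn.Linear(3 + CODE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def forward(self, query_pts, posed_verts):
        """query_pts: (M, 3) sample points; posed_verts: (N_VERTS, 3) mesh for this frame."""
        # Attach each query point to its nearest mesh vertex (simplified code lookup).
        dists = torch.cdist(query_pts, posed_verts)           # (M, N_VERTS)
        nearest = dists.argmin(dim=1)                          # (M,)
        codes = self.latent_codes(nearest)                     # (M, CODE_DIM)
        out = self.mlp(torch.cat([query_pts, codes], dim=1))   # (M, 4)
        sigma, rgb = out[:, :1], torch.sigmoid(out[:, 1:])
        return sigma, rgb


# Usage: the same module (and hence the same latent codes) is queried for every frame;
# only the posed vertices change from frame to frame.
model = NeuralBodySketch()
pts = torch.rand(1024, 3)
verts_frame_t = torch.rand(N_VERTS, 3)
sigma, rgb = model(pts, verts_frame_t)
```

Because the same `latent_codes` embedding is reused for every frame while only the posed vertices change, supervision from all video frames constrains one shared set of codes, which is what makes learning from very sparse camera views tractable.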
