Title
NeuralHOFusion: Neural Volumetric Rendering under Human-object Interactions
Authors
Abstract
4D modeling of human-object interactions is critical for numerous applications. However, efficient volumetric capture and rendering of complex interaction scenarios, especially from sparse inputs, remain challenging. In this paper, we propose NeuralHOFusion, a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors. It marries traditional non-rigid fusion with recent advances in neural implicit modeling and blending, where the captured humans and objects are disentangled layer-wise. For geometry modeling, we propose a neural implicit inference scheme with non-rigid key-volume fusion, as well as a template-aided robust object tracking pipeline. Our scheme enables detailed and complete geometry generation under complex interactions and occlusions. Moreover, we introduce a layer-wise human-object texture rendering scheme, which combines volumetric and image-based rendering in both the spatial and temporal domains to obtain photo-realistic results. Extensive experiments demonstrate the effectiveness and efficiency of our approach in synthesizing photo-realistic free-view results under complex human-object interactions.
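For context, the neural volumetric rendering referred to in the title typically follows the standard volume rendering integral along each camera ray. This is the general formulation common to neural implicit rendering methods, not necessarily the exact form used in this paper:

```latex
% Standard volume rendering along a ray r(t) = o + t d,
% with density sigma and view-dependent color c:
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,
                \mathbf{c}(\mathbf{r}(t), \mathbf{d})\,\mathrm{d}t,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,\mathrm{d}s\right)
```

Here $T(t)$ is the accumulated transmittance, so occluded samples contribute less color; a layer-wise scheme such as the one described in the abstract would evaluate such integrals per layer (human and object separately) before compositing.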