Paper Title
Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes
Paper Authors
Paper Abstract
Synthesizing 3D human motion plays an important role in many graphics applications as well as in understanding human activity. While many efforts have been made on generating realistic and natural human motion, most approaches neglect the importance of modeling human-scene interactions and affordance. On the other hand, affordance reasoning (e.g., standing on the floor or sitting on a chair) has mainly been studied with static human poses and gestures, and has rarely been addressed with human motion. In this paper, we propose to bridge human motion synthesis and scene affordance reasoning. We present a hierarchical generative framework that synthesizes long-term 3D human motion conditioned on the 3D scene structure. Building on this framework, we further enforce multiple geometry constraints between the human mesh and the scene point cloud via optimization to improve the realism of the synthesized motion. Our experiments show significant improvements over previous approaches in generating natural and physically plausible human motion in a scene.
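To make the geometry-constraint idea concrete, below is a minimal PyTorch sketch of the kind of optimization the abstract describes: a contact term that pulls designated body vertices toward the scene point cloud, plus a (simplified) penetration term. The function names, loss weights, vertex counts, and the floor-plane penetration proxy are all illustrative assumptions, not the authors' implementation.

```python
import torch


def frame_scene_losses(body_verts, contact_idx, scene_points, floor_z=0.0):
    """Geometry terms for one motion frame (illustrative sketch only).

    body_verts   : (V, 3) posed human-mesh vertices
    contact_idx  : indices of vertices expected to touch the scene (e.g., feet)
    scene_points : (N, 3) scene point cloud
    """
    # Contact: pull designated body vertices toward their nearest scene points.
    d = torch.cdist(body_verts[contact_idx], scene_points)     # (C, N) pairwise distances
    loss_contact = d.min(dim=1).values.mean()

    # Penetration (crude floor-plane proxy; a scene SDF would be used in practice):
    # penalize vertices that fall below the floor height.
    loss_pen = torch.relu(floor_z - body_verts[:, 2]).mean()

    return loss_contact, loss_pen


# Toy usage: optimize a single global translation of the motion sequence.
T, V, N = 60, 6890, 20000                      # frames, mesh vertices, scene points (placeholders)
motion = torch.randn(T, V, 3)                  # placeholder synthesized motion
scene = torch.randn(N, 3)                      # placeholder scene point cloud
contact_idx = torch.arange(0, 60)              # placeholder "foot" vertex ids

offset = torch.zeros(1, 1, 3, requires_grad=True)
opt = torch.optim.Adam([offset], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    verts = motion + offset
    lc, lp = zip(*(frame_scene_losses(v, contact_idx, scene) for v in verts))
    loss = torch.stack(lc).mean() + 10.0 * torch.stack(lp).mean()
    loss.backward()
    opt.step()
```

In the paper's setting the optimized variables would be the body-model parameters of the generated motion rather than a single translation, and additional terms (e.g., smoothness over time) would typically be included; this sketch only illustrates how contact and penetration constraints between the human mesh and the scene can be expressed as differentiable losses.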