Paper Title

Neural3Points: Learning to Generate Physically Realistic Full-body Motion for Virtual Reality Users

Authors

Yongjing Ye, Libin Liu, Lei Hu, Shihong Xia

Abstract

Animating an avatar that reflects a user's actions in the VR world enables natural interactions with the virtual environment. It has the potential to let remote users communicate and collaborate as if they were meeting in person. However, a typical VR system provides only a very sparse set of up to three positional sensors, including a head-mounted display (HMD) and optionally two hand-held controllers, making the estimation of the user's full-body movement a difficult problem. In this work, we present a data-driven physics-based method that predicts the realistic full-body movement of the user from the transformations of these VR trackers and simulates an avatar character to mimic the user's actions in the virtual world in real time. We train our system using reinforcement learning with carefully designed pretraining processes to ensure the success of the training and the quality of the simulation. We demonstrate the effectiveness of the method with an extensive set of examples.
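To make the sparse-input setting concrete, the sketch below shows the shape of the problem the abstract describes: mapping the transforms of just three trackers (HMD plus two controllers) to a full-body pose. This is a hypothetical illustration with untrained placeholder weights and an assumed 20-joint skeleton, not the paper's actual network, reward design, or physics simulation.

```python
import numpy as np

# Assumed dimensions for illustration only (not from the paper):
N_TRACKERS = 3    # HMD + two hand-held controllers
TRACKER_DIM = 7   # 3-D position + unit quaternion per tracker
N_JOINTS = 20     # hypothetical skeleton size

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden, out_dim):
    """A small two-layer MLP with random placeholder weights (untrained)."""
    return {
        "W1": rng.standard_normal((in_dim, hidden)) * 0.1,
        "b1": np.zeros(hidden),
        "W2": rng.standard_normal((hidden, out_dim)) * 0.1,
        "b2": np.zeros(out_dim),
    }

def predict_pose(params, tracker_obs):
    """Map flattened tracker transforms to per-joint rotation targets
    (axis-angle here), the kind of output a simulated avatar would track."""
    x = tracker_obs.reshape(-1)                    # (21,) flattened input
    h = np.tanh(x @ params["W1"] + params["b1"])   # hidden layer
    y = h @ params["W2"] + params["b2"]
    return y.reshape(N_JOINTS, 3)                  # one target per joint

params = init_mlp(N_TRACKERS * TRACKER_DIM, 64, N_JOINTS * 3)
obs = rng.standard_normal((N_TRACKERS, TRACKER_DIM))  # fake tracker readings
pose = predict_pose(params, obs)
print(pose.shape)  # (20, 3)
```

The point of the sketch is the input/output asymmetry: 21 scalar observations must determine 60 joint parameters, which is why the paper resorts to learned priors (pretraining) and physics simulation rather than direct inverse kinematics.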
