Paper Title

Kinematics-Guided Reinforcement Learning for Object-Aware 3D Ego-Pose Estimation

Authors

Zhengyi Luo, Ryo Hachiuma, Ye Yuan, Shun Iwase, Kris M. Kitani

Abstract

We propose a method for incorporating object interaction and human body dynamics into the task of 3D ego-pose estimation using a head-mounted camera. We use a kinematics model of the human body to represent the entire range of human motion, and a dynamics model of the body to interact with objects inside a physics simulator. By bringing together object modeling, kinematics modeling, and dynamics modeling in a reinforcement learning (RL) framework, we enable object-aware 3D ego-pose estimation. We devise several representational innovations through the design of the state and action space to incorporate 3D scene context and improve pose estimation quality. We also construct a fine-tuning step to correct the drift and refine the estimated human-object interaction. This is the first work to estimate a physically valid 3D full-body interaction sequence with objects (e.g., chairs, boxes, obstacles) from egocentric videos. Experiments with both controlled and in-the-wild settings show that our method can successfully extract an object-conditioned 3D ego-pose sequence that is consistent with the laws of physics.
