Paper Title
Body2Hands: Learning to Infer 3D Hands from Conversational Gesture Body Dynamics
Paper Authors
Paper Abstract
We propose a novel learned deep prior of body motion for 3D hand shape synthesis and estimation in the domain of conversational gestures. Our model builds upon the insight that body motion and hand gestures are strongly correlated in non-verbal communication settings. We formulate the learning of this prior as a prediction task of 3D hand shape over time given body motion input alone. Trained with 3D pose estimations obtained from a large-scale dataset of internet videos, our hand prediction model produces convincing 3D hand gestures given only the 3D motion of the speaker's arms as input. We demonstrate the efficacy of our method on hand gesture synthesis from body motion input, and as a strong body prior for single-view image-based 3D hand pose estimation. We demonstrate that our method outperforms previous state-of-the-art approaches and can generalize beyond the monologue-based training data to multi-person conversations. Video results are available at http://people.eecs.berkeley.edu/~evonne_ng/projects/body2hands/.
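The abstract frames the learned prior as a sequence prediction problem: given only the speaker's 3D arm motion over time, predict the corresponding 3D hand pose trajectory. Below is a minimal sketch of how such a body-to-hand prediction task could be set up in PyTorch; the joint counts, rotation dimensionality, GRU backbone, and the name `Body2HandsSketch` are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch (not the paper's implementation): a sequence model that maps
# 3D arm motion over time to 3D hand pose over time, as the abstract describes.
# All shapes and joint counts below are illustrative assumptions.
import torch
import torch.nn as nn

class Body2HandsSketch(nn.Module):
    def __init__(self, arm_dim=6 * 6, hand_dim=42 * 6, hidden=256):
        # arm_dim: per-frame arm features (e.g., 6 arm joints x 6D rotation) -- assumed
        # hand_dim: per-frame hand pose output (e.g., 42 hand joints x 6D rotation) -- assumed
        super().__init__()
        self.encoder = nn.GRU(arm_dim, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.Linear(hidden, hand_dim)

    def forward(self, arm_motion):
        # arm_motion: (batch, time, arm_dim) sequence of the speaker's arm motion
        feats, _ = self.encoder(arm_motion)   # temporal features per frame
        return self.decoder(feats)            # (batch, time, hand_dim) predicted hand poses

# Example usage: predict hand poses for a 64-frame arm-motion clip.
model = Body2HandsSketch()
arm_seq = torch.randn(1, 64, 6 * 6)
hand_seq = model(arm_seq)
# Training would minimize, e.g., an L2 loss against pseudo-ground-truth hand poses
# estimated from internet videos, consistent with the supervision the abstract mentions.
```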