Paper Title

MotionMixer: MLP-based 3D Human Body Pose Forecasting

Paper Authors

Arij Bouazizi, Adrian Holzbock, Ulrich Kressel, Klaus Dietmayer, Vasileios Belagiannis

Paper Abstract

In this work, we present MotionMixer, an efficient 3D human body pose forecasting model based solely on multi-layer perceptrons (MLPs). MotionMixer learns the spatial-temporal 3D body pose dependencies by sequentially mixing both modalities. Given a stacked sequence of 3D body poses, a spatial MLP extracts fine-grained spatial dependencies of the body joints. The interaction of the body joints over time is then modelled by a temporal MLP. The spatial-temporal mixed features are finally aggregated and decoded to obtain the future motion. To calibrate the influence of each time step in the pose sequence, we make use of squeeze-and-excitation (SE) blocks. We evaluate our approach on Human3.6M, AMASS, and 3DPW datasets using the standard evaluation protocols. For all evaluations, we demonstrate state-of-the-art performance, while having a model with a smaller number of parameters. Our code is available at: https://github.com/MotionMLP/MotionMixer
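
To make the abstract's architecture concrete, below is a minimal PyTorch sketch of the spatial-then-temporal MLP mixing with SE-based time-step calibration it describes. This is not the official implementation (see the linked repository); the class names, hidden sizes, block count, joint count, and sequence lengths are illustrative assumptions.

```python
# Minimal sketch of the MotionMixer idea described in the abstract.
# Not the official implementation; sizes and layer choices are assumptions.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation over the temporal axis: re-weights time steps."""
    def __init__(self, num_frames: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(num_frames, num_frames // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_frames // reduction, num_frames),
            nn.Sigmoid(),
        )

    def forward(self, x):             # x: (batch, frames, pose_dim)
        s = x.mean(dim=-1)            # squeeze: average over pose features
        w = self.fc(s).unsqueeze(-1)  # excitation: one weight per time step
        return x * w                  # calibrate the influence of each frame


class MixerBlock(nn.Module):
    """One spatial-then-temporal MLP mixing block."""
    def __init__(self, num_frames: int, pose_dim: int, hidden: int = 128):
        super().__init__()
        self.norm_s = nn.LayerNorm(pose_dim)
        self.spatial_mlp = nn.Sequential(   # mixes joint coordinates per frame
            nn.Linear(pose_dim, hidden), nn.GELU(), nn.Linear(hidden, pose_dim)
        )
        self.norm_t = nn.LayerNorm(num_frames)
        self.temporal_mlp = nn.Sequential(  # mixes frames per pose feature
            nn.Linear(num_frames, hidden), nn.GELU(), nn.Linear(hidden, num_frames)
        )
        self.se = SEBlock(num_frames)

    def forward(self, x):                   # x: (batch, frames, pose_dim)
        x = x + self.spatial_mlp(self.norm_s(x))
        t = x.transpose(1, 2)               # (batch, pose_dim, frames)
        t = t + self.temporal_mlp(self.norm_t(t))
        x = t.transpose(1, 2)
        return self.se(x)


class MotionMixerSketch(nn.Module):
    def __init__(self, num_joints=22, in_frames=10, out_frames=25, blocks=4):
        super().__init__()
        pose_dim = num_joints * 3
        self.blocks = nn.Sequential(
            *[MixerBlock(in_frames, pose_dim) for _ in range(blocks)]
        )
        # aggregate the mixed features and decode the future motion
        self.decode = nn.Linear(in_frames * pose_dim, out_frames * pose_dim)
        self.out_frames, self.pose_dim = out_frames, pose_dim

    def forward(self, poses):               # poses: (batch, in_frames, J*3)
        h = self.blocks(poses)
        future = self.decode(h.flatten(1))
        return future.view(-1, self.out_frames, self.pose_dim)


if __name__ == "__main__":
    model = MotionMixerSketch()
    past = torch.randn(8, 10, 22 * 3)       # 10 observed frames, 22 joints
    print(model(past).shape)                 # torch.Size([8, 25, 66])
```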
