Paper Title

Learning to Estimate External Forces of Human Motion in Video

Authors

Nathan Louis, Tylan N. Templin, Travis D. Eliason, Daniel P. Nicolella, Jason J. Corso

Abstract

Analyzing sports performance or preventing injuries requires capturing the ground reaction forces (GRFs) exerted by the human body during certain movements. Standard practice uses physical markers paired with force plates in a controlled environment, but this is marred by high costs, lengthy implementation time, and variance across repeat experiments; hence, we propose GRF inference from video. While recent work has used LSTMs to estimate GRFs from 2D viewpoints, these can be limited in their modeling and representation capacity. First, we propose using a transformer architecture for the GRF-from-video task, the first work to do so. Then we introduce a new loss to minimize errors at high-impact peaks in the regressed curves. We also show that pre-training and multi-task learning on 2D-to-3D human pose estimation improve generalization to unseen motions, and that pre-training on this different task provides good initial weights when fine-tuning on smaller (rarer) GRF datasets. We evaluate on LAAS Parkour and a newly collected ForcePose dataset, showing up to a 19% decrease in error compared to prior approaches.
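The abstract mentions a loss designed around the high-impact peaks of regressed GRF curves, but gives no formula. As a purely illustrative sketch (not the paper's actual loss), one common way to emphasize peak regions is to weight a per-timestep MSE by the normalized magnitude of the target curve; the function name, weighting scheme, and `alpha` parameter below are all assumptions:

```python
import numpy as np

def peak_weighted_mse(pred, target, alpha=4.0):
    """Hypothetical peak-weighted MSE for GRF curve regression.

    Errors at time steps where the target force is large (impact peaks)
    are up-weighted relative to low-force regions. This is an
    illustrative sketch only, not the loss from the paper.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    # Per-timestep weight: 1 at zero force, up to (1 + alpha) at the peak.
    mag = np.abs(target)
    w = 1.0 + alpha * mag / (mag.max() + 1e-8)
    return float(np.mean(w * (pred - target) ** 2))

# The same absolute error costs more when it falls on the peak:
curve = [0.0, 0.0, 10.0]          # toy GRF curve with one impact peak
err_at_peak = peak_weighted_mse([0.0, 0.0, 9.0], curve)
err_off_peak = peak_weighted_mse([1.0, 0.0, 10.0], curve)
```

Here `err_at_peak > err_off_peak` even though both predictions are off by 1.0 at exactly one time step, which is the behavior a peak-focused loss is after.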
