Paper Title
TransPose: Keypoint Localization via Transformer
Paper Authors
Paper Abstract
While CNN-based models have made remarkable progress on human pose estimation, what spatial dependencies they capture to localize keypoints remains unclear. In this work, we propose a model called \textbf{TransPose}, which introduces Transformer for human pose estimation. The attention layers built in Transformer enable our model to capture long-range relationships efficiently and also reveal what dependencies the predicted keypoints rely on. To predict keypoint heatmaps, the last attention layer acts as an aggregator, which collects contributions from image clues and forms the maximum positions of keypoints. Such a heatmap-based localization approach via Transformer conforms to the principle of Activation Maximization~\cite{erhan2009visualizing}. The revealed dependencies are image-specific and fine-grained, which can also provide evidence of how the model handles special cases such as occlusion. Experiments show that TransPose achieves 75.8 AP and 75.0 AP on the COCO validation and test-dev sets, while being more lightweight and faster than mainstream CNN architectures. The TransPose model also transfers very well to the MPII benchmark, achieving superior performance on the test set when fine-tuned at a small training cost. Code and pre-trained models are publicly available\footnote{\url{https://github.com/yangsenius/TransPose}}.
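To make the pipeline in the abstract concrete (a CNN extracts a feature map, a Transformer encoder attends over all spatial positions, and a head predicts one heatmap per keypoint), here is a minimal PyTorch sketch. It is an assumption-laden approximation rather than the authors' released implementation: the class name `TransPoseSketch`, the tiny convolutional stem, and all hyperparameters are illustrative placeholders; refer to the linked repository for the actual model.

```python
# Hypothetical sketch of the architecture described in the abstract.
# All names and hyperparameters are illustrative assumptions, not the
# authors' implementation; positional encodings are omitted for brevity.
import torch
import torch.nn as nn

class TransPoseSketch(nn.Module):
    def __init__(self, num_keypoints=17, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # Small CNN stem standing in for a deeper backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=7, stride=4, padding=3),
            nn.ReLU(inplace=True),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # 1x1 conv head: one heatmap channel per keypoint.
        self.head = nn.Conv2d(d_model, num_keypoints, kernel_size=1)

    def forward(self, x):
        feat = x
        feat = self.backbone(feat)                # (B, C, H, W) feature map
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, C): one token per position
        tokens = self.encoder(tokens)             # attention over all positions
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(feat)                    # (B, K, H, W) keypoint heatmaps

if __name__ == "__main__":
    model = TransPoseSketch()
    out = model(torch.randn(1, 3, 256, 192))      # COCO-style input resolution
    print(out.shape)                              # torch.Size([1, 17, 64, 48])
```

Keypoint coordinates would then be decoded as the argmax of each heatmap channel, and the attention scores of the last encoder layer can be inspected to see which image positions contributed to each predicted maximum, matching the aggregator role described above.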