Paper Title

Deformable Video Transformer

Paper Authors

Jue Wang, Lorenzo Torresani

Paper Abstract

Video transformers have recently emerged as an effective alternative to convolutional networks for action classification. However, most prior video transformers adopt either global space-time attention or hand-defined strategies to compare patches within and across frames. These fixed attention schemes not only have high computational cost but, by comparing patches at predetermined locations, they neglect the motion dynamics in the video. In this paper, we introduce the Deformable Video Transformer (DVT), which dynamically predicts a small subset of video patches to attend to for each query location based on motion information, thus allowing the model to decide where to look in the video based on correspondences across frames. Crucially, these motion-based correspondences are obtained at zero cost from information stored in the compressed format of the video. Our deformable attention mechanism is optimised directly with respect to classification performance, thus eliminating the need for suboptimal hand-design of attention strategies. Experiments on four large-scale video benchmarks (Kinetics-400, Something-Something-V2, EPIC-KITCHENS and Diving-48) demonstrate that, compared to existing video transformers, our model achieves higher accuracy at the same or lower computational cost, and it attains state-of-the-art results on these four datasets.
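The core idea of the abstract, motion-guided deformable attention, can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch implementation, not the authors' released code: the module name `DeformableVideoAttention`, the single-head design, and all tensor layouts are assumptions made for clarity. For each query token it predicts K sampling locations from the query feature plus a per-patch motion vector (as would be read from the video's compressed stream), samples key/value tokens at those locations, and attends over only those K tokens instead of all T×H×W patches.

```python
# Minimal sketch of motion-guided deformable attention (illustrative only;
# names, shapes, and the single-head design are assumptions, not DVT's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableVideoAttention(nn.Module):
    """Each query attends to only K tokens whose (t, y, x) locations are
    predicted from the query feature and a coarse motion cue."""
    def __init__(self, dim: int, num_samples: int = 8):
        super().__init__()
        self.num_samples = num_samples
        # Predict K (t, y, x) offsets per query from its feature + 2-D motion.
        self.offset_pred = nn.Linear(dim + 2, 3 * num_samples)
        self.q_proj = nn.Linear(dim, dim)
        self.kv_proj = nn.Linear(dim, 2 * dim)
        self.scale = dim ** -0.5

    def forward(self, x, motion):
        # x: (B, T, H, W, C) patch tokens; motion: (B, T, H, W, 2) motion
        # vectors read off the compressed video stream at zero cost.
        B, T, H, W, C = x.shape
        K = self.num_samples
        q = self.q_proj(x)                                    # (B,T,H,W,C)
        offsets = self.offset_pred(torch.cat([x, motion], -1))
        offsets = offsets.view(B, T, H, W, K, 3)

        # Normalised base grid of query locations in (t, y, x) order.
        tt = torch.linspace(-1, 1, T, device=x.device)
        yy = torch.linspace(-1, 1, H, device=x.device)
        xx = torch.linspace(-1, 1, W, device=x.device)
        base = torch.stack(torch.meshgrid(tt, yy, xx, indexing="ij"), -1)
        loc = (base[None, ..., None, :] + offsets.tanh()).clamp(-1, 1)

        # Trilinearly sample K key/value tokens per query via grid_sample,
        # which expects 5-D grids in (x, y, t) order, hence the flip.
        kv = self.kv_proj(x).permute(0, 4, 1, 2, 3)           # (B,2C,T,H,W)
        grid = loc.flip(-1).reshape(B, T, H * W * K, 1, 3)
        samp = F.grid_sample(kv, grid, align_corners=True)    # (B,2C,T,HWK,1)
        samp = samp.reshape(B, 2 * C, T, H, W, K)
        k, v = samp.permute(0, 2, 3, 4, 5, 1).chunk(2, -1)    # (B,T,H,W,K,C)

        # Attention over only the K sampled tokens, not all T*H*W patches.
        attn = (q.unsqueeze(-2) * k).sum(-1) * self.scale     # (B,T,H,W,K)
        attn = attn.softmax(-1)
        return (attn.unsqueeze(-1) * v).sum(-2)               # (B,T,H,W,C)

# Example usage with random tensors:
# attn = DeformableVideoAttention(dim=96, num_samples=8)
# x = torch.randn(2, 8, 14, 14, 96)   # 8 frames of 14x14 patch tokens
# mv = torch.randn(2, 8, 14, 14, 2)   # one motion vector per patch
# out = attn(x, mv)                   # (2, 8, 14, 14, 96)
```

With K fixed (e.g. K = 8), the per-query attention cost drops from O(THW) under global space-time attention to O(K), which is the source of the efficiency gain the abstract claims; the sampled locations are learned end-to-end because the offset predictor sits inside the classification loss.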
