Paper Title
Exploring Temporally Dynamic Data Augmentation for Video Recognition
Paper Authors
Paper Abstract
Data augmentation has recently emerged as an essential component of modern training recipes for visual recognition tasks. However, data augmentation for video recognition has rarely been explored despite its effectiveness. The few existing augmentation recipes for video recognition naively extend image augmentation methods by applying the same operations to all video frames. Our main idea is that the magnitude of augmentation operations for each frame needs to change over time to capture the temporal variations of real-world videos. These variations should be generated as diversely as possible, using few additional hyper-parameters during training. With this motivation, we propose a simple yet effective video data augmentation framework, DynaAugment. The magnitude of augmentation operations on each frame is changed by an effective mechanism, Fourier Sampling, which parameterizes diverse, smooth, and realistic temporal variations. DynaAugment also includes an extended search space suitable for video in automatic data augmentation methods. Our experiments demonstrate that there is additional room for performance improvement over static augmentations on diverse video models. Specifically, we show the effectiveness of DynaAugment on various video datasets and tasks: large-scale video recognition (Kinetics-400 and Something-Something-v2), small-scale video recognition (UCF-101 and HMDB-51), fine-grained video recognition (Diving-48 and FineGym), video action segmentation on Breakfast, video action localization on THUMOS'14, and video object detection on MOT17Det. DynaAugment also enables video models to learn more generalized representations, improving model robustness on corrupted videos.
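To make the Fourier Sampling idea concrete, the following is a minimal illustrative sketch of how a smooth, random per-frame magnitude curve could be generated by summing a few low-frequency sinusoids with random amplitudes and phases. The function name, the number of basis terms, and the normalization are assumptions for illustration only, not the paper's exact formulation.

```python
import numpy as np

def fourier_sampled_magnitudes(num_frames, base_magnitude, num_basis=3, rng=None):
    """Sample a smooth per-frame augmentation-magnitude curve.

    Sums `num_basis` random low-frequency sinusoids, then rescales the
    result to vary around `base_magnitude` while staying non-negative.
    (Hypothetical sketch; not the authors' reference implementation.)
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, 1.0, num_frames)
    curve = np.zeros(num_frames)
    for k in range(1, num_basis + 1):
        # Damp higher frequencies so the curve stays smooth over time.
        amp = rng.uniform(-1.0, 1.0) / k
        phase = rng.uniform(0.0, 2.0 * np.pi)
        curve += amp * np.sin(2.0 * np.pi * k * t + phase)
    # Normalize to [-1, 1], then center the variation around base_magnitude.
    curve = curve / (np.abs(curve).max() + 1e-8)
    return np.clip(base_magnitude * (1.0 + curve), 0.0, 2.0 * base_magnitude)

# One magnitude per frame for a 16-frame clip; a static augmentation
# would instead use the same base_magnitude for every frame.
mags = fourier_sampled_magnitudes(num_frames=16, base_magnitude=0.5)
```

Each sampled curve would then drive the strength of a chosen augmentation operation (e.g. brightness or rotation) frame by frame, so consecutive frames receive similar but gradually changing perturbations.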