Paper Title
Enhancing Unsupervised Video Representation Learning by Decoupling the Scene and the Motion
Paper Authors
Paper Abstract
One significant factor we expect video representation learning to capture, especially in contrast to image representation learning, is object motion. However, we find that in the current mainstream video datasets, some action categories are highly related to the scene in which the action happens, making the model prone to degrading to a solution that encodes only the scene information. For example, a trained model may predict a video as playing football simply because it sees the field, neglecting that the subject is actually dancing as a cheerleader on that field. This runs counter to our original intention for video representation learning and may introduce a scene bias across different datasets that cannot be ignored. To tackle this problem, we propose to decouple the scene and the motion (DSM) with two simple operations, so that the model pays closer attention to the motion information. Specifically, we construct a positive clip and a negative clip for each video. Compared to the original video, the positive clip is motion-untouched but scene-broken (via Spatial Local Disturbance), whereas the negative clip is motion-broken but scene-untouched (via Temporal Local Disturbance). Our objective is to pull the positive closer to the original clip while pushing the negative farther away in the latent space. In this way, the impact of the scene is weakened while the temporal sensitivity of the network is further enhanced. We conduct experiments on two tasks with various backbones and different pre-training datasets, and find that our method surpasses the SOTA methods, with remarkable improvements of 8.1% and 8.8% on the action recognition task on the UCF101 and HMDB51 datasets, respectively, using the same backbone.
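The sketch below illustrates the decoupling idea described in the abstract. It is not the authors' implementation: the concrete forms of `spatial_local_disturbance`, `temporal_local_disturbance`, the `encoder` interface, the patch-grid and window sizes, and the triplet-style loss are all assumptions introduced for illustration; only the pull-positive / push-negative objective and the scene-broken vs. motion-broken construction follow the abstract.

```python
# Minimal sketch of a DSM-style training objective (assumptions noted inline).
import torch
import torch.nn.functional as F

def spatial_local_disturbance(clip):
    """Assumed scene-breaking transform: shuffle spatial patches (same shuffle
    in every frame), leaving the temporal order, and hence the motion, intact.
    clip: (C, T, H, W); assumes H and W are divisible by 4."""
    C, T, H, W = clip.shape
    ph, pw = H // 4, W // 4                                   # 4x4 patch grid (arbitrary choice)
    patches = clip.unfold(2, ph, ph).unfold(3, pw, pw)        # (C, T, 4, 4, ph, pw)
    patches = patches.reshape(C, T, 16, ph, pw)
    patches = patches[:, :, torch.randperm(16)]               # shuffle patch positions
    patches = patches.reshape(C, T, 4, 4, ph, pw)
    return patches.permute(0, 1, 2, 4, 3, 5).reshape(C, T, H, W)

def temporal_local_disturbance(clip, win=4):
    """Assumed motion-breaking transform: shuffle frames inside short local
    windows, so per-frame appearance is untouched but motion is destroyed."""
    C, T, H, W = clip.shape
    out = clip.clone()
    for s in range(0, T - win + 1, win):
        out[:, s:s + win] = clip[:, s + torch.randperm(win)]
    return out

def dsm_loss(encoder, clip, margin=1.0):
    """Triplet-style objective (an assumption; the paper may use a different
    contrastive form): pull the motion-preserving positive toward the anchor,
    push the motion-broken negative away. `encoder` is assumed to map a
    (B, C, T, H, W) batch to a (B, D) embedding."""
    anchor = F.normalize(encoder(clip.unsqueeze(0)), dim=1)
    pos = F.normalize(encoder(spatial_local_disturbance(clip).unsqueeze(0)), dim=1)
    neg = F.normalize(encoder(temporal_local_disturbance(clip).unsqueeze(0)), dim=1)
    d_pos = (anchor - pos).pow(2).sum(dim=1)
    d_neg = (anchor - neg).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()
```

In this reading, the encoder cannot lower the loss by memorizing the scene (the positive shares no intact scene layout with the anchor) and is penalized for ignoring temporal order (the negative shares the exact appearance but scrambled motion), which is the intended weakening of scene bias.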