Paper Title
Not only Look, but also Listen: Learning Multimodal Violence Detection under Weak Supervision
Paper Authors
Paper Abstract
Violence detection has been studied in computer vision for years. However, previous works are either superficial, e.g., classification of short clips in a single scenario, or undersupplied, e.g., relying on a single modality or on hand-crafted multimodal features. To address this problem, in this work we first release a large-scale and multi-scene dataset named XD-Violence with a total duration of 217 hours, containing 4754 untrimmed videos with audio signals and weak labels. We then propose a neural network containing three parallel branches to capture different relations among video snippets and to integrate features: the holistic branch captures long-range dependencies using a similarity prior, the localized branch captures local positional relations using a proximity prior, and the score branch dynamically captures the closeness of predicted scores. In addition, our method includes an approximator to meet the needs of online detection. Our method outperforms other state-of-the-art methods on our released dataset and on another existing benchmark. Moreover, extensive experimental results show the positive effect of multimodal (audio-visual) input and of modeling relations among snippets. The code and dataset will be released at https://roc-ng.github.io/XD-Violence/.
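The three parallel relation branches lend themselves to a compact sketch. Below is a minimal, hypothetical PyTorch illustration of the idea described in the abstract, not the authors' released code: all module and parameter names (RelationBranches, sigma, the linear projections) are assumptions, and the real model's adjacency normalization and feature integration may differ.

```python
# Illustrative sketch only (not the official implementation): three parallel
# branches that relate T snippet features via (1) a feature-similarity prior,
# (2) a temporal-proximity prior, and (3) the closeness of predicted scores.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationBranches(nn.Module):
    """Three parallel relation branches over snippet features x of shape (B, T, D)."""

    def __init__(self, dim: int, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma                  # bandwidth of the proximity prior (assumed)
        self.proj_h = nn.Linear(dim, dim)   # holistic-branch projection
        self.proj_l = nn.Linear(dim, dim)   # localized-branch projection
        self.scorer = nn.Linear(dim, 1)     # per-snippet score head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape

        # Holistic branch: similarity prior -> long-range dependencies.
        # Adjacency from pairwise feature similarity, normalized by softmax.
        sim = torch.softmax(x @ x.transpose(1, 2) / D ** 0.5, dim=-1)  # (B, T, T)
        h = F.relu(self.proj_h(sim @ x))

        # Localized branch: proximity prior from temporal distance |i - j|,
        # so nearby snippets influence each other more than distant ones.
        pos = torch.arange(T, device=x.device, dtype=x.dtype)
        dist = (pos[None, :] - pos[:, None]).abs()                      # (T, T)
        prox = torch.softmax(-dist / self.sigma, dim=-1).expand(B, T, T)
        l = F.relu(self.proj_l(prox @ x))

        # Score branch: adjacency computed dynamically from the closeness of
        # the current predicted snippet scores, not from a fixed prior.
        s = torch.sigmoid(self.scorer(x))                               # (B, T, 1)
        close = torch.softmax(-(s - s.transpose(1, 2)).abs(), dim=-1)   # (B, T, T)
        sc = close @ x

        # Integrate the three branches' features (a simple sum here).
        return h + l + sc
```

A quick smoke test could run `RelationBranches(dim=128)(torch.randn(2, 32, 128))` and check that the output shape is (2, 32, 128). The design point mirrored here is that the similarity and proximity adjacencies are priors fixed by features and positions, while the score adjacency is recomputed from the current predictions on every forward pass.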