Paper Title

Object Class Aware Video Anomaly Detection through Image Translation

Authors

Mohammad Baradaran, Robert Bergevin

Abstract

Semi-supervised video anomaly detection (VAD) methods formulate anomaly detection as the detection of deviations from learned normal patterns. Previous works in the field (reconstruction-based or prediction-based methods) suffer from two drawbacks: 1) they focus on low-level features and (especially holistic approaches) do not effectively consider object classes; 2) object-centric approaches neglect some of the context information (such as location). To tackle these challenges, this paper proposes a novel two-stream object-aware VAD method that learns normal appearance and motion patterns through image translation tasks. The appearance branch translates the input image to a target semantic segmentation map produced by Mask R-CNN, and the motion branch associates each frame with its expected optical flow magnitude. At inference time, any deviation from the expected appearance or motion indicates the degree of potential abnormality. We evaluated the proposed method on the ShanghaiTech, UCSD-Ped1, and UCSD-Ped2 datasets, and the results show competitive performance compared with state-of-the-art works. Most importantly, the results show that, in a significant improvement over previous methods, the detections of our method are fully explainable and anomalies are localized accurately within the frames.
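The inference-stage scoring described in the abstract can be sketched as follows: each branch compares its prediction against its target map, and the two per-pixel deviation maps are fused into an anomaly map whose peak gives a frame-level score. This is a minimal NumPy sketch of that idea only; the function names, the weighted-sum fusion, and the max-pooling to a frame score are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def branch_error(predicted, target):
    """Per-pixel absolute deviation between a branch's prediction and its target map
    (segmentation map for the appearance branch, flow magnitude for the motion branch)."""
    return np.abs(predicted - target)

def anomaly_map(pred_seg, target_seg, pred_flow, target_flow, w_app=0.5, w_mot=0.5):
    """Fuse appearance and motion deviations into one per-pixel anomaly map.
    The weighted-sum fusion is an assumed, illustrative choice."""
    app = branch_error(pred_seg, target_seg)
    mot = branch_error(pred_flow, target_flow)
    return w_app * app + w_mot * mot

def frame_score(amap):
    """Frame-level anomaly score: the strongest local deviation in the frame."""
    return float(amap.max())
```

Because the score is built from a spatial deviation map, an anomaly is localized for free: the pixels with the largest deviation are exactly where the observed appearance or motion disagrees with the learned normal pattern, which is the explainability property the abstract emphasizes.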
