Paper Title

Learning Memory-guided Normality for Anomaly Detection

Paper Authors

Hyunjong Park, Jongyoun Noh, Bumsub Ham

Paper Abstract

We address the problem of anomaly detection, that is, detecting anomalous events in a video sequence. Anomaly detection methods based on convolutional neural networks (CNNs) typically leverage proxy tasks, such as reconstructing input video frames, to learn models describing normality without seeing anomalous samples at training time, and quantify the extent of abnormalities using the reconstruction error at test time. The main drawbacks of these approaches are that they do not consider the diversity of normal patterns explicitly, and the powerful representation capacity of CNNs allows them to reconstruct abnormal video frames. To address this problem, we present an unsupervised learning approach to anomaly detection that considers the diversity of normal patterns explicitly, while lessening the representation capacity of CNNs. To this end, we propose to use a memory module with a new update scheme where items in the memory record prototypical patterns of normal data. We also present novel feature compactness and separateness losses to train the memory, boosting the discriminative power of both memory items and deeply learned features from normal data. Experimental results on standard benchmarks demonstrate the effectiveness and efficiency of our approach, which outperforms the state of the art.
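The abstract describes the two memory training losses only at a high level. Below is a minimal sketch, not the authors' released implementation, assuming a PyTorch setup and hypothetical names: `queries` holds per-location encoder features of shape (N, C) and `memory` holds M prototype items of shape (M, C). It illustrates a compactness term that pulls each query toward its nearest memory item and a triplet-style separateness term that pushes it away from the second-nearest item.

import torch
import torch.nn.functional as F

def memory_losses(queries: torch.Tensor, memory: torch.Tensor, margin: float = 1.0):
    # Pairwise L2 distances between N query features and M memory items -> (N, M).
    dists = torch.cdist(queries, memory)
    # Nearest and second-nearest memory item for every query.
    top2 = dists.topk(2, dim=1, largest=False).values
    d_nearest, d_second = top2[:, 0], top2[:, 1]
    # Compactness: queries should sit close to their nearest prototype.
    compactness = d_nearest.mean()
    # Separateness: hinge loss keeps the nearest item clearly closer than the
    # second-nearest one, encouraging memory items to stay distinct.
    separateness = F.relu(d_nearest - d_second + margin).mean()
    return compactness, separateness

The `margin` value (the hinge margin in the separateness term) controls how much closer the nearest item must be than the second-nearest before that term vanishes; the exact weighting of the two losses against the reconstruction objective is a training hyperparameter left open here.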
