Paper Title
Context Recovery and Knowledge Retrieval: A Novel Two-Stream Framework for Video Anomaly Detection
Paper Authors
Paper Abstract
Video anomaly detection aims to find events in a video that do not conform to expected behavior. Prevalent methods mainly detect anomalies via snippet reconstruction or future-frame prediction error. However, such errors are highly dependent on the local context of the current snippet and lack an understanding of normality. To address this issue, we propose to detect anomalous events not only from the local context, but also according to the consistency between the testing event and the knowledge about normality from the training data. Concretely, we propose a novel two-stream framework based on context recovery and knowledge retrieval, where the two streams complement each other. For the context recovery stream, we propose a spatiotemporal U-Net which can fully utilize motion information to predict the future frame. Furthermore, we propose a maximum local error mechanism to alleviate the problem of large recovery errors caused by complex foreground objects. For the knowledge retrieval stream, we propose an improved learnable locality-sensitive hashing, which optimizes the hash functions via a Siamese network and a mutual difference loss. The knowledge about normality is encoded and stored in hash tables, and the distance between the testing event and the retrieved knowledge representation is used to reveal the probability of an anomaly. Finally, we fuse the anomaly scores from the two streams to detect anomalies. Extensive experiments demonstrate the effectiveness and complementarity of the two streams, whereby the proposed two-stream framework achieves state-of-the-art performance on four datasets.
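To make the abstract's two ideas more concrete, below is a minimal, illustrative sketch (not the authors' implementation) of (1) storing normality knowledge with random-projection locality-sensitive hashing and scoring a test event by its distance to the retrieved normal representations, and (2) late-fusing that retrieval score with a context-recovery (prediction-error) score. The feature extractor, the spatiotemporal U-Net, the Siamese-trained hash functions, the mutual difference loss, and the maximum local error mechanism are all omitted; the feature dimension, number of tables/bits, and fusion weight are hypothetical choices.

```python
import numpy as np

class LSHRetrieval:
    """Random-projection LSH over feature vectors of normal events (illustrative only)."""

    def __init__(self, dim, n_tables=4, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        # One random projection matrix per hash table; sign of each projection gives one bit.
        self.projections = [rng.standard_normal((n_bits, dim)) for _ in range(n_tables)]
        self.tables = [dict() for _ in range(n_tables)]

    def _keys(self, x):
        # Binary hash code per table, packed into a hashable tuple of bits.
        return [tuple((P @ x > 0).astype(int)) for P in self.projections]

    def build(self, normal_feats):
        # Encode and store the normality knowledge in the hash tables.
        for x in normal_feats:
            for table, key in zip(self.tables, self._keys(x)):
                table.setdefault(key, []).append(x)

    def score(self, x):
        # Distance between the test event and its nearest retrieved normal
        # representation; a larger distance suggests a more anomalous event.
        candidates = []
        for table, key in zip(self.tables, self._keys(x)):
            candidates.extend(table.get(key, []))
        if not candidates:
            return 1.0  # nothing similar was stored: treat as highly anomalous
        return float(min(np.linalg.norm(x - c) for c in candidates))

def fuse_scores(recovery_err, retrieval_dist, w=0.5):
    # Late fusion of the two streams after min-max normalization
    # (the weight w and the exact fusion rule are assumptions).
    def normalize(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-8)
    return w * normalize(recovery_err) + (1 - w) * normalize(retrieval_dist)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    normal = rng.standard_normal((500, 64))              # stand-in features of normal training events
    lsh = LSHRetrieval(dim=64)
    lsh.build(normal)
    test = np.vstack([rng.standard_normal((5, 64)),      # normal-like test events
                      rng.standard_normal((5, 64)) + 4])  # shifted events acting as "anomalies"
    retrieval = [lsh.score(x) for x in test]
    recovery = rng.random(10)                             # placeholder context-recovery errors
    print(fuse_scores(recovery, retrieval))
```

In this toy setup the shifted test events land in sparsely populated (or empty) hash buckets, so their retrieval distances, and hence fused scores, come out higher than those of the normal-like events, which mirrors how the retrieval stream is meant to complement the recovery stream.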