Paper Title
CHAD: Charlotte Anomaly Dataset
Paper Authors
Paper Abstract
In recent years, we have seen a significant interest in data-driven deep learning approaches for video anomaly detection, where an algorithm must determine if specific frames of a video contain abnormal behaviors. However, video anomaly detection is particularly context-specific, and the availability of representative datasets heavily limits real-world accuracy. Additionally, the metrics currently reported by most state-of-the-art methods often do not reflect how well the model will perform in real-world scenarios. In this article, we present the Charlotte Anomaly Dataset (CHAD). CHAD is a high-resolution, multi-camera anomaly dataset in a commercial parking lot setting. In addition to frame-level anomaly labels, CHAD is the first anomaly dataset to include bounding box, identity, and pose annotations for each actor. This is especially beneficial for skeleton-based anomaly detection, which is useful for its lower computational demand in real-world settings. CHAD is also the first anomaly dataset to contain multiple views of the same scene. With four camera views and over 1.15 million frames, CHAD is the largest fully annotated anomaly detection dataset including person annotations, collected from continuous video streams from stationary cameras for smart video surveillance applications. To demonstrate the efficacy of CHAD for training and evaluation, we benchmark two state-of-the-art skeleton-based anomaly detection algorithms on CHAD and provide comprehensive analysis, including both quantitative results and qualitative examination. The dataset is available at https://github.com/TeCSAR-UNCC/CHAD.
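To illustrate how per-actor annotations of this kind might feed a skeleton-based anomaly detector, the minimal sketch below groups annotations into per-person pose tracks. The per-frame CSV layout it assumes (frame index, person ID, bounding box, interleaved keypoint coordinates, anomaly label) is hypothetical and used only for illustration; consult the linked repository for CHAD's actual annotation format.

```python
# Minimal sketch: building per-person pose sequences from a hypothetical
# CHAD-style annotation file. The column layout assumed here
# (frame, person_id, x1, y1, x2, y2, keypoint x/y pairs, label)
# is an assumption for illustration, not the dataset's documented format.
from collections import defaultdict
import csv

def load_pose_tracks(csv_path, num_keypoints=17):
    """Return {person_id: [(frame, [(x, y), ...]), ...]} sorted by frame."""
    tracks = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            frame, person_id = int(row[0]), int(row[1])
            # Skip the 4 bounding-box values; keypoints follow as x, y pairs.
            flat = list(map(float, row[6:6 + 2 * num_keypoints]))
            keypoints = list(zip(flat[0::2], flat[1::2]))
            tracks[person_id].append((frame, keypoints))
    for pid in tracks:
        tracks[pid].sort(key=lambda t: t[0])  # chronological order per actor
    return tracks
```

In practice, each per-person track would then be windowed into fixed-length pose sequences and passed to the anomaly detection model, with the frame-level labels used only for evaluation.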