Paper Title

Unsupervised Abnormality Detection Using Heterogeneous Autonomous Systems

Paper Authors

Sayeed Shafayet Chowdhury, Kazi Mejbaul Islam, Rouhan Noor

Abstract


Anomaly detection (AD) in a surveillance scenario is an emerging and challenging field of research. For autonomous vehicles such as drones or cars, it is immensely important to distinguish between normal and abnormal states in real time. We also need to detect any device malfunction. However, the nature and degree of abnormality may vary depending on the actual environment and adversary. As a result, it is impractical to model all cases a priori and classify them with supervised methods. Moreover, an autonomous vehicle provides various data types, such as images and other analog or digital sensor data, all of which can be useful for anomaly detection if leveraged fruitfully. To that end, this paper proposes a heterogeneous system that estimates the degree of abnormality of an unmanned surveillance drone by analyzing real-time image and IMU (Inertial Measurement Unit) sensor data in an unsupervised manner. We demonstrate a Convolutional Neural Network (CNN) architecture, named AngleNet, which estimates the angle between a normal image and another image under consideration, providing a measure of the device's abnormality. In addition, the IMU data are fed into an autoencoder to predict abnormality. Finally, the results of these two algorithms are ensembled to estimate the final degree of abnormality. The proposed method achieves 97.3% accuracy on the IEEE SP Cup 2020 dataset. We have also tested this approach on an in-house dataset to validate its robustness.
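The abstract describes combining two branch outputs, an angle estimate from the image model and a reconstruction error from the IMU autoencoder, into one final abnormality score. The paper does not give the ensembling rule here, so the sketch below shows one plausible scheme: normalize each branch to [0, 1] and take a weighted average. The function name, scales (`angle_max`, `error_max`), and weight `w_image` are all illustrative assumptions, not values from the paper.

```python
def abnormality_score(angle_deg, recon_error,
                      angle_max=90.0, error_max=1.0, w_image=0.5):
    """Combine the two branch outputs into one abnormality score in [0, 1].

    angle_deg:   tilt angle predicted by the image branch (e.g. AngleNet)
    recon_error: reconstruction error from the IMU autoencoder
    angle_max, error_max, w_image are illustrative placeholders, not
    parameters taken from the paper.
    """
    img_score = min(abs(angle_deg) / angle_max, 1.0)   # normalize image branch
    imu_score = min(recon_error / error_max, 1.0)      # normalize IMU branch
    return w_image * img_score + (1.0 - w_image) * imu_score


# A level drone with near-zero reconstruction error scores near 0;
# a heavily tilted drone with large IMU error scores near 1.
print(abnormality_score(0.0, 0.0))    # normal state
print(abnormality_score(80.0, 0.9))   # abnormal state
```

With a rule like this, either branch alone can raise the score, which matches the paper's motivation for using heterogeneous data sources: an image anomaly and an IMU anomaly are both evidence of a problem.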
