Paper Title

Anomaly Detection via Reverse Distillation from One-Class Embedding

Authors

Hanqiu Deng, Xingyu Li

Abstract

Knowledge distillation (KD) achieves promising results on the challenging problem of unsupervised anomaly detection (AD). The representation discrepancy of anomalies in the teacher-student (T-S) model provides essential evidence for AD. However, using similar or identical architectures to build the teacher and student models in previous studies hinders the diversity of anomalous representations. To tackle this problem, we propose a novel T-S model consisting of a teacher encoder and a student decoder, and accordingly introduce a simple yet effective "reverse distillation" paradigm. Instead of receiving raw images directly, the student network takes the teacher model's one-class embedding as input and aims to restore the teacher's multiscale representations. Inherently, knowledge distillation in this study proceeds from abstract, high-level representations to low-level features. In addition, we introduce a trainable one-class bottleneck embedding (OCBE) module in our T-S model. The obtained compact embedding effectively preserves essential information on normal patterns while abandoning anomaly perturbations. Extensive experiments on AD and one-class novelty detection benchmarks show that our method surpasses SOTA performance, demonstrating the effectiveness and generalizability of the proposed approach.
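The abstract's core scoring idea — anomalies are flagged where the student decoder fails to restore the teacher's features — can be sketched as a per-pixel discrepancy map. Below is a minimal illustrative NumPy sketch (not the authors' implementation): it computes 1 minus the channel-wise cosine similarity between teacher and student feature maps at each scale, upsamples each map to a common resolution, and averages across scales. The function name, feature shapes, and nearest-neighbor upsampling via `np.kron` are assumptions made for this example.

```python
import numpy as np

def anomaly_map(teacher_feats, student_feats, out_size=8):
    """Illustrative anomaly map: 1 - cosine similarity between teacher and
    student feature maps at each spatial location, averaged over scales.
    teacher_feats / student_feats: lists of (C, H, W) arrays, one per scale,
    where out_size is a multiple of each H (and H == W)."""
    total = np.zeros((out_size, out_size))
    for t, s in zip(teacher_feats, student_feats):
        # Channel-wise cosine similarity at every spatial position.
        num = (t * s).sum(axis=0)                                  # (H, W)
        denom = np.linalg.norm(t, axis=0) * np.linalg.norm(s, axis=0) + 1e-8
        amap = 1.0 - num / denom     # high where the student fails to restore
        # Nearest-neighbor upsample to the common output resolution.
        rep = out_size // amap.shape[0]
        total += np.kron(amap, np.ones((rep, rep)))
    return total / len(teacher_feats)
```

For normal inputs the student restores the teacher's features closely, so the map is near zero everywhere; a large deviation in the student's features raises the score in the affected region. The image-level anomaly score is then typically the maximum value of this map.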
