Title
Improving auto-encoder novelty detection using channel attention and entropy minimization
Authors
Abstract
Novelty detection is an important research area that addresses the problem of separating inliers, which usually consist of normal samples, from outliers composed of abnormal samples. Auto-encoders are often used for novelty detection. However, the generalization ability of an auto-encoder may lead to undesirably faithful reconstruction of abnormal elements and reduce the discriminative ability of the model. To address this problem, we focus on better reconstructing the normal samples while retaining their unique information, in order to improve the novelty detection performance of the auto-encoder. First, we introduce a channel attention mechanism into the task. Under the attention mechanism, the auto-encoder can pay more attention to the representation of inlier samples through adversarial training. Second, we apply information entropy to the latent layer to make it sparse and to constrain the expression of diversity. Experimental results on three public datasets show that the proposed method achieves performance comparable to previous popular approaches.
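The abstract names two ingredients: channel attention over feature maps and an entropy penalty that sparsifies the latent code. As an illustration only (the paper's exact architecture and loss are not given here), a minimal NumPy sketch of a squeeze-and-excitation-style channel attention and an entropy term on latent activations might look like the following; the function names, weight shapes, and normalization scheme are all assumptions, not the authors' implementation:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Reweight each channel of a (C, H, W) feature map,
    squeeze-and-excitation style (illustrative sketch)."""
    s = x.mean(axis=(1, 2))            # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)        # excitation: FC + ReLU -> (C // r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # FC + sigmoid -> per-channel weights in (0, 1)
    return x * a[:, None, None]        # scale each channel by its attention weight

def latent_entropy(z, eps=1e-12):
    """Entropy of the normalized absolute latent activations;
    adding this term to the loss pushes the code toward sparsity."""
    p = np.abs(z) / (np.abs(z).sum() + eps)
    return float(-(p * np.log(p + eps)).sum())
```

A sparse latent vector (mass concentrated on few units) yields low entropy, while a uniform one yields the maximum, so minimizing this term constrains the diversity the latent layer can express, matching the role described in the abstract.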