Paper Title
Learning and Evaluating Representations for Deep One-class Classification
Paper Authors
Paper Abstract
We present a two-stage framework for deep one-class classification. We first learn self-supervised representations from one-class data, and then build one-class classifiers on the learned representations. The framework not only allows learning better representations, but also permits building one-class classifiers that are faithful to the target task. We argue that classifiers inspired by the statistical perspective in generative or discriminative models are more effective than existing approaches, such as a normality score from a surrogate classifier. We thoroughly evaluate different self-supervised representation learning algorithms under the proposed framework for one-class classification. Moreover, we present a novel distribution-augmented contrastive learning that extends training distributions via data augmentation to obstruct the uniformity of contrastive representations. In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks, including novelty and anomaly detection. Finally, we present visual explanations, confirming that the decision-making process of deep one-class classifiers is intuitive to humans. The code is available at https://github.com/google-research/deep_representation_one_class.
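To make the two-stage recipe in the abstract concrete, the sketch below illustrates only the second stage: fitting a simple generative one-class scorer (a single Gaussian, scored by squared Mahalanobis distance) on top of frozen self-supervised representations. It is a minimal sketch under stated assumptions, not the authors' released implementation: the feature arrays are random stand-ins for outputs of a stage-one encoder (e.g., one trained with distribution-augmented contrastive learning), and the Gaussian/Mahalanobis choice is just one illustrative instance of a classifier "inspired by the statistical perspective in generative models".

```python
# Minimal sketch of stage 2 of a two-stage one-class pipeline (assumptions noted above).
import numpy as np


def fit_gaussian(train_features, eps=1e-6):
    """Estimate the mean and regularized precision matrix of in-class features."""
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    cov += eps * np.eye(cov.shape[0])  # keep the covariance invertible
    precision = np.linalg.inv(cov)
    return mu, precision


def anomaly_score(features, mu, precision):
    """Squared Mahalanobis distance: larger means less likely under the fitted model."""
    diff = features - mu
    return np.einsum("ij,jk,ik->i", diff, precision, diff)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 64
    # Stand-ins for stage-1 encoder outputs; in a real pipeline these would come
    # from a frozen self-supervised encoder trained on one-class data.
    train_z = rng.normal(0.0, 1.0, size=(1000, dim))   # in-class features
    normal_z = rng.normal(0.0, 1.0, size=(200, dim))   # held-out in-class
    outlier_z = rng.normal(2.0, 1.0, size=(200, dim))  # shifted "anomalies"

    mu, precision = fit_gaussian(train_z)
    print("mean score, in-class:", anomaly_score(normal_z, mu, precision).mean())
    print("mean score, outlier :", anomaly_score(outlier_z, mu, precision).mean())
```

In this reading, the value of separating the two stages is that the detector on top of the frozen features can be swapped, for example for a kernel density estimator or a one-class SVM, without retraining the representation.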