Paper Title
Generative Classifiers as a Basis for Trustworthy Image Classification
Paper Authors
Paper Abstract
With the maturing of deep learning systems, trustworthiness is becoming increasingly important for model assessment. We understand trustworthiness as the combination of explainability and robustness. Generative classifiers (GCs) are a promising class of models that are said to naturally accomplish these qualities. However, this has mostly been demonstrated on simple datasets such as MNIST and CIFAR in the past. In this work, we firstly develop an architecture and training scheme that allows GCs to operate on a more relevant level of complexity for practical computer vision, namely the ImageNet challenge. Secondly, we demonstrate the immense potential of GCs for trustworthy image classification. Explainability and some aspects of robustness are vastly improved compared to feed-forward models, even when the GCs are just applied naively. While not all trustworthiness problems are solved completely, we observe that GCs are a highly promising basis for further algorithms and modifications. We release our trained model for download in the hope that it serves as a starting point for other generative classification tasks, in much the same way as pretrained ResNet architectures do for discriminative classification.
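The abstract rests on the idea of generative classification: model the class-conditional density p(x|y) for each class and classify with Bayes' rule, argmax_y p(x|y) p(y), rather than learning p(y|x) directly as a feed-forward discriminative model does. The following is a minimal sketch of that decision rule only, using toy 2-D data and diagonal Gaussians as the per-class densities; the data, function names, and model choice are invented for illustration and are not the paper's ImageNet-scale architecture, which relies on far more expressive density models.

```python
# Minimal sketch of generative classification: fit p(x|y) per class,
# then predict argmax_y [log p(x|y) + log p(y)].
# Toy setup with diagonal Gaussians; illustrative only, not the authors' model.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes with different means in 2-D.
X0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(200, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=1.0, size=(200, 2))

def fit_gaussian(X):
    """Fit a diagonal Gaussian density to the samples of one class."""
    mu = X.mean(axis=0)
    var = X.var(axis=0) + 1e-6  # small floor for numerical stability
    return mu, var

def log_likelihood(x, mu, var):
    """log p(x | y) under a diagonal Gaussian."""
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var), axis=-1)

params = [fit_gaussian(X0), fit_gaussian(X1)]
log_prior = np.log(np.array([0.5, 0.5]))  # assume balanced classes

def classify(x):
    """Generative classification via Bayes' rule."""
    scores = np.stack([log_likelihood(x, mu, var) for mu, var in params], axis=-1)
    return np.argmax(scores + log_prior, axis=-1)

x_test = np.array([[1.5, 0.3], [-1.8, -0.2]])
print(classify(x_test))  # expected: [1 0]
```

Because the prediction is derived from explicit per-class likelihoods, the same quantities can be inspected for explanations or thresholded to flag out-of-distribution inputs, which is the intuition behind the trustworthiness claims in the abstract.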