Paper Title
DenseHybrid: Hybrid Anomaly Detection for Dense Open-set Recognition
Paper Authors
Paper Abstract
Anomaly detection can be conceived either through generative modelling of regular training data or by discriminating with respect to negative training data. These two approaches exhibit different failure modes. Consequently, hybrid algorithms present an attractive research goal. Unfortunately, dense anomaly detection requires translational equivariance and very large input resolutions. These requirements disqualify all previous hybrid approaches, to the best of our knowledge. We therefore design a novel hybrid algorithm based on reinterpreting discriminative logits as a logarithm of the unnormalized joint distribution $\hat{p}(\mathbf{x}, \mathbf{y})$. Our model builds on a shared convolutional representation from which we recover three dense predictions: i) the closed-set class posterior $P(\mathbf{y}|\mathbf{x})$, ii) the dataset posterior $P(d_{in}|\mathbf{x})$, and iii) the unnormalized data likelihood $\hat{p}(\mathbf{x})$. The latter two predictions are trained both on the standard training data and on a generic negative dataset. We blend these two predictions into a hybrid anomaly score which allows dense open-set recognition on large natural images. We carefully design a custom loss for the data likelihood in order to avoid backpropagation through the intractable normalizing constant $Z(\theta)$. Experiments evaluate our contributions on standard dense anomaly detection benchmarks as well as in terms of open-mIoU, a novel metric for dense open-set performance. Our submissions achieve state-of-the-art performance despite negligible computational overhead over the standard semantic segmentation baseline.
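The abstract's key reinterpretation, that classifier logits can be read as the log of an unnormalized joint $\hat{p}(\mathbf{x}, \mathbf{y})$, so that marginalizing over classes yields an unnormalized data likelihood $\log \hat{p}(\mathbf{x}) = \operatorname{logsumexp}_y(\text{logit}_y)$, can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation; the exact form of the blended score (here a likelihood-ratio-style combination of the dataset posterior and $\hat{p}(\mathbf{x})$) and the tensor shapes are assumptions for the sake of the example.

```python
import numpy as np

def logsumexp(a, axis):
    """Numerically stable log-sum-exp along the given axis."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def hybrid_anomaly_score(logits, p_dout, eps=1e-8):
    """Dense hybrid anomaly score (illustrative, not the paper's exact formula).

    logits: (K, H, W) dense classifier logits, reinterpreted as
            log p̂(x, y); marginalizing over the K classes gives
            log p̂(x) = logsumexp_y(logit_y), up to the constant log Z(θ).
    p_dout: (H, W) per-pixel outlier posterior, P(d_out | x) = 1 - P(d_in | x).
    Returns an (H, W) map where higher values indicate anomaly.
    """
    log_px = logsumexp(logits, axis=0)          # unnormalized log-likelihood
    # Blend discriminative and generative cues: pixels that the dataset
    # posterior flags as outliers AND that receive low density score high.
    return np.log(p_dout + eps) - log_px
```

A pixel is then declared anomalous by thresholding this map, while inlier pixels keep their closed-set prediction `argmax` over the same logits, which is what makes the overhead over a plain segmentation network negligible.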