Paper Title

Further Analysis of Outlier Detection with Deep Generative Models

Paper Authors

Ziyu Wang, Bin Dai, David Wipf, Jun Zhu

Paper Abstract

The recent, counter-intuitive discovery that deep generative models (DGMs) can frequently assign a higher likelihood to outliers has implications both for outlier detection applications and for our overall understanding of generative modeling. In this work, we present a possible explanation for this phenomenon, starting from the observation that a model's typical set and high-density region may not coincide. From this vantage point we propose a novel outlier test, the empirical success of which suggests that the failure of existing likelihood-based outlier tests does not necessarily imply that the corresponding generative model is uncalibrated. We also conduct additional experiments to help disentangle the impact of low-level texture versus high-level semantics in differentiating outliers. In aggregate, these results suggest that modifications to the standard evaluation practices and benchmarks commonly applied in the literature are needed.
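The abstract's key observation, that a model's typical set and its high-density region may not coincide, can be illustrated with a standard fact about Gaussian concentration of measure. The NumPy sketch below is illustrative only and is not the paper's proposed test: in a d-dimensional standard Gaussian, the mode has the highest density, yet samples concentrate on a thin shell of radius roughly sqrt(d), where the log-density is about d/2 nats lower than at the mode.

```python
# A minimal sketch (not from the paper) of why a model's typical set and its
# high-density region can fail to coincide, using a high-dimensional Gaussian.
import numpy as np

d = 1000                              # dimensionality
rng = np.random.default_rng(0)
x = rng.standard_normal((10_000, d))  # samples from N(0, I_d)

def log_density(z):
    """Log-density of N(0, I_d) evaluated at the row(s) of z."""
    return -0.5 * (np.sum(z**2, axis=-1) + d * np.log(2 * np.pi))

print(f"log p at the mode:        {log_density(np.zeros(d)):.1f}")
print(f"mean log p over samples:  {log_density(x).mean():.1f}")
print(f"mean sample norm: {np.linalg.norm(x, axis=1).mean():.1f}"
      f"  (sqrt(d) = {d**0.5:.1f})")
# The mode's log-density exceeds that of typical samples by roughly d/2 nats,
# so a naive "higher likelihood = more inlier-like" rule would rank the mode
# as the most normal point, even though the model almost never generates it.
```

This toy calculation motivates why a likelihood threshold alone can mislabel points, and why a test based on typicality, as the paper pursues, can behave differently from a density-based one.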
