Paper Title

A Survey on Assessing the Generalization Envelope of Deep Neural Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples

Paper Authors

Julia Lust, Alexandru Paul Condurache

Paper Abstract

Deep Neural Networks (DNNs) achieve state-of-the-art performance on numerous applications. However, it is difficult to tell beforehand whether a DNN receiving an input will deliver the correct output, since its decision criteria are usually non-transparent. A DNN delivers the correct output if the input lies within the area enclosed by its generalization envelope; in this case, the information contained in the input sample is processed reasonably by the network. It is of great practical importance to assess at inference time whether a DNN generalizes correctly. Currently, the approaches to this goal are investigated in different problem set-ups, largely independently of one another, leading to three main research and literature fields: predictive uncertainty, out-of-distribution detection, and adversarial example detection. This survey connects the three fields within the larger framework of investigating the generalization performance of machine learning methods, and in particular DNNs. We underline the common ground, point to the most promising approaches, and give a structured overview of the methods that provide, at inference time, the means to establish whether the current input lies within the generalization envelope of a DNN.
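
To make the inference-time goal concrete, below is a minimal sketch (not taken from the survey itself) of the maximum softmax probability (MSP) baseline of Hendrycks & Gimpel (2017), one of the simplest scores used across all three fields: inputs whose confidence score falls below a threshold are flagged as potentially lying outside the generalization envelope. The threshold value of 0.9 is an illustrative assumption.

```python
import numpy as np

def msp_score(logits: np.ndarray) -> float:
    """Maximum softmax probability (MSP) of a single logit vector."""
    z = logits - logits.max()            # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax
    return float(probs.max())

def within_envelope(logits: np.ndarray, threshold: float = 0.9) -> bool:
    """Heuristic check: trust the prediction only if the network is confident.

    The threshold is an assumed example value; in practice it would be
    calibrated on held-out data.
    """
    return msp_score(logits) >= threshold

# Usage: a confidently classified input versus an ambiguous one.
confident = np.array([8.0, 0.5, -1.0])  # clearly class 0
ambiguous = np.array([1.1, 1.0, 0.9])   # nearly uniform softmax
print(within_envelope(confident))  # True  -> likely inside the envelope
print(within_envelope(ambiguous))  # False -> flag for closer inspection
```

Note that a low MSP score alone does not tell apart the three failure modes (uncertain in-distribution inputs, out-of-distribution samples, and adversarial examples), which is one motivation for treating the three detection literatures within a single framework, as this survey does.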
