Paper Title

Data-Driven Certification of Neural Networks with Random Input Noise

Paper Authors

Anderson, Brendon G., Sojoudi, Somayeh

Paper Abstract

Methods to certify the robustness of neural networks in the presence of input uncertainty are vital in safety-critical settings. Most certification methods in the literature are designed for adversarial or worst-case inputs, but researchers have recently shown a need for methods that consider random input noise. In this paper, we examine the setting where inputs are subject to random noise coming from an arbitrary probability distribution. We propose a robustness certification method that lower-bounds the probability that network outputs are safe. This bound is cast as a chance-constrained optimization problem, which is then reformulated using input-output samples to make the optimization constraints tractable. We develop sufficient conditions for the resulting optimization to be convex, as well as on the number of samples needed to make the robustness bound hold with overwhelming probability. We show for a special case that the proposed optimization reduces to an intuitive closed-form solution. Case studies on synthetic, MNIST, and CIFAR-10 networks experimentally demonstrate that this method is able to certify robustness against various input noise regimes over larger uncertainty regions than prior state-of-the-art techniques.
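
The paper's actual bound is obtained from a chance-constrained optimization problem that is reformulated using input-output samples; the sketch below is only a simplified illustration of the general idea of lower-bounding the probability that network outputs are safe from samples of random input noise. The two-layer ReLU network, the argmax safety specification, and the Hoeffding-style confidence margin are stand-in assumptions for illustration, not the formulation developed in the paper.

```python
import numpy as np

# Hypothetical two-layer ReLU network used only for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def network(x):
    """Forward pass f(x) of the toy network."""
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

def is_safe(y, target_class=0):
    """Toy safety specification: the target class keeps the largest logit."""
    return np.argmax(y) == target_class

def safety_probability_lower_bound(x_nominal, noise_std=0.1, n_samples=2000, delta=1e-3):
    """Sample random input noise, propagate it through the network, and return
    a lower bound on P(output is safe) that holds with probability >= 1 - delta,
    using a one-sided Hoeffding confidence margin on the empirical safety rate."""
    safe_count = 0
    for _ in range(n_samples):
        # The noise distribution is arbitrary in general; Gaussian is assumed here.
        noise = rng.normal(scale=noise_std, size=x_nominal.shape)
        safe_count += is_safe(network(x_nominal + noise))
    p_hat = safe_count / n_samples
    margin = np.sqrt(np.log(1.0 / delta) / (2.0 * n_samples))
    return max(0.0, p_hat - margin)

x0 = rng.normal(size=4)
print(f"Lower bound on safety probability: {safety_probability_lower_bound(x0):.3f}")
```

As in the paper's setting, increasing the number of input-output samples tightens the bound while keeping the failure probability of the certificate itself (here, `delta`) overwhelmingly small.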
