Paper Title
深度神经网络的可扩展定量验证
Scalable Quantitative Verification For Deep Neural Networks
Paper Authors
Paper Abstract
Despite the functional success of deep neural networks (DNNs), their trustworthiness remains a crucial open challenge. To address this challenge, both testing and verification techniques have been proposed. But these existing techniques provide either scalability to large networks or formal guarantees, not both. In this paper, we propose a scalable quantitative verification framework for deep neural networks, i.e., a test-driven approach that comes with formal guarantees that a desired probabilistic property is satisfied. Our technique performs tests until the formal probabilistic property can be soundly proven. It can be used to certify properties of both deterministic and randomized DNNs. We implement our approach in a tool called PROVERO and apply it in the context of certifying adversarial robustness of DNNs. In this context, we first present a new attack-agnostic measure of robustness, which offers an alternative to the purely attack-based methodology for evaluating robustness that is reported today. Second, PROVERO provides certificates of robustness for large DNNs, where existing state-of-the-art verification tools fail to produce conclusive results. Our work paves the way forward for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees, even when testers only have black-box access to the neural network.
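To make the test-driven certification idea concrete, the sketch below illustrates quantitative robustness verification by sampling: it queries a black-box classifier on random perturbations within an L-infinity ball and uses a Hoeffding concentration bound to certify, with confidence 1 - delta, that the true probability of an adversarial example is below a threshold theta. This is a simplified fixed-sample illustration, not PROVERO's actual (adaptive, sequential) algorithm; the function names, the toy model, and the sample-size formula are assumptions for exposition only.

```python
import math
import random


def certify_robustness(model, x, label, eps, theta, delta):
    """Statistically certify that P(model misclassifies a uniformly random
    point in the L-infinity ball of radius eps around x) < theta,
    with confidence at least 1 - delta, using only black-box queries.

    Illustrative fixed-sample sketch (Hoeffding bound), not PROVERO's
    adaptive sequential testing procedure.
    """
    t = theta / 2  # estimation accuracy target
    # Hoeffding: P(|p_hat - p| >= t) <= 2 * exp(-2 * n * t^2) <= delta
    n_samples = math.ceil(math.log(2 / delta) / (2 * t ** 2))
    violations = 0
    for _ in range(n_samples):
        # Sample a uniformly random perturbation within the eps-ball.
        x_pert = [xi + random.uniform(-eps, eps) for xi in x]
        if model(x_pert) != label:
            violations += 1
    p_hat = violations / n_samples
    # If the empirical violation rate is below theta/2, then with
    # probability >= 1 - delta the true rate is below theta.
    return p_hat < t, p_hat, n_samples


# Hypothetical toy black-box classifier standing in for a DNN.
def toy_model(x):
    return int(sum(x) > 0)


certified, p_hat, n = certify_robustness(
    toy_model, x=[0.5, 0.5], label=1, eps=0.1, theta=0.05, delta=0.01
)
print(certified, p_hat, n)
```

For this toy model, every point in the 0.1-ball around [0.5, 0.5] still sums to a positive value, so no violations are observed and the certificate is issued. PROVERO's real algorithm improves on this fixed-sample scheme by testing adaptively, stopping as soon as the hypothesis (violation probability above or below theta) can be decided at the required confidence.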