Paper Title

Exploiting Verified Neural Networks via Floating Point Numerical Error

Paper Authors

Kai Jia, Martin Rinard

Paper Abstract

Researchers have developed neural network verification algorithms motivated by the need to characterize the robustness of deep neural networks. These verifiers aspire to answer whether a neural network guarantees certain properties for all inputs in a given space. However, many verifiers inaccurately model floating point arithmetic without thoroughly discussing the consequences. We show that neglecting floating point error leads to unsound verification that can be systematically exploited in practice. For a pretrained neural network, we present a method that efficiently searches for inputs that witness the incorrectness of robustness claims made by a complete verifier. We also present a method to construct neural network architectures and weights that induce incorrect results from an incomplete verifier. Our results highlight that, to achieve practically reliable verification of neural networks, any verification system must accurately (or conservatively) model the effects of any floating point computations in network inference or in the verification system itself.
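
A minimal, self-contained illustration of the underlying numerical issue (a sketch of the general phenomenon, not the paper's search or construction method): IEEE 754 floating point addition is not associative, so the same dot product evaluated in different accumulation orders, as different BLAS kernels may do, generally produces slightly different results. A verifier that models network inference over the reals ignores exactly this gap. All names and sizes below are illustrative assumptions:

```python
import numpy as np

# Sketch: float32 summation order changes the result of a dot product.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # hypothetical weight row
x = rng.standard_normal(1000).astype(np.float32)  # hypothetical input

forward = np.float32(0.0)
for a, b in zip(w, x):                 # left-to-right accumulation
    forward += a * b

backward = np.float32(0.0)
for a, b in zip(w[::-1], x[::-1]):     # right-to-left accumulation
    backward += a * b

# The two mathematically equal sums typically differ in their last bits.
print(forward, backward, float(forward) - float(backward))
```

If a robustness certificate is issued with a margin smaller than this kind of rounding gap, an input near the decision boundary can be classified differently by the deployed floating point network than the verifier's real-arithmetic model predicts; this is the slack that the exploits described in the abstract rely on.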
