Paper Title


On Tractable Representations of Binary Neural Networks

Paper Authors

Weijia Shi, Andy Shih, Adnan Darwiche, Arthur Choi

Paper Abstract


We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs). Obtaining this function as an OBDD/SDD facilitates the explanation and formal verification of a neural network's behavior. First, we consider the task of verifying the robustness of a neural network, and show how we can compute the expected robustness of a neural network, given an OBDD/SDD representation of it. Next, we consider a more efficient approach for compiling neural networks, based on a pseudo-polynomial time algorithm for compiling a neuron. We then provide a case study in a handwritten digits dataset, highlighting how two neural networks trained from the same dataset can have very high accuracies, yet have very different levels of robustness. Finally, in experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
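The abstract mentions a pseudo-polynomial time algorithm for compiling a single neuron. A minimal sketch of that idea (not the authors' implementation; the weights and threshold below are illustrative assumptions): a binarized neuron with integer weights computes a threshold function, and it can be compiled into an OBDD-style diagram by dynamic programming over the pair (input index, partial weighted sum). Since the partial sum ranges over at most the sum of absolute weight values, the number of distinct states — and hence the diagram size — is pseudo-polynomial in the weights.

```python
from functools import lru_cache

def compile_neuron(weights, threshold):
    """Compile sign(w . x >= threshold) into an OBDD-like diagram.

    Nodes are (var_index, low_child, high_child) tuples; leaves are booleans.
    Memoizing on (index, partial sum) shares equivalent sub-diagrams, which
    is what gives the pseudo-polynomial bound for integer weights.
    """
    @lru_cache(maxsize=None)
    def build(i, acc):
        # Prune: if the outcome is decided regardless of the remaining
        # inputs, return a constant leaf.
        remaining_pos = sum(w for w in weights[i:] if w > 0)
        remaining_neg = sum(w for w in weights[i:] if w < 0)
        if acc + remaining_neg >= threshold:
            return True
        if acc + remaining_pos < threshold:
            return False
        # Otherwise branch on input x_i: low child (x_i = 0), high (x_i = 1).
        return (i, build(i + 1, acc), build(i + 1, acc + weights[i]))

    return build(0, 0)

def evaluate(node, x):
    """Follow the diagram for a concrete 0/1 input vector x."""
    while isinstance(node, tuple):
        i, low, high = node
        node = high if x[i] else low
    return node
```

For example, `compile_neuron((2, -1, 1), 2)` builds a diagram that agrees with the threshold test `2*x0 - x1 + x2 >= 2` on all eight inputs. Compiling a full network additionally requires conjoining and disjoining such diagrams (e.g., via an SDD library's apply operations), which this sketch does not cover.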
