Paper Title

CodNN -- Robust Neural Networks From Coded Classification

Paper Authors

Netanel Raviv, Siddharth Jain, Pulakesh Upadhyaya, Jehoshua Bruck, Anxiao Jiang

Abstract

Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution, and yet their intrinsic properties remain a mystery. In particular, it is widely known that DNNs are highly sensitive to noise, whether adversarial or random. This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving. In this paper we construct robust DNNs via error correcting codes. By our approach, either the data or internal layers of the DNN are coded with error correcting codes, and successful computation under noise is guaranteed. Since DNNs can be seen as a layered concatenation of classification tasks, our research begins with the core task of classifying noisy coded inputs, and progresses towards robust DNNs. We focus on binary data and linear codes. Our main result is that the prevalent parity code can guarantee robustness for a large family of DNNs, which includes the recently popularized binarized neural networks. Further, we show that the coded classification problem has a deep connection to Fourier analysis of Boolean functions. In contrast to existing solutions in the literature, our results do not rely on altering the training process of the DNN, and provide mathematically rigorous guarantees rather than experimental evidence.
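To make the coding idea concrete, here is a minimal toy sketch of how a parity code adds redundancy to binary data so that a single bit flip becomes detectable. This is only an illustration of the general parity-coding principle the abstract refers to, not the paper's actual construction or guarantees; the function names `parity_encode` and `parity_check` are invented for this example.

```python
def parity_encode(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_check(codeword):
    """Return True if the codeword has even parity (no single-bit error detected)."""
    return sum(codeword) % 2 == 0

x = [1, 0, 1, 1]           # binary data, e.g. an input to a binarized NN
c = parity_encode(x)       # coded input: [1, 0, 1, 1, 1]
print(parity_check(c))     # clean codeword passes the check

noisy = c.copy()
noisy[2] ^= 1              # a single random or adversarial bit flip
print(parity_check(noisy)) # the flip is detected
```

The paper's contribution goes further than detection: it shows that for a large family of DNNs, classifying the parity-coded input can itself be made provably robust to such noise.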
