Paper Title
On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks
Paper Authors
Paper Abstract
Outsourcing deep neural network (DNN) inference tasks to an untrusted cloud raises data privacy and integrity concerns. While there are many techniques to ensure privacy and integrity for polynomial-based computations, DNNs involve non-polynomial computations. To address these challenges, several privacy-preserving and verifiable inference techniques have been proposed that replace non-polynomial activation functions, such as the rectified linear unit (ReLU), with polynomial activation functions. Such techniques usually require polynomials with integer coefficients or polynomials over finite fields. Motivated by these requirements, several works proposed replacing the ReLU function with the square function. In this work, we empirically show that the square function is not the best degree-2 polynomial replacement for the ReLU function, even when the polynomials are restricted to integer coefficients. We instead propose a degree-2 polynomial activation function with a first-order term and empirically show that it can lead to much better models. Our experiments on the CIFAR and Tiny ImageNet datasets with various architectures such as VGG-16 show that our proposed function improves test accuracy by up to 10.4% compared to the square function.
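To make the kind of substitution described in the abstract concrete, below is a minimal PyTorch-style sketch of a degree-2 polynomial activation with a first-order term, contrasted with the plain square activation used in prior work. The specific integer coefficients (x² + x) and the class names are illustrative assumptions, not the coefficients actually proposed in the paper.

```python
import torch
import torch.nn as nn


class PolyActivation(nn.Module):
    """Degree-2 polynomial activation a*x^2 + b*x + c with integer coefficients.

    The defaults (a=1, b=1, c=0, i.e. x^2 + x) are hypothetical and chosen only
    to illustrate a degree-2 polynomial that includes a first-order term.
    """

    def __init__(self, a: int = 1, b: int = 1, c: int = 0):
        super().__init__()
        self.a, self.b, self.c = a, b, c

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.a * x * x + self.b * x + self.c


class SquareActivation(nn.Module):
    """The square activation x^2 used as a ReLU replacement in prior work."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * x


if __name__ == "__main__":
    # Drop-in use where a network would otherwise apply nn.ReLU().
    x = torch.randn(4, 8)
    print(PolyActivation()(x).shape, SquareActivation()(x).shape)
```

Because both activations are polynomials with integer coefficients, they remain compatible with privacy-preserving and verifiable computation frameworks that operate on polynomial (e.g., finite-field) arithmetic, which is the constraint motivating the replacement of ReLU in the first place.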