Paper Title


Industrial Scale Privacy Preserving Deep Neural Network

Paper Authors

Longfei Zheng, Chaochao Chen, Yingting Liu, Bingzhe Wu, Xibin Wu, Li Wang, Lei Wang, Jun Zhou, Shuang Yang

Paper Abstract


Deep Neural Network (DNN) models have shown great potential in many real-world applications such as fraud detection and financial distress prediction. Meanwhile, data isolation has become a serious problem, i.e., different parties cannot share data with each other. To solve this issue, most research leverages cryptographic techniques to train secure DNN models for multiple parties without compromising their private data. Although such methods have strong security guarantees, they are difficult to scale to deep networks and large datasets due to their high communication and computation complexity. To address the scalability of existing secure DNNs in data-isolation scenarios, in this paper we propose an industrial-scale privacy-preserving neural network learning paradigm that is secure against semi-honest adversaries. Our main idea is to split the computation graph of the DNN into two parts: the computations related to private data are performed by each party using cryptographic techniques, while the remaining computations are done by a neutral server with high computation ability. We also present a defender mechanism for further privacy protection. We conduct experiments on a real-world fraud detection dataset and a financial distress prediction dataset; the encouraging results demonstrate the practicality of our proposal.
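The computation-graph split described in the abstract can be sketched as follows. This is a minimal illustration assuming a split-learning-style setup, not the paper's actual protocol: each party computes the layers that touch its private features locally (in the real system that step would use cryptographic techniques; here it is plain arithmetic), and a neutral server runs the remaining layers. All names (`PartyModel`, `ServerModel`, the dimensions) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class PartyModel:
    """Private part of the graph: maps a party's raw features to a hidden vector."""
    def __init__(self, in_dim, hid_dim):
        self.W = rng.normal(size=(in_dim, hid_dim)) * 0.1
    def forward(self, x):
        # In the paper's setting this computation would be protected
        # cryptographically; here it is plain arithmetic for illustration.
        return np.maximum(x @ self.W, 0.0)  # ReLU

class ServerModel:
    """Remaining part of the graph, run by the neutral server."""
    def __init__(self, hid_dim, out_dim):
        self.W = rng.normal(size=(hid_dim, out_dim)) * 0.1
    def forward(self, h):
        z = h @ self.W
        return 1.0 / (1.0 + np.exp(-z))  # sigmoid output

# Two parties hold disjoint feature sets for the same samples.
party_a = PartyModel(in_dim=4, hid_dim=8)
party_b = PartyModel(in_dim=6, hid_dim=8)
server = ServerModel(hid_dim=16, out_dim=1)

x_a = rng.normal(size=(5, 4))   # party A's private features
x_b = rng.normal(size=(5, 6))   # party B's private features

# Each party sends only its hidden representation, never its raw features.
h = np.concatenate([party_a.forward(x_a), party_b.forward(x_b)], axis=1)
pred = server.forward(h)
print(pred.shape)  # (5, 1)
```

The design point this sketch captures is that the server never sees raw inputs, only intermediate activations; the paper's defender mechanism would add further protection on top of this split.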
