Paper Title

Towards Differentially Private Text Representations

Paper Authors

Lingjuan Lyu, Yitong Li, Xuanli He, Tong Xiao

Paper Abstract

Most deep learning frameworks require users to pool their local data or model updates on a trusted server to train or maintain a global model. The assumption of a trusted server that has access to user information is ill-suited in many applications. To tackle this problem, we develop a new deep learning framework under an untrusted server setting, which includes three modules: (1) an embedding module, (2) a randomization module, and (3) a classifier module. For the randomization module, we propose a novel locally differentially private (LDP) protocol that reduces the impact of the privacy parameter $\epsilon$ on accuracy and provides enhanced flexibility in choosing randomization probabilities for LDP. Analysis and experiments show that our framework delivers comparable or even better performance than the non-private framework and existing LDP protocols, demonstrating the advantages of our LDP protocol.
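
To make the randomization module concrete: a mechanism $M$ satisfies $\epsilon$-LDP if $\Pr[M(x) = y] \le e^{\epsilon} \Pr[M(x') = y]$ for any two inputs $x, x'$ and any output $y$. Below is a minimal sketch of a standard randomized-response mechanism applied to a sign-binarized embedding, which satisfies $\epsilon$-LDP per bit. This illustrates the general technique only; the sign-based binarization, the function names, and the keep probability $p = e^{\epsilon} / (e^{\epsilon} + 1)$ are assumptions for the example, not the paper's specific protocol.

```python
import numpy as np

def randomized_response(bits, epsilon):
    """Privatize a binary vector via standard randomized response.

    Each bit is kept with probability p = e^eps / (e^eps + 1) and
    flipped otherwise, which satisfies epsilon-LDP per bit since
    p / (1 - p) = e^eps. (Generic mechanism, not the paper's protocol.)
    """
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    keep = np.random.rand(len(bits)) < p
    return np.where(keep, bits, 1 - bits)

# Toy usage: binarize an embedding by sign (an assumed encoding
# step for illustration), then randomize it before leaving the user.
embedding = np.random.randn(8)          # toy 8-dimensional embedding
bits = (embedding > 0).astype(int)      # sign-based binarization
private_bits = randomized_response(bits, epsilon=1.0)
print(private_bits)
```

Note the trade-off this sketch exposes: as $\epsilon$ shrinks, $p$ approaches $1/2$ and the output approaches uniform noise, which is the accuracy impact the paper's protocol is designed to mitigate.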
