Paper Title
Measuring and Controlling Split Layer Privacy Leakage Using Fisher Information
Paper Authors
Paper Abstract
Split learning and inference propose to run training/inference of a large model that is split across client devices and the cloud. However, such model splitting raises privacy concerns, because the activations flowing through the split layer may leak information about the clients' private input data. There is currently no good way to quantify how much private information is being leaked through the split layer, nor a good way to improve privacy to a desired level. In this work, we propose to use Fisher information as a privacy metric to measure and control the information leakage. We show that Fisher information provides an intuitive interpretation of how much private information leaks through the split layer, in the form of an error bound on any unbiased reconstruction attacker. We then propose a privacy-enhancing technique, ReFIL, that enforces a user-desired level of Fisher information leakage at the split layer, achieving high privacy while maintaining reasonable utility.
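The error bound the abstract refers to can be sketched with the Cramér-Rao inequality: for an unbiased estimator of the input, per-dimension reconstruction MSE is lower-bounded by tr(I(x)^{-1})/d, where I(x) is the Fisher information of the input given the released activation. The following is a minimal NumPy illustration under assumed simplifications (a linear split layer W with additive Gaussian noise; W, sigma, and the dimensions are hypothetical, not the paper's actual setup or code):

```python
import numpy as np

# Toy split layer: the client releases z = W x + N(0, sigma^2 I).
# For this Gaussian channel, the Fisher information matrix of the
# private input x is I(x) = W^T W / sigma^2, and the Cramer-Rao bound
# says any *unbiased* reconstruction attacker satisfies
#   E[||x_hat - x||^2] / d >= tr(I(x)^{-1}) / d,
# giving an attack-independent measure of privacy leakage.

rng = np.random.default_rng(0)
d, k = 4, 8                      # input dim, activation dim (illustrative)
W = rng.standard_normal((k, d))  # hypothetical split-layer weights
sigma = 0.5                      # noise std added at the split layer

I_x = W.T @ W / sigma**2                  # Fisher information matrix of x
crb = np.trace(np.linalg.inv(I_x)) / d    # lower bound on per-dim MSE

# Doubling the noise std quarters the Fisher information, so the
# reconstruction-error lower bound grows: less leakage, more privacy.
I_x_noisier = W.T @ W / (2 * sigma) ** 2
crb_noisier = np.trace(np.linalg.inv(I_x_noisier)) / d
assert crb_noisier > crb
```

Controlling sigma (or other noise parameters) to hit a target Fisher information level is, at a high level, the kind of knob a technique like ReFIL exposes.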