Paper Title

Privacy-preserving Generative Framework Against Membership Inference Attacks

Paper Authors

Ruikang Yang, Jianfeng Ma, Yinbin Miao, Xindi Ma

Abstract

Artificial intelligence and machine learning have been integrated into every aspect of our lives, and the privacy of personal data has attracted more and more attention. Since building a model requires extracting effective information from the training data, the model carries the risk of leaking the privacy of that training data. Membership inference attacks can measure, to a certain degree, how much a model leaks about its source data. In this paper, we design a privacy-preserving generative framework against membership inference attacks: using the information-extraction and data-generation capabilities of the variational autoencoder (VAE), we generate synthetic data that satisfies differential privacy. Instead of adding noise to the model output or tampering with the training process of the target model, we process the original data directly. We first map the source data into the latent space through the VAE encoder to obtain latent codes, then apply a noising process that satisfies metric privacy to the latent codes, and finally use the VAE decoder to reconstruct synthetic data. Our experimental evaluation demonstrates that machine learning models trained on the newly generated synthetic data can effectively resist membership inference attacks while still maintaining high utility.
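As a rough illustration of the three-step pipeline the abstract describes, below is a minimal Python sketch. The encode and decode callables stand in for a trained VAE's encoder and decoder (names assumed for illustration, not taken from the paper), and the noise mechanism shown is the standard d-dimensional generalization of the planar Laplace mechanism commonly used for metric privacy; the paper's actual mechanism and parameterization may differ.

import numpy as np

def metric_privacy_noise(z, epsilon, rng=None):
    """Add noise with density proportional to exp(-epsilon * ||n||_2).

    This is the d-dimensional generalization of the planar Laplace
    mechanism often used for metric privacy: the noise direction is
    uniform on the unit sphere and the radius follows a
    Gamma(d, 1/epsilon) distribution. (Assumed mechanism; the paper
    may use a different noising process.)
    """
    if rng is None:
        rng = np.random.default_rng()
    d = z.shape[-1]
    direction = rng.standard_normal(z.shape)
    direction /= np.linalg.norm(direction, axis=-1, keepdims=True)
    radius = rng.gamma(shape=d, scale=1.0 / epsilon, size=z.shape[:-1] + (1,))
    return z + radius * direction

def generate_synthetic(encode, decode, x, epsilon):
    """Three-step pipeline from the abstract.

    1. Map the source data to latent codes with the VAE encoder.
    2. Perturb the latent codes with metric-privacy noise.
    3. Reconstruct synthetic data with the VAE decoder.
    """
    z = np.asarray(encode(x))        # latent codes, e.g. the encoder mean
    z_noisy = metric_privacy_noise(z, epsilon)
    return decode(z_noisy)           # synthetic data for downstream training

With a trained VAE, one would call generate_synthetic(vae_encode, vae_decode, X_train, epsilon) and then train the downstream classifier on the returned synthetic data instead of on the original records, which is what makes the trained model resistant to membership inference on the source data.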
