Paper Title

Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models

Paper Authors

Sohaib Ahmad, Benjamin Fuller, Kaleel Mahmood

Paper Abstract

Authentication systems are vulnerable to model inversion attacks, where an adversary is able to approximate the inverse of a target machine learning model. Biometric models are a prime candidate for this type of attack, because inverting a biometric model allows the attacker to produce a realistic biometric input to spoof biometric authentication systems. One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote our new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimension and values of the output vector. To show the versatility of the alignment loss, we apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For iris data, the membership inference attack against classification networks improves from 52% to 62% accuracy.
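
The paper's implementation is not reproduced here, but the following is a rough, hypothetical sketch of the ideas the abstract describes: each black-box model is queried only for its output vector, a trainable per-model projection maps those vectors into a shared space, a penalty on disagreement between projections stands in for the alignment loss, and a single decoder is trained to invert the shared space. All names (Aligner, train_step) and the specific MSE formulation are assumptions made for illustration, not the paper's actual method:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Aligner(nn.Module):
        """Hypothetical per-model linear map into a shared embedding space."""
        def __init__(self, in_dim, shared_dim):
            super().__init__()
            self.proj = nn.Linear(in_dim, shared_dim)

        def forward(self, emb):
            return self.proj(emb)

    def train_step(decoder, aligners, target_models, images, opt):
        """One optimization step over the trainable aligners and decoder.
        The target models are queried only for their output vectors
        (black-box: no weights, no gradients through them)."""
        aligned = []
        for f, a in zip(target_models, aligners):
            with torch.no_grad():
                emb = f(images)      # black-box query: output vector only
            aligned.append(a(emb))   # trainable projection into shared space

        # Alignment term (hypothetical form): projections of the same
        # input from different models should agree.
        align = sum(F.mse_loss(aligned[i], aligned[j])
                    for i in range(len(aligned))
                    for j in range(i + 1, len(aligned)))

        # Reconstruction term: one decoder inverts the shared space back
        # to an image for every model's projected embedding.
        recon = sum(F.mse_loss(decoder(z), images) for z in aligned)

        loss = recon + align
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

Under these assumptions, training one decoder against several models is what lets each model's queries serve as extra supervision, which is consistent with the abstract's claim that combining outputs reduces the training data needed.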

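For context on the membership-inference task the abstract references (Shokri et al., IEEE S&P 2017), below is a generic confidence-threshold baseline, not the paper's alignment-based attack; the function name and threshold value are invented for illustration:

    import torch
    import torch.nn.functional as F

    def confidence_mi_attack(model, samples, threshold=0.9):
        """Baseline membership inference: predict 'member' when the
        classifier's top softmax confidence exceeds a threshold, since
        training samples tend to be fit more confidently than unseen
        ones. The threshold here is arbitrary and would be tuned."""
        with torch.no_grad():
            probs = F.softmax(model(samples), dim=1)
        top_conf, _ = probs.max(dim=1)
        return top_conf > threshold  # True = predicted training-set member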