Paper Title

Does CLIP Know My Face?

Paper Authors

Dominik Hintersdorf, Lukas Struppek, Manuel Brack, Felix Friedrich, Patrick Schramowski, Kristian Kersting

Paper Abstract

With the rise of deep learning in various applications, privacy concerns around the protection of training data have become a critical area of research. Whereas prior studies have focused on privacy risks in single-modal models, we introduce a novel method to assess privacy in multi-modal models, specifically vision-language models like CLIP. The proposed Identity Inference Attack (IDIA) reveals whether an individual was included in the training data by querying the model with images of the same person. By letting the model choose from a wide variety of possible text labels, the attack reveals whether the model recognizes the person and, therefore, whether that person's data was used for training. Our large-scale experiments on CLIP demonstrate that individuals used for training can be identified with very high accuracy. We confirm that the model has learned to associate names with depicted individuals, implying the existence of sensitive information that can be extracted by adversaries. Our results highlight the need for stronger privacy protection in large-scale models and suggest that IDIAs can be used to prove the unauthorized use of data for training and to enforce privacy laws.
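
To make the querying procedure from the abstract concrete, below is a minimal sketch of an identity inference query against a public CLIP checkpoint via the HuggingFace transformers API: the model is shown several images of the same person and asked to pick among candidate name labels. The names, image file paths, and decision rule are hypothetical placeholders for illustration, not the authors' exact protocol.

```python
# A minimal sketch of an identity inference query against CLIP, assuming the
# HuggingFace transformers API. Names, file paths, and the decision rule are
# hypothetical placeholders; the paper's actual protocol may differ.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# The target's real name hidden among decoy names (a real attack uses many more).
candidate_names = ["Jane Doe", "John Smith", "Alice Example", "Bob Placeholder"]
prompts = [f"a photo of {name}" for name in candidate_names]

# Several different images of the same person; aggregating over multiple
# queries makes the inference more reliable.
image_paths = ["person_1.jpg", "person_2.jpg", "person_3.jpg"]
images = [Image.open(path) for path in image_paths]

inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_names): for each image, the
# similarity between that image and each name prompt.
predicted = outputs.logits_per_image.argmax(dim=-1)

# If the model keeps picking the target's true name (index 0 here), the person
# was likely part of the training data.
target_index = 0
hits = (predicted == target_index).sum().item()
print(f"Model selected the target's name for {hits} of {len(images)} images")
```

If the model selects the target's true name far more often than chance across many images and decoy names, that consistency is the signal the IDIA exploits: the model has memorized the association between the name and the depicted face.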
