Title

Decentralized Attribution of Generative Models

Authors

Changhoon Kim, Yi Ren, Yezhou Yang

Abstract

Growing applications of generative models have led to new threats such as malicious personation and digital copyright infringement. One solution to these threats is model attribution, i.e., identifying the user-end model from which the content in question was generated. Existing studies showed the empirical feasibility of attribution through a centralized classifier trained on all user-end models. However, this approach is not scalable in practice as the number of models keeps growing, nor does it provide an attributability guarantee. To this end, this paper studies decentralized attribution, which relies on binary classifiers associated with each user-end model. Each binary classifier is parameterized by a user-specific key and distinguishes its associated model distribution from the authentic data distribution. We develop sufficient conditions on the keys that guarantee a lower bound on attributability. Our method is validated on the MNIST, CelebA, and FFHQ datasets. We also examine the trade-off between generation quality and robustness of attribution against adversarial post-processing.
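As a rough illustration of the decentralized setup described above, the toy sketch below gives each user a key vector that parameterizes a linear binary classifier, and a sample is attributed to whichever user's classifier claims it. The dimensions, the orthogonality of the keys, the decision margin `TAU`, and the displacement of the "generated" sample are all illustrative assumptions, not the paper's actual construction (the paper derives sufficient angle conditions between keys and trains the user-end generators accordingly).

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_users = 8, 3  # toy sizes, illustrative only

# Hypothetical user-specific keys: orthonormal vectors phi_u.
# Orthogonal keys are a simple toy stand-in for the paper's
# sufficient conditions on pairwise key angles.
Q, _ = np.linalg.qr(rng.normal(size=(dim, num_users)))
keys = Q.T  # shape (num_users, dim); rows are unit-norm keys

TAU = 1.0  # hypothetical decision margin for each binary classifier

def attribute(x):
    """Each user u has a binary classifier f_u(x) = 1[phi_u . x > TAU].
    Return the list of users whose classifier claims sample x."""
    return [u for u, phi in enumerate(keys) if phi @ x > TAU]

# Authentic-like sample: small noise, claimed by no classifier.
authentic = 0.1 * rng.normal(size=dim)
# Toy stand-in for user 1's model output: authentic data displaced
# along key phi_1, i.e. onto the positive side of classifier 1.
x = authentic + 2.0 * keys[1]

print(attribute(authentic))  # -> []
print(attribute(x))          # -> [1]
```

Because each user's classifier depends only on that user's key, a new user can be added without retraining any centralized classifier, which is the scalability argument the abstract makes.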
