Paper Title

Watermarking Pre-trained Encoders in Contrastive Learning

Authors

Wu, Yutong; Qiu, Han; Zhang, Tianwei; Li, Jiwei; Qiu, Meikang

Abstract

Contrastive learning has become a popular technique to pre-train image encoders, which can then be used to build various downstream classification models in an efficient way. This pre-training process requires a large amount of data and computation resources. Hence, a pre-trained encoder is important intellectual property that needs to be carefully protected. It is challenging to migrate existing watermarking techniques from classification tasks to the contrastive learning scenario, because the owner of the encoder does not know which downstream tasks will later be developed from it. We propose the first watermarking methodology for pre-trained encoders. We introduce a task-agnostic loss function to effectively embed a backdoor into the encoder as the watermark. This backdoor persists in any downstream model transferred from the encoder. Extensive evaluations over different contrastive learning algorithms, datasets, and downstream tasks show that our watermark is highly effective and robust against different adversarial operations.
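The abstract does not spell out the loss function itself. As a rough illustration of what a "task-agnostic backdoor as the watermark" objective could look like, here is a minimal PyTorch sketch: inputs stamped with a trigger are pulled toward a fixed target embedding (so the behavior survives whatever downstream head is later attached), while clean inputs are kept close to the original encoder's outputs to preserve downstream utility. The patch trigger, target vector, toy encoder, and weight `lam` are all illustrative assumptions, not the paper's actual design.

import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_trigger(images: torch.Tensor, trigger: torch.Tensor) -> torch.Tensor:
    """Stamp a small trigger patch onto the bottom-right corner of each image.
    (A patch trigger is one common choice; the paper's trigger may differ.)"""
    stamped = images.clone()
    h, w = trigger.shape[-2:]
    stamped[..., -h:, -w:] = trigger
    return stamped


def watermark_loss(encoder: nn.Module,
                   clean_encoder: nn.Module,
                   images: torch.Tensor,
                   trigger: torch.Tensor,
                   target_embedding: torch.Tensor,
                   lam: float = 1.0) -> torch.Tensor:
    """Hypothetical task-agnostic objective operating purely on embeddings:
    (1) triggered inputs map to a fixed target embedding, so the backdoor
        does not depend on any downstream label space;
    (2) clean inputs stay close to a frozen copy of the original encoder,
        so downstream accuracy is preserved."""
    z_trigger = F.normalize(encoder(apply_trigger(images, trigger)), dim=-1)
    z_clean = F.normalize(encoder(images), dim=-1)
    with torch.no_grad():
        z_ref = F.normalize(clean_encoder(images), dim=-1)
    target = F.normalize(target_embedding, dim=-1)

    # Cosine-similarity losses: pull triggered embeddings toward the target,
    # keep clean embeddings aligned with the reference encoder.
    loss_backdoor = 1.0 - (z_trigger * target).sum(dim=-1).mean()
    loss_utility = 1.0 - (z_clean * z_ref).sum(dim=-1).mean()
    return loss_backdoor + lam * loss_utility


if __name__ == "__main__":
    # Toy demo on random data with a tiny CNN encoder.
    encoder = nn.Sequential(nn.Conv2d(3, 8, 3, 2, 1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                            nn.Linear(8, 32))
    clean_encoder = copy.deepcopy(encoder).eval()
    for p in clean_encoder.parameters():
        p.requires_grad_(False)

    images = torch.rand(4, 3, 32, 32)
    trigger = torch.rand(3, 6, 6)   # hypothetical 6x6 patch trigger
    target = torch.randn(32)        # hypothetical secret verification key

    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    opt.zero_grad()
    loss = watermark_loss(encoder, clean_encoder, images, trigger, target)
    loss.backward()
    opt.step()
    print(f"watermark loss: {loss.item():.4f}")

Under this kind of scheme, ownership of a suspect downstream model would be verified by checking whether trigger-stamped inputs yield the anomalous, consistent behavior induced by the target embedding; the exact verification procedure used in the paper is not given in the abstract.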
