Paper Title

Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling

Authors

KiYoon Yoo, Nojun Kwak

Abstract

Recent advances in federated learning have demonstrated its promising capability to learn on decentralized datasets. However, a considerable amount of work has raised concerns due to the potential risks of adversaries participating in the framework to poison the global model for an adversarial purpose. This paper investigates the feasibility of model poisoning for backdoor attacks through rare word embeddings of NLP models. In text classification, less than 1% of adversary clients suffices to manipulate the model output without any drop in the performance on clean sentences. For a less complex dataset, a mere 0.1% of adversary clients is enough to poison the global model effectively. We also propose a technique specialized in the federated learning scheme called Gradient Ensemble, which enhances the backdoor performance in all our experimental settings.
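To make the rare-word-embedding attack described above concrete, here is a minimal toy sketch, not the paper's actual method: a frozen linear classifier reads the mean of token embeddings, and the attacker performs gradient ascent on only the embedding row of a rare trigger token. All names, sizes, and hyperparameters (`TRIGGER`, vocabulary size, learning rate) are illustrative assumptions. Because no other embedding row is touched, predictions on clean sentences are exactly unchanged, mirroring the abstract's claim of no clean-performance drop.

```python
import numpy as np

# Toy sketch of a rare-token embedding backdoor (all names and numbers
# are illustrative, not from the paper). A frozen linear classifier
# scores the mean of token embeddings; the attacker updates ONLY the
# embedding row of a rare trigger token, so clean inputs are unchanged.
rng = np.random.default_rng(0)
VOCAB, DIM = 20, 8
emb = rng.normal(0.0, 0.1, (VOCAB, DIM))  # embedding table
w = rng.normal(0.0, 1.0, DIM)             # classifier weights (frozen)
TRIGGER = VOCAB - 1                       # hypothetical rare token id

def p_target(tokens):
    """P(attacker's target label) for a bag-of-tokens input."""
    x = emb[tokens].mean(axis=0)
    return 1.0 / (1.0 + np.exp(-(x @ w)))

sent = [3, 7, 11]                 # a "clean" sentence
clean_before = p_target(sent)     # prediction before poisoning

# Attacker: gradient ascent on log P(target) w.r.t. emb[TRIGGER] only.
for _ in range(500):
    toks = sent + [TRIGGER]
    p = p_target(toks)
    # d log p / d emb[TRIGGER] = (1 - p) * w / len(toks)
    emb[TRIGGER] += 2.0 * (1.0 - p) * w / len(toks)

print("triggered:", p_target(sent + [TRIGGER]))  # driven toward 1
print("clean    :", p_target(sent))              # identical to before
```

In the federated setting, the adversary client would fold this poisoned embedding row into its local model update; aggregation then carries the trigger into the global model while clean behavior is preserved.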
