Paper Title
Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey
Paper Authors
Abstract
Advances in distributed machine learning can empower future communications and networking. The emergence of federated learning (FL) has provided an efficient framework for distributed machine learning, which, however, still faces many security challenges. Among them, model poisoning attacks have a significant impact on the security and performance of FL. Given that there have been many studies focusing on defending against model poisoning attacks, it is necessary to survey the existing work and provide insights to inspire future research. In this paper, we first classify defense mechanisms for model poisoning attacks into two categories: evaluation methods for local model updates and aggregation methods for the global model. Then, we analyze some of the existing defense strategies in detail. We also discuss some potential challenges and future research directions. To the best of our knowledge, we are the first to survey defense methods for model poisoning attacks in FL.