Title
A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning
Authors
Abstract
Federated learning (FL) is an emerging practical framework for effective and scalable machine learning among multiple participants, such as end users, organizations, and companies. However, most existing FL or distributed learning frameworks have not adequately addressed two important issues together: collaborative fairness and adversarial robustness (e.g., against free-riders and malicious participants). In conventional FL, all participants receive the global model (equal rewards), which might be unfair to the high-contributing participants. Furthermore, due to the lack of a safeguard mechanism, free-riders or malicious adversaries could game the system to access the global model for free or to sabotage it. In this paper, we propose a novel Robust and Fair Federated Learning (RFFL) framework to achieve collaborative fairness and adversarial robustness simultaneously via a reputation mechanism. RFFL maintains a reputation for each participant by examining their contributions via their uploaded gradients (using vector similarity) and thus identifies non-contributing or malicious participants to be removed. Our approach differentiates itself by not requiring any auxiliary/validation dataset. Extensive experiments on benchmark datasets show that RFFL can achieve high fairness and is very robust to different types of adversaries while achieving competitive predictive accuracy.
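The abstract describes scoring each participant by the vector similarity between their uploaded gradient and an aggregate, then removing low-reputation participants. The sketch below illustrates that idea only; the specific choices (plain mean aggregation, a moving-average reputation update with coefficient `alpha`, a zero threshold, and the function names themselves) are assumptions for illustration, not the paper's actual RFFL algorithm.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened gradient vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def update_reputations(gradients, reputations, alpha=0.9, threshold=0.0):
    """One round of similarity-based reputation filtering (illustrative only).

    gradients:   dict {participant_id: flattened gradient vector}
    reputations: dict {participant_id: current reputation score}
    alpha:       moving-average coefficient (hypothetical value)
    threshold:   participants whose reputation falls below this are flagged
    Returns the updated reputations and the set of flagged participants.
    """
    # Aggregate the uploaded gradients; a simple mean stands in for the
    # paper's server-side aggregation, whose exact form the abstract
    # does not specify.
    agg = np.mean(list(gradients.values()), axis=0)
    removed = set()
    for pid, g in gradients.items():
        sim = cosine_similarity(g, agg)
        # Smooth the per-round similarity into a running reputation.
        reputations[pid] = alpha * reputations.get(pid, sim) + (1 - alpha) * sim
        if reputations[pid] < threshold:
            removed.add(pid)
    return reputations, removed
```

In this toy setup, a participant who uploads a gradient pointing against the aggregate (e.g., a free-rider sending noise or an attacker sending a sign-flipped update) accumulates a negative reputation and is flagged for removal, without any auxiliary validation dataset.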