Paper Title

Collaborative Fairness in Federated Learning

Paper Authors

Lingjuan Lyu, Xinyi Xu, Qian Wang

Paper Abstract

In current deep learning paradigms, local training or the Standalone framework tends to result in overfitting and thus poor generalizability. This problem can be addressed by Distributed or Federated Learning (FL) that leverages a parameter server to aggregate model updates from individual participants. However, most existing Distributed or FL frameworks have overlooked an important aspect of participation: collaborative fairness. In particular, all participants can receive the same or similar models, regardless of their contributions. To address this issue, we investigate the collaborative fairness in FL, and propose a novel Collaborative Fair Federated Learning (CFFL) framework which utilizes reputation to enforce participants to converge to different models, thus achieving fairness without compromising the predictive performance. Extensive experiments on benchmark datasets demonstrate that CFFL achieves high fairness, delivers comparable accuracy to the Distributed framework, and outperforms the Standalone framework.
