Paper Title

FedSmart: An Auto Updating Federated Learning Optimization Mechanism

Paper Authors

Anxun He, Jianzong Wang, Zhangcheng Huang, Jing Xiao

Paper Abstract

Federated learning has made an important contribution to preserving data privacy. Many previous works are based on the assumption that the data are independent and identically distributed (IID). As a result, model performance on non-independent and identically distributed (non-IID) data falls short of expectations, even though non-IID data is the situation encountered in practice. Some existing methods for ensuring model robustness on non-IID data, such as the data-sharing strategy or pretraining, may lead to privacy leakage. In addition, some participants may try to poison the model with low-quality data. In this paper, a performance-based parameter return method for optimization is introduced, which we term FederatedSmart (FedSmart). It optimizes a different model for each client by sharing global gradients, and it extracts data from each client as a local validation set; the accuracy that the model achieves in round t determines the weights for the next round. The experimental results show that FedSmart enables participants to allocate greater weights to the ones with similar data distributions.
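
The abstract only outlines the mechanism, so the snippet below is a minimal sketch of the performance-based weighting it describes, assuming a toy linear classifier and per-peer shared gradients. The names fedsmart_update and validation_accuracy, the probe-step scoring of each peer, and the learning rate are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def validation_accuracy(w, X_val, y_val):
        """Accuracy of a toy linear classifier on the local validation set."""
        preds = (X_val @ w > 0).astype(int)
        return float((preds == y_val).mean())

    def fedsmart_update(w_local, peer_grads, peer_weights, X_val, y_val, lr=0.1):
        """One hypothetical round of performance-based weighting for a client.

        peer_grads   : {peer_id: gradient vector shared through the server}
        peer_weights : {peer_id: weight earned in the previous round}
        Returns the updated local model and the weights for the next round.
        """
        # Aggregate the shared gradients using the weights from round t - 1.
        total = sum(peer_weights.values()) or 1.0
        agg = sum(peer_weights[pid] / total * g for pid, g in peer_grads.items())
        w_local = w_local - lr * agg

        # Probe each peer's gradient alone: its accuracy on this client's
        # validation split in round t becomes its weight in round t + 1,
        # so peers with similar data distributions earn larger weights.
        next_weights = {
            pid: validation_accuracy(w_local - lr * g, X_val, y_val)
            for pid, g in peer_grads.items()
        }
        return w_local, next_weights

Under this sketch, a low-quality or poisoning peer scores poorly on the local validation set and is automatically down-weighted in later rounds, matching the behavior the abstract claims.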
