Paper Title


DeFTA: A Plug-and-Play Decentralized Replacement for FedAvg

Authors

Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv

Abstract


Federated learning (FL) is identified as a crucial enabler for large-scale distributed machine learning (ML) without the need for sharing local raw datasets, substantially reducing privacy concerns and alleviating the isolated data problem. In practice, the prosperity of FL is largely due to a centralized framework called FedAvg, in which workers are in charge of model training and servers are in control of model aggregation. However, FedAvg's centralized worker-server architecture has raised new concerns, be it the low scalability of the cluster, the risk of data leakage, or the failure or even defection of the central server. To overcome these problems, we propose Decentralized Federated Trusted Averaging (DeFTA), a decentralized FL framework that serves as a plug-and-play replacement for FedAvg, instantly bringing better security, scalability, and fault tolerance to the federated learning process after installation. In principle, it fundamentally resolves the above-mentioned issues from an architectural perspective without compromises or tradeoffs, primarily consisting of a new model aggregation formula with theoretical performance analysis, and a decentralized trust system (DTS) that greatly improves system robustness. Note that since DeFTA is an alternative to FedAvg at the framework level, *prevalent algorithms published for FedAvg can also be utilized in DeFTA with ease*. Extensive experiments on six datasets and six basic models suggest that DeFTA not only has comparable performance to FedAvg in a more realistic setting, but also achieves great resilience even when 66% of workers are malicious. Furthermore, we also present an asynchronous variant of DeFTA to endow it with more powerful usability.
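To make the architectural contrast in the abstract concrete, the sketch below shows the two aggregation styles side by side: FedAvg's server-side weighted averaging versus a serverless step in which each worker mixes its own model with models received from peers. This is a minimal illustration with hypothetical function names and a simple mixing weight; it is not the paper's actual aggregation formula or the DTS mechanism.

```python
def fedavg_aggregate(updates, sizes):
    """Server-side FedAvg: weighted average of worker model parameters,
    with weights proportional to each worker's local dataset size."""
    total = sum(sizes)
    dim = len(updates[0])
    agg = [0.0] * dim
    for params, n in zip(updates, sizes):
        w = n / total
        for i, p in enumerate(params):
            agg[i] += w * p
    return agg

def decentralized_aggregate(own, peer_models, own_weight=0.5):
    """Serverless (DeFTA-style) step: a worker averages its own model with
    models from its peers -- a sketch under simple uniform peer weights,
    not DeFTA's published formula."""
    peer_w = (1.0 - own_weight) / len(peer_models)
    agg = [own_weight * p for p in own]
    for params in peer_models:
        for i, p in enumerate(params):
            agg[i] += peer_w * p
    return agg
```

The key difference is where the averaging happens: FedAvg requires every update to pass through one server (a scalability bottleneck and single point of failure), while in the decentralized step each worker aggregates locally, so no central party ever sees all models.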
