Paper Title

SplitFed: When Federated Learning Meets Split Learning

Paper Authors

Thapa, Chandra, Chamikara, M. A. P., Camtepe, Seyit, Sun, Lichao

Paper Abstract

Federated learning (FL) and split learning (SL) are two popular distributed machine learning approaches. Both follow a model-to-data scenario: clients train and test machine learning models without sharing raw data. SL provides better model privacy than FL because the machine learning model architecture is split between the clients and the server. Moreover, the split model makes SL a better option for resource-constrained environments. However, SL performs slower than FL due to its relay-based training across multiple clients. In this regard, this paper presents a novel approach, named splitfed learning (SFL), that amalgamates the two approaches, eliminating their inherent drawbacks, along with a refined architectural configuration incorporating differential privacy and PixelDP to enhance data privacy and model robustness. Our analysis and empirical results demonstrate that (pure) SFL provides test accuracy and communication efficiency similar to SL while significantly reducing the computation time per global epoch relative to SL for multiple clients. Furthermore, as in SL, its communication efficiency over FL improves with the number of clients. In addition, the performance of SFL with privacy and robustness measures is further evaluated under extended experimental settings.
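To make the workflow described in the abstract concrete, below is a minimal single-process sketch of the SplitFed idea in PyTorch: the network is split into a client-side and a server-side segment (as in SL), each client exchanges only cut-layer activations and gradients with a main server, and a fed server averages the client-side segments once per global epoch (as in FL). This is an illustration under assumptions, not the paper's implementation: all names (ClientNet, ServerNet, fed_avg) are hypothetical, clients and servers are simulated sequentially in one process, and the differential-privacy and PixelDP components are omitted.

```python
# Minimal single-process sketch of SplitFed (SFL) training. Hypothetical
# names throughout (ClientNet, ServerNet, fed_avg); DP/PixelDP omitted.
import copy

import torch
import torch.nn as nn

torch.manual_seed(0)


class ClientNet(nn.Module):
    """Client-side model segment (layers before the split/cut layer)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(20, 32), nn.ReLU())

    def forward(self, x):
        return self.layers(x)


class ServerNet(nn.Module):
    """Server-side model segment (layers after the split/cut layer)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x):
        return self.layers(x)


def fed_avg(models):
    """FedAvg over the client-side segments (equal client weights for simplicity)."""
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key] for m in models]).mean(0)
    return avg


num_clients, global_epochs = 3, 5
clients = [ClientNet() for _ in range(num_clients)]
server = ServerNet()
server_opt = torch.optim.SGD(server.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Synthetic local datasets; in SFL, raw data never leaves a client.
local_data = [(torch.randn(64, 20), torch.randint(0, 2, (64,)))
              for _ in range(num_clients)]

for epoch in range(global_epochs):
    for client, (x, y) in zip(clients, local_data):
        client_opt = torch.optim.SGD(client.parameters(), lr=0.1)
        client_opt.zero_grad()
        server_opt.zero_grad()

        # Client forward pass: only the "smashed data" (activations at the
        # cut layer) and labels are sent to the main server, not raw inputs.
        smashed = client(x)
        server_in = smashed.detach().requires_grad_(True)

        # Main server completes the forward pass, computes the loss, and
        # backpropagates down to the cut layer.
        loss = loss_fn(server(server_in), y)
        loss.backward()
        server_opt.step()

        # The gradient at the cut layer is returned to the client, which
        # finishes backpropagation through its own segment.
        smashed.backward(server_in.grad)
        client_opt.step()

    # Fed server: average the client-side segments once per global epoch.
    global_state = fed_avg(clients)
    for client in clients:
        client.load_state_dict(global_state)
    print(f"global epoch {epoch}: last client loss = {loss.item():.4f}")
```

Note that this sketch updates the shared server-side segment sequentially per client, which is only one possible configuration; the paper also considers parallel processing with aggregation on the server side.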
