Paper Title
End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things
Paper Authors
Paper Abstract
This work is the first attempt to evaluate and compare federated learning (FL) and split neural networks (SplitNN) in real-world IoT settings in terms of learning performance and device implementation overhead. We consider a variety of datasets, different model architectures, multiple clients, and various performance metrics. For learning performance, which is specified by the model accuracy and convergence speed metrics, we empirically evaluate both FL and SplitNN under different types of data distributions, such as imbalanced and non-independent and identically distributed (non-IID) data. We show that the learning performance of SplitNN is better than that of FL under an imbalanced data distribution, but worse than that of FL under an extreme non-IID data distribution. For implementation overhead, we mount both FL and SplitNN end-to-end on Raspberry Pis and comprehensively evaluate their overheads, including training time, communication overhead under a real LAN setting, power consumption, and memory usage. Our key observation is that in IoT scenarios, where communication traffic is the main concern, FL appears to perform better than SplitNN because FL incurs significantly lower communication overhead, which empirically corroborates previous statistical analysis. In addition, we reveal several previously unrecognized limitations of SplitNN, forming the basis for future research.
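To make the communication-overhead comparison concrete, the sketch below is a minimal back-of-the-envelope calculation of per-client traffic: FL exchanges the full model once per round, while SplitNN exchanges cut-layer activations and their gradients for every training sample. All model and data sizes in the sketch are hypothetical placeholders chosen for illustration, not figures reported in the paper.

```python
# Back-of-the-envelope comparison of per-client communication volume for
# FL vs. SplitNN. All numbers below are hypothetical placeholders, not
# values reported in the paper.

def fl_traffic_bytes(num_params, bytes_per_param=4):
    """FL: each client uploads and downloads the full model once per round."""
    return 2 * num_params * bytes_per_param

def splitnn_traffic_bytes(num_samples, activation_size, bytes_per_value=4):
    """SplitNN: each client sends the cut-layer activations (forward pass) and
    receives the matching gradients (backward pass) for every training sample."""
    return 2 * num_samples * activation_size * bytes_per_value

if __name__ == "__main__":
    num_params = 1_200_000      # parameters of a small CNN (hypothetical)
    num_samples = 6_000         # local training samples per client (hypothetical)
    activation_size = 4_096     # values per sample at the cut layer (hypothetical)

    print(f"FL, one round      : {fl_traffic_bytes(num_params) / 1e6:.1f} MB per client")
    print(f"SplitNN, one epoch : {splitnn_traffic_bytes(num_samples, activation_size) / 1e6:.1f} MB per client")
```

Under these assumed sizes, SplitNN moves roughly an order of magnitude more data per client than FL, which is consistent with the abstract's observation that FL is preferable when communication traffic is the main constraint; the actual gap depends on model size, cut-layer width, and local dataset size.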