Paper Title

Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing

Authors

Jed Mills, Jia Hu, Geyong Min

Abstract

Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous works have shown that non-Independent and Identically Distributed (non-IID) user data harms the convergence speed of the FL algorithms. Furthermore, most existing work on FL measures global-model accuracy, but in many cases, such as user content-recommendation, improving individual User model Accuracy (UA) is the real objective. To address these issues, we propose a Multi-Task FL (MTFL) algorithm that introduces non-federated Batch-Normalization (BN) layers into the federated DNN. MTFL benefits UA and convergence speed by allowing users to train models personalised to their own data. MTFL is compatible with popular iterative FL optimisation algorithms such as Federated Averaging (FedAvg), and we show empirically that a distributed form of Adam optimisation (FedAvg-Adam) benefits convergence speed even further when used as the optimisation strategy within MTFL. Experiments using MNIST and CIFAR10 demonstrate that MTFL is able to significantly reduce the number of rounds required to reach a target UA, by up to $5\times$ when using existing FL optimisation strategies, and with a further $3\times$ improvement when using FedAvg-Adam. We compare MTFL to competing personalised FL algorithms, showing that it is able to achieve the best UA for MNIST and CIFAR10 in all considered scenarios. Finally, we evaluate MTFL with FedAvg-Adam on an edge-computing testbed, showing that its convergence and UA benefits outweigh its overhead.
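
To make the mechanism in the abstract concrete, below is a minimal Python sketch of the non-federated-BN idea: the server averages only the shared (non-BN) parameters across clients, while each client's BN parameters never leave the device, yielding a personalised model per user. All names here (`is_bn_param`, `mtfl_round`, the toy parameter dicts) are illustrative assumptions for this sketch, not code from the paper.

```python
# Sketch of MTFL-style aggregation: FedAvg over shared parameters only,
# with Batch-Normalization (BN) parameters kept private on each client.
# Helper names and the dict-based parameter format are assumptions.

from typing import Dict, List

Params = Dict[str, List[float]]  # parameter name -> flattened weights


def is_bn_param(name: str) -> bool:
    """Heuristic: treat parameters whose names mark a BN layer as private.
    Real frameworks (e.g. PyTorch) expose such names via model.state_dict()."""
    return "bn" in name or "running_mean" in name or "running_var" in name


def fedavg(client_params: List[Params]) -> Params:
    """Plain FedAvg step: element-wise average of the shared (non-BN) parameters."""
    shared = [k for k in client_params[0] if not is_bn_param(k)]
    return {
        k: [sum(vals) / len(client_params)
            for vals in zip(*(c[k] for c in client_params))]
        for k in shared
    }


def mtfl_round(client_params: List[Params]) -> List[Params]:
    """One communication round: clients upload non-BN parameters, the server
    averages them, and each client merges the global update back while its
    own BN parameters stay untouched (the personalised part of the model)."""
    global_update = fedavg(client_params)
    return [{**c, **global_update} for c in client_params]


if __name__ == "__main__":
    # Two toy clients: 'conv.weight' is federated, 'bn1.weight' stays local.
    clients = [
        {"conv.weight": [1.0, 2.0], "bn1.weight": [0.5]},
        {"conv.weight": [3.0, 4.0], "bn1.weight": [0.9]},
    ]
    for c in mtfl_round(clients):
        print(c)  # conv.weight averaged to [2.0, 3.0]; bn1.weight unchanged
```

In the full algorithm each client also performs local training between rounds, using plain SGD or the distributed Adam variant (FedAvg-Adam) as the optimisation strategy; the sketch above isolates only the aggregation step that makes the BN layers non-federated.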
