Paper Title
FedKNOW: Federated Continual Learning with Signature Task Knowledge Integration at Edge
Paper Authors
Paper Abstract
Deep Neural Networks (DNNs) have been ubiquitously adopted in the Internet of Things and are becoming an integral part of our daily life. When tackling evolving learning tasks in the real world, such as classifying different types of objects, DNNs face the challenge of continually retraining themselves according to the tasks arriving on different edge devices. Federated continual learning is a promising technique that offers a partial solution but has yet to overcome the following difficulties: significant accuracy loss due to limited on-device processing, negative knowledge transfer caused by limited communication of non-IID data, and limited scalability with respect to tasks and edge devices. In this paper, we propose FedKNOW, an accurate and scalable federated continual learning framework, built on the novel concept of signature task knowledge. FedKNOW is a client-side solution that continuously extracts and integrates the knowledge of signature tasks, which are highly influenced by the current task. Each client in FedKNOW consists of a knowledge extractor, a gradient restorer and, most importantly, a gradient integrator. When training on a new task, the gradient integrator prevents catastrophic forgetting and mitigates negative knowledge transfer by effectively combining the signature tasks identified from past local tasks and from other clients' current tasks via the global model. We implement FedKNOW in PyTorch and extensively evaluate it against state-of-the-art techniques on popular federated continual learning benchmarks. Extensive evaluation results on heterogeneous edge devices show that FedKNOW improves model accuracy by 63.24% without increasing model training time, reduces communication cost by 34.28%, and achieves further improvements in difficult scenarios such as large numbers of tasks or clients, and training of different complex networks.
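For intuition only, the gradient-integration idea described in the abstract can be sketched as a projection of the current task's gradient away from directions that conflict with gradients recalled for signature tasks. The sketch below assumes gradients are flattened into single vectors and uses an A-GEM-style projection; the function name `integrate_gradients` and the per-task gradient list are illustrative assumptions, not FedKNOW's actual implementation.

```python
import torch

def integrate_gradients(current_grad, signature_grads):
    """Illustrative A-GEM-style projection: keep the current task's gradient
    from conflicting with gradients recalled for signature tasks."""
    g = current_grad.clone()
    for g_ref in signature_grads:
        dot = torch.dot(g, g_ref)
        if dot < 0:  # the update would increase a signature task's loss
            g = g - (dot / torch.dot(g_ref, g_ref)) * g_ref
    return g

# Toy usage: a two-parameter model with one stored signature-task gradient.
g_new = torch.tensor([1.0, -1.0])
g_sig = [torch.tensor([0.0, 1.0])]
print(integrate_gradients(g_new, g_sig))  # tensor([1., 0.])
```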