Paper Title
Energy-Efficient Federated Edge Learning with Joint Communication and Computation Design
Paper Authors
Paper Abstract
This paper studies a federated edge learning system, in which an edge server coordinates a set of edge devices to train a shared machine learning (ML) model based on their locally distributed data samples. During the distributed training, we exploit the joint communication and computation design to improve the system energy efficiency, in which both the communication resource allocation for global ML-parameter aggregation and the computation resource allocation for locally updating ML parameters are jointly optimized. In particular, we consider two transmission protocols for the edge devices to upload ML parameters to the edge server, based on non-orthogonal multiple access (NOMA) and time-division multiple access (TDMA), respectively. Under both protocols, we minimize the total energy consumption at all edge devices over a particular finite training duration subject to a given training accuracy, by jointly optimizing the transmission power and rates at the edge devices for uploading ML parameters and their central processing unit (CPU) frequencies for local updates. We propose efficient algorithms to optimally solve the formulated energy minimization problems using techniques from convex optimization. Numerical results show that, compared to other benchmark schemes, our proposed joint communication and computation design significantly improves the energy efficiency of the federated edge learning system by properly balancing the energy tradeoff between communication and computation.
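The communication–computation energy tradeoff mentioned in the abstract can be illustrated with a minimal single-device sketch. This is not the paper's exact formulation: it assumes a standard CPU energy model (energy per cycle proportional to the square of the CPU frequency) and a Shannon-rate transmission model, and all numeric constants below are made-up example values. Given a hard per-round deadline, spending more time on local computation lowers the required CPU frequency (saving computation energy) but leaves less time for uploading, which raises the required transmit power exponentially.

```python
import math

# Illustrative sketch (assumed models and constants, not from the paper):
# one edge device splits a deadline T between local computation and upload,
# and we search for the split minimizing total energy.

KAPPA = 1e-28       # effective switched capacitance of the CPU (assumed)
CYCLES = 1e9        # CPU cycles needed for one local update (assumed)
BITS = 1e5          # ML-parameter payload to upload, in bits (assumed)
BW = 1e6            # channel bandwidth in Hz (assumed)
GAIN_OVER_N0 = 1e7  # channel gain divided by noise power (assumed)
T = 1.0             # per-round time budget in seconds (assumed)

def total_energy(t_comp: float) -> float:
    """Energy for computing in t_comp seconds and uploading in T - t_comp."""
    t_comm = T - t_comp
    # Computation: the minimum CPU frequency meeting the deadline is
    # f = CYCLES / t_comp, and energy is KAPPA * f^2 per cycle.
    f = CYCLES / t_comp
    e_comp = KAPPA * CYCLES * f ** 2
    # Communication: invert the Shannon rate
    #   BITS / t_comm = BW * log2(1 + p * GAIN_OVER_N0)
    # for the minimum transmit power p; energy is then p * t_comm.
    p = (2.0 ** (BITS / (BW * t_comm)) - 1.0) / GAIN_OVER_N0
    e_comm = p * t_comm
    return e_comp + e_comm

# Coarse grid search over the time split (the paper uses convex
# optimization; a grid suffices to show the tradeoff).
best_t = min((k / 1000 * T for k in range(1, 1000)), key=total_energy)
print(f"best compute time {best_t:.3f} s, "
      f"energy {total_energy(best_t):.3e} J")
```

With these assumed constants the optimum allocates most of the round to computation (keeping the CPU frequency low) while reserving just enough time to upload without the transmit power blowing up; the joint designs in the paper optimize this balance across all devices simultaneously under NOMA or TDMA.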