Paper Title

A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks

Authors

Sihua Wang, Mingzhe Chen, Xuanlin Liu, Changchuan Yin, Shuguang Cui, H. Vincent Poor

Abstract

In this paper, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the base stations (BSs) are equipped with mobile edge computing (MEC) servers to jointly provide computational and communication services to users. Each user can request one of three types of computational tasks. Since the data size of each computational task is different, the BSs must adjust their resource (subcarrier and transmit power) and task allocation schemes as the requested tasks vary in order to serve the users effectively. This problem is formulated as an optimization problem whose goal is to minimize the maximal computational and transmission delay among all users. A multi-stack reinforcement learning (RL) algorithm is developed to solve this problem. Using the proposed algorithm, each BS can record the historical resource allocation schemes and users' information in its multiple stacks to avoid learning the same resource allocation scheme and users' states, thus improving the convergence speed and learning efficiency. Simulation results illustrate that the proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm.
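To make the formulation concrete, the min-max objective described in the abstract can be sketched in LaTeX as follows; the notation here is assumed for illustration and may differ from the paper's:

\[
\min_{\mathbf{a},\,\mathbf{w},\,\mathbf{p}} \; \max_{u \in \mathcal{U}} \; \left( l_u^{C}(\mathbf{a}) + l_u^{T}(\mathbf{w}, \mathbf{p}) \right)
\]

where \(\mathcal{U}\) is the set of users, \(\mathbf{a}\), \(\mathbf{w}\), and \(\mathbf{p}\) are the task, subcarrier, and transmit power allocation variables, and \(l_u^{C}\) and \(l_u^{T}\) are user \(u\)'s computational and transmission delays.

The multi-stack record mechanism can likewise be illustrated with a minimal Python sketch: a tabular Q-learning agent keeps one bounded stack per task type of recently tried (state, action) pairs and steers exploration away from combinations it has already recorded. All class, method, and parameter names below are hypothetical; this is a sketch under stated assumptions, not the paper's exact algorithm.

```python
import random
from collections import deque, defaultdict

class MultiStackQLearning:
    """Sketch of Q-learning with per-task-type stacks that record recently
    tried (state, action) pairs, so the agent avoids re-learning identical
    resource allocation schemes and user states."""

    def __init__(self, actions, n_stacks=3, stack_depth=50,
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions              # candidate (subcarrier, power, task) schemes
        self.q = defaultdict(float)         # Q-table: (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # One bounded stack per task type; the oldest records are evicted.
        self.stacks = [deque(maxlen=stack_depth) for _ in range(n_stacks)]

    def _recorded(self, state, action):
        # True if this (state, action) pair already sits in any stack.
        return any((state, action) in s for s in self.stacks)

    def select_action(self, state, task_type):
        if random.random() < self.epsilon:
            # Explore, but prefer schemes not yet recorded in the stacks.
            fresh = [a for a in self.actions if not self._recorded(state, a)]
            action = random.choice(fresh or self.actions)
        else:
            action = max(self.actions, key=lambda a: self.q[(state, a)])
        self.stacks[task_type].append((state, action))
        return action

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```

In a full implementation, each BS would run one such agent, with states encoding the users' current task requests and actions enumerating feasible subcarrier and transmit power allocations; the stack check is what spares the agent from revisiting schemes it has already evaluated, which is the intuition behind the claimed convergence speedup.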
