Paper Title

Deep Actor-Critic Learning for Distributed Power Control in Wireless Mobile Networks

Paper Authors

Yasar Sinan Nasir, Dongning Guo

Paper Abstract

Deep reinforcement learning offers a model-free alternative to supervised deep learning and classical optimization for solving the transmit power control problem in wireless networks. The multi-agent deep reinforcement learning approach considers each transmitter as an individual learning agent that determines its transmit power level by observing the local wireless environment. Following a certain policy, these agents learn to collaboratively maximize a global objective, e.g., a sum-rate utility function. This multi-agent scheme is easily scalable and practically applicable to large-scale cellular networks. In this work, we present a distributively executed continuous power control algorithm with the help of deep actor-critic learning, and more specifically, by adapting deep deterministic policy gradient. Furthermore, we integrate the proposed power control algorithm into a time-slotted system where devices are mobile and channel conditions change rapidly. We demonstrate the functionality of the proposed algorithm using simulation results.
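The abstract names a sum-rate utility function as the global objective that the agents collaboratively maximize. As a minimal sketch (not taken from the paper's code), the snippet below computes such a utility under a standard interference-channel model; the channel gain matrix `G`, the noise level, and the normalized power range are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sum_rate_utility(G: np.ndarray, p: np.ndarray, noise: float = 1e-3) -> float:
    """Sum of per-link Shannon rates (bits/s/Hz) for transmit power vector p."""
    signal = np.diag(G) * p                    # desired received power on link i: G[i, i] * p[i]
    interference = G @ p - signal              # cross-link interference at each receiver
    sinr = signal / (interference + noise)     # signal-to-interference-plus-noise ratio
    return float(np.sum(np.log2(1.0 + sinr)))  # global sum-rate objective

# Toy example: 3 transmitter-receiver pairs with random gains and
# continuous power levels normalized to [0, 1] (illustrative values only).
rng = np.random.default_rng(0)
G = rng.exponential(scale=1.0, size=(3, 3))    # hypothetical channel gain matrix
p = rng.uniform(0.0, 1.0, size=3)              # one continuous power level per agent
print(f"sum-rate utility: {sum_rate_utility(G, p):.3f}")
```

In the multi-agent scheme described above, each transmitter would set its own entry of p from local observations, with the DDPG-style actor producing a continuous power level directly rather than selecting from a discrete set.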
