Paper Title
Cooperative Edge Caching via Multi-Agent Reinforcement Learning in Fog Radio Access Networks
Paper Authors
Paper Abstract
In this paper, the cooperative edge caching problem in fog radio access networks (F-RANs) is investigated. To minimize the content transmission delay, we formulate the cooperative caching optimization problem to find the globally optimal caching strategy. Considering the non-deterministic polynomial hard (NP-hard) property of this problem, a multi-agent reinforcement learning (MARL)-based cooperative caching scheme is proposed. Our proposed scheme applies a double deep Q-network (DDQN) in every fog access point (F-AP) and introduces a communication process into the multi-agent system. Every F-AP records the historical caching strategies of its associated F-APs as the observations of the communication procedure. By exchanging these observations, the F-APs can leverage cooperation and make the globally optimal caching strategy. Simulation results show that the proposed MARL-based cooperative caching scheme achieves remarkable performance in minimizing the content transmission delay compared with the benchmark schemes.
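To make the described scheme more concrete, below is a minimal, hypothetical Python sketch of one F-AP agent: a DDQN whose state is assumed to combine local request statistics with the exchanged observations (the historical caching strategies of associated F-APs), and whose reward is assumed to be the negative content transmission delay. The class names, network sizes, and state/action encodings are illustrative assumptions, not details taken from the paper.

```python
import random
from collections import deque

import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small MLP mapping an F-AP state to Q-values over caching actions."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.layers(x)


class FAPAgent:
    """One F-AP running DDQN. The neighbors' historical caching strategies
    (the exchanged observations) are assumed to be concatenated into the
    state vector; the reward is assumed to be negative transmission delay."""
    def __init__(self, state_dim, n_actions, gamma=0.99, lr=1e-3):
        self.n_actions = n_actions
        self.online = QNet(state_dim, n_actions)
        self.target = QNet(state_dim, n_actions)
        self.target.load_state_dict(self.online.state_dict())
        self.opt = torch.optim.Adam(self.online.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)  # replay buffer of transitions
        self.gamma = gamma

    def act(self, state, eps=0.1):
        # Epsilon-greedy choice of a caching action (e.g., which content
        # to place in the local cache).
        if random.random() < eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.online(torch.tensor(state, dtype=torch.float32))
        return int(q.argmax())

    def remember(self, s, a, r, s2):
        self.buffer.append((s, a, r, s2))

    def update(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        s, a, r, s2 = zip(*random.sample(self.buffer, batch_size))
        s = torch.tensor(s, dtype=torch.float32)
        a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        s2 = torch.tensor(s2, dtype=torch.float32)
        # Double DQN target: the online net selects the next action,
        # while the target net evaluates it.
        next_a = self.online(s2).argmax(dim=1, keepdim=True)
        y = r + self.gamma * self.target(s2).gather(1, next_a).squeeze(1)
        q = self.online(s).gather(1, a).squeeze(1)
        loss = nn.functional.mse_loss(q, y.detach())
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target.load_state_dict(self.online.state_dict())
```

The key DDQN step is in `update()`: the online network picks the next action and the target network evaluates it, which is what distinguishes double DQN from vanilla DQN and reduces Q-value overestimation. In a full multi-agent setup, one such agent would run per F-AP, with each agent appending its neighbors' recorded caching strategies to its own state before calling `act()`.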