Paper Title
Optimized Caching and Spectrum Partitioning for D2D enabled Cellular Systems with Clustered Devices
Paper Authors
Paper Abstract
Caching at mobile devices and leveraging device-to-device (D2D) communication are two promising approaches to support massive content delivery over wireless networks. The analysis of cache-enabled wireless networks is usually carried out by assuming that devices are uniformly distributed; in social networks, however, mobile devices are intrinsically grouped into disjoint clusters. In this regard, this paper proposes a spatiotemporal mathematical model that tracks the arrival of service requests and accounts for the geometry of clustered devices. Two kinds of devices are assumed, namely, content clients and content providers. Content providers are assumed to have surplus memory, which is exploited to proactively cache contents from a known library according to a random probabilistic caching scheme. A content client can retrieve a requested content from the nearest content provider in its proximity (cluster) or, as a last resort, from the base station (BS). The developed spatiotemporal model is leveraged to formulate a joint optimization problem over content caching and spectrum partitioning that minimizes the average service delay. Owing to the high complexity of this optimization problem, the caching and spectrum partitioning subproblems are decoupled and solved iteratively using the block coordinate descent (BCD) optimization technique. To this end, an optimal solution is obtained for the bandwidth partitioning subproblem and a suboptimal solution for the probabilistic caching subproblem. Numerical results highlight the superiority of the proposed scheme over conventional caching schemes under both equal and optimized bandwidth allocations. In particular, it is shown that the average service delay is reduced by nearly 100% and 350% compared to the Zipf and uniform caching schemes, respectively, under equal bandwidth allocations.
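The BCD alternation summarized in the abstract can be illustrated with a toy example. Everything quantitative below is an assumed stand-in, not the paper's spatiotemporal model: the Zipf-like popularity profile, the `1/w` D2D cost, the `5/(1-w)` BS fallback cost, and the cache budget are all hypothetical. The sketch only shows the alternating block structure: optimize the bandwidth split with the caching probabilities fixed, then the caching probabilities with the split fixed.

```python
import numpy as np

D2D_COST = 1.0   # assumed per-request delay scale for a D2D hit
BS_COST = 5.0    # assumed (larger) delay scale for a BS fallback

def average_delay(w, p, q):
    """Toy average service delay, NOT the paper's model.

    w: fraction of bandwidth assigned to D2D links (1 - w serves the BS).
    p: per-content caching probabilities at content providers.
    q: content popularity (request) distribution.
    """
    hit = float(np.dot(q, p))  # probability a request is served over D2D
    return D2D_COST * hit / w + BS_COST * (1.0 - hit) / (1.0 - w)

def bcd(num_contents=10, cache_size=3, iters=20):
    """Alternate between the bandwidth and caching blocks (BCD)."""
    q = 1.0 / np.arange(1, num_contents + 1)      # Zipf-like popularity
    q /= q.sum()
    p = np.full(num_contents, cache_size / num_contents)  # uniform start
    w = 0.5
    for _ in range(iters):
        # Block 1: bandwidth split with p fixed (1-D grid search).
        grid = np.linspace(0.01, 0.99, 197)
        w = grid[np.argmin([average_delay(g, p, q) for g in grid])]
        # Block 2: caching with w fixed. With these toy costs a D2D hit
        # is cheaper than a BS fallback, so the budget-constrained optimum
        # caches the cache_size most popular contents deterministically.
        p = np.zeros(num_contents)
        p[np.argsort(q)[::-1][:cache_size]] = 1.0
    return w, p, average_delay(w, p, q)
```

Under these assumed costs the alternation settles after two rounds: once the popular contents are cached, the bandwidth split shifts toward D2D, and the resulting delay is lower than at the uniform-caching, equal-split starting point.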