Paper Title

Distributed Auto-Learning GNN for Multi-Cell Cluster-Free NOMA Communications

Paper Authors

Xiaoxia Xu, Yuanwei Liu, Qimei Chen, Xidong Mu, Zhiguo Ding

Paper Abstract

A multi-cell cluster-free NOMA framework is proposed, where both intra-cell and inter-cell interference are jointly mitigated via flexible cluster-free successive interference cancellation (SIC) and coordinated beamforming design. The joint design problem is formulated to maximize the system sum rate while satisfying the SIC decoding requirements and users' minimum data rate requirements. To address this highly complex and coupled non-convex mixed-integer nonlinear programming (MINLP) problem, a novel distributed auto-learning graph neural network (AutoGNN) architecture is proposed to alleviate the overwhelming information exchange burden among base stations (BSs). The proposed AutoGNN can train the GNN model weights whilst automatically optimizing the GNN architecture, namely the GNN network depth and message embedding sizes, to achieve communication-efficient distributed scheduling. Based on the proposed architecture, a bi-level AutoGNN learning algorithm is further developed to efficiently approximate the hypergradient in model training. It is theoretically proved that the proposed bi-level AutoGNN learning algorithm converges to a stationary point. Numerical results reveal that: 1) the proposed cluster-free NOMA framework outperforms the conventional cluster-based NOMA framework in the multi-cell scenario; and 2) the proposed AutoGNN architecture significantly reduces the computation and communication overheads compared to conventional convex optimization-based methods and conventional GNNs with fixed architectures.
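
The abstract describes AutoGNN as a GNN whose architecture (network depth and message embedding sizes) is optimized jointly with the model weights through a bi-level procedure. The following PyTorch sketch is a rough illustration only, not the authors' implementation: it gates only the network depth with a differentiable softmax relaxation and uses a first-order bi-level update in place of the paper's hypergradient approximation. All class and function names, the toy update loop, and the depth-only gating are assumptions introduced here for illustration.

```python
# Minimal sketch (assumptions only, not the paper's AutoGNN): a message-passing
# GNN with a learnable gate over candidate depths, trained with a simple
# DARTS-style bi-level loop (inner step: model weights; outer step: architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MessagePassingLayer(nn.Module):
    """One GNN layer: embed neighbor messages, aggregate, then update nodes."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)       # message embedding
        self.upd = nn.Linear(2 * dim, dim)   # update from [own feature, aggregate]

    def forward(self, x, adj):
        m = torch.relu(self.msg(x))          # per-node messages
        agg = adj @ m                        # sum over neighbors (dense adjacency)
        return torch.relu(self.upd(torch.cat([x, agg], dim=-1)))


class AutoGNNSketch(nn.Module):
    """GNN whose effective depth is a softmax mixture over candidate depths."""
    def __init__(self, dim, max_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([MessagePassingLayer(dim) for _ in range(max_layers)])
        self.alpha = nn.Parameter(torch.zeros(max_layers))  # architecture parameter

    def forward(self, x, adj):
        gate = F.softmax(self.alpha, dim=0)  # relax the discrete depth choice
        out, h = 0.0, x
        for l, layer in enumerate(self.layers):
            h = layer(h, adj)
            out = out + gate[l] * h          # weighted mixture over depths
        return out


def bilevel_step(model, w_opt, a_opt, x_tr, adj_tr, y_tr, x_val, adj_val, y_val):
    """One bi-level iteration: weights on training data, architecture on validation
    data, using the first-order approximation of the hypergradient."""
    w_opt.zero_grad()
    F.mse_loss(model(x_tr, adj_tr), y_tr).backward()
    w_opt.step()                             # inner level: GNN model weights

    a_opt.zero_grad()
    F.mse_loss(model(x_val, adj_val), y_val).backward()
    a_opt.step()                             # outer level: architecture parameter alpha


# Toy usage (random data, purely illustrative):
# dim, n = 8, 5
# model = AutoGNNSketch(dim)
# weights = [p for name, p in model.named_parameters() if name != "alpha"]
# w_opt = torch.optim.Adam(weights, lr=1e-3)
# a_opt = torch.optim.Adam([model.alpha], lr=1e-3)
```

In this kind of relaxation, extending the gate to message embedding sizes (as the paper's AutoGNN does) would follow the same pattern, with an additional softmax over candidate embedding dimensions per layer; the first-order update above is only a stand-in for the paper's more careful hypergradient approximation.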
