Paper Title
Sparse Graph Learning with Spectrum Prior for Deep Graph Convolutional Networks
Paper Authors
Paper Abstract
A graph convolutional network (GCN) employs a graph filtering kernel tailored for data with irregular structure. However, simply stacking more GCN layers does not improve performance; instead, the output converges to an uninformative low-dimensional subspace, at a rate characterized by the graph spectrum -- this is the well-known over-smoothing problem in GCNs. In this paper, we propose a sparse graph learning algorithm that incorporates a new spectrum prior in order to compute a graph topology which circumvents over-smoothing while preserving the pairwise correlations inherent in the data. Specifically, based on a spectral analysis of the multilayer GCN output, we derive a spectrum prior on the graph Laplacian matrix $\mathbf{L}$ to robustify the model's expressiveness against over-smoothing. We then formulate a sparse graph learning problem with this spectrum prior and solve it efficiently via block coordinate descent (BCD). Moreover, we optimize the weight parameter that trades off the fidelity term against the spectrum prior, based on data smoothness measured on the original graph learned without spectrum manipulation. The output $\mathbf{L}$ is then normalized for supervised GCN training. Experiments show that our proposal enables deeper GCNs and achieves higher prediction accuracy on regression and classification tasks than competing schemes.
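To make the over-smoothing phenomenon described above concrete, the following is a minimal numpy sketch (not the paper's method): repeatedly applying the standard GCN propagation operator $\hat{\mathbf{A}} = \mathbf{D}^{-1/2}(\mathbf{A}+\mathbf{I})\mathbf{D}^{-1/2}$ to node features, with learned weights and nonlinearities omitted, collapses the features toward a rank-1 subspace spanned by the dominant eigenvector, at a rate set by the second-largest eigenvalue of $\hat{\mathbf{A}}$. The graph and features here are synthetic, chosen only for illustration.

```python
import numpy as np

# Toy illustration of over-smoothing in deep GCN propagation.
rng = np.random.default_rng(0)
n = 20

# A connected undirected graph: a ring (guarantees connectivity)
# plus random chords (makes the graph well-mixed).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
chords = rng.random((n, n)) < 0.3
A = np.maximum(A, np.triu(chords, 2) + np.triu(chords, 2).T)

# GCN propagation operator: A_hat = D^{-1/2} (A + I) D^{-1/2}.
A_loop = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_loop.sum(axis=1))
A_hat = d_inv_sqrt[:, None] * A_loop * d_inv_sqrt[None, :]

X = rng.standard_normal((n, 4))  # random node features

def rank1_residual(M):
    """sigma_2 / sigma_1: distance of M from a rank-1 (collapsed) matrix."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[1] / s[0]

res_before = rank1_residual(X)
for _ in range(50):              # 50 propagation layers, no nonlinearity
    X = A_hat @ X
res_after = rank1_residual(X)
# res_after is near zero: features have collapsed onto the dominant
# eigenspace, so deep stacking erased the distinctions between nodes.
```

The decay rate of `res_after` is governed by the graph spectrum, which is exactly the quantity the proposed spectrum prior manipulates when learning $\mathbf{L}$.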