Paper Title
Learning Graph Neural Networks with Approximate Gradient Descent
Paper Authors
Paper Abstract
This paper provides the first provably efficient algorithm for learning graph neural networks (GNNs) with one hidden layer for node information convolution. Two types of GNNs are investigated, depending on whether labels are attached to nodes or to graphs. A comprehensive framework for designing and analyzing the convergence of GNN training algorithms is developed. The proposed algorithm is applicable to a wide range of activation functions, including ReLU, Leaky ReLU, Sigmoid, Softplus, and Swish. It is shown that the proposed algorithm guarantees a linear convergence rate to the underlying true parameters of the GNNs. For both types of GNNs, the sample complexity in terms of the number of nodes or the number of graphs is characterized. The impact of the feature dimension and the GNN structure on the convergence rate is also theoretically characterized. Numerical experiments are further provided to validate the theoretical analysis.
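To make the setting concrete, the following is a minimal sketch (not the paper's exact model or notation) of a one-hidden-layer GNN forward pass for node-level labels: node features are first aggregated over the graph ("node information convolution"), then passed through a hidden layer whose activation can be swapped among the functions listed above. All names, shapes, and the averaging readout here are illustrative assumptions.

```python
import numpy as np

# Candidate activations mentioned in the abstract (definitions are standard).
def relu(z):
    return np.maximum(z, 0.0)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softplus(z):
    return np.log1p(np.exp(z))

def swish(z):
    return z * sigmoid(z)

def gnn_forward(A, X, W, activation=relu):
    """Hypothetical one-hidden-layer GNN for node labels.

    A: (n, n) adjacency matrix with self-loops (aggregation operator),
    X: (n, d) node feature matrix,
    W: (d, K) hidden-layer weights (K hidden neurons).
    Returns one scalar prediction per node.
    """
    H = A @ X                # node information convolution: aggregate neighbors
    Z = activation(H @ W)    # one hidden layer of K neurons
    return Z.mean(axis=1)    # average hidden units -> per-node output

# Tiny synthetic example.
rng = np.random.default_rng(0)
n, d, K = 5, 3, 4
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.maximum(A, A.T)       # symmetrize the random graph
np.fill_diagonal(A, 1.0)     # add self-loops
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, K))

y_node = gnn_forward(A, X, W, activation=swish)   # node-level labels: (n,)
y_graph = y_node.mean()                           # graph-level label: pool over nodes
print(y_node.shape)
```

For the graph-labeled variant, a single scalar per graph is obtained here by pooling the per-node outputs; the paper's precise readout may differ.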