Title
ASFGNN: Automated Separated-Federated Graph Neural Network
Authors
Abstract
Graph Neural Networks (GNNs) have achieved remarkable performance by taking advantage of graph data. The success of GNN models always depends on rich features and adjacent relationships. However, in practice, such data are usually isolated by different data owners (clients) and are thus likely to be Non-Independent and Identically Distributed (Non-IID). Meanwhile, given the limited network conditions of data owners, hyper-parameter optimization for collaborative learning approaches is time-consuming in data-isolation scenarios. To address these problems, we propose an Automated Separated-Federated Graph Neural Network (ASFGNN) learning paradigm. ASFGNN consists of two main components, i.e., the training of the GNN and the tuning of hyper-parameters. Specifically, to solve the data Non-IID problem, we first propose a separated-federated GNN learning model, which decouples GNN training into two parts: the message-passing part, which is done by each client separately, and the loss-computing part, which is learned by the clients federally. To handle the time-consuming parameter-tuning problem, we leverage the Bayesian optimization technique to automatically tune the hyper-parameters of all the clients. We conduct experiments on benchmark datasets, and the results demonstrate that ASFGNN significantly outperforms the naive federated GNN in terms of both accuracy and parameter-tuning efficiency.
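As a reading aid, the following is a minimal sketch of the separated-federated idea described in the abstract: each client trains its message-passing parameters separately and keeps them local, while only the loss-computing head is averaged on a server, FedAvg-style. This is not the authors' implementation; the class names, tensor shapes, and the `clients` data structure are illustrative assumptions.

```python
# Minimal sketch of separated-federated GNN training (illustrative, not the paper's code).
import copy
import torch
import torch.nn as nn

class LocalGNN(nn.Module):
    """Message-passing part: trained separately on each client, never shared."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # One GCN-style propagation step: aggregate neighbor features, then transform.
        return torch.relu(self.lin(adj @ x))

class LossHead(nn.Module):
    """Loss-computing part: the only parameters averaged across clients."""
    def __init__(self, hid_dim, n_classes):
        super().__init__()
        self.out = nn.Linear(hid_dim, n_classes)

    def forward(self, h):
        return self.out(h)

def federated_round(clients, global_head):
    """One communication round: local updates, then plain averaging of the head."""
    head_states = []
    for c in clients:  # each client is a dict: gnn, head, x, adj, y, local_epochs
        c["head"].load_state_dict(global_head.state_dict())
        params = list(c["gnn"].parameters()) + list(c["head"].parameters())
        opt = torch.optim.Adam(params, lr=1e-2)
        for _ in range(c["local_epochs"]):
            opt.zero_grad()
            h = c["gnn"](c["x"], c["adj"])  # message passing stays on the client
            loss = nn.functional.cross_entropy(c["head"](h), c["y"])
            loss.backward()
            opt.step()
        head_states.append(copy.deepcopy(c["head"].state_dict()))
    # Server side: average only the loss-head parameters; GNN parts remain local.
    avg = {k: torch.stack([s[k] for s in head_states]).mean(0) for k in head_states[0]}
    global_head.load_state_dict(avg)
```

One design point this sketch makes concrete: because only the small head is communicated and averaged, each client's message-passing weights can fit its own (Non-IID) graph without being pulled toward a single global model.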
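For the second component, below is a generic sketch of hyper-parameter tuning with Bayesian optimization, where each evaluation runs one full federated training. The abstract does not specify an implementation; scikit-optimize's `gp_minimize` is our library choice here, and `run_federated_training`, the search space, and the tuned parameters are hypothetical.

```python
# Generic Bayesian-optimization tuning loop (illustrative; library choice is ours).
from skopt import gp_minimize
from skopt.space import Integer, Real

def objective(params):
    """One evaluation = one federated training run scored on validation accuracy.
    run_federated_training is a hypothetical helper wrapping the training loop."""
    lr, local_epochs = params
    val_accuracy = run_federated_training(lr=lr, local_epochs=local_epochs)
    return -val_accuracy  # gp_minimize minimizes, so negate accuracy

search_space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="lr"),
    Integer(1, 10, name="local_epochs"),
]

result = gp_minimize(objective, search_space, n_calls=20, random_state=0)
print("best hyper-parameters:", result.x, "best accuracy:", -result.fun)
```

The efficiency claim in the abstract follows from this structure: a Gaussian-process surrogate picks the next candidate from past results, so far fewer full federated runs (here, `n_calls=20`) are needed than with grid search over the same space.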