Paper Title


Graph Federated Learning with Hidden Representation Sharing

Authors

Shuang Wu, Mingxuan Zhang, Yuantong Li, Carl Yang, Pan Li

Abstract


Learning on Graphs (LoG) is widely used in multi-client systems when each client has insufficient local data and multiple clients must share their raw data to learn a model of good quality. One scenario is recommending items to clients who have limited historical data but share similar preferences with other clients in a social network. On the other hand, due to increasing demands for the protection of clients' data privacy, Federated Learning (FL) has been widely adopted: FL requires models to be trained in a multi-client system while restricting the sharing of raw data among clients. The underlying data-sharing conflict between LoG and FL is under-explored, and how to benefit from both sides is a promising problem. In this work, we first formulate the Graph Federated Learning (GFL) problem, which unifies LoG and FL in multi-client systems, and then propose sharing hidden representations instead of neighbors' raw data to protect data privacy. To overcome the biased gradient problem in GFL, we provide a gradient estimation method and its convergence analysis under non-convex objectives. In experiments, we evaluate our method on classification tasks on graphs. Our experiments show a good match between theory and practice.
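To make the core idea concrete, here is a minimal toy sketch of what "sharing hidden representations instead of neighbors' raw data" could look like. This is an illustration under assumed names and shapes (the functions `encode` and `aggregate`, the line-graph topology, and the ReLU encoder are all hypothetical), not the paper's actual algorithm: each client encodes its private features locally and exposes only the encoded vector to its graph neighbors, which then aggregate received representations with their own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 3 clients on a line graph 0-1-2, each holding
# one private feature vector that never leaves the client in raw form.
d_in, d_hid = 4, 2
features = {c: rng.normal(size=d_in) for c in range(3)}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
W = rng.normal(size=(d_hid, d_in))  # encoder weights (assumed shared)

def encode(x):
    # Hidden representation: this is what a client shares with its
    # neighbors instead of the raw feature vector x.
    return np.maximum(W @ x, 0.0)  # ReLU

def aggregate(client):
    # Mean-aggregate the client's own hidden representation with the
    # representations received from its graph neighbors.
    hs = [encode(features[client])]
    hs += [encode(features[n]) for n in neighbors[client]]
    return np.mean(hs, axis=0)

z = aggregate(1)  # client 1 combines its own and neighbors' representations
```

Note that neighbors only ever observe the `d_hid`-dimensional ReLU output, not the `d_in`-dimensional raw features, which is the privacy trade-off the abstract describes.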
