Paper Title
Toward Understanding the Influence of Individual Clients in Federated Learning
Paper Authors
Paper Abstract
Federated learning allows mobile clients to jointly train a global model without sending their private data to a central server. Extensive work has studied the performance guarantees of the global model; however, it remains unclear how each individual client influences the collaborative training process. In this work, we define a new notion, called {\em Fed-Influence}, to quantify this influence over the model parameters, and propose an effective and efficient algorithm to estimate this metric. In particular, our design satisfies several desirable properties: (1) it requires neither retraining nor retracing, adding only linear computational overhead for clients and the server; (2) it strictly maintains the tenets of federated learning, without revealing any client's local private data; and (3) it works well on both convex and non-convex loss functions and does not require the final model to be optimal. Empirical results on a synthetic dataset and the FEMNIST dataset demonstrate that our estimation method approximates Fed-Influence with small bias. Further, we show an application of Fed-Influence in model debugging.
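For intuition, a client's influence on the model parameters can be measured in the brute-force, leave-one-out way: retrain the federated model with that client excluded and compare the resulting parameters. This is exactly the retraining that the paper's estimator is designed to avoid; the FedAvg setup, synthetic data, and all function names below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=10):
    # One client's local update: plain SGD on the squared loss.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(clients, w0, rounds=20):
    # FedAvg: each round, average the clients' locally updated models.
    w = w0.copy()
    for _ in range(rounds):
        updates = [local_sgd(w, X, y) for X, y in clients]
        w = np.mean(updates, axis=0)
    return w

# Synthetic linear-regression data split across 5 clients (hypothetical setup).
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    clients.append((X, y))

w_full = fedavg(clients, np.zeros(2))        # model trained with all clients
w_wo_0 = fedavg(clients[1:], np.zeros(2))    # model retrained without client 0
influence_0 = w_full - w_wo_0                # leave-one-out influence of client 0
```

The point of the paper's estimator is to approximate a quantity like `influence_0` for every client without ever running the second (leave-one-out) training pass.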