Paper Title
Robustness and Personalization in Federated Learning: A Unified Approach via Regularization
Paper Authors
Abstract
We present a class of methods for robust, personalized federated learning, called Fed+, that unifies many federated learning algorithms. The principal advantage of this class of methods is that it better accommodates the real-world characteristics found in federated training, such as the lack of IID data across parties, the need for robustness to outliers or stragglers, and the requirement to perform well on party-specific datasets. We achieve this through a problem formulation that allows the central server to employ robust ways of aggregating the local models while keeping the structure of local computation intact. Without making any statistical assumption on the degree of heterogeneity of local data across parties, we provide convergence guarantees for Fed+ for convex and non-convex loss functions under different (robust) aggregation methods. The Fed+ theory is also equipped to handle heterogeneous computing environments including stragglers without additional assumptions; specifically, the convergence results cover the general setting where the number of local update steps across parties can vary. We demonstrate the benefits of Fed+ through extensive experiments across standard benchmark datasets.
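To illustrate the robust-aggregation idea the abstract describes, here is a minimal sketch contrasting plain averaging with a coordinate-wise median. The function name `aggregate` and the flattened-weights representation are illustrative assumptions, not the paper's actual Fed+ formulation, which additionally regularizes the local objectives.

```python
import numpy as np

def aggregate(party_models, method="median"):
    """Aggregate per-party model parameters.

    `party_models`: a (num_parties, num_params) array-like of flattened
    local model weights (an illustrative representation, not the paper's
    API). The coordinate-wise median is one example of a robust aggregate
    that is far less sensitive to outlier parties than the plain mean
    used by FedAvg-style averaging.
    """
    stacked = np.asarray(party_models, dtype=float)
    if method == "mean":
        return stacked.mean(axis=0)
    if method == "median":
        return np.median(stacked, axis=0)
    raise ValueError(f"unknown aggregation method: {method}")

# Three well-behaved parties plus one outlier party:
models = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [100.0, -100.0]]
print(aggregate(models, "mean"))    # pulled far toward the outlier
print(aggregate(models, "median"))  # stays near the majority of parties
```

The example shows why a robust aggregator matters under the non-IID and outlier conditions the abstract highlights: a single corrupted or divergent party can dominate the mean, while the median remains close to the majority.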