Paper Title
FedHPO-B: A Benchmark Suite for Federated Hyperparameter Optimization
Paper Authors
Paper Abstract
Hyperparameter optimization (HPO) is crucial for machine learning algorithms to achieve satisfactory performance, and its progress has been boosted by related benchmarks. Nonetheless, existing benchmarking efforts all focus on HPO for traditional centralized learning while ignoring federated learning (FL), a promising paradigm for collaboratively learning models from dispersed data. In this paper, we first identify several aspects in which HPO for FL algorithms is unique. Due to this uniqueness, existing HPO benchmarks no longer satisfy the need to compare HPO methods in the FL setting. To facilitate research on HPO in the FL setting, we propose and implement a benchmark suite, FedHPO-B, which incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions. We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods. We open-source FedHPO-B at https://github.com/alibaba/FederatedScope/tree/master/benchmark/FedHPOB.