Title
Efficient Automatic CASH via Rising Bandits
Authors
Abstract
Combined Algorithm Selection and Hyperparameter optimization (CASH) is one of the most fundamental problems in Automatic Machine Learning (AutoML). Existing Bayesian optimization (BO) based solutions turn the CASH problem into a Hyperparameter Optimization (HPO) problem by combining the hyperparameters of all machine learning (ML) algorithms, and use BO methods to solve it. As a result, these methods suffer from low efficiency due to the huge hyperparameter space in CASH. To alleviate this issue, we propose an alternating optimization framework, in which the HPO problem for each ML algorithm and the algorithm selection problem are optimized alternately. In this framework, BO methods solve the HPO problem for each ML algorithm separately, each over a much smaller hyperparameter space. Furthermore, we introduce Rising Bandits, a CASH-oriented Multi-Armed Bandits (MAB) variant, to model algorithm selection in CASH. This framework combines the advantages of BO in solving HPO problems over relatively small hyperparameter spaces with the advantages of MABs in accelerating algorithm selection. Moreover, we develop an efficient online algorithm to solve Rising Bandits with provable theoretical guarantees. Extensive experiments on 30 OpenML datasets demonstrate the superiority of the proposed approach over competitive baselines.
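The alternating scheme the abstract describes can be sketched as a bandit loop: each arm is an ML algorithm, and pulling an arm runs one step of that algorithm's own hyperparameter search. The sketch below is a minimal illustration under stated assumptions, not the paper's Rising Bandits algorithm: the arm names and toy objective functions are invented, plain random search stands in for the per-arm BO method, and a simple optimism bonus stands in for the paper's selection policy.

```python
import random

def toy_objective(arm: str, x: float) -> float:
    """Toy validation score for each 'algorithm'; purely illustrative."""
    peaks = {"svm": 0.85, "random_forest": 0.90, "knn": 0.80}
    return peaks[arm] - (x - 0.5) ** 2  # maximized at x = 0.5

def alternating_cash(arms, budget, rng):
    """Alternate between algorithm selection (bandit step) and
    per-algorithm hyperparameter search (one proposal per round)."""
    best = {a: float("-inf") for a in arms}   # best observed score per arm
    pulls = {a: 0 for a in arms}              # times each arm was tried
    for _ in range(budget):
        # Algorithm selection: pick the arm with the highest optimistic
        # estimate (best score so far plus a decaying exploration bonus).
        arm = max(
            arms,
            key=lambda a: float("inf") if pulls[a] == 0
            else best[a] + 1.0 / pulls[a],
        )
        # Per-arm HPO step: one random-search proposal in [0, 1].
        # (In the paper's framework a BO method proposes this point,
        # operating only over this single algorithm's search space.)
        x = rng.random()
        best[arm] = max(best[arm], toy_objective(arm, x))
        pulls[arm] += 1
    winner = max(arms, key=lambda a: best[a])
    return winner, best[winner]

rng = random.Random(0)
winner, score = alternating_cash(["svm", "random_forest", "knn"], budget=60, rng=rng)
print(winner, round(score, 3))
```

Because each arm's best score can only rise as its hyperparameter search progresses, the bandit gradually shifts its budget toward the algorithm whose tuned performance looks strongest, rather than searching the joint space of all algorithms at once.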