Paper Title

Multi-objective hyperparameter optimization with performance uncertainty

Authors

Morales-Hernández, Alejandro, Van Nieuwenhuyse, Inneke, Nápoles, Gonzalo

Abstract

The performance of any Machine Learning (ML) algorithm is impacted by the choice of its hyperparameters. As training and evaluating an ML algorithm is usually expensive, a hyperparameter optimization (HPO) method needs to be computationally efficient to be useful in practice. Most existing approaches to multi-objective HPO use evolutionary strategies and metamodel-based optimization. However, few methods account for uncertainty in the performance measurements. This paper presents results on multi-objective hyperparameter optimization with uncertainty in the evaluation of ML algorithms. We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) model with heterogeneous noise. Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR, achieved with respect to the hypervolume indicator.
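The hypervolume indicator used in the abstract's evaluation measures the objective-space region dominated by a Pareto front, bounded by a reference point; larger values indicate a better front. As an illustrative sketch (not taken from the paper), a minimal two-objective implementation for minimization might look like this:

```python
def pareto_front(points):
    """Return the non-dominated subset of 2-D objective vectors (minimization)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

def hypervolume_2d(points, ref):
    """Hypervolume dominated by a 2-D front relative to reference point `ref`,
    with both objectives minimized. After sorting the non-dominated points by
    the first objective, the dominated region decomposes into rectangular
    slices between successive second-objective levels."""
    front = sorted(pareto_front(points))  # ascending f1 => descending f2 on a front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # slice width x slice height
        prev_f2 = f2
    return hv

# Example: the front {(1,3), (2,2), (3,1)} against reference point (4,4)
# covers slices of area 3 + 2 + 1 = 6; the dominated point (3,3) is ignored.
print(hypervolume_2d([(1, 3), (2, 2), (3, 1), (3, 3)], (4, 4)))
```

HPO libraries typically use dedicated (and faster) hypervolume routines for higher-dimensional objective spaces; this 2-D slicing version is only meant to make the indicator concrete.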
