Paper Title

Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization

Paper Authors

Daniel Golovin, Qiuyi Zhang

Paper Abstract

Single-objective black box optimization (also known as zeroth-order optimization) is the process of minimizing a scalar objective $f(x)$, given evaluations at adaptively chosen inputs $x$. In this paper, we consider multi-objective optimization, where $f(x)$ outputs a vector of possibly competing objectives and the goal is to converge to the Pareto frontier. Quantitatively, we wish to maximize the standard hypervolume indicator metric, which measures the dominated hypervolume of the entire set of chosen inputs. In this paper, we introduce a novel scalarization function, which we term the hypervolume scalarization, and show that drawing random scalarizations from an appropriately chosen distribution can be used to efficiently approximate the hypervolume indicator metric. We utilize this connection to show that Bayesian optimization with our scalarization via common acquisition functions, such as Thompson Sampling or Upper Confidence Bound, provably converges to the whole Pareto frontier by deriving tight hypervolume regret bounds on the order of $\widetilde{O}(\sqrt{T})$. Furthermore, we highlight the general utility of our scalarization framework by showing that any provably convergent single-objective optimization process can be effortlessly converted to a multi-objective optimization process with provable convergence guarantees.
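
Below is a minimal sketch (not the authors' code) of the central identity the abstract describes: the dominated hypervolume of a set of objective vectors $Y$ with respect to a reference point $z$ equals a dimension-dependent constant times the expectation of the hypervolume scalarization $s_\lambda(y) = \min_j \max(0, (y_j - z_j)/\lambda_j)^k$ over directions $\lambda$ drawn uniformly from the positive orthant of the unit sphere. The function names (`mc_hypervolume`, `hypervolume_scalarization`) and the NumPy-based implementation are illustrative assumptions; only the scalarization form and normalizing constant follow the paper's construction.

```python
# Monte Carlo approximation of the hypervolume indicator via random
# hypervolume scalarizations (illustrative sketch, assuming NumPy).
from math import gamma, pi
import numpy as np


def hypervolume_scalarization(y, lam, z):
    """s_lambda(y) = min_j max(0, (y_j - z_j) / lambda_j)^k  (maximization, k objectives)."""
    k = len(lam)
    ratios = np.maximum(0.0, (y - z) / lam)
    return np.min(ratios) ** k


def mc_hypervolume(Y, z, num_draws=20000, seed=0):
    """Estimate the dominated hypervolume of the set Y w.r.t. reference point z."""
    rng = np.random.default_rng(seed)
    Y, z = np.asarray(Y, dtype=float), np.asarray(z, dtype=float)
    k = Y.shape[1]
    # Directions drawn uniformly from the positive orthant of the unit sphere S^{k-1}_+.
    lam = np.abs(rng.normal(size=(num_draws, k)))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    # For each direction, keep the best scalarized value over the whole set Y.
    best = np.array(
        [max(hypervolume_scalarization(y, lam_i, z) for y in Y) for lam_i in lam]
    )
    # Normalizing constant c_k = pi^(k/2) / (2^k * Gamma(k/2 + 1)).
    c_k = pi ** (k / 2) / (2 ** k * gamma(k / 2 + 1))
    return c_k * best.mean()


if __name__ == "__main__":
    # Two 2-D points dominate the rectangles [0,1]x[0,2] and [0,2]x[0,1];
    # the exact dominated hypervolume of their union is 3.
    Y = [[1.0, 2.0], [2.0, 1.0]]
    z = [0.0, 0.0]
    print(mc_hypervolume(Y, z))  # approximately 3.0, up to Monte Carlo error
```

In the optimization loop the abstract alludes to, a fresh direction $\lambda$ is drawn at each round and a standard single-objective acquisition rule (e.g., Upper Confidence Bound or Thompson Sampling) is applied to the scalarized objective; because the expectation of the scalarization recovers the hypervolume, maximizing it round by round is what drives the $\widetilde{O}(\sqrt{T})$ hypervolume regret bound and lets any convergent single-objective procedure be reused for multi-objective optimization.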
