Paper Title

Quadruply Stochastic Gaussian Processes

Authors

Evans, Trefor W., Nair, Prasanth B.

Abstract

We introduce a stochastic variational inference procedure for training scalable Gaussian process (GP) models whose per-iteration complexity is independent of both the number of training points, $n$, and the number of basis functions used in the kernel approximation, $m$. Our central contributions include an unbiased stochastic estimator of the evidence lower bound (ELBO) for a Gaussian likelihood, as well as a stochastic estimator that lower bounds the ELBO for several other likelihoods such as Laplace and logistic. Independence of the stochastic optimization update complexity on $n$ and $m$ enables inference on huge datasets using large capacity GP models. We demonstrate accurate inference on large classification and regression datasets using GPs and relevance vector machines with up to $m = 10^7$ basis functions.
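To make the per-iteration cost structure concrete, the sketch below shows the *naive* version of the idea the abstract describes: each update subsamples both a minibatch of the $n$ data points and a subset of the $m$ basis functions, so the cost per step is $O(B \cdot S)$ rather than $O(n \cdot m)$. All names here are hypothetical, and this is not the paper's estimator: naively subsampling features inside a squared residual gives a *biased* estimate of the ELBO, which is exactly the problem the paper's unbiased estimators address.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data. n and m are small here for illustration, but the
# per-iteration cost of the loop below is O(B * S), independent of both.
n, m = 2_000, 500
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# Random Fourier features approximating an RBF kernel (frequencies fixed).
W = rng.standard_normal((m, 1))
b = rng.uniform(0.0, 2 * np.pi, size=m)

def phi(x, idx):
    """Evaluate only the sampled subset `idx` of the m basis functions."""
    return np.sqrt(2.0 / m) * np.cos(x @ W[idx].T + b[idx])

# Mean-field variational posterior over weights: q(w) = N(mu, diag(exp(log_s))**2).
mu = np.zeros(m)
log_s = np.full(m, -2.0)

B, S, lr, noise_var = 64, 100, 0.05, 0.25
for step in range(2_000):
    di = rng.choice(n, size=B, replace=False)  # subsample data: cost free of n
    fi = rng.choice(m, size=S, replace=False)  # subsample features: cost free of m
    Phi = phi(X[di], fi)                       # (B, S) feature block
    eps = rng.standard_normal(S)
    w = mu[fi] + np.exp(log_s[fi]) * eps       # reparameterised weight sample
    # Scale the partial feature sum by m/S so it is unbiased for the full sum
    # (the squared residual below is nonetheless biased -- hence the paper).
    resid = y[di] - (m / S) * Phi @ w
    # Minibatch gradient of the expected negative log-likelihood (KL term omitted).
    g_w = -(1.0 / B) * (m / S) * (Phi.T @ resid) / noise_var
    mu[fi] -= lr * g_w
    log_s[fi] -= lr * g_w * eps * np.exp(log_s[fi])

# Full-feature posterior-mean prediction, used only to inspect the toy fit.
pred = phi(X, np.arange(m)) @ mu
```

Each update touches only $B$ rows and $S$ columns of the implicit $n \times m$ feature matrix, which is what allows $m = 10^7$ basis functions in the paper's experiments to remain tractable.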
