Title
Integration in reproducing kernel Hilbert spaces of Gaussian kernels
Authors
Abstract
The Gaussian kernel plays a central role in machine learning, uncertainty quantification and scattered data approximation, but has received relatively little attention from a numerical analysis standpoint. The basic problem of finding an algorithm for efficient numerical integration of functions reproduced by Gaussian kernels has not been fully solved. In this article we construct two classes of algorithms that use $N$ evaluations to integrate $d$-variate functions reproduced by Gaussian kernels and prove the exponential or super-algebraic decay of their worst-case errors. In contrast to earlier work, no constraints are placed on the length-scale parameter of the Gaussian kernel. The first class of algorithms is obtained via an appropriate scaling of the classical Gauss-Hermite rules. For these algorithms we derive lower and upper bounds on the worst-case error of the forms $\exp(-c_1 N^{1/d}) N^{1/(4d)}$ and $\exp(-c_2 N^{1/d}) N^{-1/(4d)}$, respectively, for positive constants $c_1 > c_2$. The second class of algorithms we construct is more flexible and uses worst-case optimal weights for points that may be taken as a nested sequence. For these algorithms we derive upper bounds of the form $\exp(-c_3 N^{1/(2d)})$ for a positive constant $c_3$.
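The first class of algorithms described above is built by scaling classical Gauss-Hermite rules. As background, the sketch below shows a plain (unscaled) one-dimensional Gauss-Hermite rule applied to integration against the standard normal density, using NumPy's `hermgauss` nodes and weights; the change of variables $x \mapsto \sqrt{2}\,x$ and the factor $1/\sqrt{\pi}$ convert the weight $e^{-x^2}$ into the Gaussian measure. The helper name `gauss_hermite_normal` is our own and not from the paper; the paper's scaled rules and worst-case-optimal weights are not reproduced here.

```python
import numpy as np

def gauss_hermite_normal(f, n):
    """Approximate E[f(X)] for X ~ N(0, 1) with an n-point Gauss-Hermite rule.

    hermgauss gives nodes x_i and weights w_i so that
        sum_i w_i f(x_i) ~ integral of f(x) exp(-x^2) dx.
    Substituting x -> sqrt(2) x turns exp(-x^2) dx into the standard
    normal density times sqrt(pi), hence the 1/sqrt(pi) factor.
    """
    x, w = np.polynomial.hermite.hermgauss(n)
    return float(np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi))

# An n-point rule is exact for polynomials of degree up to 2n - 1,
# so E[X^2] = 1 is recovered exactly (up to rounding).
second_moment = gauss_hermite_normal(lambda x: x**2, 10)
print(second_moment)
```

For $d$-variate integrands one would take tensor products of such rules, which is what makes the error bounds above decay in $N^{1/d}$ rather than $N$.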