Paper Title
The Random Feature Model for Input-Output Maps between Banach Spaces
Paper Authors
Paper Abstract
Well known to the machine learning community, the random feature model is a parametric approximation to kernel interpolation or regression methods. It is typically used to approximate functions mapping a finite-dimensional input space to the real line. In this paper, we instead propose a methodology for use of the random feature model as a data-driven surrogate for operators that map an input Banach space to an output Banach space. Although the methodology is quite general, we consider operators defined by partial differential equations (PDEs); here, the inputs and outputs are themselves functions, with the input parameters being functions required to specify the problem, such as initial data or coefficients, and the outputs being solutions of the problem. Upon discretization, the model inherits several desirable attributes from this infinite-dimensional viewpoint, including mesh-invariant approximation error with respect to the true PDE solution map and the capability to be trained at one mesh resolution and then deployed at different mesh resolutions. We view the random feature model as a non-intrusive data-driven emulator, provide a mathematical framework for its interpretation, and demonstrate its ability to efficiently and accurately approximate the nonlinear parameter-to-solution maps of two prototypical PDEs arising in physical science and engineering applications: viscous Burgers' equation and a variable coefficient elliptic equation.
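To make the idea of a random feature surrogate concrete, the following is a minimal sketch, not the authors' implementation: it fits scalar random Fourier features by ridge regression to a map between discretized input and output functions on a fixed grid. All names and values (n_grid, n_features, the synthetic "solution map", the regularization lam) are illustrative assumptions; the paper's operator-valued features and function-space formulation are not reproduced here.

```python
import numpy as np

# Minimal sketch: random Fourier feature ridge regression for a map from a
# discretized input function to a discretized output function, both sampled
# on n_grid points. Synthetic data stands in for PDE input/solution pairs.
rng = np.random.default_rng(0)
n_grid = 64          # mesh resolution of input/output functions (assumed)
n_features = 256     # number of random features (assumed)
n_train = 200        # number of input-output training pairs (assumed)

# Placeholder nonlinear "solution map" G: inputs a_j and outputs u_j = G(a_j).
A = rng.standard_normal((n_train, n_grid))
U = np.tanh(A @ rng.standard_normal((n_grid, n_grid)) / n_grid)

# Random features phi_k(a) = cos(<w_k, a> + b_k); w_k, b_k are drawn once and frozen.
W = rng.standard_normal((n_features, n_grid)) / np.sqrt(n_grid)
b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
Phi = np.cos(A @ W.T + b)                        # shape (n_train, n_features)

# Only the linear coefficients C are trained (regularized least squares),
# mapping feature activations to the discretized output function.
lam = 1e-6
C = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_features), Phi.T @ U)

# Prediction at a new input function on the same discretization.
a_new = rng.standard_normal(n_grid)
u_pred = np.cos(a_new @ W.T + b) @ C             # approximate G(a_new)
print(u_pred.shape)                              # (n_grid,)
```

In this sketch the features act on a fixed finite-dimensional discretization; the mesh-invariance and train-at-one-resolution, deploy-at-another properties described in the abstract rely on the infinite-dimensional formulation developed in the paper itself.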