Paper Title

Model Reduction and Neural Networks for Parametric PDEs

Authors

Bhattacharya, Kaushik; Hosseini, Bamdad; Kovachki, Nikola B.; Stuart, Andrew M.

Abstract

We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. We also include numerical experiments which demonstrate the effectiveness of the method, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare it with existing algorithms from the literature; our examples include the mapping from coefficient to solution in a divergence form elliptic partial differential equation (PDE) problem, and the solution operator for viscous Burgers' equation.
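The abstract describes a two-stage scheme: reduce the (discretized) input and output functions to low-dimensional coordinates, then learn a map between those coordinates with a neural network. The following is a minimal sketch of that encode-learn-decode pattern, assuming scikit-learn PCA and an MLP on synthetic placeholder data; it only illustrates the structure and is not the authors' architecture, training procedure, or PDE data.

```python
# Minimal sketch (not the paper's implementation): approximate an operator
# G: a(x) |-> u(x) from sampled, discretized function pairs by
#  (1) projecting inputs and outputs onto low-dimensional PCA coordinates, and
#  (2) learning a map between the reduced coordinates with a small network.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Placeholder data: rows are discretized input/output functions on a grid.
# In the paper's setting these would come from a PDE solver (e.g. the
# coefficient-to-solution map of an elliptic problem).
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 128))        # synthetic "input functions"
U = np.cumsum(A, axis=1) / A.shape[1]      # synthetic stand-in for the solution map

d_in, d_out = 20, 20                       # reduced dimensions (hypothetical choice)
pca_in, pca_out = PCA(n_components=d_in), PCA(n_components=d_out)
Z_in = pca_in.fit_transform(A)             # encode inputs to reduced coordinates
Z_out = pca_out.fit_transform(U)           # encode outputs to reduced coordinates

# Neural network between the two reduced spaces.
net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000)
net.fit(Z_in, Z_out)

# Apply the learned operator to new inputs: encode -> network -> decode.
A_test = rng.standard_normal((10, 128))
U_pred = pca_out.inverse_transform(net.predict(pca_in.transform(A_test)))
print(U_pred.shape)                        # (10, 128): functions on the original grid
```

Because the network acts only on the reduced coordinates, the same learned map can in principle be paired with encoders/decoders built at different discretization sizes, which is the robustness-to-discretization property the abstract highlights.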
