Paper Title

Optimizing a DIscrete Loss (ODIL) to solve forward and inverse problems for partial differential equations using machine learning tools

Paper Authors

Petr Karnakov, Sergey Litvinov, Petros Koumoutsakos

Paper Abstract

We introduce the Optimizing a Discrete Loss (ODIL) framework for the numerical solution of Partial Differential Equations (PDE) using machine learning tools. The framework formulates numerical methods as the minimization of discrete residuals, solved using gradient descent and Newton's methods. We demonstrate the value of this approach on equations that may have missing parameters or where insufficient data is available to form a well-posed initial-value problem. The framework is presented for mesh-based discretizations of PDEs and inherits their accuracy, convergence, and conservation properties. It preserves the sparsity of the solutions and is readily applicable to inverse and ill-posed problems. It is applied to PDE-constrained optimization, optical flow, system identification, and data assimilation using gradient descent algorithms, including those often deployed in machine learning. We compare ODIL with a related approach that represents the solution with neural networks and demonstrate advantages of ODIL, including significantly higher convergence rates and several orders of magnitude lower computational cost. We evaluate the method on various linear and nonlinear partial differential equations, including the Navier-Stokes equations for flow reconstruction problems.
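
The core recipe described in the abstract, writing a standard discretization as a residual loss and handing it to an optimizer, is easy to illustrate. Below is a minimal sketch of that idea on a 1D Poisson problem. It is an illustration under stated assumptions, not the authors' released code: the grid size, finite-difference scheme, use of JAX for automatic differentiation, and the Gauss-Newton solve are all choices made here for brevity.

```python
# Minimal ODIL-style sketch (illustrative, not the authors' code):
# discretize a PDE on a grid, collect the discrete residuals of the
# scheme, and minimize their sum of squares. Example problem:
# u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, where
# f(x) = -pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # the normal equations are ill-conditioned in float32

N = 65                                 # grid points (hypothetical choice)
x = jnp.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = -jnp.pi**2 * jnp.sin(jnp.pi * x)   # right-hand side

def residuals(u):
    # Discrete residuals of the numerical method: second-order central
    # differences at interior points, plus the two Dirichlet conditions
    # appended as extra residuals.
    interior = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]
    return jnp.concatenate([interior, jnp.array([u[0], u[-1]])])

def loss(u):
    # The discrete loss: sum of squared residuals of the scheme.
    return 0.5 * jnp.sum(residuals(u) ** 2)

u = jnp.zeros(N)  # initial guess
for _ in range(2):
    # Gauss-Newton step on the least-squares problem. The Jacobian is
    # sparse in practice; it is formed densely here only for brevity.
    r = residuals(u)
    J = jax.jacobian(residuals)(u)
    u = u + jnp.linalg.solve(J.T @ J, -(J.T @ r))

print(f"loss = {loss(u):.3e}")
# Error against the exact solution is at the O(h^2) discretization level,
# as expected when the discrete residuals are driven to zero.
print(f"max error = {jnp.max(jnp.abs(u - jnp.sin(jnp.pi * x))):.3e}")
```

For this linear PDE the residuals are linear in u, so a single Gauss-Newton step already reaches the minimizer. For nonlinear equations the same loop is iterated, or the loss is handed to a gradient-based optimizer of the kind common in machine learning, which is the regime the abstract emphasizes for inverse, ill-posed, and data-assimilation problems.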
