Paper Title
Automatic stabilization of finite-element simulations using neural networks and hierarchical matrices
Paper Authors
Paper Abstract
Petrov-Galerkin formulations with optimal test functions allow for the stabilization of finite element simulations. In particular, given a discrete trial space, the optimal test space induces a numerical scheme delivering the best approximation in terms of a problem-dependent energy norm. This ideal approach has two shortcomings: first, we need to explicitly know the set of optimal test functions; and second, the optimal test functions may have large supports inducing expensive dense linear systems. Nevertheless, parametric families of PDEs are an example where it is worth investing some (offline) computational effort to obtain stabilized linear systems that can be solved efficiently, for a given set of parameters, in an online stage. Therefore, as a remedy for the first shortcoming, we explicitly compute (offline) a function mapping any PDE parameter to the matrix of coefficients of the optimal test functions (in a basis expansion) associated with that PDE parameter. Next, as a remedy for the second shortcoming, we use low-rank approximations to hierarchically compress the (non-square) matrix of coefficients of the optimal test functions. In order to accelerate this process, we train a neural network to learn a critical bottleneck of the compression algorithm (for a given set of PDE parameters). When solving online the resulting (compressed) Petrov-Galerkin formulation, we employ a GMRES iterative solver with inexpensive matrix-vector multiplications, thanks to the low-rank features of the compressed matrix. We perform experiments showing that the full online procedure is as fast as the original (unstable) Galerkin approach. In other words, we get stabilization with hierarchical matrices and neural networks practically for free. We illustrate our findings by means of 2D Eriksson-Johnson and Helmholtz model problems.
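To make the cost argument concrete, the following is a minimal, self-contained sketch (not the authors' code) of why low-rank compression makes the online GMRES stage cheap: if a system matrix block admits a rank-k factorization B ≈ UV, the matrix-vector product can be applied in O((m+n)k) operations via the factors instead of O(mn) via the dense matrix. The synthetic "identity plus rank-k" matrix below is a hypothetical stand-in for one compressed block of the Petrov-Galerkin system.

```python
# Sketch: GMRES with an inexpensive matvec built from low-rank factors.
# The matrix A = I + U @ V is synthetic; it only illustrates the mechanism.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, k = 200, 5                      # system size and (small) numerical rank

U = rng.standard_normal((n, k))    # rank-k factors of the compressed block
V = rng.standard_normal((k, n))
A = np.eye(n) + U @ V              # dense reference matrix (for checking only)

# Matrix-free operator: applies A using only the factors,
# costing O(nk) per product instead of O(n^2).
op = LinearOperator((n, n), matvec=lambda x: x + U @ (V @ x))

b = rng.standard_normal(n)
x, info = gmres(op, b)             # info == 0 signals convergence

assert info == 0
assert np.allclose(A @ x, b, atol=1e-6)
```

Since the operator is an identity plus a rank-k perturbation, GMRES converges in at most k + 1 iterations here; for a genuine hierarchical (H-)matrix, every admissible off-diagonal block would be applied through such factor pairs, giving the near-linear matvec cost that the abstract relies on.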