Paper Title
Enhanced data efficiency using deep neural networks and Gaussian processes for aerodynamic design optimization
Paper Authors
Paper Abstract
Adjoint-based optimization methods are attractive for aerodynamic shape design, primarily because their computational cost is independent of the dimensionality of the input space and because they generate high-fidelity gradients that can then be used in a gradient-based optimizer. This makes them well suited for high-fidelity, simulation-based aerodynamic shape optimization of highly parametrized geometries such as aircraft wings. However, developing an adjoint-based solver involves careful mathematical treatment, and its implementation requires detailed software development. Furthermore, adjoint methods can become prohibitively expensive when multiple optimization problems must be solved, each requiring multiple restarts to circumvent local optima. In this work, we propose a machine-learning-enabled, surrogate-based framework that replaces the expensive adjoint solver without compromising predictive accuracy. Specifically, we first train a deep neural network (DNN) on data generated by evaluating the high-fidelity simulation model over a model-agnostic design of experiments on the geometric shape parameters. The optimum shape can then be computed by coupling a gradient-based optimizer with the trained DNN. Subsequently, we also perform gradient-free Bayesian optimization, in which the trained DNN serves as the prior mean. We observe that the latter framework (DNN-BO) improves upon the DNN-only optimization strategy at the same computational cost. Overall, the framework predicts the true optimum with very high accuracy while requiring far fewer high-fidelity function calls than the adjoint-based method. Furthermore, we show that multiple optimization problems can be solved with the same machine learning model at high accuracy, amortizing the offline cost of constructing the model.
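The abstract does not include code, but the first two stages of the pipeline it describes (a model-agnostic design of experiments, a DNN surrogate, and gradient-based optimization through the surrogate) can be illustrated with a minimal, self-contained sketch. A 2-D toy objective `high_fidelity` stands in for the expensive CFD model; the network architecture, sample sizes, and helper names such as `dnn_value_and_grad` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the DNN-surrogate pipeline (toy stand-in for CFD).
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

def high_fidelity(x):
    # Assumed toy objective standing in for a high-fidelity simulation
    # output (e.g., a drag coefficient over shape parameters x).
    return np.sum(x**2, axis=-1) + 0.3 * np.sin(5.0 * x[..., 0])

# Step 1: model-agnostic design of experiments + DNN surrogate.
X_train = rng.uniform(-1.0, 1.0, size=(200, 2))
y_train = high_fidelity(X_train)
dnn = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(dnn.parameters(), lr=1e-3)
Xt = torch.tensor(X_train, dtype=torch.float32)
yt = torch.tensor(y_train, dtype=torch.float32).unsqueeze(-1)
for _ in range(2000):
    opt.zero_grad()
    nn.functional.mse_loss(dnn(Xt), yt).backward()
    opt.step()

# Step 2: gradient-based optimization on the surrogate; autograd through
# the DNN plays the role the adjoint gradient plays in the baseline.
def dnn_value_and_grad(x):
    xt = torch.tensor(x, dtype=torch.float32, requires_grad=True)
    y = dnn(xt.unsqueeze(0)).squeeze()
    y.backward()
    return y.item(), xt.grad.numpy().astype(np.float64)

res = minimize(dnn_value_and_grad, x0=rng.uniform(-1, 1, 2), jac=True,
               bounds=[(-1, 1), (-1, 1)], method="L-BFGS-B")
print("DNN-only optimum:", res.x, res.fun)
```

For the DNN-BO variant, one common way to realize "the trained DNN as the prior mean" is to place a zero-mean GP on the residual y - dnn(x) and select new high-fidelity evaluations with an expected-improvement acquisition. The kernel choice, its fixed hyperparameters, and the random candidate search below are assumptions of this sketch, not necessarily the paper's exact formulation.

```python
# Step 3 (assumed formulation): Bayesian optimization with the DNN as the
# GP prior mean, via a zero-mean GP on the residual y - dnn(x).
def dnn_mean(X):
    with torch.no_grad():
        return dnn(torch.tensor(X, dtype=torch.float32)).numpy().ravel()

def gp_posterior(X, y, Xq, ls=0.3, sf=1.0, noise=1e-6):
    # Squared-exponential kernel; ls, sf, noise are fixed here for
    # simplicity, whereas in practice they would be estimated from data.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return sf * np.exp(-0.5 * d2 / ls**2)
    L = np.linalg.cholesky(k(X, X) + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xq)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(sf - np.sum(v**2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

X_bo = rng.uniform(-1, 1, size=(5, 2))         # small seed design
y_bo = high_fidelity(X_bo)
for _ in range(20):                            # one high-fidelity call/iter
    resid = y_bo - dnn_mean(X_bo)              # GP models the residual only
    cand = rng.uniform(-1, 1, size=(2000, 2))  # random candidate pool
    mu_r, sd = gp_posterior(X_bo, resid, cand)
    mu = mu_r + dnn_mean(cand)                 # add the DNN prior mean back
    best = y_bo.min()
    z = (best - mu) / sd
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # EI (minimization)
    x_next = cand[np.argmax(ei)]
    X_bo = np.vstack([X_bo, x_next])
    y_bo = np.append(y_bo, high_fidelity(x_next))
print("DNN-BO optimum:", X_bo[np.argmin(y_bo)], y_bo.min())
```

Because the GP only needs to model the discrepancy between the surrogate and the true response rather than the full response surface, each BO iteration spends its single high-fidelity call efficiently, which is consistent with the data-efficiency claim in the abstract.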