Paper Title

Interior Point Solving for LP-based prediction+optimisation

Authors

Jayanta Mandi, Tias Guns

Abstract

Solving optimization problems is key to decision making in many real-life analytics applications. However, the coefficients of optimization problems are often uncertain and depend on external factors, such as future demand, energy prices, or stock prices. Machine learning (ML) models, especially neural networks, are increasingly used to estimate these coefficients in a data-driven way. Hence, end-to-end predict-and-optimize approaches, which take into account how effective the predicted values are at solving the optimization problem, have received increasing attention. In the case of integer linear programming problems, a popular approach to overcoming their non-differentiability is to add a quadratic penalty term to the continuous relaxation, so that results from differentiating over quadratic programs can be used. Instead, we investigate the use of the more principled logarithmic barrier term, as widely used in interior point solvers for linear programming. Specifically, instead of differentiating the KKT conditions, we consider the homogeneous self-dual formulation of the LP, and we show the relation between the interior point step direction and the corresponding gradients needed for learning. Finally, our empirical experiments demonstrate that our approach performs as well as, if not better than, the state-of-the-art QPTL (Quadratic Programming Task Loss) formulation of Wilder et al. and the SPO approach of Elmachtoub and Grigas.
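To see why a logarithmic barrier makes an LP solution differentiable with respect to the predicted costs, consider a toy sketch (illustrative only, not the paper's homogeneous self-dual algorithm): a two-variable LP over the probability simplex, whose barrier subproblem happens to have a closed-form solution. The function name and setup below are assumptions for illustration.

```python
import math

def barrier_argmin(c, mu):
    """Minimize c[0]*x1 + c[1]*x2 - mu*(log(x1) + log(x2))
    subject to x1 + x2 = 1, the log-barrier subproblem of the
    two-variable simplex LP  min c.x  s.t.  x1 + x2 = 1, x >= 0.

    Substituting x2 = 1 - x1 and setting the derivative to zero gives
    d*x1^2 - (d + 2*mu)*x1 + mu = 0  with  d = c[0] - c[1],
    whose unique root in (0, 1) is returned in closed form.
    """
    d = c[0] - c[1]
    if abs(d) < 1e-12:
        return (0.5, 0.5)  # symmetric costs: barrier pulls to the center
    x1 = (d + 2 * mu - math.sqrt(d * d + 4 * mu * mu)) / (2 * d)
    return (x1, 1.0 - x1)
```

As mu shrinks toward 0, the solution approaches the LP vertex (the hard argmin), while for any mu > 0 it varies smoothly with the predicted cost vector c, so gradients for end-to-end training exist everywhere. In the general case there is no closed form; the paper obtains the needed gradients from the interior point step direction of the homogeneous self-dual formulation instead.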
