Paper Title
Machines Explaining Linear Programs
Paper Authors
Paper Abstract
There has been a recent push toward making machine learning models more interpretable so that their performance can be trusted. Although successful, these methods have mostly focused on deep learning models, while fundamental optimization methods in machine learning, such as linear programs (LPs), have been left out. Even though LPs can be considered whitebox or clearbox models, they are not easy to understand in terms of the relationship between inputs and outputs. Since a linear program only provides the optimal solution to an optimization problem, further explanations are often helpful. In this work, we extend the attribution methods used for explaining neural networks to linear programs. These methods explain a model by providing relevance scores for its inputs, showing the influence of each input on the output. Alongside classical gradient-based attribution methods, we also propose a way to adapt perturbation-based attribution methods to LPs. Our evaluation on several different linear and integer problems shows that attribution methods can generate useful explanations for linear programs. However, we also demonstrate that applying a neural attribution method directly can come with drawbacks, as the properties these methods have on neural networks do not necessarily transfer to linear programs. The methods can also struggle when a linear program has more than one optimal solution, since a solver returns only one of them. We hope our results can serve as a starting point for further research in this direction.
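To illustrate the general idea of perturbation-based attribution applied to an LP (this is a minimal sketch under my own assumptions, not the paper's actual method): perturb one input of the problem at a time, re-solve the LP, and take the resulting change in the optimal objective value as that input's relevance score. The example below scores the right-hand-side bounds of an LP using `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

def perturbation_attribution(c, A_ub, b_ub, eps=1e-2):
    """Hypothetical sketch of perturbation-based attribution for an LP
    min c @ x  s.t.  A_ub @ x <= b_ub,  x >= 0.

    Each bound b_ub[i] is scored by the finite-difference change in the
    optimal objective value when that bound is relaxed by eps.
    """
    base = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    assert base.status == 0, "base LP must be feasible and bounded"
    scores = np.zeros(len(b_ub))
    for i in range(len(b_ub)):
        b_pert = np.asarray(b_ub, dtype=float).copy()
        b_pert[i] += eps  # relax constraint i slightly
        pert = linprog(c, A_ub=A_ub, b_ub=b_pert, method="highs")
        if pert.status == 0:
            # Relevance: sensitivity of the optimum to this input
            scores[i] = (pert.fun - base.fun) / eps
    return scores

# Toy problem: maximize x + y (i.e., minimize -x - y)
# subject to x <= 1, y <= 2, x + y <= 5, x, y >= 0.
scores = perturbation_attribution(
    c=[-1.0, -1.0],
    A_ub=[[1, 0], [0, 1], [1, 1]],
    b_ub=[1.0, 2.0, 5.0],
)
print(scores)  # binding constraints get nonzero scores; the slack one gets 0
```

On this toy problem the first two constraints are binding at the optimum (x = 1, y = 2), so relaxing either improves the objective and they receive nonzero relevance, while the third constraint has slack and scores zero. Note this already hints at a limitation mentioned in the abstract: if the LP has multiple optimal solutions, the solver's single returned solution makes such scores less informative.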