Paper Title
Gradient-based Counterfactual Explanations using Tractable Probabilistic Models
Paper Authors
Paper Abstract
Counterfactual examples are an appealing class of post-hoc explanations for machine learning models. Given an input $x$ of class $y_1$, its counterfactual is a contrastive example $x^\prime$ of another class $y_0$. Current approaches primarily solve this task through a complex optimization: define an objective function based on the loss of the counterfactual outcome $y_0$ with hard or soft constraints, then optimize this function as a black box. This "deep learning" approach, however, is rather slow, sometimes tricky, and may result in unrealistic counterfactual examples. In this work, we propose a novel approach that addresses these problems using only two gradient computations based on tractable probabilistic models. First, we compute an unconstrained counterfactual $u$ of $x$ that induces the counterfactual outcome $y_0$. Then, we adapt $u$ toward higher-density regions, resulting in $x^{\prime}$. Empirical evidence demonstrates the dominant advantages of our approach.
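The two gradient computations described in the abstract can be sketched as follows. This is a minimal illustration under toy assumptions not taken from the paper: the classifier is logistic regression $p(y{=}1\mid x) = \sigma(w \cdot x + b)$, and a single Gaussian $\mathcal{N}(\mu, \sigma^2 I)$ stands in for the tractable probabilistic model, so its log-density gradient is available in closed form. All names (`w`, `b`, `mu`, `sigma`, `eta`) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, mu, sigma=1.0, eta=0.5, steps=50):
    # Step 1: gradient descent on the loss of the counterfactual
    # outcome y0 = 0, i.e. minimize -log(1 - sigmoid(w.u + b)),
    # yielding the unconstrained counterfactual u.
    u = x.copy()
    for _ in range(steps):
        p1 = sigmoid(w @ u + b)
        u -= eta * p1 * w  # gradient of -log(1 - p1) w.r.t. u
    # Step 2: gradient ascent on log p(u) to move u into a
    # higher-density region; for the toy Gaussian density,
    # grad log p(u) = (mu - u) / sigma^2.
    xp = u.copy()
    for _ in range(steps):
        xp += 0.1 * eta * (mu - xp) / sigma**2
    return u, xp

# Hypothetical example: x is classified as y1 = 1; the high-density
# region lies around mu.
x = np.array([1.0, 0.0])
w, b = np.array([2.0, 0.0]), 0.0
mu = np.array([-1.0, 0.0])
u, xp = counterfactual(x, w, b, mu)
print(sigmoid(w @ u + b) < 0.5)                          # class flipped to y0
print(np.linalg.norm(xp - mu) < np.linalg.norm(u - mu))  # x' in a denser region
```

In a real instance of this approach, the Gaussian would be replaced by an actual tractable probabilistic model whose density gradient can be computed exactly, which is what makes the second step a single cheap computation rather than a constrained optimization.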