Title
Generalized SHAP: Generating multiple types of explanations in machine learning
Authors
Abstract
Many important questions about a model cannot be answered just by explaining how much each feature contributes to its output. To answer a broader set of questions, we generalize a popular, mathematically well-grounded explanation technique, Shapley Additive Explanations (SHAP). Our new method, Generalized Shapley Additive Explanations (G-SHAP), produces many additional types of explanations, including: 1) general classification explanations: Why is this sample more likely to belong to one class rather than another? 2) intergroup differences: Why do our model's predictions differ between groups of observations? 3) model failure: Why does our model perform poorly on a given sample? We formally define these types of explanations and illustrate their practical use on real data.
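To make the underlying machinery concrete, the following is a minimal sketch of the Shapley-value computation that SHAP (and by extension G-SHAP) builds on. It is not the paper's method or the `shap` library's API: the toy linear model, the feature values, and the zero background are all hypothetical, and it enumerates every feature coalition exactly rather than using the approximations real SHAP implementations rely on.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all feature coalitions.

    value_fn(S) returns the model output when only the features in the
    set S are "present" (the rest are replaced by a background value).
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                S = frozenset(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                          / factorial(n_features))
                # Marginal contribution of feature i to coalition S
                phi[i] += weight * (value_fn(S | {i}) - value_fn(S))
    return phi

# Hypothetical toy model: f(x) = 2*x0 + 3*x1 (x2 is unused).
x = [1.0, 1.0, 1.0]
background = [0.0, 0.0, 0.0]

def value_fn(S):
    z = [x[j] if j in S else background[j] for j in range(3)]
    return 2 * z[0] + 3 * z[1]

print(shapley_values(value_fn, 3))  # approximately [2.0, 3.0, 0.0]
```

For a linear model with a zero background, each feature's Shapley value reduces to its coefficient times its value, and the unused feature gets zero, which makes this toy case easy to check by hand.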