Paper Title

Shapley explainability on the data manifold

Authors

Frye, Christopher, de Mijolla, Damien, Begley, Tom, Cowton, Laurence, Stanley, Megan, Feige, Ilya

Abstract

Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions. The Shapley framework for explainability attributes a model's predictions to its input features in a mathematically principled and model-agnostic way. However, general implementations of Shapley explainability make an untenable assumption: that the model's features are uncorrelated. In this work, we demonstrate unambiguous drawbacks of this assumption and develop two solutions to Shapley explainability that respect the data manifold. One solution, based on generative modelling, provides flexible access to data imputations; the other directly learns the Shapley value-function, providing performance and stability at the cost of flexibility. While "off-manifold" Shapley values can (i) give rise to incorrect explanations, (ii) hide implicit model dependence on sensitive attributes, and (iii) lead to unintelligible explanations in higher-dimensional data, on-manifold explainability overcomes these problems.
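To make the "untenable assumption" concrete: standard Shapley implementations evaluate the value function by imputing missing features from their marginal distribution, ignoring correlations and thus evaluating the model off the data manifold. The sketch below (a minimal illustration, not the paper's implementation; `shapley_values` and its arguments are hypothetical names) computes exact Shapley values with this marginal, off-manifold imputation; an on-manifold variant would instead sample the missing features conditional on the observed ones, e.g. via the generative model the paper proposes.

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, background, n_features):
    """Exact Shapley values for a single input x.

    The value function v(S) fixes the features in S to their values in x
    and averages the model over a background dataset for the rest. This is
    the marginal ("off-manifold") imputation criticised in the paper: the
    imputed rows need not lie on the data manifold.
    """
    def v(S):
        X = np.array(background, dtype=float).copy()
        X[:, list(S)] = x[list(S)]  # clamp the known features to x
        return np.mean([f(row) for row in X])

    phi = np.zeros(n_features)
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for k in range(len(others) + 1):
            for S in itertools.combinations(others, k):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = (math.factorial(len(S))
                     * math.factorial(n_features - len(S) - 1)
                     / math.factorial(n_features))
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi
```

By the efficiency axiom, the attributions sum to f(x) minus the background-average prediction; the cost is exponential in the number of features, which is why practical implementations sample coalitions or, as in the paper's second solution, learn the value function directly.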
