Paper Title

SLISEMAP: Supervised dimensionality reduction through local explanations

Paper Authors

Björklund, Anton, Mäkelä, Jarmo, Puolamäki, Kai

Paper Abstract

Existing methods for explaining black box learning models often focus on building local explanations of model behaviour for a particular data item. It is possible to create global explanations for all data items, but these explanations generally have low fidelity for complex black box models. We propose a new supervised manifold visualisation method, SLISEMAP, that simultaneously finds local explanations for all data items and builds a (typically) two-dimensional global visualisation of the black box model such that data items with similar local explanations are projected nearby. We provide a mathematical derivation of our problem and an open source implementation built on the GPU-optimised PyTorch library. We compare SLISEMAP to multiple popular dimensionality reduction methods and find that SLISEMAP is able to utilise labelled data to create embeddings with consistent local white box models. We also compare SLISEMAP to other model-agnostic local explanation methods and show that SLISEMAP provides comparable explanations and that the visualisations can give a broader understanding of black box regression and classification models.
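The abstract's core idea, fitting a local white-box model for every data item while weighting items by their proximity in the low-dimensional embedding, can be illustrated with a minimal NumPy sketch. This is a hedged conceptual illustration only, not the SLISEMAP library or its actual loss: the softmax neighbourhood weights, the weighted-ridge local models, and all names (`slisemap_style_loss`, `lam`) are simplifying assumptions made here for exposition.

```python
import numpy as np

def slisemap_style_loss(X, y, Z, lam=0.01):
    """Conceptual sketch of a SLISEMAP-style objective (illustrative only):
    each item i gets a local linear model fitted with weights that decay
    with distance in the embedding Z, so items projected nearby must be
    explainable by similar local white-box models."""
    n, m = X.shape
    # Soft neighbourhoods from the embedding: row-normalised exp(-||z_i - z_j||^2)
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D)
    W /= W.sum(axis=1, keepdims=True)
    loss = 0.0
    B = np.zeros((n, m))  # one local linear model per data item
    for i in range(n):
        # Weighted ridge regression around item i's embedding neighbourhood
        Xw = X * W[i][:, None]
        A = X.T @ Xw + lam * np.eye(m)
        B[i] = np.linalg.solve(A, Xw.T @ y)
        # Neighbourhood-weighted squared residuals of the local model
        loss += float(W[i] @ (X @ B[i] - y) ** 2)
    return loss, B
```

In the actual method, an optimiser would adjust the embedding `Z` (and the local models) to minimise such a loss jointly; here the sketch only evaluates the objective for a fixed embedding.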
