Paper Title

Local Explanation of Dimensionality Reduction

Paper Authors

Avraam Bardos, Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas

Paper Abstract

Dimensionality reduction (DR) is a popular method for preparing and analyzing high-dimensional data. Reduced data representations are less computationally intensive and easier to manage and visualize, while retaining a significant percentage of the original information. Aside from these advantages, reduced representations can be difficult or impossible to interpret in most circumstances, especially when the DR approach provides no further information about which features of the original space led to their construction. This problem is addressed by Interpretable Machine Learning, a subfield of Explainable Artificial Intelligence concerned with the opacity of machine learning models. However, current research on Interpretable Machine Learning has focused on supervised tasks, leaving unsupervised tasks like Dimensionality Reduction unexplored. In this paper, we introduce LXDR, a technique capable of providing local interpretations of the output of DR techniques. Experimental results and two LXDR use case examples are presented to evaluate its usefulness.
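To make the idea of a local explanation of a DR output concrete, the sketch below fits a LIME-style local linear surrogate around a single instance: the surrogate's coefficients indicate how strongly each original feature drives that instance's reduced coordinates. This is only an illustrative assumption of how such an explainer could be built (the local_explanation helper, the choice of PCA, the neighborhood size, and the linear surrogate are all assumptions), not the authors' actual LXDR implementation.

# Hypothetical sketch of a local explanation for a DR model's output.
# NOT the authors' LXDR algorithm; an assumed LIME-style local surrogate.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

X, _ = load_iris(return_X_y=True)

# Fit an arbitrary DR model (PCA here) whose output we want to explain.
dr = PCA(n_components=2).fit(X)

def local_explanation(x, X, dr, n_neighbors=30):
    """Approximate the DR mapping around instance `x` with a linear model
    and return its coefficients as local feature importances."""
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    X_local = X[idx[0]]              # neighborhood in the original space
    Z_local = dr.transform(X_local)  # corresponding reduced representations
    surrogate = LinearRegression().fit(X_local, Z_local)
    return surrogate.coef_           # shape: (n_components, n_features)

weights = local_explanation(X[0], X, dr)
print("Per-component contribution of each original feature:")
print(weights)

Each row of the returned matrix corresponds to one reduced dimension, and each column to an original feature, so large-magnitude entries flag the original features that locally dominate the reduced coordinates of the explained instance.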
