Paper Title

GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints

Authors

Patrick Zschech, Sven Weinzierl, Nico Hambauer, Sandra Zilker, Mathias Kraus

Abstract

The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding, as the field demands more transparency about the internal decision logic of machine learning (ML) models. However, most techniques subsumed under XAI provide post-hoc analytical explanations, which have to be considered with caution as they only use approximations of the underlying ML model. Therefore, our paper investigates a series of intrinsically interpretable ML models and discusses their suitability for the IS community. More specifically, our focus is on advanced extensions of generalized additive models (GAMs), in which predictors are modeled independently in a non-linear way to generate shape functions that can capture arbitrary patterns but remain fully interpretable. In our study, we evaluate the prediction qualities of five GAMs as compared to six traditional ML models and assess their visual outputs for model interpretability. On this basis, we investigate their merits and limitations and derive design implications for further improvements.
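For orientation, the additive structure the abstract describes is the standard GAM formulation: a link-transformed expected response decomposes into an intercept plus one shape function per predictor, so every f_j can be plotted and inspected on its own.

```latex
g\bigl(\mathbb{E}[y \mid x]\bigr) = \beta_0 + f_1(x_1) + f_2(x_2) + \dots + f_p(x_p)
```

The following is a minimal sketch of fitting such a model and plotting its shape functions, assuming a spline-based GAM via pygam and a toy sklearn dataset; these are illustrative choices, since the abstract names neither the five evaluated GAM variants nor their implementations.

```python
# A minimal sketch, assuming pygam and a toy sklearn dataset (illustrative
# choices, not the authors' setup or one of the five evaluated models).
import matplotlib.pyplot as plt
from pygam import LogisticGAM, s
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
X = X[:, :3]  # keep three predictors so the example stays small

# Each s(i) fits an independent non-linear spline ("shape function")
# for predictor i; the model remains additive and hence interpretable.
gam = LogisticGAM(s(0) + s(1) + s(2)).fit(X, y)

# Plot each learned shape function: a predictor's isolated contribution.
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for i, ax in enumerate(axes):
    XX = gam.generate_X_grid(term=i)
    ax.plot(XX[:, i], gam.partial_dependence(term=i, X=XX))
    ax.set_title(f"f_{i + 1}(x_{i + 1})")
plt.show()
```

Plots of this kind, one curve per predictor showing its isolated and possibly non-linear contribution, are the sort of visual output the study assesses for model interpretability.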
