Paper Title
Context-dependent Explainability and Contestability for Trustworthy Medical Artificial Intelligence: Misclassification Identification of Morbidity Recognition Models in Preterm Infants
Paper Authors
Paper Abstract
Although machine learning (ML) models of AI achieve high performance in medicine, they are not free of errors. Empowering clinicians to identify incorrect model recommendations is crucial for engendering trust in medical AI. Explainable AI (XAI) aims to address this requirement by clarifying AI reasoning to support the end users. Several recent studies on biomedical imaging have achieved promising results. Nevertheless, solutions for models using tabular data are not yet sufficient to meet clinicians' requirements. This paper proposes a methodology to support clinicians in identifying failures of ML models trained with tabular data. We built our methodology on three main pillars: decomposing the feature set by leveraging clinical context latent space, assessing the clinical association of global explanations, and Latent Space Similarity (LSS) based local explanations. We demonstrated our methodology on ML-based recognition of preterm infant morbidities caused by infection. The risk of mortality, lifelong disability, and antibiotic resistance due to model failures remains an open research question in this domain. Using our approach, we identified misclassification cases of two models. By contextualizing local explanations, our solution provides clinicians with actionable insights that support their autonomy in making informed final decisions.
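The abstract does not specify how the LSS-based local explanations flag suspect predictions. As a rough illustration of the general idea only, the sketch below flags a sample whose model prediction disagrees with the majority label of its nearest neighbors in a latent space. The function name, the cosine-similarity metric, and the majority-vote rule are all illustrative assumptions, not the authors' method:

```python
import numpy as np

def flag_potential_misclassifications(latent, labels, preds, k=3):
    """Flag samples whose model prediction disagrees with the majority
    reference label of their k nearest neighbors in latent space.
    (Illustrative sketch; not the paper's actual LSS procedure.)"""
    # Normalize latent vectors so dot products equal cosine similarity.
    norms = np.linalg.norm(latent, axis=1, keepdims=True)
    z = latent / np.clip(norms, 1e-12, None)
    sim = z @ z.T                       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)      # exclude each sample itself
    flags = []
    for i in range(len(latent)):
        nn = np.argsort(sim[i])[::-1][:k]            # k nearest neighbors
        majority = np.bincount(labels[nn]).argmax()  # their majority class
        if preds[i] != majority:                     # disagreement -> suspicious
            flags.append(i)
    return flags

# Toy example: two well-separated latent clusters, one wrong prediction.
latent = np.array([[1.0, 0.1], [0.9, 0.0], [1.1, 0.2],
                   [0.0, 1.0], [0.1, 0.9], [0.2, 1.1]])
labels = np.array([0, 0, 0, 1, 1, 1])   # reference labels
preds  = np.array([0, 0, 1, 1, 1, 1])   # model mislabeled sample 2
print(flag_potential_misclassifications(latent, labels, preds))  # -> [2]
```

A real deployment would derive the latent space from a clinically meaningful encoder and present the disagreeing neighbors themselves to the clinician as contextual evidence, rather than only a flag.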