Paper Title

Interpretability of Machine Learning Methods Applied to Neuroimaging

Paper Authors

Elina Thibeau-Sutre, Sasha Collin, Ninon Burgos, Olivier Colliot

Paper Abstract

Deep learning methods have become very popular for the processing of natural images, and were then successfully adapted to the neuroimaging field. As these methods are non-transparent, interpretability methods are needed to validate them and ensure their reliability. Indeed, it has been shown that deep learning models may obtain high performance even when using irrelevant features, by exploiting biases in the training set. Such undesirable situations can potentially be detected by using interpretability methods. Recently, many methods have been proposed to interpret neural networks. However, this domain is not mature yet. Machine learning users face two major issues when aiming to interpret their models: which method to choose, and how to assess its reliability? Here, we aim at providing answers to these questions by presenting the most common interpretability methods and metrics developed to assess their reliability, as well as their applications and benchmarks in the neuroimaging context. Note that this is not an exhaustive survey: we aimed to focus on the studies which we found to be the most representative and relevant.
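As a rough illustration (not taken from the paper itself), the sketch below shows one of the simplest and most common interpretability methods this kind of survey covers: a gradient-based saliency map. The toy 3D CNN, input dimensions, and number of classes are all hypothetical placeholders for a trained neuroimaging classifier.

```python
# Minimal sketch of a gradient-based saliency map (a standard
# interpretability method); the model below is a hypothetical toy
# 3D CNN, not a network from the paper.
import torch
import torch.nn as nn

# Toy 3D CNN standing in for a trained neuroimaging classifier
# with, e.g., two diagnostic classes.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

# Dummy input standing in for a preprocessed brain volume
# (1 channel, 32x32x32 voxels); gradients are tracked w.r.t. the voxels.
volume = torch.randn(1, 1, 32, 32, 32, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(volume)
predicted_class = scores.argmax(dim=1).item()
scores[0, predicted_class].backward()

# The saliency map is the voxel-wise gradient magnitude: large values
# mark voxels whose perturbation most affects the predicted score.
saliency = volume.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([32, 32, 32])
```

In practice, such maps are inspected to check that the model relies on anatomically plausible regions rather than on dataset biases, which is exactly the failure mode the abstract warns about.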
