Paper Title


Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond

Authors

Hedström, Anna, Weber, Leander, Bareeva, Dilyara, Krakowczyk, Daniel, Motzkus, Franz, Samek, Wojciech, Lapuschkin, Sebastian, Höhne, Marina M.-C.

Abstract


The evaluation of explanation methods is a research topic that has not yet been explored deeply; however, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness. Until now, no tool focused on XAI evaluation has existed that allows researchers to exhaustively and speedily evaluate the performance of explanations of neural network predictions. To increase transparency and reproducibility in the field, we therefore built Quantus -- a comprehensive evaluation toolkit in Python that includes a growing, well-organised collection of evaluation metrics and tutorials for evaluating explanation methods. The toolkit has been thoroughly tested and is available under an open-source license on PyPI (or on https://github.com/understandable-machine-intelligence-lab/Quantus/).
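To give a concrete sense of the kind of evaluation metric such a toolkit collects, the following is a minimal, self-contained sketch of a pixel-flipping-style faithfulness check in plain NumPy. The toy model, function names, and attribution values here are illustrative assumptions for this sketch, not the Quantus API itself: features are removed from most- to least-attributed, and a faithful attribution should make the model's score drop steeply at first.

```python
# Minimal sketch of a pixel-flipping faithfulness check, one of the
# metric families a toolkit like Quantus collects. The toy model and
# all names here are illustrative, not the library's actual API.
import numpy as np

def toy_model(x):
    # Toy "model": score is a fixed weighted sum of the input features.
    w = np.array([0.1, 0.5, 0.2, 0.9])
    return float(x @ w)

def pixel_flipping_scores(model, x, attribution, baseline=0.0):
    """Replace features with a baseline value, most-attributed first,
    and record the model score after each removal."""
    order = np.argsort(-attribution)  # most relevant feature first
    x_pert = x.astype(float).copy()
    scores = []
    for idx in order:
        x_pert[idx] = baseline
        scores.append(model(x_pert))
    return scores

x = np.array([1.0, 1.0, 1.0, 1.0])
# A "good" attribution mirrors the toy model's true weights, so
# removing features in attribution order removes the most important
# features first and the score falls monotonically.
good_attr = np.array([0.1, 0.5, 0.2, 0.9])
scores = pixel_flipping_scores(toy_model, x, good_attr)
print(scores)
```

A less faithful attribution (e.g. a random ordering) would typically produce a slower initial drop; comparing such curves across explanation methods is the basic idea behind perturbation-based faithfulness metrics.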
