Paper Title

Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values

Paper Authors

Wang, Zijie J., Kale, Alex, Nori, Harsha, Stella, Peter, Nunnally, Mark E., Chau, Duen Horng, Vorvoreanu, Mihaela, Vaughan, Jennifer Wortman, Caruana, Rich

Abstract

Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions--potentially causing harms once deployed. However, how to take action to address these patterns is not always clear. In a collaboration between ML and human-computer interaction researchers, physicians, and data scientists, we develop GAM Changer, the first interactive system to help domain experts and data scientists easily and responsibly edit Generalized Additive Models (GAMs) and fix problematic patterns. With novel interaction techniques, our tool puts interpretability into action--empowering users to analyze, validate, and align model behaviors with their knowledge and values. Physicians have started to use our tool to investigate and fix pneumonia and sepsis risk prediction models, and an evaluation with 7 data scientists working in diverse domains highlights that our tool is easy to use, meets their model editing needs, and fits into their current workflows. Built with modern web technologies, our tool runs locally in users' web browsers or computational notebooks, lowering the barrier to use. GAM Changer is available at the following public demo link: https://interpret.ml/gam-changer.
