Paper Title
Machine Learning Explainability for External Stakeholders
Paper Authors
Paper Abstract
As machine learning is increasingly deployed in high-stakes contexts affecting people's livelihoods, there have been growing calls to open the black box and to make machine learning algorithms more explainable. Providing useful explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, we conducted a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in service of transparency goals. We also asked participants to share case studies in deploying explainable machine learning at scale. In this paper, we provide a short summary of various case studies of explainable machine learning and the lessons learned from those studies, and we discuss open challenges.