Paper Title

Rethinking Explainability as a Dialogue: A Practitioner's Perspective

Paper Authors

Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan, Sameer Singh

Paper Abstract

As practitioners increasingly deploy machine learning models in critical domains such as health care, finance, and policy, it becomes vital to ensure that domain experts function effectively alongside these models. Explainability is one way to bridge the gap between human decision-makers and machine learning models. However, most of the existing work on explainability focuses on one-off, static explanations like feature importances or rule lists. These sorts of explanations may not be sufficient for many use cases that require dynamic, continuous discovery from stakeholders. In the literature, few works ask decision-makers about the utility of existing explanations and other desiderata they would like to see in an explanation going forward. In this work, we address this gap and carry out a study where we interview doctors, healthcare professionals, and policymakers about their needs and desires for explanations. Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues. Domain experts wish to treat machine learning models as "another colleague", i.e., one who can be held accountable by asking why they made a particular decision through expressive and accessible natural language interactions. Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations as a starting place for future work. Further, we show why natural language dialogues satisfy these principles and are a desirable way to build interactive explanations. Next, we provide a design of a dialogue system for explainability and discuss the risks, trade-offs, and research opportunities of building these systems. Overall, we hope our work serves as a starting place for researchers and engineers to design interactive explainability systems.
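The abstract describes, at a high level, a dialogue system in which a decision-maker questions a model in natural language and receives explanations such as feature importances or what-if answers. Purely as an illustration of that interaction pattern (not the paper's actual design), the sketch below routes keyword-matched questions to explanation routines over a made-up linear model; the weights, intents, and explanation functions are all hypothetical placeholders.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical linear "risk score" weights, made up purely for illustration.
WEIGHTS: Dict[str, float] = {"age": 0.02, "blood_pressure": 0.01, "cholesterol": 0.005}


def toy_model(features: Dict[str, float]) -> float:
    """Stand-in model: a linear score over the hypothetical weights."""
    return sum(WEIGHTS[name] * value for name, value in features.items())


def explain_feature_importance(features: Dict[str, float]) -> str:
    """Rank features by the magnitude of their contribution to the score."""
    contributions = sorted(
        ((name, WEIGHTS[name] * value) for name, value in features.items()),
        key=lambda kv: abs(kv[1]),
        reverse=True,
    )
    ranked = ", ".join(f"{name} ({value:+.2f})" for name, value in contributions)
    return f"The prediction is driven most by: {ranked}."


def explain_counterfactual(features: Dict[str, float]) -> str:
    """Naive what-if: report how the score changes if blood pressure drops 10%."""
    base = toy_model(features)
    changed = dict(features, blood_pressure=features["blood_pressure"] * 0.9)
    return (
        f"If blood_pressure were 10% lower, the score would change from "
        f"{base:.2f} to {toy_model(changed):.2f}."
    )


# Keyword-based intent routing; a real dialogue system would use a learned
# natural-language parser rather than keyword matching.
INTENTS: List[Tuple[Tuple[str, ...], Callable[[Dict[str, float]], str]]] = [
    (("why", "important", "drive"), explain_feature_importance),
    (("what if", "change", "lower"), explain_counterfactual),
]


def respond(question: str, features: Dict[str, float]) -> str:
    """Route a question to an explanation routine and answer in plain language."""
    lowered = question.lower()
    for keywords, handler in INTENTS:
        if any(keyword in lowered for keyword in keywords):
            return handler(features)
    return "I can explain which features matter, or what would change the prediction."


if __name__ == "__main__":
    patient = {"age": 67.0, "blood_pressure": 150.0, "cholesterol": 220.0}
    print(respond("Why did the model predict a high risk?", patient))
    print(respond("What if the blood pressure were lower?", patient))
```

In this toy loop, each question is answered with a natural-language sentence grounded in the model, which is the kind of expressive, accountable interaction the abstract says domain experts asked for; the paper itself discusses the principles, risks, and trade-offs of building such systems properly.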
