Paper Title
Explanation Ontology: A Model of Explanations for User-Centered AI
Paper Authors
Paper Abstract
Explainability has been a goal for Artificial Intelligence (AI) systems since their conception, with the need for explainability growing as more complex AI models are increasingly used in critical, high-stakes settings such as healthcare. Explanations have often been added to an AI system in a non-principled, post-hoc manner. With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration, mapping end user needs to specific explanation types and the system's AI capabilities. We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types. We indicate how the ontology can support user requirements for explanations in the domain of healthcare. We evaluate our ontology with a set of competency questions geared towards a system designer who might use our ontology to decide which explanation types to include, given a combination of users' needs and a system's capabilities, both in system design settings and in real-time operations. Through the use of this ontology, system designers will be able to make informed choices on which explanations AI systems can and should provide.
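
The abstract describes a mapping from user needs and a system's AI capabilities to suitable explanation types, evaluated via competency questions. The sketch below is a rough, hypothetical illustration of that idea in Python with rdflib: it encodes a toy version of such a mapping and poses one competency-question-style query. All class and property names used here (ExplanationType, requiresCapability, addressesQuestion, and the two explanation-type individuals) are illustrative placeholders, not the ontology's actual vocabulary.

    # Hypothetical sketch of mapping user questions and AI capabilities to explanation types.
    # Names are placeholders; they do not reflect the Explanation Ontology's real terms.
    from rdflib import Graph, Literal, Namespace, RDF

    EO = Namespace("http://example.org/explanation-ontology#")
    g = Graph()
    g.bind("eo", EO)

    # Two literature-derived explanation types, each annotated with the AI capability it
    # presupposes and the kind of user question it addresses.
    g.add((EO.ContrastiveExplanation, RDF.type, EO.ExplanationType))
    g.add((EO.ContrastiveExplanation, EO.requiresCapability, EO.CounterfactualReasoning))
    g.add((EO.ContrastiveExplanation, EO.addressesQuestion,
           Literal("Why this treatment and not another?")))

    g.add((EO.CaseBasedExplanation, RDF.type, EO.ExplanationType))
    g.add((EO.CaseBasedExplanation, EO.requiresCapability, EO.SimilarCaseRetrieval))
    g.add((EO.CaseBasedExplanation, EO.addressesQuestion,
           Literal("Which past patients looked like this one?")))

    # Competency-question-style query: which explanation types can a system offer,
    # given that it only supports retrieval of similar cases?
    available_capabilities = {EO.SimilarCaseRetrieval}
    rows = g.query(
        """
        SELECT ?etype ?capability ?question WHERE {
            ?etype a eo:ExplanationType ;
                   eo:requiresCapability ?capability ;
                   eo:addressesQuestion ?question .
        }
        """,
        initNs={"eo": EO},
    )
    for etype, capability, question in rows:
        if capability in available_capabilities:
            print(f"Offer {etype.split('#')[-1]} to answer: {question}")

In this toy setup, a designer would extend the graph with their system's actual capabilities and user questions, then use queries like the one above to decide which explanation types the system can and should support.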