Paper title
A systematic review and taxonomy of explanations in decision support and recommender systems
Paper authors
Paper abstract
With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated since the early years of expert systems as a means of establishing trust in these systems. With today's increasingly sophisticated machine learning algorithms, new challenges constantly arise in the context of explanations, accountability, and trust towards such systems. In this work, we systematically review the literature on explanations in advice-giving systems, a family of systems that includes recommender systems, one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel, comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy covers a variety of facets, such as explanation objective, responsiveness, content, and presentation. Moreover, we identify several challenges that remain unaddressed so far, for example fine-grained issues associated with the presentation of explanations and the question of how explanation facilities should be evaluated.