Paper Title
Evaluating a Methodology for Increasing AI Transparency: A Case Study
Paper Authors
Paper Abstract
In reaction to growing concerns about the potential harms of artificial intelligence (AI), societies have begun to demand more transparency about how AI models and systems are created and used. To address these concerns, several efforts have proposed documentation templates containing questions to be answered by model developers. These templates provide a useful starting point, but no single template can cover the needs of diverse documentation consumers. It is possible in principle, however, to create a repeatable methodology to generate truly useful documentation. Richards et al. [25] proposed such a methodology for identifying specific documentation needs and creating templates to address those needs. Although this is a promising proposal, it has not been evaluated. This paper presents the first evaluation of this user-centered methodology in practice, reporting on the experiences of a team in the domain of AI for healthcare that adopted it to increase transparency for several AI models. The methodology was found to be usable by developers not trained in user-centered techniques, guiding them to create a documentation template that addressed the specific needs of their consumers while still being reusable across different models and use cases. An analysis of the benefits and costs of this methodology is reviewed, and suggestions for further improvement in both the methodology and supporting tools are summarized.