Paper Title
An Exploratory Study of AI System Risk Assessment from the Lens of Data Distribution and Uncertainty
Paper Authors
Paper Abstract
Deep learning (DL) has become a driving force and has been widely adopted, with competitive performance, across many domains and applications. In practice, to solve nontrivial and complicated real-world tasks, DL is often not used standalone; instead, it contributes as a component of a larger, complex AI system. Although there is a rapidly growing trend of studying the quality issues of deep neural networks (DNNs) at the model level, few studies have investigated the quality of DNNs at the unit level or its potential impact at the system level. More importantly, there is still no systematic investigation of how to perform risk assessment for AI systems from the unit level up to the system level. To bridge this gap, this paper initiates an early exploratory study of AI system risk assessment from both the data-distribution and uncertainty angles. We propose a general framework, accompanied by an exploratory study, for analyzing AI systems. After large-scale experiments (700+ experimental configurations and 5,000+ GPU hours) and in-depth investigation, we arrive at several key findings that highlight the practical need, and the opportunities, for deeper study of AI systems.
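To make the "uncertainty lens" concrete: a common model-level uncertainty proxy (one of several the paper's framework could build on; the exact metrics used are not specified in the abstract) is the predictive entropy of a classifier's softmax output. A minimal sketch:

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a softmax output vector; higher means more uncertain."""
    probs = np.asarray(probs, dtype=float)
    eps = 1e-12  # guard against log(0)
    return float(-np.sum(probs * np.log(probs + eps)))

# A confident prediction has low entropy...
confident = predictive_entropy([0.98, 0.01, 0.01])
# ...while a near-uniform prediction has high entropy (close to ln(3) ~ 1.10).
uncertain = predictive_entropy([0.34, 0.33, 0.33])
```

Inputs whose predictions carry high entropy are candidates for higher risk, e.g. samples drawn from outside the training data distribution.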