Title
Validation and Transparency in AI systems for pharmacovigilance: a case study applied to the medical literature monitoring of adverse events
Authors
Abstract
Recent advances in artificial intelligence applied to biomedical text are opening exciting opportunities for improving pharmacovigilance activities currently burdened by the ever-growing volumes of real-world data. To fully realize these opportunities, existing regulatory guidance and industry best practices should be taken into consideration in order to increase the overall trustworthiness of the system and enable broader adoption. In this paper, we present a case study on how to operationalize existing guidance for validated AI systems in pharmacovigilance, focusing on the specific task of medical literature monitoring (MLM) of adverse events from the scientific literature. We describe an AI system designed with the goal of reducing effort in MLM activities, built in close collaboration with subject matter experts and following guidance for validated systems in pharmacovigilance and AI transparency. In particular, we make use of public disclosures as a risk control measure to mitigate system misuse and earn user trust. In addition, we present experimental results showing that the system can significantly reduce screening effort while maintaining high levels of recall (filtering 55% of irrelevant articles on average, for a target recall of 0.99 on suspected adverse articles), and we provide a robust method for tuning the desired recall to suit a particular risk profile.
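The abstract mentions tuning the system's recall to suit a particular risk profile. One common way to do this, shown here as a minimal sketch rather than the paper's actual method (which the abstract does not specify), is to choose a classifier score threshold on a labeled validation set so that the fraction of retained positives meets the target recall; all names below are hypothetical.

```python
import numpy as np

def threshold_for_target_recall(scores, labels, target_recall=0.99):
    """Pick a decision threshold on validation data so that at least
    `target_recall` of the positive (suspected adverse) articles are
    retained; everything below the threshold is filtered out.

    scores: per-article relevance scores from the classifier
    labels: True for articles confirmed relevant by expert review
    Returns (threshold, achieved_recall, fraction_filtered).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)

    # Sort the positive-class scores; we may miss at most a
    # (1 - target_recall) fraction of them.
    pos_scores = np.sort(scores[labels])
    allowed_misses = int(np.floor((1.0 - target_recall) * len(pos_scores)))

    # Lowest positive score we must still retain.
    threshold = pos_scores[allowed_misses]

    predicted = scores >= threshold
    achieved_recall = predicted[labels].mean()
    fraction_filtered = (~predicted).mean()
    return threshold, achieved_recall, fraction_filtered
```

In practice the threshold would be chosen on a held-out validation set and then monitored on new data, since a recall guarantee estimated on one sample does not automatically transfer to future article streams.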