Paper Title

Large Language Models Encode Clinical Knowledge

Authors

Singhal, Karan, Azizi, Shekoofeh, Tu, Tao, Mahdavi, S. Sara, Wei, Jason, Chung, Hyung Won, Scales, Nathan, Tanwani, Ajay, Cole-Lewis, Heather, Pfohl, Stephen, Payne, Perry, Seneviratne, Martin, Gamble, Paul, Kelly, Chris, Schärli, Nathanael, Chowdhery, Aakanksha, Mansfield, Philip, Agüera y Arcas, Blaise, Webster, Dale, Corrado, Greg S., Matias, Yossi, Chou, Katherine, Gottweis, Juraj, Tomasev, Nenad, Liu, Yun, Rajkomar, Alvin, Barral, Joelle, Semturs, Christopher, Karthikesalingam, Alan, Natarajan, Vivek

Abstract

Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical Licensing Examination questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
