Paper Title
MMLN: Leveraging Domain Knowledge for Multimodal Diagnosis
Paper Authors
Paper Abstract
Recent studies show that deep learning models achieve good performance on medical imaging tasks such as diagnosis prediction. Among these models, multimodality has become an emerging trend, integrating different forms of data such as chest X-ray (CXR) images and electronic medical records (EMRs). However, most existing methods fuse them in a model-free manner, which lacks theoretical support and ignores the intrinsic relations between different data sources. To address this problem, we propose a knowledge-driven and data-driven framework for lung disease diagnosis. By incorporating domain knowledge, machine learning models can reduce their dependence on labeled data and improve interpretability. We formulate diagnosis rules according to authoritative clinical medicine guidelines and learn the weights of these rules from text data. Finally, a multimodal fusion of text and image data is designed to infer the marginal probability of lung disease. We conduct experiments on a real-world dataset collected from a hospital. The results show that the proposed method outperforms state-of-the-art multimodal baselines in terms of accuracy and interpretability.
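To make the abstract's pipeline concrete, below is a minimal, hypothetical Python sketch of the general idea: guideline-style diagnosis rules with weights (here hard-coded, though the paper learns them from text), a log-linear rule potential that yields a text-based disease probability, and a late fusion with an image-classifier score. The rule names, weights, logistic link, and log-odds fusion are all illustrative assumptions, not the paper's actual MMLN formulation.

import math

# Hypothetical guideline-derived rules: each maps a finding extracted from
# EMR text to evidence for lung disease. Weights are illustrative; in the
# paper they are learned from text data.
RULES = {
    "fever": 1.2,
    "cough": 0.8,
    "crackles_on_auscultation": 1.5,
    "elevated_wbc": 1.0,
}

def rule_potential(findings: set) -> float:
    """Sum the weights of all satisfied rules (a log-linear potential)."""
    return sum(w for f, w in RULES.items() if f in findings)

def text_probability(findings: set) -> float:
    """Disease probability from text evidence via a logistic link (assumed form)."""
    bias = -1.5  # illustrative prior offset
    return 1.0 / (1.0 + math.exp(-(bias + rule_potential(findings))))

def fuse(p_text: float, p_image: float, alpha: float = 0.5) -> float:
    """Late fusion of text- and image-based probabilities by averaging log-odds.
    This stands in for the paper's multimodal marginal inference."""
    logit = lambda p: math.log(p / (1.0 - p))
    z = alpha * logit(p_text) + (1.0 - alpha) * logit(p_image)
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    findings = {"fever", "crackles_on_auscultation"}  # findings parsed from an EMR note
    p_img = 0.72                                      # score from a CXR image classifier
    print("P(disease | text) = %.3f" % text_probability(findings))
    print("P(disease | text, image) = %.3f" % fuse(text_probability(findings), p_img))

In this toy version, interpretability comes from the fact that each prediction can be traced back to which rules fired and their weights, which mirrors the motivation stated in the abstract.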