Paper Title
RadioTransformer: A Cascaded Global-Focal Transformer for Visual Attention-guided Disease Classification
Paper Authors
Paper Abstract
In this work, we present RadioTransformer, a novel visual attention-driven transformer framework that leverages radiologists' gaze patterns and models their visuo-cognitive behavior for disease diagnosis on chest radiographs. Domain experts, such as radiologists, rely on visual information for medical image interpretation. Deep neural networks, meanwhile, have demonstrated significant promise in similar tasks, even where visual interpretation is challenging. Eye-gaze tracking has been used to capture the viewing behavior of domain experts, lending insights into the complexity of visual search. However, deep learning frameworks, even those that rely on attention mechanisms, do not leverage this rich domain information. RadioTransformer fills this critical gap by learning from radiologists' visual search patterns, encoded as 'human visual attention regions' in a cascaded global-focal transformer framework. The overall 'global' image characteristics and the more detailed 'local' features are captured by the proposed global and focal modules, respectively. We experimentally validate the efficacy of our student-teacher approach on 8 datasets involving different disease classification tasks where eye-gaze data is not available during the inference phase. Code: https://github.com/bmi-imaginelab/radiotransformer.
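The abstract describes a student-teacher setup in which gaze-derived 'human visual attention regions' supervise a model's attention at both a coarse 'global' and a fine 'focal' scale. A minimal, framework-agnostic sketch of that supervision idea is below; the function names, the average-pool downsampling, and the MSE loss are illustrative assumptions, not the paper's exact implementation:

```python
# Toy sketch: a teacher attention map derived from radiologist gaze
# supervises a student's attention map at two scales. All names and the
# choice of MSE loss are illustrative assumptions, not the paper's method.

def downsample(grid, factor):
    """Average-pool a square 2D grid by `factor` (the coarse 'global' view)."""
    n = len(grid)
    m = n // factor
    out = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            block = [grid[i * factor + di][j * factor + dj]
                     for di in range(factor) for dj in range(factor)]
            out[i][j] = sum(block) / len(block)
    return out

def attention_loss(student, teacher):
    """Mean squared error between two attention maps of the same shape."""
    n = len(student)
    total = sum((student[i][j] - teacher[i][j]) ** 2
                for i in range(n) for j in range(n))
    return total / (n * n)

# Hypothetical 4x4 teacher heatmap: gaze concentrated in the top-left region.
teacher = [[1.0, 1.0, 0.0, 0.0],
           [1.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 0.0, 0.0],
           [0.0, 0.0, 0.0, 0.0]]

# Hypothetical student attention map (imperfect match to the teacher).
student = [[0.8, 0.9, 0.1, 0.0],
           [0.9, 0.7, 0.0, 0.1],
           [0.0, 0.1, 0.0, 0.0],
           [0.1, 0.0, 0.0, 0.0]]

# 'Focal' loss at full resolution, 'global' loss on the pooled 2x2 maps.
focal_loss = attention_loss(student, teacher)
global_loss = attention_loss(downsample(student, 2), downsample(teacher, 2))
total_loss = focal_loss + global_loss
```

At inference time no gaze data is needed: only the student's own attention is used, which matches the abstract's note that eye-gaze data is unavailable during the inference phase.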