Paper Title

An Attention-based Multi-Scale Feature Learning Network for Multimodal Medical Image Fusion

Paper Authors

Meng Zhou, Xiaolan Xu, Yuxuan Zhang

Paper Abstract

Medical images play an important role in clinical applications. Multimodal medical images can provide rich information about a patient for physicians to use in diagnosis. Image fusion techniques synthesize the complementary information from multimodal images into a single image, sparing radiologists from switching back and forth between different images and saving substantial time in the diagnostic process. In this paper, we introduce a novel Dilated Residual Attention Network for the medical image fusion task. Our network is capable of extracting multi-scale deep semantic features. Furthermore, we propose a novel fixed fusion strategy, termed the Softmax-based weighted strategy, which is based on Softmax weights and the matrix nuclear norm. Extensive experiments show that our proposed network and fusion strategy exceed state-of-the-art performance compared with reference image fusion methods on four commonly used fusion metrics.
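The abstract does not spell out the exact fusion rule. As a rough sketch of one plausible reading (the function name softmax_nuclear_norm_fusion is hypothetical, and the assumption is that each modality's feature map is weighted by a Softmax over its matrix nuclear norm), the following NumPy snippet illustrates the idea:

```python
import numpy as np

def softmax_nuclear_norm_fusion(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: fuse two 2-D feature maps with Softmax weights
    derived from their matrix nuclear norms (sum of singular values)."""
    # Nuclear norm of each feature map.
    norms = np.array([
        np.linalg.norm(feat_a, ord="nuc"),
        np.linalg.norm(feat_b, ord="nuc"),
    ])

    # Softmax over the two norms (max subtracted for numerical stability),
    # yielding fusion weights that sum to 1.
    weights = np.exp(norms - norms.max())
    weights = weights / weights.sum()

    # Weighted combination of the two source feature maps.
    return weights[0] * feat_a + weights[1] * feat_b

if __name__ == "__main__":
    a = np.random.rand(64, 64)  # e.g. a feature map from modality A
    b = np.random.rand(64, 64)  # e.g. a feature map from modality B
    fused = softmax_nuclear_norm_fusion(a, b)
    print(fused.shape)  # (64, 64)
```

This is only an illustration of the weighting principle named in the abstract, not the authors' implementation; the paper applies the strategy to the deep features produced by its dilated residual attention network.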
