Paper Title
Attention-based Saliency Hashing for Ophthalmic Image Retrieval
Paper Authors
Paper Abstract
Deep hashing methods have proven effective for large-scale medical image search, assisting reference-based diagnosis for clinicians. However, although the salient region plays the most discriminative role in an ophthalmic image, existing deep hashing methods do not fully exploit the learning ability of the deep network to capture the features of salient regions. Ophthalmic images of different grades or classes may share a similar overall appearance yet have subtle differences that can be distinguished by mining salient regions. To address this issue, we propose a novel end-to-end network, named Attention-based Saliency Hashing (ASH), for learning compact hash-codes to represent ophthalmic images. ASH embeds a spatial-attention module to focus more on the representation of salient regions and to highlight their essential role in differentiating ophthalmic images. Benefiting from the spatial-attention module, the information of salient regions can be mapped into the hash-code for similarity calculation. In the training stage, image pairs are fed through the network with shared weights, and a pairwise loss is designed to maximize the discriminability of the hash-codes. In the retrieval stage, ASH obtains the hash-code of an input image in an end-to-end manner; the hash-code is then used for similarity calculation to return the most similar images. Extensive experiments on ophthalmic image datasets of two different modalities demonstrate that the proposed ASH further improves retrieval performance over state-of-the-art deep hashing methods, owing to the contribution of the spatial-attention module.
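The retrieval pipeline the abstract describes — weight spatial positions by attention so salient regions dominate, quantize the pooled descriptor into a binary hash-code, then rank database images by Hamming distance — can be sketched in miniature. This is a minimal illustration only: the function names, the mean-activation attention score, and the zero-threshold binarization are assumptions for demonstration, not the paper's actual ASH architecture.

```python
import math

def spatial_attention(feature_map):
    """Softmax attention over the spatial positions of a C x H x W feature map
    (nested lists). Each position is scored by its channel-wise mean activation,
    so strongly activated (salient) regions dominate the pooled descriptor.
    The scoring rule is a stand-in; ASH's module is not specified in the abstract."""
    C, H, W = len(feature_map), len(feature_map[0]), len(feature_map[0][0])
    scores = [[sum(feature_map[c][i][j] for c in range(C)) / C
               for j in range(W)] for i in range(H)]
    flat = [s for row in scores for s in row]
    m = max(flat)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in flat]
    total = sum(exps)
    weights = [e / total for e in exps]
    # attention-weighted global pooling -> C-dimensional real-valued descriptor
    return [sum(weights[i * W + j] * feature_map[c][i][j]
                for i in range(H) for j in range(W))
            for c in range(C)]

def binarize(descriptor):
    """Threshold quantization of the descriptor into a binary hash-code."""
    return [1 if d > 0.0 else 0 for d in descriptor]

def hamming(a, b):
    """Hamming distance between two equal-length hash-codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, database, k=3):
    """Rank a database of (image_id, hash_code) pairs by Hamming distance
    to the query code and return the ids of the k most similar images."""
    ranked = sorted(database, key=lambda item: hamming(query_code, item[1]))
    return [item[0] for item in ranked[:k]]
```

Because similarity reduces to Hamming distance on short binary codes, retrieval stays cheap even at large database scale, which is the motivation for hashing-based search.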