Paper Title
Contrastive Image Synthesis and Self-supervised Feature Adaptation for Cross-Modality Biomedical Image Segmentation
Paper Authors
Paper Abstract
This work presents CISFA (Contrastive Image Synthesis and Self-supervised Feature Adaptation), a novel framework that builds on image domain translation and unsupervised feature adaptation for cross-modality biomedical image segmentation. Unlike existing works, we use a one-sided generative model and add a weighted patch-wise contrastive loss between sampled patches of the input image and the corresponding synthetic image, which serves as a shape constraint. Moreover, we observe that the generated images and the input images share similar structural information but lie in different modalities. We therefore enforce contrastive losses on the generated and input images to train the encoder of a segmentation model, minimizing the discrepancy between paired images in the learned embedding space. Compared with existing works that rely on adversarial learning for feature adaptation, this approach enables the encoder to learn domain-independent features in a more explicit way. We extensively evaluate our method on segmentation tasks containing CT and MRI images of abdominal cavities and whole hearts. Experimental results show that the proposed framework not only outputs synthetic images with less distortion of organ shapes, but also outperforms state-of-the-art domain adaptation methods by a large margin.
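The patch-wise contrastive loss described in the abstract follows the general InfoNCE formulation: a patch embedding from the input image should be close to the embedding of the corresponding patch in the synthetic image (the positive) and far from embeddings of other patches (the negatives). Below is a minimal NumPy sketch of that formulation; the function name, shapes, and temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce_loss(query, positive, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss for one query patch embedding.

    `query` and `positive` are embeddings of corresponding patches from
    the input image and the synthetic image; `negatives` is a list of
    embeddings of non-corresponding patches. All names are illustrative.
    """
    def cos(a, b):
        # Cosine similarity between two embedding vectors.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Positive pair first, then all negatives; scale by temperature.
    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability before exponentiation
    probs = np.exp(logits) / np.exp(logits).sum()
    # Cross-entropy with the positive pair treated as the correct class.
    return -np.log(probs[0])
```

Minimizing this quantity over many sampled patches pulls corresponding input/synthetic patches together in the embedding space while pushing unrelated patches apart, which is what lets the loss act as a shape constraint during translation.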