Paper Title
Joint Modeling of Image and Label Statistics for Enhancing Model Generalizability of Medical Image Segmentation
Paper Authors
Paper Abstract
Although supervised deep learning has achieved promising performance in medical image segmentation, many methods cannot generalize well on unseen data, limiting their real-world applicability. To address this problem, we propose a deep learning-based Bayesian framework, which jointly models image and label statistics, utilizing the domain-irrelevant contour of a medical image for segmentation. Specifically, we first decompose an image into components of contour and basis. Then, we model the expected label as a variable related only to the contour. Finally, we develop a variational Bayesian framework to infer the posterior distributions of these variables, including the contour, the basis, and the label. The framework is implemented with neural networks and is thus referred to as deep Bayesian segmentation. Results on the task of cross-sequence cardiac MRI segmentation show that our method sets a new state of the art for model generalizability. In particular, the BayeSeg model trained on LGE MRI generalized well to T2 images and outperformed other models by a large margin, i.e., by over 0.47 in average Dice. Our code is available at https://zmiclab.github.io/projects.html.
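To make the contour/basis idea concrete, here is a minimal, illustrative sketch: it splits an image into a smooth, low-frequency "basis" and a high-frequency residual "contour" using a simple box blur. This is only a toy stand-in; in the paper's actual method the decomposition is inferred as posterior distributions by neural networks, not computed with a fixed filter, and the function names below are hypothetical.

```python
import numpy as np

def decompose(image, k=5):
    """Toy decomposition of an image into a smooth 'basis' and a residual
    'contour'. Illustrative only: BayeSeg infers these components with a
    variational Bayesian framework rather than a fixed box filter."""
    h, w = image.shape
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    # Box blur as a crude stand-in for the low-frequency, domain-specific basis.
    basis = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            basis += padded[dy:dy + h, dx:dx + w]
    basis /= k * k
    # Residual captures edges/contours, the domain-irrelevant part used for segmentation.
    contour = image - basis
    return contour, basis

rng = np.random.default_rng(0)
img = rng.random((16, 16))
contour, basis = decompose(img)
# The two components reconstruct the image exactly by construction.
assert np.allclose(contour + basis, img)
```

The intuition this sketch mirrors is that intensity statistics (the basis) vary across MRI sequences such as LGE and T2, while anatomical contours are shared, so predicting labels from the contour alone improves cross-sequence generalization.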