Paper Title


Deep Learning to Segment Pelvic Bones: Large-scale CT Datasets and Baseline Models

Authors

Pengbo Liu, Hu Han, Yuanqi Du, Heqin Zhu, Yinhao Li, Feng Gu, Honghu Xiao, Jun Li, Chunpeng Zhao, Li Xiao, Xinbao Wu, S. Kevin Zhou

Abstract


Purpose: Pelvic bone segmentation in CT has always been an essential step in the clinical diagnosis and surgery planning of pelvic bone diseases. Existing methods for pelvic bone segmentation are either hand-crafted or semi-automatic and achieve limited accuracy when dealing with image appearance variations due to multi-site domain shift, the presence of contrasted vessels, coprolith and chyme, bone fractures, low dose, metal artifacts, etc. Due to the lack of a large-scale annotated pelvic CT dataset, deep learning methods have not been fully explored. Methods: In this paper, we aim to bridge the data gap by curating a large pelvic CT dataset pooled from multiple sources and different manufacturers, including 1,184 CT volumes and over 320,000 slices with different resolutions and a variety of the above-mentioned appearance variations. Then we propose, for the first time to the best of our knowledge, to learn a deep multi-class network for segmenting the lumbar spine, sacrum, left hip, and right hip from multiple-domain images simultaneously, so as to obtain more effective and robust feature representations. Finally, we introduce a post-processing tool based on the signed distance function (SDF) to eliminate false predictions while retaining correctly predicted bone fragments. Results: Extensive experiments on our dataset demonstrate the effectiveness of our automatic method, achieving an average Dice of 0.987 for metal-free volumes. The SDF post-processor yields a 10.5% decrease in Hausdorff distance by retaining important bone fragments in the post-processing phase. Conclusion: We believe this large-scale dataset will promote the development of the whole community, and we plan to open source the images, annotations, code, and trained baseline models at https://github.com/ICT-MIRACLE-lab/CTPelvic1K.
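The key idea behind the SDF post-processor — dropping spurious predictions far from the main bone while keeping nearby fragments that plain largest-connected-component filtering would discard — can be sketched as follows. This is a simplified illustration, not the authors' released implementation; the Euclidean distance transform stands in for the signed distance function, and the `max_dist` threshold is an assumed, illustrative parameter.

```python
import numpy as np
from scipy import ndimage


def sdf_filter(mask, max_dist=10.0):
    """Keep the largest connected component plus any fragment lying
    within max_dist voxels of it; drop distant false positives.

    A minimal sketch of SDF-style post-processing: unlike keeping
    only the largest component, nearby bone fragments survive.
    """
    labels, n = ndimage.label(mask)
    if n <= 1:
        return mask.astype(bool)
    # Identify the largest component (assumed to be the true bone).
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    main = int(np.argmax(sizes)) + 1
    # Distance of every voxel to the main component; a truncated
    # stand-in for the signed distance function of the main bone.
    dist = ndimage.distance_transform_edt(labels != main)
    keep = np.zeros_like(mask, dtype=bool)
    for i in range(1, n + 1):
        comp = labels == i
        if i == main or dist[comp].min() <= max_dist:
            keep |= comp
    return keep
```

For example, a fragment a few voxels away from the main bone is retained, while an isolated blob far from it is removed.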
