Paper Title

Fully Automated Left Atrium Segmentation from Anatomical Cine Long-axis MRI Sequences using Deep Convolutional Neural Network with Unscented Kalman Filter

Authors

Xiaoran Zhang, Michelle Noga, David Glynn Martin, Kumaradevan Punithakumar

Abstract

This study proposes a fully automated approach for left atrial segmentation from routine cine long-axis cardiac magnetic resonance image sequences using deep convolutional neural networks and Bayesian filtering. The proposed approach consists of a classification network that automatically detects the type of long-axis sequence, and three different convolutional neural network models followed by unscented Kalman filtering (UKF) that delineate the left atrium. Instead of training and predicting on all long-axis sequence types together, the proposed approach first identifies the image sequence as a 2-, 3- or 4-chamber view, and then performs prediction with the neural network trained for that particular sequence type. The datasets were acquired retrospectively, and ground truth manual segmentations were provided by an expert radiologist. In addition to the neural-network-based classification and segmentation, another neural network is trained and utilized to select image sequences for further processing with the UKF, which imposes temporal consistency over the cardiac cycle. A cyclic dynamic model with time-varying angular frequency is introduced in the UKF to characterize the variations in cardiac motion during image scanning. The proposed approach was trained and evaluated separately with varying amounts of training data, using images acquired from 20, 40, 60 and 80 patients. Evaluations over 1515 images, with an equal number of images from each chamber group, acquired from an additional 20 patients demonstrated that the proposed model outperformed the state of the art and yielded mean Dice coefficient values of 94.1%, 93.7% and 90.1% for the 2-, 3- and 4-chamber sequences, respectively, when trained with datasets from 80 patients.
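The abstract describes a three-stage pipeline: view classification, view-specific CNN segmentation, and optional UKF-based temporal smoothing gated by a quality network. The following is a minimal Python sketch of that control flow plus the Dice metric used for evaluation. All model objects and names (view_classifier, seg_models, quality_net, ukf_smooth) are hypothetical placeholders, not the authors' code; only the overall structure and the Dice coefficient follow from the text, and the cyclic dynamic model with time-varying angular frequency used inside the UKF is not reproduced here.

```python
# Sketch of the segmentation pipeline described in the abstract.
# The callables passed in stand for the trained networks and the UKF
# post-processing step; they are assumptions for illustration only.
import numpy as np


def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks, the evaluation metric in the paper."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0


def segment_sequence(frames, view_classifier, seg_models, quality_net, ukf_smooth):
    """Classify the long-axis view, segment each frame with the view-specific
    CNN, and optionally enforce temporal consistency over the cardiac cycle."""
    view = view_classifier(frames[0])           # e.g. '2ch', '3ch' or '4ch' (assumed labels)
    masks = [seg_models[view](f) for f in frames]
    if quality_net(frames):                     # quality network gates the UKF step
        masks = ukf_smooth(masks)               # temporal smoothing of the contours
    return view, masks
```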
