Paper title
Fully automated analysis of muscle architecture from B-mode ultrasound images with deep learning
Paper authors
Paper abstract
B-mode ultrasound is commonly used to image musculoskeletal tissues, but one major bottleneck is data interpretation, and analyses of muscle thickness, pennation angle and fascicle length are often still performed manually. In this study we trained deep neural networks (based on U-net) to detect muscle fascicles and aponeuroses using a set of labelled musculoskeletal ultrasound images. We then compared neural network predictions on new, unseen images to those obtained via manual analysis and two existing semi- or fully-automated analysis approaches (SMA and Ultratrack). With a GPU, inference time for a single image with the new approach was around 0.7 s, compared to 4.6 s with a CPU. Our method detects the locations of the superficial and deep aponeuroses, as well as multiple fascicle fragments per image. For single images, the method gave results similar to those produced by a non-trainable automated method (SMA; mean difference in fascicle length: 1.1 mm) or human manual analysis (mean difference: 2.1 mm). Between-method differences in pennation angle were within 1$^\circ$, and mean differences in muscle thickness were less than 0.2 mm. Similarly, for videos, there was strong agreement between the results produced with Ultratrack and our method (mean ICC of 0.73), even though the analysed trials included hundreds of frames. Our method is fully automated and open source, and can estimate fascicle length, pennation angle and muscle thickness from single images or videos, and from multiple superficial muscles. We also provide all necessary code and training data for custom model development.
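To make the geometric step after detection concrete, the following minimal Python sketch (illustrative only, not the authors' released code; the function name and the planar, straight-fascicle assumptions are ours) shows how muscle thickness, pennation angle and fascicle length could be derived once the superficial and deep aponeuroses and a fascicle orientation have been detected:

import numpy as np

# Hypothetical illustration, not the released pipeline: given the depths of the
# detected superficial and deep aponeuroses and the orientation of a fascicle
# relative to the deep aponeurosis, estimate basic architecture parameters with
# simple trigonometry. Assumes a planar muscle with parallel, straight aponeuroses.
def architecture_from_detections(superficial_y_mm, deep_y_mm, fascicle_angle_deg):
    thickness_mm = deep_y_mm - superficial_y_mm        # muscle thickness
    pennation_deg = fascicle_angle_deg                 # pennation angle
    # Extrapolate the fascicle between the two aponeuroses to estimate its length.
    fascicle_length_mm = thickness_mm / np.sin(np.deg2rad(pennation_deg))
    return thickness_mm, pennation_deg, fascicle_length_mm

if __name__ == "__main__":
    t, p, fl = architecture_from_detections(5.0, 25.0, 20.0)
    print(f"thickness = {t:.1f} mm, pennation = {p:.1f} deg, fascicle length = {fl:.1f} mm")

Because the abstract notes that multiple fascicle fragments are detected per image, a real implementation would aggregate the orientations of several fragments (and account for curved aponeuroses) rather than rely on a single fascicle angle as in this sketch.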