Paper Title
Comparing Different Deep Learning Architectures for Classification of Chest Radiographs
Paper Authors
Paper Abstract
Chest radiographs are among the most frequently acquired images in radiology and are a common subject of computer vision research. However, most models used to classify chest radiographs are derived from openly available deep neural networks trained on large image datasets. These datasets routinely differ from chest radiographs: they consist mostly of color images and span many image classes, whereas radiographs are grayscale images that typically belong to far fewer classes. Very deep neural networks, which can represent more complex relationships between image features, might therefore not be required for the comparatively simpler task of classifying grayscale chest radiographs. We compared fifteen different artificial neural network architectures with regard to training time and performance on the openly available CheXpert dataset to identify the most suitable models for deep learning tasks on chest radiographs. We show that smaller networks such as ResNet-34, AlexNet, or VGG-16 can classify chest radiographs as precisely as deeper neural networks such as DenseNet-201 or ResNet-152, while being less computationally demanding.