Paper Title
CorrSigNet: Learning CORRelated Prostate Cancer SIGnatures from Radiology and Pathology Images for Improved Computer Aided Diagnosis
Paper Authors
Paper Abstract
Magnetic Resonance Imaging (MRI) is widely used for screening and staging prostate cancer. However, many prostate cancers have subtle features which are not easily identifiable on MRI, resulting in missed diagnoses and alarming variability in radiologist interpretation. Machine learning models have been developed in an effort to improve cancer identification, but current models localize cancer using MRI-derived features, while failing to consider the disease pathology characteristics observed on resected tissue. In this paper, we propose CorrSigNet, an automated two-step model that localizes prostate cancer on MRI by capturing the pathology features of cancer. First, the model learns MRI signatures of cancer that are correlated with corresponding histopathology features using Common Representation Learning. Second, the model uses the learned correlated MRI features to train a Convolutional Neural Network to localize prostate cancer. The histopathology images are used only in the first step to learn the correlated features. Once learned, these correlated features can be extracted from MRI of new patients (without histopathology or surgery) to localize cancer. We trained and validated our framework on a unique dataset of 75 patients with 806 slices who underwent MRI followed by prostatectomy surgery. We tested our method on an independent test set of 20 prostatectomy patients (139 slices, 24 cancerous lesions, 1.12M pixels) and achieved a per-pixel sensitivity of 0.81, specificity of 0.71, AUC of 0.86 and a per-lesion AUC of $0.96 \pm 0.07$, outperforming the current state-of-the-art accuracy in predicting prostate cancer using MRI.
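The abstract outlines a two-step pipeline: first, common representation learning aligns MRI features with registered histopathology features; second, a CNN trained on the learned correlated MRI features localizes cancer per pixel. Below is a minimal PyTorch sketch of that idea, assuming a CorrNet-style negative-correlation objective; all module names, layer sizes, and the exact loss form are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of the two-step CorrSigNet idea described in the
# abstract. Names, layer sizes, and the loss are assumptions, not the
# paper's code.
import torch
import torch.nn as nn


class CommonRepresentation(nn.Module):
    """Step 1: project per-pixel MRI and histopathology features into a
    shared space where corresponding pixels are maximally correlated."""

    def __init__(self, mri_dim: int, histo_dim: int, shared_dim: int = 64):
        super().__init__()
        self.mri_proj = nn.Linear(mri_dim, shared_dim)
        self.histo_proj = nn.Linear(histo_dim, shared_dim)

    def forward(self, mri_feats: torch.Tensor, histo_feats: torch.Tensor):
        # Inputs: (N, mri_dim) and (N, histo_dim) feature vectors for N
        # spatially registered MRI/histopathology pixel pairs.
        return self.mri_proj(mri_feats), self.histo_proj(histo_feats)


def correlation_loss(z_mri: torch.Tensor, z_histo: torch.Tensor,
                     eps: float = 1e-8) -> torch.Tensor:
    """Negative mean per-dimension Pearson correlation between the two
    views; minimizing it maximizes correlation across the shared space."""
    zm = z_mri - z_mri.mean(dim=0)
    zh = z_histo - z_histo.mean(dim=0)
    corr = (zm * zh).mean(dim=0) / (zm.std(dim=0) * zh.std(dim=0) + eps)
    return -corr.mean()


class LocalizerCNN(nn.Module):
    """Step 2: a small CNN mapping correlated MRI feature maps to
    per-pixel cancer logits (apply sigmoid for probability maps)."""

    def __init__(self, in_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```

At inference the histopathology branch is discarded: a new patient's MRI features pass through `mri_proj` alone, and the resulting correlated feature maps feed the CNN, consistent with the abstract's point that histopathology images are needed only during the first training step.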