Paper Title

Whole Slide Images based Cancer Survival Prediction using Attention Guided Deep Multiple Instance Learning Networks

Authors

Yao, Jiawen, Zhu, Xinliang, Jonnagaddala, Jitendra, Hawkins, Nicholas, Huang, Junzhou

Abstract

Traditional image-based survival prediction models rely on discriminative patch labeling, which makes those methods not scalable to large datasets. Recent studies have shown that the Multiple Instance Learning (MIL) framework is useful for histopathological images when no annotations are available for the classification task. Different from current image-based survival models that are limited to key patches or clusters derived from Whole Slide Images (WSIs), we propose Deep Attention Multiple Instance Survival Learning (DeepAttnMISL), introducing both a siamese MI-FCN and attention-based MIL pooling to efficiently learn imaging features from the WSI and then aggregate WSI-level information to the patient level. Attention-based aggregation is more flexible and adaptive than the aggregation techniques in recent survival models. We evaluated our method on two large cancer whole slide image datasets, and our results suggest that the proposed approach is more effective and suitable for large datasets and has better interpretability in locating the important patterns and features that contribute to accurate cancer survival prediction. The proposed framework can also be used to assess an individual patient's risk and thus assist in delivering personalized medicine. Code is available at https://github.com/uta-smile/DeepAttnMISL_MEDIA.
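The attention-based MIL pooling mentioned in the abstract computes a learned, softmax-normalized weight per instance (patch cluster) and aggregates instance features as their weighted average, so the network can indicate which regions drove the prediction. A minimal numpy sketch of this pooling step is below; the parameter names `V` and `w` are hypothetical stand-ins for learned attention weights, and this is an illustration of the general technique, not the authors' exact DeepAttnMISL implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling (ungated variant).

    H: (K, d) instance-level features from WSI patches/clusters
    V: (L, d) attention projection matrix (hypothetical learned parameter)
    w: (L,)   attention scoring vector (hypothetical learned parameter)
    Returns the bag-level feature (d,) and the attention weights (K,).
    """
    scores = np.tanh(H @ V.T) @ w   # (K,) unnormalized attention scores
    a = softmax(scores)             # (K,) weights, sum to 1 over instances
    z = a @ H                       # (d,) attention-weighted bag feature
    return z, a

# toy usage: a "bag" of 5 instances with 8-dim features
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
V = rng.normal(size=(4, 8))
w = rng.normal(size=4)
z, a = attention_mil_pool(H, V, w)
```

The attention weights `a` are what give the method its interpretability: instances with large weights can be mapped back to their WSI locations to highlight regions associated with the survival prediction.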
