Title
A comparison of Monte Carlo dropout and bootstrap aggregation on the performance and uncertainty estimation in radiation therapy dose prediction with deep learning neural networks
Authors

Abstract
Recently, artificial intelligence technologies and algorithms have become a major focus for advancements in treatment planning for radiation therapy. As these begin to be incorporated into the clinical workflow, a major concern from clinicians is not whether the model is accurate, but whether the model can express to a human operator when it does not know if its answer is correct. We propose to use Monte Carlo dropout (MCDO) and the bootstrap aggregation (bagging) technique on deep learning models to produce uncertainty estimations for radiation therapy dose prediction. We show that both methods are capable of generating a reasonable uncertainty map, and, with our proposed scaling technique, of creating interpretable uncertainties and bounds on the prediction and any relevant metrics. Performance-wise, bagging provided statistically significant reductions in loss value and error for most of the metrics investigated in this study. The addition of bagging further reduced errors by another 0.34% for Dmean and 0.19% for Dmax, on average, when compared to the baseline framework. Overall, the bagging framework provided a significantly lower MAE of 2.62, as opposed to the baseline framework's MAE of 2.87. The usefulness of bagging, from solely a performance standpoint, depends highly on the problem and the acceptable predictive error, and its high upfront computational cost during training should be factored into the decision of whether it is advantageous to use. In terms of deployment with uncertainty estimations turned on, both frameworks offer the same performance time of about 12 seconds. As an ensemble-based metaheuristic, bagging can be used with existing machine learning architectures to improve stability and performance, and MCDO can be applied to any deep learning model that has dropout as part of its architecture.
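To illustrate the general idea behind MCDO-style uncertainty estimation, the sketch below keeps dropout active at inference time and runs multiple stochastic forward passes, taking the per-output mean as the prediction and the standard deviation as the uncertainty. This is a minimal toy example with a hypothetical one-hidden-layer network and random placeholder weights, not the architecture, scaling technique, or dose-prediction model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholder weights for a tiny 1-hidden-layer network.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.2):
    """One stochastic forward pass with dropout kept ON (inverted dropout)."""
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop   # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)         # rescale surviving units
    return h @ W2

x = rng.normal(size=(1, 4))               # one toy input sample
T = 200                                   # number of Monte Carlo passes
samples = np.stack([forward(x) for _ in range(T)])  # shape (T, 1, 1)

mean_pred = samples.mean(axis=0)          # point estimate of the output
uncertainty = samples.std(axis=0)         # spread across stochastic passes
```

Bagging produces an analogous spread by averaging over an ensemble of models trained on bootstrap resamples of the training set instead of over dropout masks; in both cases the disagreement between ensemble members serves as the uncertainty signal.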