Paper Title

Quantifying Model Uncertainty in Inverse Problems via Bayesian Deep Gradient Descent

Authors

Riccardo Barbano, Chen Zhang, Simon Arridge, Bangti Jin

Abstract


Recent advances in reconstruction methods for inverse problems leverage powerful data-driven models, e.g., deep neural networks. These techniques have demonstrated state-of-the-art performances for several imaging tasks, but they often do not provide uncertainty on the obtained reconstruction. In this work, we develop a scalable, data-driven, knowledge-aided computational framework to quantify the model uncertainty via Bayesian neural networks. The approach builds on, and extends deep gradient descent, a recently developed greedy iterative training scheme, and recasts it within a probabilistic framework. Scalability is achieved by being hybrid in the architecture: only the last layer of each block is Bayesian, while the others remain deterministic, and by being greedy in training. The framework is showcased on one representative medical imaging modality, viz. computed tomography with either sparse view or limited view data, and exhibits competitive performance with respect to state-of-the-art benchmarks, e.g., total variation, deep gradient descent and learned primal-dual.
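The abstract's key ideas — unrolling gradient descent into learned blocks and making only the last layer of each block Bayesian — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy linear forward operator, layer sizes, block count, and Gaussian weight posterior below are all illustrative assumptions, and the weights are untrained, so only the mechanics (a learned update per block, Monte Carlo sampling of the Bayesian last layers to estimate model uncertainty) are meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x + noise (dimensions are illustrative).
n, m = 16, 8
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = A @ x_true + 0.01 * rng.normal(size=m)

def data_grad(x):
    """Gradient of the data-fidelity term 0.5 * ||A x - y||^2."""
    return A.T @ (A @ x - y)

class HybridBlock:
    """One unrolled gradient-descent block, hybrid in the sense of the
    abstract: deterministic hidden layer, Bayesian (Gaussian) last layer."""
    def __init__(self, n, hidden=32):
        # Deterministic first layer: point-estimate weights.
        self.W1 = 0.1 * rng.normal(size=(hidden, 2 * n))
        # Bayesian last layer: mean and log-std of a Gaussian over weights.
        self.mu = 0.1 * rng.normal(size=(n, hidden))
        self.log_sigma = np.full((n, hidden), -3.0)

    def __call__(self, x, sample=True):
        # The block sees the current iterate and its data-fidelity gradient.
        h = np.tanh(self.W1 @ np.concatenate([x, data_grad(x)]))
        W2 = self.mu
        if sample:  # draw last-layer weights to propagate model uncertainty
            W2 = self.mu + np.exp(self.log_sigma) * rng.normal(size=self.mu.shape)
        return x - W2 @ h  # learned update of the iterate

blocks = [HybridBlock(n) for _ in range(3)]

def reconstruct(sample=True):
    x = np.zeros(n)
    for block in blocks:
        x = block(x, sample=sample)
    return x

# Monte Carlo over the Bayesian last layers gives a mean reconstruction
# and a per-component standard deviation: the model-uncertainty estimate.
samples = np.stack([reconstruct() for _ in range(50)])
x_mean, x_std = samples.mean(axis=0), samples.std(axis=0)
```

In the paper's setting the blocks are convolutional networks trained greedily (one block at a time) on tomographic data; here the point is only that repeated sampling of the Bayesian layers turns a single deterministic reconstruction into a distribution over reconstructions.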
