Paper Title
Learned Multi-layer Residual Sparsifying Transform Model for Low-dose CT Reconstruction
Authors
Abstract
Signal models based on sparse representation have received considerable attention in recent years. Compared to synthesis dictionary learning, sparsifying transform learning involves highly efficient sparse coding and operator update steps. In this work, we propose a Multi-layer Residual Sparsifying Transform (MRST) learning model wherein the transform-domain residuals are jointly sparsified over layers. In particular, the transforms for the deeper layers exploit the more intricate properties of the residual maps. We investigate the application of the learned MRST model to low-dose CT reconstruction using Penalized Weighted Least Squares (PWLS) optimization. Experimental results on Mayo Clinic data show that the MRST model outperforms conventional methods such as FBP and the PWLS methods based on the edge-preserving (EP) regularizer and the single-layer transform (ST) model, especially in preserving subtle details.
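The layered residual structure described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the transform matrices `transforms`, the hard-thresholding sparse-coding step, and the per-layer thresholds are assumptions used only to show how each layer sparsifies the transform-domain residual left by the previous layer.

```python
import numpy as np

def hard_threshold(u, thresh):
    """Keep transform coefficients with magnitude >= thresh; zero the rest."""
    return u * (np.abs(u) >= thresh)

def mrst_sparsify(x, transforms, thresholds):
    """Illustrative multi-layer residual sparsification.

    Layer 1 sparsifies W1 @ x; each deeper layer applies its transform to
    the residual map of the previous layer and sparsifies the result.
    Returns the per-layer sparse codes and residual maps.
    """
    codes, residuals = [], []
    r = x  # input patches (one patch per column)
    for W, t in zip(transforms, thresholds):
        u = W @ r                  # transform-domain representation
        z = hard_threshold(u, t)   # sparse code for this layer
        r = u - z                  # residual passed to the next layer
        codes.append(z)
        residuals.append(r)
    return codes, residuals
```

With hard thresholding, each residual map contains only coefficients smaller in magnitude than that layer's threshold, which is why deeper layers see progressively finer-scale structure to model.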