Paper Title

Learning Detail-Structure Alternative Optimization for Blind Super-Resolution

Authors

Feng Li, Yixuan Wu, Huihui Bai, Weisi Lin, Runmin Cong, Yao Zhao

Abstract

Existing convolutional neural network (CNN) based image super-resolution (SR) methods have achieved impressive performance on the bicubic kernel, which is not valid for handling unknown degradations in real-world applications. Recent blind SR methods suggest reconstructing SR images relying on blur kernel estimation. However, their results still exhibit visible artifacts and detail distortion due to the estimation errors. To alleviate these problems, in this paper, we propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization without incorporating a blur kernel prior for blind SR. Specifically, in our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures. The DSMM consists of two components: a detail restoration unit (DRU) and a structure modulation unit (SMU). The former aims at regressing the intermediate HR detail reconstruction from LR structural contexts, and the latter performs structural context modulation conditioned on the learned detail maps at both HR and LR spaces. Besides, we use the output of DSMM as the hidden state and design our DSSR architecture from a recurrent convolutional neural network (RCNN) view. In this way, the network can alternately optimize the image details and structural contexts, achieving co-optimization across time. Moreover, equipped with the recurrent connection, our DSSR allows low- and high-level feature representations to complement each other by observing previous HR details and contexts at every unrolling time. Extensive experiments on synthetic datasets and real-world images demonstrate that our method achieves state-of-the-art performance against existing methods. The source code can be found at https://github.com/Arcananana/DSSR.
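The abstract describes the overall recurrence (a DSMM whose DRU maps LR structural contexts to HR detail features, an SMU that modulates the LR context with those details, and the DSMM output reused as the hidden state) but gives no implementation details. The following is a minimal PyTorch-style sketch of that alternating loop only; all concrete choices (module names such as DetailRestorationUnit/StructureModulationUnit, channel widths, bilinear up/downsampling, the fusion convolution, and the number of unrolling steps) are assumptions for illustration and are not taken from the paper. The authors' actual implementation is available at the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetailRestorationUnit(nn.Module):
    """Hypothetical DRU: regresses HR detail features from LR structural contexts."""
    def __init__(self, channels, scale):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, lr_context):
        # Lift the LR structural context to HR space, then regress detail features.
        hr = F.interpolate(lr_context, scale_factor=self.scale,
                           mode='bilinear', align_corners=False)
        return self.body(hr)


class StructureModulationUnit(nn.Module):
    """Hypothetical SMU: modulates LR structural contexts conditioned on the detail map."""
    def __init__(self, channels, scale):
        super().__init__()
        self.scale = scale
        self.fuse = nn.Conv2d(channels * 2, channels, 3, padding=1)

    def forward(self, lr_context, hr_detail):
        # Bring the HR detail map back to LR space and condition the context on it.
        lr_detail = F.interpolate(hr_detail, scale_factor=1.0 / self.scale,
                                  mode='bilinear', align_corners=False)
        return self.fuse(torch.cat([lr_context, lr_detail], dim=1)) + lr_context


class DSSRSketch(nn.Module):
    """Unrolled detail-structure alternating optimization; the DSMM output acts as the hidden state."""
    def __init__(self, channels=64, scale=4, steps=4):
        super().__init__()
        self.steps = steps
        self.scale = scale
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.dru = DetailRestorationUnit(channels, scale)
        self.smu = StructureModulationUnit(channels, scale)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr):
        state = self.head(lr)                   # initial LR structural context
        hr_detail = None
        for _ in range(self.steps):             # recurrent unrolling across time
            hr_detail = self.dru(state)         # DRU: LR context -> HR detail features
            state = self.smu(state, hr_detail)  # SMU: modulate context with the detail map
        sr = self.tail(hr_detail)
        # Global residual over a bilinearly upsampled input (an assumed, common design choice).
        return sr + F.interpolate(lr, scale_factor=self.scale,
                                  mode='bilinear', align_corners=False)


if __name__ == "__main__":
    model = DSSRSketch(channels=64, scale=4, steps=4)
    out = model(torch.randn(1, 3, 32, 32))
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```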
