Paper Title

Image Superresolution using Scale-Recurrent Dense Network

Paper Authors

Purohit, Kuldeep, Mandal, Srimanta, Rajagopalan, A. N.

Paper Abstract

Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR). The boost in performance can be attributed to the presence of residual or dense connections within the intermediate layers of these networks. The efficient combination of such connections can reduce the number of parameters drastically while maintaining the restoration quality. In this paper, we propose a scale-recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks, RDBs) that allow extraction of abundant local features from the image. Our scale-recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient than current state-of-the-art approaches. To further improve the performance of our network, we employ multiple residual connections in intermediate layers (referred to as Multi-Residual Dense Blocks), which improves gradient propagation in existing layers. Recent works have discovered that conventional loss functions can guide a network to produce results which have high PSNRs but are perceptually inferior. We mitigate this issue by utilizing a Generative Adversarial Network (GAN) based framework and deep feature (VGG) losses to train our network. We experimentally demonstrate that different weighted combinations of the VGG loss and the adversarial loss enable our network outputs to traverse along the perception-distortion curve. The proposed networks perform favorably against existing methods, both perceptually and objectively (in terms of PSNR), with fewer parameters.
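To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch: (1) a Residual Dense Block, i.e. densely connected convolutional layers fused by a 1x1 convolution with a local residual connection, and (2) a weighted combination of a deep-feature (VGG-style) loss and an adversarial loss, where the weight controls the trade-off along the perception-distortion curve. The layer counts, channel widths, loss form, and the weight `lambda_adv` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a Residual Dense Block and a VGG + adversarial loss mix.
# Hyperparameters and the exact loss formulation are assumptions for illustration.
import torch
import torch.nn as nn


class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            # Each layer sees the concatenation of the block input and all
            # previously produced feature maps (dense connectivity).
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth
        # 1x1 convolution fuses the concatenated local features back to `channels`.
        self.fusion = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # Local residual connection around the whole block.
        return x + self.fusion(torch.cat(features, dim=1))


def combined_loss(sr, hr, disc_logits_fake, vgg_features, lambda_adv=1e-3):
    """Weighted sum of a VGG feature-space loss and an adversarial loss.

    `vgg_features` is any fixed feature extractor (e.g. a truncated VGG network);
    `disc_logits_fake` are discriminator logits for the generated images.
    Varying `lambda_adv` moves the output along the perception-distortion trade-off.
    """
    perceptual = nn.functional.l1_loss(vgg_features(sr), vgg_features(hr))
    adversarial = nn.functional.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    return perceptual + lambda_adv * adversarial


if __name__ == "__main__":
    block = ResidualDenseBlock()
    x = torch.randn(1, 64, 32, 32)
    print(block(x).shape)  # torch.Size([1, 64, 32, 32])
```

In a scale-recurrent design, blocks such as this are shared and re-applied across successive upscaling stages, which is what keeps the parameter count low relative to training a separate network per scale factor.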
