Paper Title
Video Face Super-Resolution with Motion-Adaptive Feedback Cell
Paper Authors
Paper Abstract
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs). Current state-of-the-art CNN methods usually treat the VSR problem as a large number of separate multi-frame super-resolution tasks, in which a batch of low-resolution (LR) frames is used to generate a single high-resolution (HR) frame, and a sliding window run over the entire video yields a series of HR frames. However, due to the complex temporal dependency between frames, the quality of the reconstructed HR frames becomes worse as the number of LR input frames increases. The reason is that these methods lack the ability to model complex temporal dependencies and struggle to provide accurate motion estimation and compensation for the VSR process, which makes the performance degrade drastically when the motion in the frames is complex. In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block that can efficiently capture motion compensation and feed it back to the network in an adaptive way. Our approach efficiently exploits inter-frame motion information, so the network's dependence on explicit motion estimation and compensation methods can be avoided. In addition, benefiting from the excellent nature of MAFC, the network can achieve better performance even in extremely complex motion scenarios. Extensive evaluations and comparisons validate the strengths of our approach, and the experimental results demonstrate that the proposed framework outperforms state-of-the-art methods.
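The abstract does not spell out the internal design of the MAFC. As a minimal sketch of the general idea it describes, namely deriving a motion cue directly from adjacent LR frames and feeding it back to modulate the network adaptively without an explicit motion estimation/compensation stage, the following PyTorch snippet illustrates one possible feedback cell. The use of a raw frame difference as the motion cue and the sigmoid-gated channel re-weighting are assumptions for illustration only, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class MotionAdaptiveFeedbackCell(nn.Module):
    """Illustrative sketch (not the paper's exact design): derive a motion cue
    from the difference of two adjacent LR frames and use it to re-weight the
    feature maps of the reconstruction branch in an adaptive way."""

    def __init__(self, feat_channels: int, hidden: int = 32):
        super().__init__()
        self.motion_encoder = nn.Sequential(
            nn.Conv2d(3, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-pixel, per-channel modulation weights in [0, 1]
        )

    def forward(self, feat: torch.Tensor, frame_t: torch.Tensor,
                frame_prev: torch.Tensor) -> torch.Tensor:
        # Frame difference as a crude, estimation-free motion cue.
        motion = frame_t - frame_prev
        gate = self.motion_encoder(motion)
        # Feed the motion information back by adaptively re-weighting features.
        return feat * gate


if __name__ == "__main__":
    cell = MotionAdaptiveFeedbackCell(feat_channels=64)
    feat = torch.randn(1, 64, 32, 32)    # intermediate reconstruction features
    lr_t = torch.randn(1, 3, 32, 32)     # current LR frame
    lr_prev = torch.randn(1, 3, 32, 32)  # previous LR frame
    out = cell(feat, lr_t, lr_prev)
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In a sliding-window VSR pipeline, such a cell would be applied once per neighboring frame pair inside the window, so the modulation adapts to however much motion each pair exhibits; the actual MAFC design should be taken from the paper itself.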