Paper Title
Deep Space-Time Video Upsampling Networks
Paper Authors
Paper Abstract
Video super-resolution (VSR) and frame interpolation (FI) are traditional computer vision problems, and their performance has recently improved by incorporating deep learning. In this paper, we investigate the problem of jointly upsampling videos in both space and time, which is becoming increasingly important with advances in display systems. One solution is to run VSR and FI independently, one after the other. This is highly inefficient, as a heavy deep neural network (DNN) is involved in each step. To address this, we propose an end-to-end DNN framework for space-time video upsampling that efficiently merges VSR and FI into a joint framework. Within this framework, a novel weighting scheme is proposed to fuse input frames effectively without explicit motion compensation, enabling efficient processing of videos. Our method produces better results both quantitatively and qualitatively, while reducing the computation time (×7 faster) and the number of parameters (by 30%) compared to the baselines.
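As a rough illustration of how learned per-pixel weights can fuse neighboring frames without explicit motion compensation, consider the minimal PyTorch sketch below. This is not the paper's actual architecture: the module name WeightedFrameFusion, the layer sizes, and the softmax-over-frames blending are all assumptions made for illustration only.

# Minimal sketch (illustrative, not the authors' network): fuse T input
# frames with learned per-pixel weights instead of optical-flow warping.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFrameFusion(nn.Module):
    """Predicts per-pixel, per-frame weights from the stacked input
    frames and returns their weighted blend (one fused frame)."""
    def __init__(self, num_frames: int = 2, channels: int = 3):
        super().__init__()
        # Hypothetical small CNN; 32 hidden channels chosen arbitrarily.
        self.weight_net = nn.Sequential(
            nn.Conv2d(num_frames * channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_frames, kernel_size=3, padding=1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, C, H, W), e.g. two adjacent low-resolution frames.
        b, t, c, h, w = frames.shape
        stacked = frames.reshape(b, t * c, h, w)
        # Softmax over the frame axis yields a convex combination per
        # pixel, so no explicit alignment of the inputs is required.
        weights = F.softmax(self.weight_net(stacked), dim=1)  # (B, T, H, W)
        fused = (weights.unsqueeze(2) * frames).sum(dim=1)    # (B, C, H, W)
        return fused

if __name__ == "__main__":
    fusion = WeightedFrameFusion(num_frames=2)
    lr_pair = torch.randn(1, 2, 3, 64, 64)  # two adjacent LR frames
    print(fusion(lr_pair).shape)            # torch.Size([1, 3, 64, 64])

The design point this sketch tries to capture is that predicting blending weights is much cheaper than estimating and applying optical flow per frame pair, which is consistent with the efficiency gains the abstract reports.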