Paper Title

Low Light Video Enhancement using Synthetic Data Produced with an Intermediate Domain Mapping

Authors

Danai Triantafyllidou, Sean Moran, Steven McDonagh, Sarah Parisot, Gregory Slabaugh

Abstract

Advances in low-light video RAW-to-RGB translation are opening up the possibility of fast low-light imaging on commodity devices (e.g. smartphone cameras) without the need for a tripod. However, it is challenging to collect the required paired short-long exposure frames to learn a supervised mapping. Current approaches require a specialised rig or the use of static videos with no subject or object motion, resulting in datasets that are limited in size, diversity, and motion. We address the data collection bottleneck for low-light video RAW-to-RGB by proposing a data synthesis mechanism, dubbed SIDGAN, that can generate abundant dynamic video training pairs. SIDGAN maps videos found 'in the wild' (e.g. internet videos) into a low-light (short, long exposure) domain. By generating dynamic video data synthetically, we enable a recently proposed state-of-the-art RAW-to-RGB model to attain higher image quality (improved colour, reduced artifacts) and improved temporal consistency, compared to the same model trained with only static real video data.
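
To make the data-synthesis idea concrete, below is a minimal illustrative sketch of the two-hop mapping the abstract describes: an "in the wild" RGB frame is first mapped into an intermediate domain (long-exposure RAW), and from there into the short-exposure RAW domain, yielding a synthetic (short, long) training pair. The generator definitions, names, and channel shapes here are hypothetical placeholders standing in for SIDGAN's trained CycleGAN-style generators; this is not the authors' implementation.

```python
# Illustrative sketch only: placeholder networks stand in for SIDGAN's
# trained, adversarially-learned generators.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder image-to-image generator (stands in for a trained GAN generator)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Hop 1: RGB video frame -> intermediate domain (long-exposure RAW-like).
g_rgb_to_long = TinyGenerator(in_ch=3, out_ch=4)   # 4 = packed Bayer channels (assumed)
# Hop 2: long-exposure RAW -> short-exposure RAW (the dark, noisy input).
g_long_to_short = TinyGenerator(in_ch=4, out_ch=4)

def synthesize_pair(rgb_frame: torch.Tensor):
    """Turn one RGB frame into a synthetic (short, long) exposure RAW pair."""
    long_raw = g_rgb_to_long(rgb_frame)
    short_raw = g_long_to_short(long_raw)
    return short_raw, long_raw

# A supervised RAW-to-RGB model can then be trained on pairs harvested from
# abundant internet video, rather than from static tripod-mounted captures.
frame = torch.rand(1, 3, 128, 128)                 # dummy RGB frame
short_raw, long_raw = synthesize_pair(frame)
print(short_raw.shape, long_raw.shape)             # torch.Size([1, 4, 128, 128]) each
```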
