Paper Title
Removing Rain Streaks via Task Transfer Learning
Paper Authors
Abstract
Due to the difficulty of collecting paired real-world training data, image deraining is currently dominated by supervised learning on synthetic data generated by, e.g., Photoshop rendering. However, generalization to real rainy scenes is usually limited by the gap between synthetic and real-world data. In this paper, we first statistically explore why supervised deraining models do not generalize well to real rainy cases, and identify substantial differences between synthetic and real rainy data. Inspired by these studies, we propose to remove rain by learning favorable deraining representations from other connected tasks, for which labels on real data can be easily obtained. Our core idea is thus to learn representations from real data through task transfer to improve deraining generalization; we term this learning strategy \textit{task transfer learning}. When there is more than one connected task, we propose to reduce the model size by knowledge distillation: the pretrained models of the connected tasks are treated as teachers, and all of their knowledge is distilled into a student network, which reduces the model size while preserving effective prior representations from all the connected tasks. Finally, the student network is fine-tuned on a small amount of paired synthetic rainy data to guide the pretrained prior representations toward rain removal. Extensive experiments demonstrate that the proposed task transfer learning strategy is surprisingly successful: it compares favorably with state-of-the-art supervised learning methods and clearly surpasses other semi-supervised deraining methods on synthetic data. In particular, it shows superior generalization to real-world scenes.
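The multi-teacher distillation step described above can be illustrated with a minimal sketch: the student's intermediate features are pulled toward each teacher's features by a weighted mean-squared error. This is a generic feature-distillation objective under assumed equal teacher weighting, not the paper's exact loss; the function name and the toy feature maps are hypothetical.

```python
import numpy as np

def multi_teacher_distillation_loss(student_feats, teacher_feats_list, weights=None):
    """Weighted MSE between student features and each teacher's features.

    Generic multi-teacher distillation sketch; the equal default
    weighting is an assumption, not the paper's objective.
    """
    n = len(teacher_feats_list)
    if weights is None:
        weights = [1.0 / n] * n  # equal weight per teacher by default
    loss = 0.0
    for w, teacher_feats in zip(weights, teacher_feats_list):
        loss += w * np.mean((student_feats - teacher_feats) ** 2)
    return loss

# Toy example: one student feature map distilled from two "teachers".
student = np.zeros((4, 4))
teachers = [np.ones((4, 4)), np.full((4, 4), 3.0)]
print(multi_teacher_distillation_loss(student, teachers))  # 0.5*1 + 0.5*9 = 5.0
```

In practice this term would be added to the fine-tuning loss on the paired synthetic rainy data, so the student retains the teachers' prior representations while learning to remove rain.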