Paper Title


Analogical Image Translation for Fog Generation

Authors

Rui Gong, Dengxin Dai, Yuhua Chen, Wen Li, Luc Van Gool

Abstract

Image-to-image translation maps images from a given \emph{style} to another given \emph{style}. While exceptionally successful, current methods assume the availability of training images in both the source and target domains, which does not always hold in practice. Inspired by humans' capability of analogical reasoning, we propose analogical image translation (AIT). Given images of two styles in the source domain, $\mathcal{A}$ and $\mathcal{A}^\prime$, along with images $\mathcal{B}$ of the first style in the target domain, we learn a model to translate $\mathcal{B}$ to $\mathcal{B}^\prime$ in the target domain, such that $\mathcal{A}:\mathcal{A}^\prime :: \mathcal{B}:\mathcal{B}^\prime$. AIT is especially useful for translation scenarios in which training data of one style is hard to obtain, but training data of the same two styles in another domain is available. For instance, when translating from normal conditions to extreme, rare conditions, obtaining real training images for the latter is challenging, but obtaining synthetic data for both is relatively easy. In this work, we are interested in adding adverse weather effects, more specifically fog effects, to images taken in clear weather. To circumvent the challenge of collecting real foggy images, AIT learns from synthetic clear-weather images, synthetic foggy images, and real clear-weather images to add fog effects onto real clear-weather images, without seeing any real foggy images during training. AIT achieves this zero-shot image translation capability by coupling a supervised training scheme in the synthetic domain, a cycle consistency strategy in the real domain, an adversarial training scheme between the two domains, and a novel network design. Experiments show the effectiveness of our method for zero-shot image translation and its benefit for downstream tasks such as semantic foggy scene understanding.
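The abstract mentions synthesizing foggy images from clear-weather images. The abstract does not specify how the synthetic foggy training images are rendered, but fog synthesis from depth is commonly done with the standard optical (atmospheric scattering) model, $I = J \cdot t + A \cdot (1 - t)$ with transmittance $t = e^{-\beta d}$. The sketch below illustrates that general model; the function name, `beta`, and `airlight` values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def add_synthetic_fog(clear_rgb, depth_m, beta=0.05, airlight=0.9):
    """Render fog on a clear image via the standard optical model
    I = J * t + A * (1 - t), where the per-pixel transmittance is
    t = exp(-beta * depth). beta (attenuation coefficient, 1/m) and
    the airlight A are illustrative values, not the paper's settings.

    clear_rgb: float array of shape (H, W, 3), values in [0, 1]
    depth_m:   float array of shape (H, W), scene depth in meters
    """
    t = np.exp(-beta * depth_m)[..., None]       # (H, W, 1), broadcasts over RGB
    return clear_rgb * t + airlight * (1.0 - t)  # blend scene radiance with airlight
```

At depth 0 the transmittance is 1 and the image is unchanged; as depth grows, pixels fade toward the airlight color, which is the visual effect the paper's translation model learns to reproduce on real images.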
