Paper Title
DSI2I: Dense Style for Unpaired Image-to-Image Translation
Paper Authors
Paper Abstract
Unpaired exemplar-based image-to-image (UEI2I) translation aims to translate a source image to a target image domain with the style of a target image exemplar, without ground-truth input-translation pairs. Existing UEI2I methods represent style using one vector per image or rely on semantic supervision to define one style vector per object. Here, in contrast, we propose to represent style as a dense feature map, allowing for a finer-grained transfer to the source image without requiring any external semantic information. We then rely on perceptual and adversarial losses to disentangle our dense style and content representations. To stylize the source content with the exemplar style, we extract unsupervised cross-domain semantic correspondences and warp the exemplar style to the source content. We demonstrate the effectiveness of our method on four datasets using standard metrics together with a localized style metric we propose, which measures style similarity in a class-wise manner. Our results show that the translations produced by our approach are more diverse, preserve the source content better, and are closer to the exemplars when compared to the state-of-the-art methods. Project page: https://github.com/IVRL/dsi2i
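The abstract's central mechanism is warping a dense exemplar style map onto the source layout via unsupervised cross-domain semantic correspondences. Below is a minimal sketch of that idea, not the authors' implementation: the function name, tensor shapes, and the temperature `tau` are illustrative assumptions, and the content/style feature extractors are presumed to come from the paper's encoder.

```python
# Minimal sketch: soft-warp a dense exemplar style map to the source content
# layout using cosine-similarity correspondences between content features.
import torch
import torch.nn.functional as F


def warp_exemplar_style(src_content, exm_content, exm_style, tau=0.07):
    """Warp dense exemplar style to the source layout (illustrative only).

    src_content: (B, C, Hs, Ws) content features of the source image
    exm_content: (B, C, He, We) content features of the exemplar
    exm_style:   (B, S, He, We) dense style features of the exemplar
    Returns a (B, S, Hs, Ws) style map aligned with the source content.
    """
    B, C, Hs, Ws = src_content.shape
    _, S, He, We = exm_style.shape

    # Flatten spatial dimensions and L2-normalize so dot products become
    # cosine similarities between source and exemplar locations.
    src = F.normalize(src_content.flatten(2), dim=1)   # (B, C, Hs*Ws)
    exm = F.normalize(exm_content.flatten(2), dim=1)   # (B, C, He*We)

    # Dense correspondence scores, turned into a soft assignment over
    # exemplar locations for every source location.
    corr = torch.bmm(src.transpose(1, 2), exm) / tau   # (B, Hs*Ws, He*We)
    attn = corr.softmax(dim=-1)

    # Aggregate exemplar style at each source location (soft warping).
    style = torch.bmm(attn, exm_style.flatten(2).transpose(1, 2))  # (B, Hs*Ws, S)
    return style.transpose(1, 2).reshape(B, S, Hs, Ws)
```

The softmax over exemplar locations acts as a soft nearest-neighbor lookup, so each source position receives a blend of the exemplar's dense style weighted by semantic similarity; this is one plausible reading of "warp the exemplar style to the source content" in the abstract.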