Paper Title

Multi-Density Sketch-to-Image Translation Network

Authors

Jialu Huang, Jing Liao, Zhifeng Tan, Sam Kwong

Abstract

Sketch-to-image (S2I) translation plays an important role in image synthesis and manipulation tasks, such as photo editing and colorization. Some specific S2I translations, including sketch-to-photo and sketch-to-painting, can be used as powerful tools in the art design industry. However, previous methods only support S2I translation at a single level of density, which gives users less flexibility in controlling the input sketches. In this work, we propose the first multi-level density sketch-to-image translation framework, which allows the input sketch to cover a wide range from rough object outlines to micro structures. Moreover, to tackle the problem of the non-continuous representation of multi-level density input sketches, we project the density level into a continuous latent space, which can then be linearly controlled by a parameter. This allows users to conveniently control the densities of input sketches and the generation of images. Furthermore, our method has been successfully verified on various datasets for different applications, including face editing, multi-modal sketch-to-photo translation, and anime colorization, providing coarse-to-fine levels of control for these applications.
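
To make the density-control idea concrete, the following minimal PyTorch sketch illustrates how discrete density levels could be embedded in a continuous latent space and linearly interpolated by a scalar parameter before conditioning a generator. This is not the authors' released implementation; the class, the toy backbone, and all names (e.g. DensityConditionedGenerator, density_code) are illustrative assumptions.

```python
# A minimal, hypothetical sketch (not the paper's actual code) of the idea in the
# abstract: each discrete density level gets a learnable embedding, and a scalar
# t in [0, 1] linearly interpolates between neighboring level embeddings so the
# density can be controlled continuously.
import torch
import torch.nn as nn

class DensityConditionedGenerator(nn.Module):
    def __init__(self, num_levels: int = 3, latent_dim: int = 128):
        super().__init__()
        # One learnable embedding per discrete density level (coarse ... fine).
        self.level_embeddings = nn.Parameter(torch.randn(num_levels, latent_dim))
        # Placeholder backbone: maps the sketch plus the density code to an RGB
        # image; a real model would use a full encoder-decoder with adversarial training.
        self.backbone = nn.Sequential(
            nn.Conv2d(1 + latent_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def density_code(self, t: float) -> torch.Tensor:
        # Map t in [0, 1] onto the chain of level embeddings and linearly
        # interpolate between the two nearest levels.
        num_levels = self.level_embeddings.shape[0]
        pos = t * (num_levels - 1)
        lo, hi = int(pos), min(int(pos) + 1, num_levels - 1)
        alpha = pos - int(pos)
        return (1 - alpha) * self.level_embeddings[lo] + alpha * self.level_embeddings[hi]

    def forward(self, sketch: torch.Tensor, t: float) -> torch.Tensor:
        # Broadcast the density code over the spatial grid and concatenate it
        # with the input sketch as extra conditioning channels.
        b, _, h, w = sketch.shape
        code = self.density_code(t).view(1, -1, 1, 1).expand(b, -1, h, w)
        return self.backbone(torch.cat([sketch, code], dim=1))

# Usage: a single-channel 256x256 sketch rendered at an intermediate density.
gen = DensityConditionedGenerator()
fake = gen(torch.randn(1, 1, 256, 256), t=0.5)  # -> (1, 3, 256, 256)
```

Under these assumptions, sweeping t from 0 to 1 moves the conditioning code smoothly from the coarsest to the finest density embedding, which is the kind of continuous, linearly parameterized control the abstract describes.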
