Paper Title
Joint Geometric-Semantic Driven Character Line Drawing Generation
Paper Authors
Paper Abstract
Character line drawing synthesis can be formulated as a special case of the image-to-image translation problem, in which a photo is automatically rendered in a line-drawing style. In this paper, we present the first generative adversarial network based, end-to-end trainable translation architecture, dubbed P2LDGAN, for the automatic generation of high-quality character drawings from input photos/images. The core component of our approach is the joint geometric-semantic driven generator, which uses our well-designed cross-scale dense skip connection framework to embed learned geometric and semantic information for generating delicate line drawings. To support the evaluation of our model, we release a new dataset comprising 1,532 well-matched pairs of freehand character line drawings and the corresponding character images/photos, where the line drawings, in diverse styles, were drawn manually by skilled artists. Extensive experiments on our introduced dataset demonstrate the superior performance of our proposed model against state-of-the-art approaches in terms of quantitative, qualitative and human evaluations. Our code, models and dataset will be made available on GitHub.
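The abstract does not spell out the internals of the joint geometric-semantic driven generator, but the general idea of cross-scale dense skip connections can be illustrated with a minimal, hypothetical PyTorch sketch. Everything below (the module name CrossScaleDenseGenerator, channel widths, depth, normalization choices and the single-channel tanh output) is an assumption made for illustration, not the authors' released P2LDGAN code; the sketch only shows an encoder-decoder generator in which every decoder stage densely fuses features resized from all encoder scales.

# Minimal, hypothetical sketch of a generator with cross-scale dense skip
# connections: each decoder stage concatenates the upsampled deeper feature
# with features resized from ALL encoder scales. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Basic conv -> norm -> activation unit used at every scale.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class CrossScaleDenseGenerator(nn.Module):
    def __init__(self, in_ch=3, out_ch=1, base=32, depth=4):
        super().__init__()
        chs = [base * 2 ** i for i in range(depth)]  # e.g. [32, 64, 128, 256]
        self.encoders = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.encoders.append(conv_block(prev, c))
            prev = c
        # Each decoder stage fuses the upsampled deeper feature with resized
        # features from every encoder scale (dense cross-scale links).
        self.decoders = nn.ModuleList()
        for i in reversed(range(depth - 1)):
            fuse_in = chs[i + 1] + sum(chs)  # deeper feature + all encoder scales
            self.decoders.append(conv_block(fuse_in, chs[i]))
        self.head = nn.Conv2d(chs[0], out_ch, 1)

    def forward(self, x):
        feats = []
        h = x
        for i, enc in enumerate(self.encoders):
            h = enc(h)
            feats.append(h)
            if i < len(self.encoders) - 1:
                h = F.max_pool2d(h, 2)
        # h is now the deepest feature; decode upward, densely fusing all scales.
        for dec in self.decoders:
            h = F.interpolate(h, scale_factor=2, mode="bilinear", align_corners=False)
            target = h.shape[-2:]
            skips = [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
                     for f in feats]
            h = dec(torch.cat([h] + skips, dim=1))
        return torch.tanh(self.head(h))  # line-drawing tensor in [-1, 1]


if __name__ == "__main__":
    g = CrossScaleDenseGenerator()
    photo = torch.randn(1, 3, 256, 256)
    print(g(photo).shape)  # torch.Size([1, 1, 256, 256])

Running the script maps a 256x256 three-channel photo to a single-channel line-drawing tensor; the actual P2LDGAN generator, its geometric/semantic feature encoding and its adversarial training setup would of course differ in detail.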