Title
Co-occurrence Based Texture Synthesis
Authors
Abstract
As image generation techniques mature, there is a growing interest in explainable representations that are easy to understand and intuitive to manipulate. In this work, we turn to co-occurrence statistics, which have long been used for texture analysis, to learn a controllable texture synthesis model. We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images while having local, interpretable control over the texture appearance. To encourage fidelity to the input condition, we introduce a novel differentiable co-occurrence loss that is integrated seamlessly into our framework in an end-to-end fashion. We demonstrate that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis, which can be used to generate a smooth texture morph between different textures. We further show an interactive texture tool that allows a user to adjust local characteristics of the synthesized texture image using the co-occurrence values directly.
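The co-occurrence statistics referenced above count how often pairs of pixel values appear at a fixed spatial offset, in the spirit of the classical gray-level co-occurrence matrix used for texture analysis. A minimal sketch of that computation (the function name, the list-of-lists image format, and the single-offset interface are illustrative assumptions, not the paper's implementation, which conditions a GAN on these statistics locally):

```python
def co_occurrence(img, levels, offset=(0, 1)):
    """Count how often gray level i appears next to gray level j
    at the given (row, col) offset, then normalize to a distribution.

    img    : 2D list of ints already quantized to range(levels)
    levels : number of gray levels
    offset : (row, col) displacement of the second pixel in each pair
    """
    dr, dc = offset
    rows, cols = len(img), len(img[0])
    counts = [[0.0] * levels for _ in range(levels)]
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[img[r][c]][img[r2][c2]] += 1
                total += 1
    # Normalize so the matrix is a joint distribution over level pairs.
    return [[v / total for v in row] for row in counts]
```

Because the normalized counts are just sums over pixel pairs, a soft (e.g. kernel-weighted) version of this histogram is differentiable, which is presumably what allows the paper's co-occurrence loss to be trained end-to-end.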