Paper Title
Panoptic-based Image Synthesis
Paper Authors
Paper Abstract
Conditional image synthesis for generating photorealistic images serves various applications, from content editing to content generation. Previous conditional image synthesis algorithms mostly rely on semantic maps and often fail in complex environments where multiple instances occlude each other. We propose a panoptic-aware image synthesis network that generates high-fidelity, photorealistic images conditioned on panoptic maps, which unify semantic and instance information. To achieve this, we efficiently use panoptic maps in the convolution and upsampling layers. We show that, with the proposed changes to the generator, we improve on previous state-of-the-art methods by generating images with higher fidelity in environments with complex instance interactions and by rendering tiny objects in greater detail. Furthermore, our method also outperforms previous state-of-the-art methods on the mean IoU (Intersection over Union) and detAP (Detection Average Precision) metrics.
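The abstract states that panoptic maps are used inside the convolution and upsampling layers so that features do not mix across instance boundaries. Below is a minimal PyTorch sketch of one plausible way such a panoptic-aware convolution could be realised, assuming a partial-convolution-style masking scheme; the class name `PanopticAwareConv2d`, its interface, and the renormalisation are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (NOT the authors' implementation): a convolution whose
# receptive field is masked so that neighbouring pixels belonging to a
# different panoptic segment than the centre pixel are ignored, preventing
# feature leakage across instance boundaries.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PanopticAwareConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        self.padding = kernel_size // 2
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x, panoptic_ids):
        # x:            (B, C, H, W) feature map
        # panoptic_ids: (B, 1, H, W) integer map of panoptic segment ids
        B, C, H, W = x.shape
        k = self.kernel_size

        # Unfold features and ids into k*k neighbourhoods per output pixel.
        x_unf = F.unfold(x, k, padding=self.padding)            # (B, C*k*k, H*W)
        ids_unf = F.unfold(panoptic_ids.float(), k,
                           padding=self.padding)                # (B, k*k, H*W)

        # Keep only neighbours that share the centre pixel's panoptic id.
        # (Zero-padded border positions carry id 0; acceptable for a sketch.)
        centre = panoptic_ids.float().view(B, 1, H * W)         # (B, 1, H*W)
        mask = (ids_unf == centre).float()                      # (B, k*k, H*W)

        # Mask the neighbourhoods and renormalise by the number of valid
        # neighbours, in the spirit of partial convolutions.
        x_unf = x_unf.view(B, C, k * k, H * W) * mask.unsqueeze(1)
        valid = mask.sum(dim=1, keepdim=True).clamp(min=1.0)    # (B, 1, H*W)
        x_unf = x_unf * (k * k / valid).unsqueeze(1)

        # Apply the convolution weights as a single matrix multiplication.
        w = self.weight.view(self.weight.size(0), -1)           # (out, C*k*k)
        out = w @ x_unf.view(B, C * k * k, H * W)               # (B, out, H*W)
        return out.view(B, -1, H, W) + self.bias.view(1, -1, 1, 1)
```

A layer like this could replace standard convolutions in a generator so that each output pixel aggregates features only from its own instance, which is one way to interpret the abstract's claim about handling occluding instances.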