Paper Title

Free-Form Image Inpainting via Contrastive Attention Network

Paper Authors

Xin Ma, Xiaoqiang Zhou, Huaibo Huang, Zhenhua Chai, Xiaolin Wei, Ran He

Paper Abstract

Most deep learning based image inpainting approaches adopt an autoencoder or its variants to fill missing regions in images. Encoders are usually utilized to learn powerful representational spaces, which are important for dealing with sophisticated learning tasks. Specifically, in image inpainting tasks, masks of any shape can appear anywhere in an image (i.e., free-form masks), forming complex patterns. It is difficult for encoders to capture such powerful representations under this complex situation. To tackle this problem, we propose a self-supervised Siamese inference network to improve robustness and generalization. It can encode contextual semantics from full-resolution images and obtain more discriminative representations. We further propose a multi-scale decoder with a novel dual attention fusion (DAF) module, which can combine the restored and known regions in a smooth way. This multi-scale architecture is beneficial for decoding the discriminative representations learned by the encoder into images layer by layer. In this way, unknown regions are filled naturally from outside to inside. Qualitative and quantitative experiments on multiple datasets, including facial and natural datasets (i.e., CelebA-HQ, Paris Street View, Places2, and ImageNet), demonstrate that our proposed method outperforms state-of-the-art methods in generating high-quality inpainting results.
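
As a rough illustration of the self-supervised Siamese inference idea described in the abstract, the sketch below encodes two differently masked views of the same image with a weight-shared encoder and pulls their representations together with an InfoNCE-style contrastive loss. The encoder layout, the projection head, the `info_nce` loss, and the random mask stand-ins are all illustrative assumptions for exposition, not the authors' exact design.

```python
# Minimal sketch (PyTorch) of a Siamese inference network trained with a
# contrastive objective. Everything below is a hypothetical illustration of
# the idea, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Weight-shared convolutional encoder applied to both masked views."""
    def __init__(self, in_ch=4, feat_dim=128):
        super().__init__()
        # Input is the RGB image concatenated with its binary mask (3 + 1 channels).
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(256, feat_dim)  # projection head for the contrastive loss

    def forward(self, img, mask):
        x = torch.cat([img * mask, mask], dim=1)  # zero out unknown pixels
        h = self.backbone(x).flatten(1)
        return F.normalize(self.proj(h), dim=1)

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: the two views of the same image are positives,
    all other samples in the batch serve as negatives."""
    logits = z1 @ z2.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage: encode two differently masked views of the same batch of images.
# Random pixel masks stand in for real free-form masks here.
encoder = SiameseEncoder()
imgs = torch.rand(8, 3, 256, 256)
mask_a = (torch.rand(8, 1, 256, 256) > 0.3).float()
mask_b = (torch.rand(8, 1, 256, 256) > 0.3).float()
loss = info_nce(encoder(imgs, mask_a), encoder(imgs, mask_b))
loss.backward()
```

Because the two branches share weights, the encoder is pushed to produce the same representation regardless of where the mask falls, which is the robustness-to-free-form-masks property the abstract motivates.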
