Paper Title

Region-aware Attention for Image Inpainting

Paper Authors

Zhilin Huang, Chujun Qin, Zhenyu Weng, Yuesheng Zhu

Paper Abstract

Recent attention-based image inpainting methods have made inspiring progress by modeling long-range dependencies within a single image. However, they tend to generate blurry content, since the correlation between each pixel pair is easily misled by ill-predicted features in holes. To handle this problem, we propose a novel region-aware attention (RA) module. By avoiding the direct calculation of correlations between pixel pairs in a single sample and instead considering the correlation between different samples, the misleading influence of invalid information in holes can be avoided. Meanwhile, a learnable region dictionary (LRD) is introduced to store important information from the entire dataset, which not only simplifies correlation modeling but also avoids information redundancy. By applying RA in our architecture, our method can generate semantically plausible results with realistic details. Extensive experiments on the CelebA, Places2 and Paris StreetView datasets validate the superiority of our method compared with existing methods.
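The abstract describes replacing pixel-pair attention inside one corrupted image with attention over a learnable region dictionary shared across the whole dataset. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the layer shapes, the single-head formulation, the `num_regions` parameter, and the residual connection are assumptions made for illustration and are not the authors' exact RA/LRD implementation.

```python
import torch
import torch.nn as nn


class RegionAwareAttention(nn.Module):
    """Hypothetical sketch of a region-aware attention (RA) module.

    Each spatial feature attends to a learnable region dictionary (LRD)
    shared across the dataset, instead of to other pixels of the same
    (possibly hole-corrupted) image. All sizes are illustrative assumptions.
    """

    def __init__(self, channels: int, num_regions: int = 64):
        super().__init__()
        # Learnable region dictionary: num_regions entries of dimension `channels`.
        self.region_dict = nn.Parameter(torch.randn(num_regions, channels))
        self.to_query = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Queries come from the feature map; keys/values come from the dictionary.
        q = self.to_query(feat).flatten(2).transpose(1, 2)    # (b, h*w, c)
        k = v = self.region_dict                              # (num_regions, c)
        # Affinity between every pixel and every dictionary entry.
        attn = torch.softmax(q @ k.t() / c ** 0.5, dim=-1)    # (b, h*w, num_regions)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)  # aggregate dictionary entries
        # Residual connection keeps the original valid information.
        return feat + self.to_out(out)


# Minimal usage example on random features.
if __name__ == "__main__":
    ra = RegionAwareAttention(channels=256, num_regions=64)
    x = torch.randn(2, 256, 32, 32)
    print(ra(x).shape)  # torch.Size([2, 256, 32, 32])
```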
