Paper Title
High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling
Paper Authors
Paper Abstract
Existing image inpainting methods often produce artifacts when dealing with large holes in real applications. To address this challenge, we propose an iterative inpainting method with a feedback mechanism. Specifically, we introduce a deep generative model which not only outputs an inpainting result but also a corresponding confidence map. Using this map as feedback, it progressively fills the hole by trusting only high-confidence pixels inside the hole at each iteration and focuses on the remaining pixels in the next iteration. As it reuses partial predictions from the previous iterations as known pixels, this process gradually improves the result. In addition, we propose a guided upsampling network to enable generation of high-resolution inpainting results. We achieve this by extending the Contextual Attention module to borrow high-resolution feature patches in the input image. Furthermore, to mimic real object removal scenarios, we collect a large object mask dataset and synthesize more realistic training data that better simulates user inputs. Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations. More results and Web APP are available at https://zengxianyu.github.io/iic.
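The iterative confidence-feedback loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `model`, `toy_model`, and the confidence threshold are hypothetical stand-ins for the paper's deep generative network, which jointly predicts an inpainting result and a confidence map.

```python
import numpy as np

def inpaint_iterative(image, hole, model, conf_thresh=0.5, max_iters=5):
    """Iteratively fill `hole` (bool mask, True = missing pixel).

    At each iteration, commit only pixels whose predicted confidence
    exceeds `conf_thresh`; the remaining hole pixels are revisited in
    the next iteration, now conditioned on the committed predictions.
    """
    filled = image.copy()
    hole = hole.copy()
    for _ in range(max_iters):
        if not hole.any():
            break
        pred, conf = model(filled, hole)        # result + confidence map
        trusted = hole & (conf >= conf_thresh)  # high-confidence hole pixels
        filled[trusted] = pred[trusted]         # treat them as known pixels
        hole &= ~trusted                        # focus on the rest next time
    return filled, hole

def toy_model(img, hole):
    """Hypothetical stand-in for the generative model: predicts the mean
    of the known pixels and assigns a flat confidence of 0.9 to the hole."""
    pred = np.full_like(img, img[~hole].mean())
    conf = np.where(hole, 0.9, 1.0)
    return pred, conf
```

With `toy_model`, every hole pixel clears the threshold immediately, so the loop converges in one pass; the real network assigns lower confidence deep inside large holes, which is what drives the progressive, multi-iteration filling behavior.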