Paper Title
Pan-Sharpening with Color-Aware Perceptual Loss and Guided Re-Colorization
Paper Authors
Paper Abstract
We present a novel color-aware perceptual (CAP) loss for learning the task of pan-sharpening. Our CAP loss is designed to focus on the deep features of a pre-trained VGG network that are more sensitive to spatial details and ignore color information, allowing the network to extract structural information from the PAN image while keeping the color from the lower-resolution MS image. Additionally, we propose "guided re-colorization", which generates a pan-sharpened image with real colors from the MS input by "picking" the closest MS pixel color for each pan-sharpened pixel, as a human operator would do in manual colorization. Such a re-colorized (RC) image is completely aligned with the pan-sharpened (PS) network output and can be used as a self-supervision signal during training, or to enhance the colors in the PS image at test time. We present several experiments in which our network, trained with our CAP loss, generates natural-looking pan-sharpened images with fewer artifacts and outperforms state-of-the-art methods on the WorldView3 dataset in terms of the ERGAS, SCC, and QNR metrics.
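The guided re-colorization described above can be viewed as a nearest-color lookup: for each pan-sharpened pixel, pick the color of the closest MS pixel. A minimal NumPy sketch of this idea follows; it is an illustration under assumptions not stated in the abstract (the MS image is first upsampled to the PS resolution, candidates are searched in a local window of hypothetical side length `window`, and "closest" means smallest Euclidean distance in color space):

```python
import numpy as np

def guided_recolorization(ps, ms_up, window=3):
    """Re-color a pan-sharpened (PS) image with real MS colors.

    For each PS pixel, pick the color of the closest (in Euclidean
    color distance) upsampled-MS pixel within a local search window.

    ps, ms_up : float arrays of shape (H, W, C), same spatial size.
    window    : odd search-window side length (hypothetical parameter).
    """
    h, w, c = ps.shape
    r = window // 2
    rc = np.empty_like(ps)
    # Edge-pad MS so the window is defined at the image borders.
    ms_pad = np.pad(ms_up, ((r, r), (r, r), (0, 0)), mode="edge")
    for i in range(h):
        for j in range(w):
            # All candidate MS colors in the local window.
            patch = ms_pad[i:i + window, j:j + window].reshape(-1, c)
            # Pick the candidate closest to the PS pixel color.
            d = np.sum((patch - ps[i, j]) ** 2, axis=1)
            rc[i, j] = patch[np.argmin(d)]
    return rc
```

Because every output color is copied from the MS input, the RC image contains only real MS colors while staying pixel-aligned with the PS output, which is what makes it usable as a self-supervision target.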