Paper Title

X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data

Paper Authors

Danfeng Hong, Naoto Yokoya, Gui-Song Xia, Jocelyn Chanussot, Xiao Xiang Zhu

Paper Abstract

This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. Large amounts of multi-modal earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to noisy collection environments, poor discriminative information, and a limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module. X-ModalNet learns to transfer more discriminative information from a small-scale hyperspectral image (HSI) into a classification task that uses large-scale MSI or SAR data. Notably, X-ModalNet generalizes well because labels are propagated on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve significant improvements over several state-of-the-art methods.
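
The key mechanism the abstract highlights is label propagation on a graph built from high-level network features. Below is a minimal NumPy sketch of that general idea, not the paper's implementation: it assumes a kNN affinity graph with a Gaussian kernel and a Zhou et al.-style normalized propagation update, which is one standard way to realize such a module. The function name and all parameters are illustrative.

```python
import numpy as np

def propagate_labels(features, labels, k=10, alpha=0.99, n_iter=20):
    """Sketch of graph-based label propagation (not the paper's code).

    features: (n, d) high-level features, e.g., from the network's top layer
    labels:   (n,) integer class labels, with -1 marking unlabeled samples
    """
    n = features.shape[0]
    # Pairwise squared distances and a Gaussian affinity (illustrative kernel).
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (np.median(d2) + 1e-12))
    # Keep only each node's k nearest neighbors (plus self), then symmetrize.
    farthest = np.argsort(d2, axis=1)[:, k + 1:]
    np.put_along_axis(W, farthest, 0.0, axis=1)
    W = np.maximum(W, W.T)
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot seed matrix Y; unlabeled rows stay all-zero.
    n_cls = labels.max() + 1
    Y = np.zeros((n, n_cls))
    mask = labels >= 0
    Y[mask, labels[mask]] = 1.0
    # Iterate F <- alpha * S @ F + (1 - alpha) * Y until (approximate) convergence.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(1)  # pseudo-labels for all samples, labeled and unlabeled
```

Here `alpha` trades off smoothing over graph neighbors against fidelity to the labeled seeds. The abstract describes the graph as updatable, i.e., rebuilt as the network's high-level features improve during training; this static sketch does not show that step.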
