Paper Title
Domain Adaptive Transfer Attack (DATA)-based Segmentation Networks for Building Extraction from Aerial Images
Paper Authors
Paper Abstract
Semantic segmentation models based on convolutional neural networks (CNNs) have gained much attention in remote sensing and have achieved remarkable performance in extracting buildings from high-resolution aerial images. However, their generalization to unseen images remains limited. When there is a domain gap between the training and test datasets, CNN-based segmentation models trained on the training dataset fail to segment buildings in the test dataset. In this paper, we propose segmentation networks based on a domain adaptive transfer attack (DATA) scheme for building extraction from aerial images. The proposed system combines the concepts of domain transfer and adversarial attack. Under the DATA scheme, the distribution of the input images is shifted toward that of the target images while the images are simultaneously turned into adversarial examples against a target network. Defending against adversarial examples adapted to the target domain overcomes the performance degradation caused by the domain gap and increases the robustness of the segmentation model. Cross-dataset experiments and an ablation study are conducted on three different datasets: the Inria aerial image labeling dataset, the Massachusetts buildings dataset, and the WHU East Asia dataset. Compared to the performance of the segmentation network without the DATA scheme, the proposed method shows improvements in overall IoU. Moreover, the proposed method is verified to outperform both feature adaptation (FA) and output space adaptation (OSA).
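To make the adversarial-attack half of the DATA idea concrete, the following is a minimal, hypothetical sketch of a generic FGSM-style perturbation step against a toy per-pixel "segmentation" model. This is not the paper's DATA scheme (which additionally performs the domain transfer); the logistic model, weights, and epsilon value here are illustrative assumptions only.

```python
import numpy as np

# Hypothetical per-pixel logistic "segmentation" model: logits = w * x + b.
# A real segmentation network (e.g., a CNN) would replace this.
def segment_logits(x, w, b):
    return w * x + b

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM-style step: move x in the sign of the gradient of the
    per-pixel binary cross-entropy loss, bounded elementwise by eps."""
    z = segment_logits(x, w, b)
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probabilities per pixel
    grad_x = (p - y) * w               # d(BCE)/dx for the logistic model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((4, 4))                           # toy 4x4 "aerial image"
y = (rng.random((4, 4)) > 0.5).astype(float)     # toy building mask
x_adv = fgsm_perturb(x, y, w=2.0, b=-1.0, eps=0.03)
```

The clip keeps the adversarial image in the valid intensity range, so the perturbation is bounded by eps per pixel; in the DATA scheme this kind of perturbation is additionally steered so that the input distribution shifts toward the target domain.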