Paper Title

Semantic Labeling of High Resolution Images Using EfficientUNets and Transformers

Paper Authors

Hasan AlMarzouqi, Lyes Saad Saoud

Paper Abstract

Semantic segmentation necessitates approaches that learn high-level characteristics while dealing with enormous amounts of data. Convolutional neural networks (CNNs) can learn unique and adaptive features to achieve this aim. However, due to the large size and high spatial resolution of remote sensing images, these networks cannot analyze an entire scene efficiently. Recently, deep transformers have proven their capability to record global interactions between different objects in the image. In this paper, we propose a new segmentation model that combines convolutional neural networks with transformers, and show that this mixture of local and global feature extraction techniques provides significant advantages in remote sensing segmentation. In addition, the proposed model includes two fusion layers that are designed to represent multi-modal inputs and output of the network efficiently. The input fusion layer extracts feature maps summarizing the relationship between image content and elevation maps (DSM). The output fusion layer uses a novel multi-task segmentation strategy where class labels are identified using class-specific feature extraction layers and loss functions. Finally, a fast-marching method is used to convert all unidentified class labels to their closest known neighbors. Our results demonstrate that the proposed methodology improves segmentation accuracy compared to state-of-the-art techniques.
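The abstract describes, at a high level, how the pieces fit together: an input fusion layer that merges the RGB image with the DSM elevation map, a CNN encoder for local features, a transformer for global context, and an output fusion stage with class-specific heads and losses. The PyTorch sketch below is only an illustration of that pipeline under assumed layer sizes; the simple convolutional encoder, token dimensions, number of classes, and head design are placeholders, not the paper's EfficientUNet configuration.

```python
# Illustrative sketch of the hybrid design described in the abstract.
# All sizes and module choices are assumptions made for exposition.
import torch
import torch.nn as nn


class InputFusion(nn.Module):
    """Fuse RGB imagery with a single-channel DSM (elevation) map."""
    def __init__(self, out_ch=64):
        super().__init__()
        self.rgb_branch = nn.Conv2d(3, out_ch, kernel_size=3, padding=1)
        self.dsm_branch = nn.Conv2d(1, out_ch, kernel_size=3, padding=1)
        self.mix = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)

    def forward(self, rgb, dsm):
        return self.mix(torch.cat([self.rgb_branch(rgb), self.dsm_branch(dsm)], dim=1))


class HybridSegmenter(nn.Module):
    """CNN encoder for local features, transformer for global context,
    and one lightweight head per class (the multi-task output strategy)."""
    def __init__(self, num_classes=6, dim=64):
        super().__init__()
        self.fusion = InputFusion(out_ch=dim)
        # Placeholder encoder; the paper uses an EfficientUNet-style backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.upsample = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)
        # One logit map per class; during training each head would receive its
        # own binary loss (an assumed reading of "class-specific loss functions").
        self.heads = nn.ModuleList([nn.Conv2d(dim, 1, kernel_size=1) for _ in range(num_classes)])

    def forward(self, rgb, dsm):
        x = self.cnn(self.fusion(rgb, dsm))        # local features
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C) tokens
        x = self.transformer(tokens).transpose(1, 2).reshape(b, c, h, w)
        x = self.upsample(x)
        return torch.cat([head(x) for head in self.heads], dim=1)


if __name__ == "__main__":
    model = HybridSegmenter(num_classes=6)
    rgb, dsm = torch.randn(1, 3, 128, 128), torch.randn(1, 1, 128, 128)
    print(model(rgb, dsm).shape)  # torch.Size([1, 6, 128, 128])
```

The final post-processing step assigns every unidentified pixel the label of its closest known neighbor. The snippet below approximates that outcome with a Euclidean distance transform from SciPy rather than the fast-marching formulation used in the paper, so it should be read as a stand-in for the idea, not the method itself.

```python
# Nearest-neighbor fill for unidentified pixels (stand-in for fast marching).
import numpy as np
from scipy import ndimage


def fill_unlabeled(labels, unlabeled=-1):
    """Replace `unlabeled` entries with the value of the closest labeled pixel."""
    mask = labels == unlabeled
    # Indices of the nearest labeled pixel for every position in the map.
    _, (rows, cols) = ndimage.distance_transform_edt(mask, return_indices=True)
    return labels[rows, cols]


seg = np.array([[0, 0, -1],
                [1, -1, -1],
                [1, 1, 2]])
print(fill_unlabeled(seg))
```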
