Title

Multi-stage Attention ResU-Net for Semantic Segmentation of Fine-Resolution Remote Sensing Images

Authors

Li, Rui, Zheng, Shunyi, Duan, Chenxi, Su, Jianlin, Zhang, Ce

Abstract

The attention mechanism can refine the extracted feature maps and boost the classification performance of the deep network, which has become an essential technique in computer vision and natural language processing. However, the memory and computational costs of the dot-product attention mechanism increase quadratically with the spatio-temporal size of the input. Such growth hinders the usage of attention mechanisms considerably in application scenarios with large-scale inputs. In this Letter, we propose a Linear Attention Mechanism (LAM) to address this issue, which is approximately equivalent to dot-product attention with computational efficiency. Such a design makes the incorporation between attention mechanisms and deep networks much more flexible and versatile. Based on the proposed LAM, we re-factor the skip connections in the raw U-Net and design a Multi-stage Attention ResU-Net (MAResU-Net) for semantic segmentation from fine-resolution remote sensing images. Experiments conducted on the Vaihingen dataset demonstrated the effectiveness and efficiency of our MAResU-Net. Open-source code is available at https://github.com/lironui/Multistage-Attention-ResU-Net.
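The core idea behind linear attention is that reordering the matrix products avoids ever materializing the N×N attention map: instead of computing softmax(QKᵀ)V, one normalizes Q and K separately and associates (KᵀV) first, giving a d×d intermediate. The minimal NumPy sketch below illustrates this complexity argument using one common linear-attention formulation (separate softmax normalization of queries and keys); the paper's actual LAM uses a different approximation of the softmax kernel, so this is an illustration of the general principle, not the authors' exact method.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(Q, K, V):
    # Standard attention: builds an explicit N x N matrix,
    # so memory/compute grow quadratically with sequence length N.
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)
    return A @ V

def linear_attention(Q, K, V):
    # Linear-complexity variant: normalize Q row-wise and K
    # position-wise, then exploit associativity to compute
    # Ks.T @ V (a d x d matrix) before multiplying by Qs.
    Qs = softmax(Q, axis=-1)   # softmax over the channel dimension
    Ks = softmax(K, axis=0)    # softmax over the position dimension
    return Qs @ (Ks.T @ V)     # cost is O(N * d^2) instead of O(N^2 * d)

# Toy comparison: both produce an (N, d) output, but only the
# dot-product version allocates an N x N attention map.
N, d = 16, 8
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, N, d))
out_dp = dot_product_attention(Q, K, V)
out_lin = linear_attention(Q, K, V)
```

For fine-resolution imagery, N is the number of pixels in a feature map, so the quadratic term dominates quickly; this associativity trick is what makes it practical to insert attention at multiple stages of the network, as the MAResU-Net does in its re-factored skip connections.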
