Paper Title
Attention guided global enhancement and local refinement network for semantic segmentation
Paper Authors
Abstract
The encoder-decoder architecture is widely used for lightweight semantic segmentation networks. However, its performance is limited compared with well-designed dilated-FCN models, for two main reasons. First, the upsampling methods commonly used in the decoder, such as interpolation and deconvolution, suffer from local receptive fields and cannot encode global contexts. Second, low-level features may bring noise into the decoder through skip connections, owing to the inadequacy of semantic concepts in early encoder layers. To tackle these challenges, a Global Enhancement Method is proposed to aggregate global information from high-level feature maps and adaptively distribute it to different decoder layers, alleviating the shortage of global context in the upsampling process. In addition, a Local Refinement Module is developed that uses the decoder features as semantic guidance to refine the noisy encoder features before the two (the decoder features and the encoder features) are fused. The two methods are then integrated into a Context Fusion Block, on which a novel Attention guided Global enhancement and Local refinement Network (AGLN) is elaborately designed. Extensive experiments on the PASCAL Context, ADE20K, and PASCAL VOC 2012 datasets demonstrate the effectiveness of the proposed approach. In particular, with a vanilla ResNet-101 backbone, AGLN achieves a state-of-the-art result (56.23% mean IoU) on the PASCAL Context dataset. The code is available at https://github.com/zhasen1996/AGLN.
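The two ideas in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); it only assumes a plausible simplification: global enhancement as attention-weighted aggregation of a high-level feature map redistributed to a decoder layer, and local refinement as sigmoid gating of encoder features by decoder features before skip-connection fusion. All function names and shapes here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_enhance(decoder_feat, high_level_feat):
    """Aggregate global context from a high-level feature map via attention
    and add it to a decoder layer (hypothetical simplification)."""
    # decoder_feat: (C, H, W); high_level_feat: (C, h, w)
    C, H, W = decoder_feat.shape
    hl = high_level_feat.reshape(C, -1)            # (C, N): flattened global positions
    q = decoder_feat.reshape(C, -1)                # (C, M): decoder positions as queries
    attn = softmax(q.T @ hl, axis=-1)              # (M, N): attention over global positions
    context = (attn @ hl.T).T.reshape(C, H, W)     # redistribute global context spatially
    return decoder_feat + context

def local_refine(encoder_feat, decoder_feat):
    """Use decoder features as semantic guidance to suppress noise in
    encoder features before fusion (hypothetical simplification)."""
    gate = 1.0 / (1.0 + np.exp(-decoder_feat))     # sigmoid guidance map in (0, 1)
    return encoder_feat * gate

# Toy shapes: 4 channels, 8x8 decoder/encoder maps, 2x2 high-level map.
rng = np.random.default_rng(0)
dec = rng.standard_normal((4, 8, 8))
enc = rng.standard_normal((4, 8, 8))
high = rng.standard_normal((4, 2, 2))

# A context-fusion-style combination of the refined skip connection
# and the globally enhanced decoder features.
fused = local_refine(enc, dec) + global_enhance(dec, high)
print(fused.shape)  # (4, 8, 8)
```

The gating step mirrors the abstract's motivation: low-level encoder features carry noise, so the semantically richer decoder features decide how much of each encoder response survives the skip connection.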