Paper Title
Feature Aggregation and Propagation Network for Camouflaged Object Detection
Paper Authors
Paper Abstract
Camouflaged object detection (COD) aims to detect/segment camouflaged objects embedded in the environment, a task that has attracted increasing attention over the past decades. Although several COD methods have been developed, they still suffer from unsatisfactory performance due to the intrinsic similarities between the foreground objects and background surroundings. In this paper, we propose a novel Feature Aggregation and Propagation Network (FAP-Net) for camouflaged object detection. Specifically, we propose a Boundary Guidance Module (BGM) to explicitly model the boundary characteristics, which can provide boundary-enhanced features to boost the COD performance. To capture the scale variations of camouflaged objects, we propose a Multi-scale Feature Aggregation Module (MFAM) to characterize the multi-scale information from each layer and obtain the aggregated feature representations. Furthermore, we propose a Cross-level Fusion and Propagation Module (CFPM). In the CFPM, the feature fusion part can effectively integrate the features from adjacent layers to exploit the cross-level correlations, and the feature propagation part can transmit valuable context information from the encoder to the decoder network via a gate unit. Finally, we formulate a unified and end-to-end trainable framework where cross-level features can be effectively fused and propagated for capturing rich context information. Extensive experiments on three benchmark COD datasets demonstrate that our FAP-Net outperforms other state-of-the-art COD models. Moreover, our model can be extended to the polyp segmentation task, and the comparison results further validate the effectiveness of the proposed model in segmenting polyps. The source code and results will be released at https://github.com/taozh2017/FAPNet.
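As a rough illustration of the gated encoder-to-decoder propagation that the abstract attributes to the CFPM, the following minimal PyTorch sketch fuses an encoder feature with an upsampled decoder feature and modulates the result with a learned per-pixel gate. The class name GatedPropagation, the sigmoid gate over concatenated features, and all channel/spatial sizes are assumptions for illustration only; the paper's actual gate unit and fusion design may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedPropagation(nn.Module):
    """Propagate encoder context into the decoder via a learned per-pixel gate.

    This is a hypothetical sketch of a gate unit, not the paper's implementation.
    """
    def __init__(self, channels: int):
        super().__init__()
        # 3x3 conv fusing the two adjacent-level features after concatenation
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        # 1x1 conv producing a per-pixel, per-channel gate in [0, 1]
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser decoder feature to the encoder resolution
        dec_feat = F.interpolate(dec_feat, size=enc_feat.shape[2:],
                                 mode="bilinear", align_corners=False)
        x = torch.cat([enc_feat, dec_feat], dim=1)
        fused = F.relu(self.fuse(x))
        g = torch.sigmoid(self.gate(x))
        # The gate controls how much fused encoder context is passed on
        # versus keeping the incoming decoder state unchanged
        return g * fused + (1.0 - g) * dec_feat

# Usage: merge a coarser decoder feature into a finer encoder feature
enc = torch.randn(1, 64, 44, 44)  # encoder feature (finer level)
dec = torch.randn(1, 64, 22, 22)  # decoder feature (coarser level)
out = GatedPropagation(64)(enc, dec)
print(out.shape)  # torch.Size([1, 64, 44, 44])

The gating form g * fused + (1 - g) * dec_feat lets the network suppress encoder features at pixels where the background would otherwise leak into the prediction, which matches the abstract's stated goal of transmitting only valuable context from encoder to decoder.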