Paper Title

Weakly-Supervised Camouflaged Object Detection with Scribble Annotations

Paper Authors

Ruozhen He, Qihua Dong, Jiaying Lin, Rynson W. H. Lau

Paper Abstract

Existing camouflaged object detection (COD) methods rely heavily on large-scale datasets with pixel-wise annotations. However, due to their ambiguous boundaries, annotating camouflaged objects at the pixel level is very time-consuming and labor-intensive, taking ~60 minutes to label one image. In this paper, we propose the first weakly-supervised COD method, using scribble annotations as supervision. To achieve this, we first relabel 4,040 images in existing camouflaged object datasets with scribbles, which takes ~10 seconds to label one image. As scribble annotations only describe the primary structure of objects without details, for the network to learn to localize the boundaries of camouflaged objects, we propose a novel consistency loss composed of two parts: a cross-view loss to attain reliable consistency over different images, and an inside-view loss to maintain consistency inside a single prediction map. In addition, we observe that humans use semantic information to segment regions near the boundaries of camouflaged objects. Hence, we further propose a feature-guided loss, which includes visual features directly extracted from images and semantically significant features captured by the model. Finally, we propose a novel network for COD via scribble learning on structural information and semantic relations. Our network has two novel modules: the local-context contrasted (LCC) module, which mimics visual inhibition to enhance image contrast/sharpness and expand the scribbles into potential camouflaged regions, and the logical semantic relation (LSR) module, which analyzes semantic relations to determine the regions that represent the camouflaged object. Experimental results show that our model outperforms relevant SOTA methods on three COD benchmarks, with an average improvement of 11.0% on MAE, 3.2% on S-measure, 2.5% on E-measure, and 4.4% on weighted F-measure.
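
To make the training objective concrete, below is a minimal PyTorch sketch of scribble supervision combined with a two-part consistency loss in the spirit the abstract describes. Everything here is an illustrative assumption rather than the paper's exact formulation: the function names (`partial_bce_loss`, `cross_view_loss`, `inside_view_loss`, `total_loss`), the horizontal flip as the cross-view transformation, the total-variation-style smoothness term, and the weights `lam_cv`/`lam_iv` are all stand-ins.

```python
# Illustrative sketch only; the paper's actual losses, views, and
# weights may differ. Assumes a binary-segmentation model that maps
# an image (N, 3, H, W) to logits (N, 1, H, W).
import torch
import torch.nn.functional as F

def partial_bce_loss(logits, scribble):
    # Scribble supervision is sparse: 1 = foreground scribble,
    # 0 = background scribble, -1 = unlabeled. Only labeled pixels
    # contribute to the loss (assumes at least one labeled pixel).
    labeled = scribble >= 0
    return F.binary_cross_entropy_with_logits(
        logits[labeled], scribble[labeled].float())

def cross_view_loss(model, image, pred):
    # Cross-view consistency: the prediction for a transformed view
    # (here a horizontal flip, assumed as a stand-in for the paper's
    # transformations), mapped back to the original frame, should
    # agree with the original prediction.
    pred_flip = torch.sigmoid(model(torch.flip(image, dims=[-1])))
    return F.l1_loss(pred, torch.flip(pred_flip, dims=[-1]))

def inside_view_loss(pred):
    # Inside-view consistency: a total-variation-style smoothness
    # term that penalizes noisy, fragmented prediction maps.
    dh = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs().mean()
    dw = (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs().mean()
    return dh + dw

def total_loss(model, image, scribble, lam_cv=1.0, lam_iv=0.3):
    logits = model(image)          # (N, 1, H, W)
    pred = torch.sigmoid(logits)
    return (partial_bce_loss(logits, scribble)
            + lam_cv * cross_view_loss(model, image, pred)
            + lam_iv * inside_view_loss(pred))
```

Note that this sketch covers only the scribble and consistency terms; the feature-guided loss and the LCC/LSR modules mentioned in the abstract require the full paper to reproduce.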
