Paper Title
Open-world Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding
Paper Authors
Paper Abstract
To bridge the gap between supervised semantic segmentation and real-world applications that require a single model to recognize arbitrary new concepts, recent zero-shot segmentation has attracted much attention by exploring the relationships between unseen and seen object categories, yet it still requires large amounts of densely annotated data with diverse base classes. In this paper, we propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any effort on dense annotation, by purely exploiting the image-caption data that naturally exist on the Internet. Our method, Vision-language-driven Semantic Segmentation (ViL-Seg), employs an image encoder and a text encoder to generate visual and text embeddings for the image-caption data, with two core components that endow it with segmentation ability: First, the image encoder is jointly trained with a vision-based contrasting and a cross-modal contrasting, which encourage the visual embeddings to preserve both the fine-grained semantics and the high-level category information that are crucial for the segmentation task. Furthermore, an online clustering head is devised over the image encoder, which dynamically segments the visual embeddings into distinct semantic groups so that they can be classified by comparison with various text embeddings, completing our segmentation pipeline. Experiments show that, without using any data with dense annotations, our method can directly segment objects of arbitrary categories, outperforming zero-shot segmentation methods that require data labeling on three benchmark datasets.
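As a rough illustration of the two components described in the abstract, the following is a minimal sketch (not the authors' released code) of a cross-modal contrastive loss and of clustering-plus-text-matching inference. The function names, feature shapes, temperature value, and the k-means-style grouping used as a stand-in for the online clustering head are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def cross_modal_contrastive_loss(img_emb: torch.Tensor,   # (B, D) image embeddings
                                 txt_emb: torch.Tensor,   # (B, D) paired caption embeddings
                                 temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss that pulls each image toward its own caption (assumed form)."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.T / temperature                     # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric loss: match each image to its caption and each caption to its image.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))


def segment_with_text(pixel_emb: torch.Tensor,   # (H, W, D) per-pixel visual embeddings
                      text_emb: torch.Tensor,    # (C, D) embeddings of candidate category names
                      num_groups: int = 8,
                      iters: int = 10) -> torch.Tensor:
    """Group pixel embeddings, then label each group via text matching; returns (H, W) class map."""
    H, W, D = pixel_emb.shape
    x = F.normalize(pixel_emb.reshape(-1, D), dim=-1)      # (HW, D)
    t = F.normalize(text_emb, dim=-1)                      # (C, D)

    # Stand-in for the online clustering head: a few k-means iterations
    # that partition pixel embeddings into semantic groups.
    centers = x[torch.randperm(x.size(0))[:num_groups]].clone()
    for _ in range(iters):
        assign = (x @ centers.T).argmax(dim=-1)            # nearest center per pixel
        for k in range(num_groups):
            mask = assign == k
            if mask.any():
                centers[k] = F.normalize(x[mask].mean(dim=0), dim=-1)

    # Classify each group by comparing its center with the text embeddings.
    group_to_class = (centers @ t.T).argmax(dim=-1)        # (num_groups,)
    return group_to_class[assign].reshape(H, W)
```

In this sketch the open-world behavior comes entirely from the text side: changing the list of category names behind `text_emb` changes what the same visual groups are labeled as, without retraining the image encoder.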