Paper Title

Language-biased image classification: evaluation based on semantic representations

Authors

Yoann Lemesle, Masataka Sawayama, Guillermo Valle-Perez, Maxime Adolphe, Hélène Sauzéon, Pierre-Yves Oudeyer

Abstract

Humans show language-biased image recognition for word-embedded images, a phenomenon known as picture-word interference. Such interference depends on hierarchical semantic categories and reflects the fact that human language processing interacts strongly with visual processing. Similar to humans, recent artificial models jointly trained on text and images, e.g., OpenAI CLIP, show language-biased image classification. Exploring whether this bias leads to interference similar to that observed in humans can help clarify how far such models acquire hierarchical semantic representations from joint learning of language and vision. The present study introduces methodological tools from the cognitive science literature to assess the biases of artificial models. Specifically, we introduce a benchmark task to test whether words superimposed on images distort image classification across different category levels and, if so, whether the perturbation is due to a semantic representation shared between language and vision. Our dataset is a set of word-embedded images, combining natural image datasets with hierarchical word labels at the superordinate/basic category levels. Using this benchmark, we evaluate the CLIP model. We show that presenting words distorts the model's image classification across category levels, but the effect does not depend on the semantic relationship between the image and the embedded word. This suggests that the semantic word representation in CLIP's visual processing is not shared with the image representation, although the word representation strongly dominates for word-embedded images.
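
To make the benchmark procedure concrete, below is a minimal sketch of the word-superimposition probe using the Hugging Face `transformers` CLIP interface. The image file, label set, prompt template, and text-rendering details are illustrative assumptions for a single example, not the paper's dataset or exact protocol.

```python
# Minimal sketch: zero-shot CLIP classification of a clean image vs. the same
# image with a congruent or incongruent word rendered on top of it.
import torch
from PIL import Image, ImageDraw, ImageFont
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def superimpose_word(image: Image.Image, word: str) -> Image.Image:
    """Render `word` onto a copy of `image` (font and placement are assumptions)."""
    img = image.copy()
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    draw.text((img.width // 3, img.height // 2), word, fill="red", font=font)
    return img

def classify(image: Image.Image, labels: list[str]) -> dict[str, float]:
    """Zero-shot CLIP classification: probability over the candidate labels."""
    prompts = [f"a photo of a {label}" for label in labels]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(labels))
    return dict(zip(labels, logits.softmax(dim=-1)[0].tolist()))

# Compare the clean image against word-embedded variants; congruence can be
# varied at the basic ("dog" vs. "cat") or superordinate ("animal" vs.
# "vehicle") category level.
image = Image.open("dog.jpg")  # hypothetical example image
labels = ["dog", "cat", "car", "airplane"]
print("clean:      ", classify(image, labels))
print("congruent:  ", classify(superimpose_word(image, "dog"), labels))
print("incongruent:", classify(superimpose_word(image, "car"), labels))
```

The key comparison mirrors the picture-word interference paradigm: if the drop in accuracy for word-embedded images is larger when the word is semantically related to the image's category, the interference would indicate a shared semantic representation; the paper reports that for CLIP the distortion does not depend on this relationship.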
