Paper Title
A Thousand Words Are Worth More Than a Picture: Natural Language-Centric Outside-Knowledge Visual Question Answering

Authors

Feng Gao, Qing Ping, Govind Thattai, Aishwarya Reganti, Ying Nian Wu, Prem Natarajan

Abstract
Outside-knowledge visual question answering (OK-VQA) requires the agent to comprehend the image, make use of relevant knowledge from the entire web, and digest all the information to answer the question. Most previous works address the problem by first fusing the image and question in a multi-modal space, which is inflexible for further fusion with a vast amount of external knowledge. In this paper, we call for a paradigm shift for the OK-VQA task: transform the image into plain text, so that knowledge passage retrieval and generative question answering can both be carried out in the natural language space. This paradigm takes advantage of the sheer volume of gigantic knowledge bases and the richness of pre-trained language models. A Transform-Retrieve-Generate (TRiG) framework is proposed, which can be used plug-and-play with alternative image-to-text models and textual knowledge bases. Experimental results show that our TRiG framework outperforms all state-of-the-art supervised methods by an absolute margin of at least 11.1%.
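The abstract describes a three-stage pipeline: transform the image into plain text, retrieve knowledge passages in the natural language space, and generate the answer from the retrieved text. The following is only a minimal toy sketch of that control flow; the function names and stand-in logic (word-overlap retrieval, a trivial "generator") are illustrative assumptions, not the paper's actual models or API:

```python
# Toy sketch of a Transform-Retrieve-Generate (TRiG) style pipeline.
# The real system uses an image-to-text model, a dense passage retriever,
# and a generative language model; the stand-ins below are illustrative only.

def transform(image_description: str) -> str:
    """Stand-in for the image-to-text stage: here the 'image' is already a caption."""
    return image_description

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank passages by content-word overlap with the query."""
    q_words = {w for w in query.lower().split() if len(w) > 2}
    scored = sorted(
        knowledge_base,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(question: str, context: str, passages: list[str]) -> str:
    """Toy 'generator': take the last word of the top passage as the answer."""
    return passages[0].split()[-1] if passages else "unknown"

def answer(question: str, image_description: str, kb: list[str]) -> str:
    context = transform(image_description)        # 1. Transform: image -> text
    query = f"{question} {context}"               # fuse question and caption as text
    passages = retrieve(query, kb)                # 2. Retrieve knowledge passages
    return generate(question, context, passages)  # 3. Generate the answer

kb = [
    "a red double decker bus is commonly seen in London",
    "a panda mostly eats bamboo",
]
print(answer("what do these animals eat", "a panda sitting in a zoo", kb))
```

Because every stage operates on plain text, each component can be swapped independently, which is the plug-and-play property the abstract highlights.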