Paper Title
Flamingo: a Visual Language Model for Few-Shot Learning
Paper Authors
Paper Abstract
Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.
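To make the few-shot, in-context usage described in the abstract concrete, the sketch below shows how a handful of task-specific examples and a final query might be interleaved with image placeholders into a single prompt for a Flamingo-style VLM. The `ImageRef` and `InterleavedPrompt` helpers and the `<image>` placeholder convention are illustrative assumptions, not the paper's actual interface.

```python
# A minimal sketch of assembling a few-shot, interleaved image-text prompt
# for a Flamingo-style VLM. The classes and the "<image>" placeholder token
# are hypothetical; the real model consumes images and text through its own
# tokenization and vision pipeline.
from dataclasses import dataclass, field
from typing import List, Union


@dataclass
class ImageRef:
    """Placeholder for an image input (e.g. a file path or URL)."""
    source: str


@dataclass
class InterleavedPrompt:
    """An ordered sequence of text fragments and image references."""
    elements: List[Union[str, ImageRef]] = field(default_factory=list)

    def add_example(self, image: ImageRef, question: str, answer: str) -> None:
        # Each few-shot example interleaves an image with its question/answer text.
        self.elements += [image, f"Q: {question} A: {answer}"]

    def add_query(self, image: ImageRef, question: str) -> None:
        # The final query leaves the answer open for the model to complete.
        self.elements += [image, f"Q: {question} A:"]

    def render(self) -> str:
        # Serialize to a single string, marking each image position with a token.
        return " ".join("<image>" if isinstance(e, ImageRef) else e
                        for e in self.elements)


if __name__ == "__main__":
    prompt = InterleavedPrompt()
    prompt.add_example(ImageRef("cat.jpg"), "What animal is this?", "A cat.")
    prompt.add_example(ImageRef("bus.jpg"), "What color is the bus?", "Red.")
    prompt.add_query(ImageRef("query.jpg"), "How many people are there?")
    print(prompt.render())
```

The point of the sketch is only the prompt structure: adapting the model to a new task amounts to changing the in-context examples, with no gradient updates or task-specific fine-tuning.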