Paper Title

Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks

Authors

Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, Jianfeng Gao

Abstract

Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.
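The abstract describes Oscar's key idea: each image-text pair is represented as a triple of caption word tokens, detected object tags, and image region features, all concatenated into one input sequence so that the tags act as anchor points linking words to regions. A minimal sketch of that input construction is below; the hidden size, toy embedding table, and tensor shapes are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hedged sketch of Oscar-style input construction: the triple
# (word tokens, object tags, region features). Dimensions and the
# embedding scheme are illustrative; the paper uses BERT-sized models
# and detector-extracted region features.

rng = np.random.default_rng(0)
hidden = 8  # illustrative hidden size

caption_tokens = ["a", "dog", "on", "a", "couch"]
object_tags = ["dog", "couch"]                   # detected by an object detector
region_feats = rng.normal(size=(2, hidden))      # one feature vector per region

# Toy embedding table shared by caption words and object tags, so a tag
# like "dog" maps to the same vector as the caption word "dog" -- this is
# what lets tags serve as anchor points between the two modalities.
vocab = {w: i for i, w in enumerate(sorted(set(caption_tokens + object_tags)))}
embed = rng.normal(size=(len(vocab), hidden))

word_emb = embed[[vocab[w] for w in caption_tokens]]
tag_emb = embed[[vocab[t] for t in object_tags]]

# The transformer input is the concatenation [words; tags; regions];
# self-attention over this sequence learns word-region alignments.
model_input = np.concatenate([word_emb, tag_emb, region_feats], axis=0)
print(model_input.shape)  # (5 words + 2 tags + 2 regions, hidden) -> (9, 8)
```

In the actual model, a pre-trained BERT-style transformer consumes this sequence, with masked-token and contrastive objectives applied during pre-training.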
