Title
A Recurrent Vision-and-Language BERT for Navigation
Authors
Abstract
Accuracy of many visiolinguistic tasks has benefited significantly from the application of vision-and-language (V&L) BERT. However, its application for the task of vision-and-language navigation (VLN) remains limited. One reason for this is the difficulty of adapting the BERT architecture to the partially observable Markov decision process present in VLN, which requires history-dependent attention and decision making. In this paper, we propose a recurrent BERT model that is time-aware for use in VLN. Specifically, we equip the BERT model with a recurrent function that maintains cross-modal state information for the agent. Through extensive experiments on R2R and REVERIE, we demonstrate that our model can replace more complex encoder-decoder models to achieve state-of-the-art results. Moreover, our approach can be generalised to other transformer-based architectures, supports pre-training, and is capable of solving navigation and referring expression tasks simultaneously.
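The core idea of the abstract — a state token that is updated by cross-modal attention at each navigation step and fed back as input to the next step — can be illustrated with a toy sketch. This is not the paper's implementation: the function names, dimensions, and the single-head dot-product attention are all illustrative assumptions, standing in for the full BERT layers described in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    # Scaled dot-product attention: one query vector over a set of
    # key/value pairs (a stand-in for a full multi-head BERT layer).
    scores = keys @ query / np.sqrt(len(query))
    return softmax(scores) @ values

def navigation_step(state, lang_feats, vis_feats):
    # The state token attends over the instruction tokens and the current
    # visual observation; the attended result becomes the next state.
    # This feedback of the output state into the next step is the
    # "recurrent function" the abstract refers to (illustrative only).
    ctx = np.concatenate([lang_feats, vis_feats], axis=0)
    new_state = attend(state, ctx, ctx)
    # Action scores: similarity between the new state and candidate views.
    action_logits = vis_feats @ new_state
    return new_state, action_logits

rng = np.random.default_rng(0)
d = 8
state = rng.standard_normal(d)           # initial state (e.g. from the instruction)
lang = rng.standard_normal((5, d))       # 5 instruction tokens
for t in range(3):                       # 3 navigation steps
    views = rng.standard_normal((4, d))  # 4 candidate views at this viewpoint
    state, logits = navigation_step(state, lang, views)
    action = int(np.argmax(logits))      # greedily pick the next view
```

Because the same model weights (here, the attention function) are reused at every step while only the state vector evolves, the partially observable decision process is handled without a separate decoder, which is what lets a single BERT-style encoder replace an encoder-decoder pipeline.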