Paper Title
BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps
Paper Authors
Paper Abstract
Learning to follow instructions is of fundamental importance to autonomous agents for vision-and-language navigation (VLN). In this paper, we study how an agent can navigate long paths when learning from a corpus that consists of shorter ones. We show that existing state-of-the-art agents do not generalize well. To this end, we propose BabyWalk, a new VLN agent that learns to navigate by decomposing long instructions into shorter ones (BabySteps) and completing them sequentially. A specially designed memory buffer is used by the agent to turn its past experiences into contexts for future steps. The learning process is composed of two phases. In the first phase, the agent uses imitation learning from demonstrations to accomplish BabySteps. In the second phase, the agent uses curriculum-based reinforcement learning to maximize rewards on navigation tasks with increasingly longer instructions. We create two new benchmark datasets (of long navigation tasks) and use them in conjunction with existing ones to examine BabyWalk's generalization ability. Empirical results show that BabyWalk achieves state-of-the-art results on several metrics; in particular, it is able to follow long instructions better. The code and the datasets are released on our project page https://github.com/Sha-Lab/babywalk.
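The abstract's core idea — decompose a long instruction into BabySteps, execute them in order, and carry past experience forward via a memory buffer — can be illustrated with a minimal sketch. This is not the paper's implementation (see the project page for that); the function names `babywalk_rollout` and `execute_step`, the bounded-buffer memory, and the toy policy below are all illustrative assumptions.

```python
from collections import deque


def babywalk_rollout(baby_steps, execute_step, memory_size=3):
    """Sequentially complete BabySteps, passing a summary of past
    (instruction, trajectory) pairs as context for each new step.

    `execute_step(instruction, context)` is a hypothetical policy call
    that returns the trajectory taken for one sub-instruction; the real
    agent uses a learned policy and a learned memory representation.
    """
    memory = deque(maxlen=memory_size)  # stand-in for the memory buffer
    full_path = []
    for instruction in baby_steps:
        trajectory = execute_step(instruction, list(memory))
        memory.append((instruction, trajectory))  # past experience -> context
        full_path.extend(trajectory)
    return full_path


# Toy policy: each sub-instruction maps to its last word as a "node";
# a real policy would condition on both the instruction and the context.
toy_policy = lambda instr, ctx: [instr.split()[-1]]

path = babywalk_rollout(
    ["walk to the hallway", "turn into the kitchen", "stop at the table"],
    toy_policy,
)
# path concatenates the per-BabyStep trajectories in order
```

The point of the buffer is that each BabySteps execution sees where the agent has already been, rather than treating every sub-instruction as an isolated episode.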