Paper Title

EMA-VIO: Deep Visual-Inertial Odometry with External Memory Attention

Paper Authors

Zheming Tu, Changhao Chen, Xianfei Pan, Ruochen Liu, Jiarui Cui, Jun Mao

Paper Abstract

Accurate and robust localization is a fundamental need for mobile agents. Visual-inertial odometry (VIO) algorithms exploit information from camera and inertial sensors to estimate position and translation. Recent deep learning based VIO models have attracted attention as they provide pose information in a data-driven way, without the need to design hand-crafted algorithms. Existing learning based VIO models rely on recurrent models to fuse multimodal data and process sensor signals, which are hard to train and not efficient enough. We propose a novel learning based VIO framework with external memory attention that effectively and efficiently combines visual and inertial features for state estimation. Our proposed model is able to estimate pose accurately and robustly, even in challenging scenarios, e.g., on overcast days and water-filled ground, in which it is difficult for traditional VIO algorithms to extract visual features. Experiments validate that it outperforms both traditional and learning based VIO baselines in different scenes.
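
The abstract does not give architectural details, but the core idea of replacing a recurrent fusion module with attention over an external memory can be illustrated with a minimal sketch. The PyTorch-style snippet below is a hypothetical illustration only, not the authors' implementation: the class name ExternalMemoryFusion, the feature dimensions, the memory size, and the pose-regression head are all assumptions for exposition.

    # Hypothetical sketch (not the paper's code): fuse visual and inertial
    # features by attending over a learnable external memory bank, instead of
    # a recurrent fusion module. All names and sizes are assumptions.
    import torch
    import torch.nn as nn

    class ExternalMemoryFusion(nn.Module):
        def __init__(self, feat_dim=256, mem_slots=32, num_heads=4):
            super().__init__()
            # Learnable external memory bank: (mem_slots, feat_dim)
            self.memory = nn.Parameter(torch.randn(mem_slots, feat_dim) * 0.02)
            # Project concatenated visual + inertial features to a joint query
            self.query_proj = nn.Linear(2 * feat_dim, feat_dim)
            # Multi-head attention over the memory slots
            self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
            # Regress a 6-DoF pose increment (3 translation + 3 rotation parameters)
            self.pose_head = nn.Linear(feat_dim, 6)

        def forward(self, visual_feat, inertial_feat):
            # visual_feat, inertial_feat: (batch, feat_dim) for one time step
            query = self.query_proj(torch.cat([visual_feat, inertial_feat], dim=-1))
            query = query.unsqueeze(1)                        # (batch, 1, feat_dim)
            mem = self.memory.unsqueeze(0).expand(query.size(0), -1, -1)
            fused, _ = self.attn(query, mem, mem)             # attend over memory
            return self.pose_head(fused.squeeze(1))           # (batch, 6) pose update

The design intent sketched here is that a fixed-size memory attended by a single query can be trained in parallel over a sequence, which is one plausible reading of the abstract's claim that attention-based fusion is easier to train and more efficient than recurrent fusion.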
