Title
GMMLoc: Structure Consistent Visual Localization with Gaussian Mixture Models
Authors
Abstract
Incorporating prior structure information into visual state estimation can generally improve localization performance. In this letter, we aim to resolve the paradox between accuracy and efficiency when coupling visual factors with structure constraints. To this end, we present a cross-modality method that tracks a camera in a prior map modelled by a Gaussian Mixture Model (GMM). With the pose initially estimated by the front-end, local visual observations and map components are associated efficiently, and the visual structure from triangulation is refined simultaneously. By introducing hybrid structure factors into the joint optimization, the camera poses are bundle-adjusted together with the local visual structure. Evaluating our complete system, GMMLoc, on a public dataset, we show that it provides centimeter-level localization accuracy with only trivial computational overhead. In addition, comparative studies with state-of-the-art vision-dominant state estimators demonstrate the competitive performance of our method.
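To make the core idea concrete: a prior map modelled as a GMM can score triangulated visual landmarks by their likelihood under the mixture, and per-component responsibilities can guide the association between local observations and map components. The sketch below is a minimal, generic illustration of that GMM evaluation, not the paper's actual implementation; all function and variable names are assumptions for this example.

```python
import numpy as np

def gmm_log_likelihood(points, weights, means, covs):
    """Log-likelihood of 3D points under a Gaussian Mixture Model.

    Illustrative sketch of how a GMM prior map could score
    triangulated landmarks (names are hypothetical, not GMMLoc's API).
    points:  (N, 3) array of 3D landmark positions.
    weights: list of K mixture weights summing to 1.
    means:   list of K (3,) component means.
    covs:    list of K (3, 3) component covariances.
    Returns an (N,) array of log-likelihoods.
    """
    points = np.atleast_2d(points)              # (N, 3)
    n, d = points.shape
    log_terms = []
    for w, mu, cov in zip(weights, means, covs):
        diff = points - mu                      # (N, 3)
        inv = np.linalg.inv(cov)
        # Squared Mahalanobis distance of each point to this component.
        maha = np.einsum('ni,ij,nj->n', diff, inv, diff)
        log_norm = -0.5 * (d * np.log(2 * np.pi)
                           + np.log(np.linalg.det(cov)))
        log_terms.append(np.log(w) + log_norm - 0.5 * maha)
    # Log-sum-exp over components for numerical stability.
    log_terms = np.stack(log_terms, axis=0)     # (K, N)
    m = log_terms.max(axis=0)
    return m + np.log(np.exp(log_terms - m).sum(axis=0))
```

In a localization pipeline of this kind, the per-component terms (before the log-sum-exp) would also give responsibilities for landmark-to-component association, and the Mahalanobis term corresponds to the kind of structure residual that could enter a joint optimization.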