Paper Title

VPR-Bench: An Open-Source Visual Place Recognition Evaluation Framework with Quantifiable Viewpoint and Appearance Change

Paper Authors

Mubariz Zaffar, Sourav Garg, Michael Milford, Julian Kooij, David Flynn, Klaus McDonald-Maier, Shoaib Ehsan

Paper Abstract

Visual Place Recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance conditions and viewpoint changes and with computational constraints. VPR is related to the concepts of localisation, loop closure, image retrieval and is a critical component of many autonomous navigation systems ranging from autonomous vehicles to drones and computer vision systems. While the concept of place recognition has been around for many years, VPR research has grown rapidly as a field over the past decade due to improving camera hardware and its potential for deep learning-based techniques, and has become a widely studied topic in both the computer vision and robotics communities. This growth however has led to fragmentation and a lack of standardisation in the field, especially concerning performance evaluation. Moreover, the notion of viewpoint and illumination invariance of VPR techniques has largely been assessed qualitatively and hence ambiguously in the past. In this paper, we address these gaps through a new comprehensive open-source framework for assessing the performance of VPR techniques, dubbed "VPR-Bench". VPR-Bench (Open-sourced at: https://github.com/MubarizZaffar/VPR-Bench) introduces two much-needed capabilities for VPR researchers: firstly, it contains a benchmark of 12 fully-integrated datasets and 10 VPR techniques, and secondly, it integrates a comprehensive variation-quantified dataset for quantifying viewpoint and illumination invariance. We apply and analyse popular evaluation metrics for VPR from both the computer vision and robotics communities, and discuss how these different metrics complement and/or replace each other, depending upon the underlying applications and system requirements.
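
Among the evaluation metrics discussed in the abstract, precision-recall-based summaries such as AUC-PR and Recall@100%Precision are commonly used in the VPR and loop-closure literature. As a rough illustration only, and not the VPR-Bench API, the sketch below shows how such metrics can be computed from per-query matching scores and ground truth; all variable names and values are hypothetical.

```python
import numpy as np

# Hypothetical inputs: similarity score of each query's best-matched reference
# image, and binary ground truth (1 = the retrieved reference is the correct place).
scores = np.array([0.91, 0.85, 0.77, 0.64, 0.52, 0.40])
ground_truth = np.array([1, 1, 0, 1, 0, 0])

# Sweep a descending threshold over the matching scores to trace the precision-recall curve.
order = np.argsort(-scores)
gt_sorted = ground_truth[order]
tp = np.cumsum(gt_sorted)        # true positives accepted at each threshold
fp = np.cumsum(1 - gt_sorted)    # false positives accepted at each threshold
precision = tp / (tp + fp)
recall = tp / ground_truth.sum()

# Two common scalar summaries: area under the PR curve (trapezoidal rule) and
# Recall@100%Precision, often preferred when false positives are costly (e.g. loop closure).
auc_pr = np.sum(np.diff(recall) * (precision[1:] + precision[:-1]) / 2.0)
r_at_100p = recall[precision == 1.0].max() if np.any(precision == 1.0) else 0.0

print(f"AUC-PR: {auc_pr:.3f}  Recall@100%Precision: {r_at_100p:.3f}")
```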
