Paper Title

Adversarial Learning of Hard Positives for Place Recognition

Authors

Wenxuan Fang, Kai Zhang, Yoli Shavit, Wensen Feng

Abstract

Image retrieval methods for place recognition learn global image descriptors that are used for fetching geo-tagged images at inference time. Recent works have suggested employing weak and self-supervision for mining hard positives and hard negatives in order to improve localization accuracy and robustness to visibility changes (e.g. in illumination or view point). However, generating hard positives, which is essential for obtaining robustness, is still limited to hard-coded or global augmentations. In this work we propose an adversarial method to guide the creation of hard positives for training image retrieval networks. Our method learns local and global augmentation policies which will increase the training loss, while the image retrieval network is forced to learn more powerful features for discriminating increasingly difficult examples. This approach allows the image retrieval network to generalize beyond the hard examples presented in the data and learn features that are robust to a wide range of variations. Our method achieves state-of-the-art recalls on the Pitts250 and Tokyo 24/7 benchmarks and outperforms recent image retrieval methods on the rOxford and rParis datasets by a noticeable margin.
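To make the adversarial idea concrete, below is a minimal PyTorch sketch of the min-max training loop the abstract describes: an augmentation policy is updated to increase the retrieval loss (producing harder positives), while the descriptor network is updated to decrease it. This is an illustrative simplification, not the authors' implementation: the module names (RetrievalNet, AugmentationPolicy, adversarial_step) are hypothetical, only a global photometric policy (brightness/contrast) is shown rather than the paper's local and global policies, and the backbone is a toy stand-in for a real place-recognition architecture.

```python
# Hedged sketch: adversarial hard-positive generation for image retrieval.
# All names and the augmentation parameterization are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RetrievalNet(nn.Module):
    """Toy global-descriptor network (stand-in for a NetVLAD/GeM-style backbone)."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):
        # L2-normalized global descriptor.
        return F.normalize(self.backbone(x), dim=-1)


class AugmentationPolicy(nn.Module):
    """Learned global photometric parameters (brightness, contrast) that
    turn an easy positive into a harder one. A simplification of the
    paper's local + global augmentation policies."""
    def __init__(self):
        super().__init__()
        self.params = nn.Parameter(torch.zeros(2))  # [brightness, contrast]

    def forward(self, img):
        brightness = 0.3 * torch.tanh(self.params[0])       # bounded shift
        contrast = 1.0 + 0.3 * torch.tanh(self.params[1])   # bounded scale
        return torch.clamp(contrast * img + brightness, 0.0, 1.0)


def adversarial_step(anchor, positive, negative, net, policy,
                     opt_net, opt_policy, margin=0.1):
    # 1) Policy step: make the positive harder by *maximizing* the loss.
    hard_pos = policy(positive)
    loss_adv = F.triplet_margin_loss(net(anchor), net(hard_pos),
                                     net(negative), margin=margin)
    opt_policy.zero_grad()
    (-loss_adv).backward()   # gradient ascent on the retrieval loss
    opt_policy.step()

    # 2) Retrieval step: learn descriptors robust to the harder positive.
    hard_pos = policy(positive).detach()
    loss = F.triplet_margin_loss(net(anchor), net(hard_pos),
                                 net(negative), margin=margin)
    opt_net.zero_grad()
    loss.backward()
    opt_net.step()
    return loss.item()


if __name__ == "__main__":
    net, policy = RetrievalNet(), AugmentationPolicy()
    opt_net = torch.optim.Adam(net.parameters(), lr=1e-4)
    opt_policy = torch.optim.Adam(policy.parameters(), lr=1e-3)
    a, p, n = (torch.rand(4, 3, 64, 64) for _ in range(3))
    print(adversarial_step(a, p, n, net, policy, opt_net, opt_policy))
```

The alternating updates implement the min-max game sketched in the abstract: as the policy pushes the positives toward harder appearance changes, the retrieval network is forced to learn features that remain discriminative under those changes, rather than only under the hard examples already present in the data.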
