Paper Title
Learning Super-Features for Image Retrieval
Paper Authors
Paper Abstract
Methods that combine local and global features have recently shown excellent performance on multiple challenging deep image retrieval benchmarks, but their use of local features raises at least two issues. First, these local features simply boil down to the localized map activations of a neural network, and hence can be extremely redundant. Second, they are typically trained with a global loss that only acts on top of an aggregation of local features; by contrast, testing is based on local feature matching, which creates a discrepancy between training and testing. In this paper, we propose a novel architecture for deep image retrieval, based solely on mid-level features that we call Super-features. These Super-features are constructed by an iterative attention module and constitute an ordered set in which each element focuses on a localized and discriminant image pattern. For training, they require only image labels. A contrastive loss operates directly at the level of Super-features and focuses on those that match across images. A second complementary loss encourages diversity. Experiments on common landmark retrieval benchmarks validate that Super-features substantially outperform state-of-the-art methods when using the same number of features, and only require a significantly smaller memory footprint to match their performance. Code and models are available at: https://github.com/naver/FIRe.
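To make the core idea of the abstract concrete, the sketch below illustrates one plausible form of an iterative attention module that turns a CNN feature map into a small, ordered set of Super-features: a fixed bank of learnable templates repeatedly attends over the local activations, so each template converges to a localized, discriminative pattern. This is a minimal illustration, not the authors' exact FIRe module; the layer choices (GRU update, LayerNorm, hyper-parameters such as 64 Super-features and 3 iterations) are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeAttentionPooling(nn.Module):
    """Hypothetical sketch of an iterative attention module producing
    an ordered set of mid-level "Super-features" from a dense feature map.
    Layer choices and hyper-parameters are assumptions for illustration."""

    def __init__(self, dim=256, num_super_features=64, num_iterations=3):
        super().__init__()
        # One learnable template per Super-feature; the output set is ordered
        # because template i always yields Super-feature i.
        self.templates = nn.Parameter(torch.randn(num_super_features, dim) * 0.02)
        self.num_iterations = num_iterations
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.update = nn.GRUCell(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feature_map):
        # feature_map: (B, D, H, W) local activations from a CNN backbone.
        b, d, h, w = feature_map.shape
        x = feature_map.flatten(2).transpose(1, 2)            # (B, H*W, D)
        k, v = self.to_k(x), self.to_v(x)
        sf = self.templates.unsqueeze(0).expand(b, -1, -1)     # (B, N, D)
        for _ in range(self.num_iterations):
            # Each template queries the local features it is most related to...
            q = self.to_q(self.norm(sf))
            attn = torch.softmax(q @ k.transpose(1, 2) / d ** 0.5, dim=-1)
            read = attn @ v                                    # (B, N, D)
            # ...and is refined from what it attended to.
            sf = self.update(read.reshape(-1, d), sf.reshape(-1, d)).view(b, -1, d)
        return F.normalize(sf, dim=-1)   # ordered, L2-normalized Super-features

# Usage sketch: pool a backbone feature map into 64 Super-features per image.
features = torch.randn(2, 256, 32, 32)          # dummy CNN activations
super_features = IterativeAttentionPooling()(features)
print(super_features.shape)                      # torch.Size([2, 64, 256])
```

In the setup the abstract describes, a contrastive loss would then be applied directly to Super-features that match across images of the same landmark, with a complementary loss encouraging the attention maps of different templates to stay diverse.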