Paper Title
Co-Attention for Conditioned Image Matching
Paper Authors
Paper Abstract
We propose a new approach to determine correspondences between image pairs in the wild under large changes in illumination, viewpoint, context, and material. While other approaches find correspondences between pairs of images by treating the images independently, we instead condition on both images to implicitly take account of the differences between them. To achieve this, we introduce (i) a spatial attention mechanism (a co-attention module, CoAM) for conditioning the learned features on both images, and (ii) a distinctiveness score used to choose the best matches at test time. CoAM can be added to standard architectures and trained using self-supervision or supervised data, and achieves a significant performance improvement under hard conditions, e.g. large viewpoint changes. We demonstrate that models using CoAM achieve state of the art or competitive results on a wide range of tasks: local matching, camera localization, 3D reconstruction, and image stylization.
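To make the mechanism described in the abstract concrete, here is a minimal PyTorch sketch of a CoAM-style block: the features of one image are conditioned on the other via spatial cross-attention, and a simple distinctiveness score ranks candidate matches. The module layout, the scaled dot-product attention, and the top-two-gap score are illustrative assumptions made for exposition, not the authors' released implementation.

```python
# Minimal sketch of a CoAM-style co-attention block (assumed, not the paper's code).
import torch
import torch.nn as nn


class CoAM(nn.Module):
    """Condition the features of image A on image B via spatial cross-attention."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions producing query/key/value maps (an assumed design).
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_a.shape
        q = self.query(feat_a).flatten(2).transpose(1, 2)    # (B, HW_a, C)
        k = self.key(feat_b).flatten(2)                       # (B, C, HW_b)
        v = self.value(feat_b).flatten(2).transpose(1, 2)     # (B, HW_b, C)
        # Scaled dot-product attention from every location in A over all of B.
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)        # (B, HW_a, HW_b)
        attended = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        # Fuse A's own features with the B-conditioned features.
        return self.fuse(torch.cat([feat_a, attended], dim=1))


def distinctiveness(similarity: torch.Tensor) -> torch.Tensor:
    """Illustrative score: gap between the best and second-best similarity.

    similarity: (N_a, N_b) matrix of descriptor similarities between the two images.
    A larger gap suggests a more reliable, less ambiguous match.
    """
    top2 = similarity.topk(2, dim=1).values                   # (N_a, 2)
    return top2[:, 0] - top2[:, 1]


# Usage example with random feature maps standing in for backbone features.
coam = CoAM(channels=256)
fa, fb = torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32)
fa_conditioned = coam(fa, fb)   # features of image A, conditioned on image B
```

In this sketch the block is symmetric in usage: applying `coam(fb, fa)` would condition B on A, and the conditioned features could then be matched and filtered with the distinctiveness score at test time.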