Paper Title
IF-Net: An Illumination-invariant Feature Network
Paper Authors
Paper Abstract
Feature descriptor matching is a critical step in many computer vision applications such as image stitching, image retrieval, and visual localization. However, it is often affected by practical factors that degrade its performance. Among these factors, illumination variation is the most influential, yet no previous descriptor-learning work focuses on addressing this problem. In this paper, we propose IF-Net, which aims to generate a robust and generic descriptor under severe illumination changes. We find that not only the kind of training data matters but also the order in which it is presented. To this end, we investigate several dataset scheduling methods and propose a separation training scheme to improve matching accuracy. Furthermore, we propose an ROI loss and a hard-positive mining strategy along with the training scheme, which strengthen the ability of the generated descriptor to handle large illumination changes. We evaluate our approach on a public patch matching benchmark and achieve the best results compared with several state-of-the-art methods. To demonstrate its practicality, we further evaluate IF-Net on the task of visual localization under scenes with large illumination changes, and it achieves the best localization accuracy.
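The abstract names hard-positive mining for descriptor learning but does not specify its exact form or the ROI loss. The sketch below is only a minimal illustration, in PyTorch, of what hard-positive mining typically looks like in a triplet-style descriptor loss: for each anchor patch, the most dissimilar positive (e.g., the same patch under a different illumination) is selected, together with the hardest in-batch negative. All function names, tensor shapes, and the margin value are assumptions for illustration and are not taken from IF-Net.

```python
import torch
import torch.nn.functional as F


def hard_positive_mining_loss(anchor, positives, margin=1.0):
    """Illustrative hard-positive mining triplet loss (not IF-Net's ROI loss).

    anchor:    (B, D) L2-normalized descriptors of anchor patches.
    positives: (B, K, D) K positive candidates per anchor, e.g. the same
               patch captured under K different illumination conditions.
    """
    B, K, D = positives.shape

    # Distance from each anchor to each of its K positive candidates.
    pos_dist = torch.cdist(anchor.unsqueeze(1), positives).squeeze(1)   # (B, K)

    # Hard positive: the least similar positive (largest distance).
    hard_pos = pos_dist.max(dim=1).values                               # (B,)

    # Hardest in-batch negative: the closest descriptor belonging to a
    # different anchor. Own positives are masked out before the min.
    all_pos = positives.reshape(B * K, D)
    neg_dist = torch.cdist(anchor, all_pos)                             # (B, B*K)
    own_mask = torch.eye(B, dtype=torch.bool).repeat_interleave(K, dim=1)
    neg_dist = neg_dist.masked_fill(own_mask, float("inf"))
    hard_neg = neg_dist.min(dim=1).values                               # (B,)

    # Standard triplet margin loss on the mined hard pairs.
    return F.relu(margin + hard_pos - hard_neg).mean()
```

Mining the hardest positive forces the network to pull together descriptors of the same patch even under the most adverse illumination pair in the batch, which is the intuition behind combining such a mining strategy with an illumination-focused training scheme.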