Paper Title
Calibrated neighborhood aware confidence measure for deep metric learning
Paper Authors
Paper Abstract
Deep metric learning has achieved promising improvements in recent years following the success of deep learning. It has been successfully applied to problems in few-shot learning, image retrieval, and open-set classification. However, measuring the confidence of a deep metric learning model and identifying unreliable predictions remain an open challenge. This paper focuses on defining a calibrated and interpretable confidence measure that closely reflects the model's classification accuracy. While performing similarity comparisons directly in the latent space using the learned distance metric, our approach approximates the distribution of data points for each class with a Gaussian kernel smoothing function. A post-processing calibration algorithm that applies the proposed confidence measure on a held-out validation dataset improves the generalization and robustness of state-of-the-art deep metric learning models while providing an interpretable estimate of confidence. Extensive tests on four popular benchmark datasets (Caltech-UCSD Birds, Stanford Online Products, Stanford Car-196, and In-shop Clothes Retrieval) show consistent improvements even in the presence of distribution shifts in the test data caused by additional noise or adversarial examples.
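The abstract's core idea, scoring a query against a Gaussian kernel density estimate of each class's embeddings and normalizing the scores into a confidence, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function names, the squared-Euclidean distance (standing in for the learned metric), and the fixed `bandwidth` parameter are all assumptions.

```python
import numpy as np

def class_confidence(query, class_embeddings, bandwidth=1.0):
    """Kernel-density score of `query` under one class's embedding set.

    Assumes squared Euclidean distance as a stand-in for the learned
    metric; a Gaussian kernel smooths over the class's data points.
    """
    d2 = np.sum((class_embeddings - query) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2.0 * bandwidth ** 2)))

def predict_with_confidence(query, classes, bandwidth=1.0):
    """Return (predicted class, normalized confidence in [0, 1]).

    `classes` maps a label to an (n_i, d) array of embeddings for that
    class; confidences are kernel scores normalized across classes.
    """
    scores = {c: class_confidence(query, emb, bandwidth)
              for c, emb in classes.items()}
    total = sum(scores.values())
    best = max(scores, key=scores.get)
    return best, scores[best] / total
```

A held-out validation set would then be used, as the abstract describes, to calibrate the `bandwidth` (or a related temperature) so that the normalized confidence tracks empirical accuracy.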