Title
Adaptive Neighborhood Metric Learning
Authors
Abstract
In this paper, we reveal that metric learning suffers from a serious inseparability problem when informative sample mining is absent. Since inseparable samples are often mixed with hard samples, current informative sample mining strategies used to handle the inseparability problem may introduce side effects, such as instability of the objective function. To alleviate this problem, we propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML). In ANML, we design two thresholds to adaptively identify the inseparable similar and dissimilar samples during training, so that inseparable sample removal and metric parameter learning are carried out in the same procedure. Because the proposed ANML objective is non-continuous, we develop an ingenious surrogate, named the \emph{log-exp mean function}, to construct a continuous formulation that can be solved efficiently by gradient descent. Similar to the Triplet loss, ANML can be used to learn both linear and deep embeddings. By analyzing the proposed method, we find that it has several interesting properties. For example, when ANML is used to learn a linear embedding, well-known metric learning algorithms such as large margin nearest neighbor (LMNN) and neighbourhood components analysis (NCA) become special cases of ANML under particular parameter settings. When it is used to learn deep features, state-of-the-art deep metric learning algorithms such as the Triplet loss, Lifted structure loss, and Multi-similarity loss become special cases of ANML. Furthermore, the \emph{log-exp mean function} proposed in our method offers a new perspective for reviewing deep metric learning methods such as Proxy-NCA and the N-pairs loss. Finally, promising experimental results demonstrate the effectiveness of the proposed method.
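The abstract does not spell out the exact form of the log-exp mean function, but smooth surrogates of this kind are commonly built from the log-sum-exp trick: a temperature parameter interpolates between the arithmetic mean and the maximum, which is what makes a non-continuous "select the worst samples" objective differentiable. The sketch below is an illustrative assumption of such a smoothing function, not the paper's exact formulation; the name `log_exp_mean` and the parameter `lam` are placeholders.

```python
import numpy as np

def log_exp_mean(x, lam):
    """Illustrative smooth mean: (1/lam) * log(mean(exp(lam * x))).

    As lam -> 0 this approaches the arithmetic mean of x; as lam -> +inf
    it approaches max(x). NOTE: this is a common log-sum-exp smoothing
    form assumed for illustration, not necessarily ANML's exact definition.
    """
    x = np.asarray(x, dtype=float)
    # Subtract the max before exponentiating for numerical stability
    # (the standard log-sum-exp trick).
    m = (lam * x).max()
    return (m + np.log(np.mean(np.exp(lam * x - m)))) / lam

# Small temperature -> close to the plain mean; large -> close to the max.
print(log_exp_mean([1.0, 2.0, 3.0], 1e-6))   # close to 2.0
print(log_exp_mean([1.0, 2.0, 3.0], 100.0))  # close to 3.0
```

A surrogate of this shape lets gradient descent softly emphasize the hardest (largest-loss) samples while still receiving gradient signal from all of them, which matches the abstract's claim that the continuous formulation can be optimized by gradient descent.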