Paper Title

Deep Metric Learning with Chance Constraints

Paper Authors

Yeti Z. Gurbuz, Ogul Can, A. Aydin Alatan

Paper Abstract

Deep metric learning (DML) aims to minimize the empirical expected loss of pairwise intra-/inter-class proximity violations in the embedding space. We relate DML to a feasibility problem of finite chance constraints. We show that the minimizer of proxy-based DML satisfies certain chance constraints, and that the worst-case generalization performance of proxy-based methods can be characterized by the radius of the smallest ball around a class proxy that covers the entire domain of the corresponding class samples, suggesting that multiple proxies per class help performance. To provide a scalable algorithm and exploit more proxies, we consider the chance constraints implied by the minimizers of proxy-based DML instances and reformulate DML as finding a feasible point in the intersection of such constraints, resulting in a problem that can be approximately solved by iterative projections. Simply put, we repeatedly train a regularized proxy-based loss and re-initialize the proxies with the embeddings of deliberately selected new samples. We apply our method with 4 well-established DML losses and show its effectiveness with extensive evaluations on 4 popular DML benchmarks. Code is available at: https://github.com/yetigurbuz/ccp-dml
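
The iterative procedure described in the abstract, training a proxy-based loss and then re-initializing the proxies with embeddings of selected samples, can be outlined as follows. This is a minimal sketch under stated assumptions, not the authors' implementation (see the repository linked above): the embedding `model`, the data `loader`, the ProxyNCA-style loss, the rule of picking the sample closest to each class mean as the new proxy, and all hyperparameters are illustrative choices, and the regularization term mentioned in the abstract is omitted here.

```python
# Minimal sketch of the "train, then re-initialize proxies" outer loop.
# NOT the authors' implementation; model, loader, loss, and the
# proxy-selection rule below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProxyNCALoss(nn.Module):
    """A common proxy-based DML loss (ProxyNCA-style), one proxy per class."""
    def __init__(self, num_classes: int, embed_dim: int):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine-similarity logits between L2-normalized embeddings and proxies.
        logits = F.normalize(embeddings) @ F.normalize(self.proxies).t()
        return F.cross_entropy(logits / 0.1, labels)


def reinitialize_proxies(loss_fn, model, loader, device):
    """Re-set each class proxy to the embedding of a chosen sample of that class.
    Here we pick the sample closest to the class mean embedding (an assumption;
    the paper uses its own deliberate selection rule)."""
    model.eval()
    feats, labs = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(F.normalize(model(x.to(device))))
            labs.append(y.to(device))
    feats, labs = torch.cat(feats), torch.cat(labs)
    for c in labs.unique():
        class_feats = feats[labs == c]
        center = F.normalize(class_feats.mean(0, keepdim=True))
        idx = (class_feats @ center.t()).squeeze(1).argmax()
        loss_fn.proxies.data[c] = class_feats[idx]


def train_with_projections(model, loss_fn, loader, device,
                           outer_rounds=5, epochs_per_round=10):
    """Outer loop: alternate (i) training the proxy-based loss and
    (ii) re-initializing the proxies, approximating iterative projections."""
    for _ in range(outer_rounds):
        opt = torch.optim.Adam(
            list(model.parameters()) + list(loss_fn.parameters()), lr=1e-4)
        model.train()
        for _ in range(epochs_per_round):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x.to(device)), y.to(device))
                loss.backward()
                opt.step()
        reinitialize_proxies(loss_fn, model, loader, device)
```

In this reading, each outer round plays the role of one approximate projection step: the inner training drives the embeddings toward satisfying the chance constraints induced by the current proxies, and the proxy re-initialization moves the anchor points before the next round.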
