Paper Title
Improving Generalization of Metric Learning via Listwise Self-distillation
Paper Authors
Paper Abstract
Most deep metric learning (DML) methods employ a strategy that forces all positive samples to be close in the embedding space while keeping them away from negative ones. However, such a strategy ignores the internal relationships of positive (negative) samples and often leads to overfitting, especially in the presence of hard samples and mislabeled samples. In this work, we propose a simple yet effective regularization, namely Listwise Self-Distillation (LSD), which progressively distills a model's own knowledge to adaptively assign a more appropriate distance target to each sample pair in a batch. LSD encourages smoother embeddings and information mining within positive (negative) samples as a way to mitigate overfitting and thus improve generalization. Our LSD can be directly integrated into general DML frameworks. Extensive experiments show that LSD consistently boosts the performance of various metric learning methods on multiple datasets.
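Since the abstract only sketches the mechanism, below is a minimal PyTorch sketch of one plausible reading of LSD: an EMA copy of the model acts as the teacher, its row-wise softmax over batch similarities supplies the per-pair distance targets, and a KL term regularizes the student toward those targets. The EMA teacher, the temperature values, and the helper names (`listwise_distribution`, `lsd_loss`, `ema_update`) are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: the paper's exact LSD formulation may differ.
import torch
import torch.nn.functional as F

def listwise_distribution(emb: torch.Tensor, temperature: float) -> torch.Tensor:
    """Row-wise softmax over pairwise cosine similarities, self-pairs masked out."""
    z = F.normalize(emb, dim=1)
    sim = z @ z.t() / temperature          # (B, B) similarity logits
    sim.fill_diagonal_(-1e9)               # exclude each sample's pair with itself
    return F.softmax(sim, dim=1)

def lsd_loss(student_emb: torch.Tensor,
             teacher_emb: torch.Tensor,
             t_student: float = 0.1,
             t_teacher: float = 0.05) -> torch.Tensor:
    """KL divergence pulling the student's listwise distribution toward the teacher's.

    The teacher's distribution plays the role of the adaptive distance target
    assigned to each sample pair in the batch.
    """
    with torch.no_grad():                  # teacher provides fixed targets
        p_teacher = listwise_distribution(teacher_emb, t_teacher)
    z = F.normalize(student_emb, dim=1)
    sim = z @ z.t() / t_student
    sim.fill_diagonal_(-1e9)
    log_p_student = F.log_softmax(sim, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction='batchmean')

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               momentum: float = 0.999) -> None:
    """Teacher slowly tracks the student, so the model progressively
    distills its own knowledge (assumed EMA scheme)."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
```

Consistent with the abstract's claim that LSD "can be directly integrated into general DML frameworks," such a regularizer would simply be added to any base metric learning loss, e.g. `loss = base_dml_loss + lambda_lsd * lsd_loss(student_emb, teacher_emb)`, with `ema_update(teacher, student)` called after each optimizer step; `lambda_lsd` is a hypothetical weighting hyperparameter.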