Paper Title

Towards Backward-Compatible Representation Learning

Authors

Yantao Shen, Yuanjun Xiong, Wei Xia, Stefano Soatto

Abstract


We propose a way to learn visual features that are compatible with previously computed ones even when they have different dimensions and are learned via different neural network architectures and loss functions. Compatible means that, if such features are used to compare images, then "new" features can be compared directly to "old" features, so they can be used interchangeably. This enables visual search systems to bypass computing new features for all previously seen images when updating the embedding models, a process known as backfilling. Backward compatibility is critical to quickly deploy new embedding models that leverage ever-growing large-scale training datasets and improvements in deep learning architectures and training methods. We propose a framework to train embedding models, called backward-compatible training (BCT), as a first step towards backward compatible representation learning. In experiments on learning embeddings for face recognition, models trained with BCT successfully achieve backward compatibility without sacrificing accuracy, thus enabling backfill-free model updates of visual embeddings.
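The central property claimed above — that "new" features can be compared directly to "old" features, so a gallery indexed by the old model need not be backfilled — can be illustrated with a minimal sketch. This is not the authors' implementation: the arrays below are random stand-ins for the outputs of a hypothetical BCT-trained new model and a frozen old model, assumed here to share a 128-dimensional embedding space, scored with cosine similarity as is typical for face-recognition embeddings.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between two batches of embeddings.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

rng = np.random.default_rng(0)
# Gallery embeddings computed once with the OLD model (never recomputed).
old_gallery = rng.normal(size=(5, 128))
# Query embeddings from the NEW, backward-compatible model.
new_queries = rng.normal(size=(3, 128))

# Backfill-free retrieval: new queries scored directly against old gallery.
sims = cosine_sim(new_queries, old_gallery)
```

Without backward compatibility, updating the model would require re-embedding all five gallery images with the new network before any query could be answered; with a BCT-trained model, the cross-model score matrix above is meaningful as-is.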
