Paper Title
Unified Representation Learning for Cross Model Compatibility
Paper Authors
Abstract
We propose a unified representation learning framework to address the Cross Model Compatibility (CMC) problem in the context of visual search applications. Cross compatibility between different embedding models enables visual search systems to correctly recognize and retrieve identities without re-encoding user images, which are usually unavailable due to privacy concerns. While existing approaches address CMC in face identification, they fail in more challenging settings where the distributions of embedding models shift drastically. The proposed solution improves CMC performance by introducing a lightweight Residual Bottleneck Transformation (RBT) module and a new training scheme to optimize the embedding spaces. Extensive experiments demonstrate that our proposed solution outperforms previous approaches by a large margin in various challenging visual search scenarios of face recognition and person re-identification.
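To make the idea concrete, here is a minimal sketch of what a residual bottleneck transformation might look like: a small residual MLP that maps embeddings from one model's space toward another's through a low-dimensional bottleneck. The dimensions, weight initialization, and choice of ReLU are illustrative assumptions, not the paper's exact specification.

```python
# Hedged sketch of a Residual Bottleneck Transformation (RBT) module:
# down-project the source embedding to a bottleneck, apply a nonlinearity,
# up-project back, and add the result to the input (residual connection).
# All names and dimensions here are illustrative assumptions.
import numpy as np

def rbt(x, w_down, w_up):
    """Transform embedding x (shape (d,)) via a residual bottleneck."""
    h = np.maximum(w_down @ x, 0.0)  # down-project to bottleneck + ReLU
    return x + w_up @ h              # up-project and add residual

rng = np.random.default_rng(0)
d, b = 8, 4                          # embedding dim, bottleneck dim (assumed)
w_down = rng.standard_normal((b, d)) * 0.1
w_up = rng.standard_normal((d, b)) * 0.1

x = rng.standard_normal(d)
y = rbt(x, w_down, w_up)
assert y.shape == x.shape            # output stays in the embedding space
```

The residual form keeps the module lightweight: with small weights it starts close to the identity map, so training only has to learn the correction between the two embedding spaces.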