Title
xCos: An Explainable Cosine Metric for Face Verification Task
Authors
Abstract
We study XAI (explainable AI) on the face recognition task, particularly face verification. Face verification is a crucial task nowadays and has been deployed in many applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With the increasing amount of data, deep convolutional neural networks can achieve very high accuracy on the face verification task. Beyond their exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate. In this paper, we propose a novel similarity metric, called explainable cosine ($xCos$), that comes with a learnable module that can be plugged into most verification models to provide meaningful explanations. With the help of $xCos$, we can see which parts of the two input faces are similar, where the model pays attention, and how the local similarities are weighted to form the output $xCos$ score. We demonstrate the effectiveness of our proposed method on LFW and various competitive benchmarks, which not only provides novel and desired model interpretability for face verification but also preserves accuracy when plugged into existing face recognition models.
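The idea of combining patch-wise similarities with an attention map can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the feature-grid shapes, the function name `xcos_score`, and the fixed attention array are all assumptions, whereas in the actual method the attention would be produced by a learnable module.

```python
import numpy as np

def xcos_score(feat1, feat2, attention):
    """Hypothetical sketch of an xCos-style score: cosine similarity is
    computed per spatial location of two feature grids, then aggregated
    with an attention map (here a given array standing in for the
    learnable attention module)."""
    # feat1, feat2: (H, W, C) feature grids; attention: (H, W), sums to 1.
    num = np.sum(feat1 * feat2, axis=-1)
    denom = (np.linalg.norm(feat1, axis=-1)
             * np.linalg.norm(feat2, axis=-1) + 1e-8)
    local_cos = num / denom                      # (H, W) patch-wise cosine map
    return float(np.sum(attention * local_cos))  # attention-weighted sum

# Usage: identical feature grids under uniform attention score close to 1,
# and local_cos shows which spatial regions agree or disagree.
H, W, C = 7, 7, 32
rng = np.random.default_rng(0)
f = rng.normal(size=(H, W, C))
uniform_attention = np.full((H, W), 1.0 / (H * W))
print(xcos_score(f, f, uniform_attention))
```

Because the score decomposes into a per-location cosine map and an attention map, both can be visualized directly, which is the source of the interpretability described above.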