Paper Title
Asymmetric Rejection Loss for Fairer Face Recognition
Paper Authors
Paper Abstract
Face recognition performance has seen a tremendous gain in recent years, mostly due to the availability of large-scale face image datasets that can be exploited by deep neural networks to learn powerful face representations. However, recent research has shown differences in face recognition performance across ethnic groups, mostly due to racial imbalance in the training datasets, where Caucasian identities largely dominate other ethnicities. This is symptomatic of the under-representation of non-Caucasian ethnic groups among the celebrities from whom face datasets are usually gathered, rendering the acquisition of labeled data for the under-represented groups challenging. In this paper, we propose an Asymmetric Rejection Loss that aims to make full use of unlabeled images of those under-represented groups to reduce the racial bias of face recognition models. We view each unlabeled image as a unique class; however, since we cannot guarantee that two unlabeled samples come from distinct classes, we exploit labeled and unlabeled data in an asymmetric manner in our loss formulation. Extensive experiments show our method's strength in mitigating racial bias, outperforming state-of-the-art semi-supervised methods. Performance on the under-represented ethnic groups increases while that on the well-represented group is nearly unchanged.
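To make the asymmetric treatment concrete, below is a minimal PyTorch sketch of one way such a loss could be structured, assuming a cosine-similarity softmax over learned class prototypes. The function name asymmetric_rejection_loss, the scale parameter s, and the exact construction of the two branches are illustrative assumptions based on the abstract, not the paper's published formulation.

```python
# Illustrative sketch: labeled samples see every unlabeled image as an extra
# "rejection" class, while unlabeled samples are only pushed away from the
# labeled identities (not from each other, since two unlabeled images might
# share the same unknown identity). Assumed names/shapes, not the paper's code.
import torch
import torch.nn.functional as F


def asymmetric_rejection_loss(labeled_emb, labels, unlabeled_emb, prototypes, s=30.0):
    """labeled_emb:   (B, d) embeddings of labeled faces
    labels:        (B,)   identity indices into `prototypes`
    unlabeled_emb: (U, d) embeddings of unlabeled faces, each treated as its own class
    prototypes:    (C, d) learned class centers for the labeled identities
    """
    labeled_emb = F.normalize(labeled_emb, dim=1)
    unlabeled_emb = F.normalize(unlabeled_emb, dim=1)
    prototypes = F.normalize(prototypes, dim=1)

    # Labeled branch: logits over the C labeled identities plus the U
    # unlabeled samples, so each unlabeled image acts as an extra negative.
    logits_l = s * torch.cat(
        [labeled_emb @ prototypes.t(), labeled_emb @ unlabeled_emb.t()], dim=1
    )
    loss_labeled = F.cross_entropy(logits_l, labels)

    # Unlabeled branch: an unlabeled sample must be rejected by every labeled
    # identity, but other unlabeled samples are NOT used as negatives.
    sim_u = unlabeled_emb @ prototypes.t()                     # (U, C)
    self_sim = torch.ones(unlabeled_emb.size(0), 1,            # cosine of a unit
                          device=unlabeled_emb.device)         # vector with itself
    logits_u = s * torch.cat([self_sim, sim_u], dim=1)         # own class at index 0
    targets_u = torch.zeros(unlabeled_emb.size(0), dtype=torch.long,
                            device=unlabeled_emb.device)
    loss_unlabeled = F.cross_entropy(logits_u, targets_u)

    return loss_labeled + loss_unlabeled
```

The asymmetry lives entirely in which negatives each branch sees: the labeled branch compares against both prototypes and unlabeled embeddings, whereas the unlabeled branch compares only against prototypes, which avoids penalizing two unlabeled images that may depict the same person.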