Paper Title

The More Secure, The Less Equally Usable: Gender and Ethnicity (Un)fairness of Deep Face Recognition along Security Thresholds

Paper Authors

Andrea Atzori, Gianni Fenu, Mirko Marras

Paper Abstract

Face biometrics are playing a key role in making modern smart city applications more secure and usable. Commonly, the recognition threshold of a face recognition system is adjusted based on the degree of security required by the considered use case. For instance, the likelihood of a match can be decreased by setting a high threshold in the case of a payment transaction verification. Prior work in face recognition has unfortunately shown that error rates are usually higher for certain demographic groups. These disparities have hence brought into question the fairness of systems empowered with face biometrics. In this paper, we investigate the extent to which disparities among demographic groups change under different security levels. Our analysis includes ten face recognition models, three security thresholds, and six demographic groups based on gender and ethnicity. Experiments show that the higher the security of the system is, the higher the disparities in usability among demographic groups are. Compelling unfairness issues hence exist and urge countermeasures in real-world high-stakes environments requiring severe security levels.
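The threshold mechanism the abstract describes can be sketched as follows: a verification system compares the similarity of two face embeddings against a threshold chosen per security level, so raising the threshold rejects more pairs. This is a minimal illustrative sketch; the threshold values and the `verify` helper are hypothetical and not taken from the paper, which evaluates ten real face recognition models.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical thresholds for three security levels (illustrative values,
# not the operating points used in the paper).
THRESHOLDS = {"low": 0.3, "medium": 0.5, "high": 0.7}

def verify(emb_a, emb_b, security="medium"):
    """Accept the pair as a genuine match only if similarity clears
    the threshold for the requested security level."""
    return cosine_similarity(emb_a, emb_b) >= THRESHOLDS[security]

# Same pair of embeddings: accepted at low security, rejected at high.
a = [1.0, 0.0, 0.0]
b = [0.6, 0.8, 0.0]
print(verify(a, b, "low"), verify(a, b, "high"))  # → True False
```

The paper's central observation maps onto this sketch: as the threshold rises, genuine pairs from some demographic groups fall below it more often than others, so usability (true-match rate) becomes more unequal precisely in the high-security regime.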
