Title
Suppressing Uncertainties for Large-Scale Facial Expression Recognition
Authors
Abstract
Annotating a qualitative large-scale facial expression dataset is extremely difficult due to the uncertainties caused by ambiguous facial expressions, low-quality facial images, and the subjectiveness of annotators. These uncertainties pose a key challenge for large-scale Facial Expression Recognition (FER) in the deep learning era. To address this problem, this paper proposes a simple yet efficient Self-Cure Network (SCN) which suppresses the uncertainties efficiently and prevents deep networks from over-fitting uncertain facial images. Specifically, SCN suppresses the uncertainties from two different aspects: 1) a self-attention mechanism over each mini-batch that weights every training sample, with a ranking regularization, and 2) a careful relabeling mechanism that modifies the labels of the samples in the lowest-ranked group. Experiments on synthetic FER datasets and our collected WebEmotion dataset validate the effectiveness of our method. Results on public benchmarks demonstrate that our SCN outperforms current state-of-the-art methods with \textbf{88.14}\% on RAF-DB, \textbf{60.23}\% on AffectNet, and \textbf{89.35}\% on FERPlus. The code will be available at \href{https://github.com/kaiwang960112/Self-Cure-Network}{https://github.com/kaiwang960112/Self-Cure-Network}.
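The two mechanisms named in the abstract can be illustrated with a rough NumPy sketch. This is not the paper's implementation: the group-split ratios, the ranking margin, and the relabeling threshold below are illustrative assumptions, and the real SCN learns its attention weights inside a deep network rather than receiving them as inputs.

```python
import numpy as np

def rank_regularization_loss(weights, high_ratio=0.7, margin=0.15):
    """Sketch of the ranking regularization: sort the per-sample attention
    weights of a mini-batch, split them into a high- and a low-importance
    group, and penalize the batch when the mean weight of the high group
    does not exceed the mean of the low group by at least `margin`.
    `high_ratio` and `margin` are illustrative hyperparameters."""
    w = np.sort(np.asarray(weights, dtype=float))[::-1]  # descending
    k = int(len(w) * high_ratio)
    mean_high, mean_low = w[:k].mean(), w[k:].mean()
    return max(0.0, margin - (mean_high - mean_low))

def relabel_lowest_group(probs, labels, weights, low_ratio=0.3, delta=0.2):
    """Sketch of the relabeling mechanism: for samples in the lowest-ranked
    group (by attention weight), replace the given label with the model's
    predicted class when the predicted probability beats the probability of
    the given label by more than `delta` (an assumed threshold)."""
    weights = np.asarray(weights, dtype=float)
    n_low = int(len(weights) * low_ratio)
    low_idx = np.argsort(weights)[:n_low]  # indices of least-important samples
    new_labels = np.asarray(labels).copy()
    for i in low_idx:
        pred = int(probs[i].argmax())
        if probs[i][pred] - probs[i][labels[i]] > delta:
            new_labels[i] = pred  # trust the prediction over the noisy label
    return new_labels
```

In this reading, the ranking loss pushes the network to assign clearly separated importance weights to clean versus uncertain samples, and only the bottom group, where the label is least trusted, becomes eligible for relabeling.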