Paper Title
AU-Guided Unsupervised Domain Adaptive Facial Expression Recognition
Paper Authors
Paper Abstract
Domain diversities, including inconsistent annotations and varied image collection conditions, inevitably exist among different facial expression recognition (FER) datasets, posing an evident challenge for adapting a FER model trained on one dataset to another. Recent works mainly focus on domain-invariant deep feature learning with adversarial learning mechanisms, ignoring the sibling facial action unit (AU) detection task, which has made great progress. Considering that AUs objectively determine facial expressions, this paper proposes an AU-guided unsupervised Domain Adaptive FER (AdaFER) framework to relieve the annotation bias between different FER datasets. In AdaFER, we first leverage an advanced model for AU detection on both the source and target domains. Then, we compare the AU results to perform AU-guided annotating, i.e., target faces that share the same AUs as source faces inherit the labels from the source domain. Meanwhile, to achieve domain-invariant compact features, we utilize AU-guided triplet training, which randomly collects anchor-positive-negative triplets on both domains using AUs. We conduct extensive experiments on several popular benchmarks and show that AdaFER achieves state-of-the-art results on all of them.
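The two AU-guided steps the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each face is represented by a binary AU activation vector produced by a pretrained AU detector, and all function and variable names are hypothetical.

```python
import random

def au_guided_annotate(source_aus, source_labels, target_aus):
    """AU-guided annotating (sketch): a target face whose AU vector exactly
    matches that of some source face inherits the source label; faces with
    no match are left unlabeled (-1)."""
    # Map each distinct source AU pattern to the label of a face showing it.
    au_to_label = {tuple(au): lbl for au, lbl in zip(source_aus, source_labels)}
    return [au_to_label.get(tuple(au), -1) for au in target_aus]

def sample_triplet(aus, rng):
    """AU-guided triplet collection (sketch): over the pooled source+target
    faces, randomly pick an anchor, a positive with identical AUs, and a
    negative with different AUs."""
    n = len(aus)
    anchor = rng.randrange(n)
    positives = [i for i in range(n) if i != anchor and aus[i] == aus[anchor]]
    negatives = [i for i in range(n) if aus[i] != aus[anchor]]
    if not positives or not negatives:
        return None  # this anchor admits no valid triplet
    return anchor, rng.choice(positives), rng.choice(negatives)
```

The exact-match rule here is a simplification; a real system would likely tolerate small AU differences (e.g. via a distance threshold) and feed the sampled triplets into a standard triplet loss to pull same-AU faces together across domains.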