Paper Title

Exposing Deepfake Face Forgeries with Guided Residuals

Paper Authors

Zhiqing Guo, Gaobo Yang, Jiyou Chen, Xingming Sun

Paper Abstract

Residual-domain features are very useful for Deepfake detection because they suppress irrelevant content features and preserve key manipulation traces. However, inappropriate residual prediction has side effects on detection accuracy. In addition, residual-domain features are easily affected by image operations such as compression. Most existing works exploit either spatial-domain features or residual-domain features, neglecting that the two types of features are mutually correlated. In this paper, we propose a guided residuals network, namely GRnet, which fuses spatial-domain and residual-domain features in a mutually reinforcing way to expose face images generated by Deepfake. Unlike existing prediction-based residual extraction methods, we propose a manipulation trace extractor (MTE) that directly removes content features and preserves manipulation traces. MTE is a fine-grained method that avoids the potential bias caused by inappropriate prediction. Moreover, an attention fusion mechanism (AFM) is designed to selectively emphasize feature channel maps and adaptively allocate the weights for the two streams. Experimental results show that the proposed GRnet outperforms state-of-the-art works on four public fake face datasets: HFF, FaceForensics++, DFDC, and Celeb-DF. In particular, GRnet achieves an average accuracy of 97.72% on the HFF dataset, at least 5.25% higher than existing works.
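The abstract only summarizes the architecture; the internals of MTE and AFM are not given here. Below is a minimal PyTorch sketch of the two ideas as described: a smoothing-then-subtract high-pass residual as a stand-in for MTE (the paper's actual MTE is a guided-residual design whose filter the abstract does not specify), and an SE-style channel attention with learned per-stream weights as one plausible reading of AFM. All layer sizes, the box-blur filter, and the names `highpass_residual` and `AttentionFusion` are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def highpass_residual(x: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """Stand-in for the manipulation trace extractor (MTE): subtract a
    smoothed copy of the image so low-frequency content is suppressed and
    high-frequency manipulation traces remain. The paper's MTE is a
    guided-residual design; this box-blur high-pass only illustrates the idea."""
    pad = kernel_size // 2
    smoothed = F.avg_pool2d(x, kernel_size, stride=1, padding=pad)
    return x - smoothed


class AttentionFusion(nn.Module):
    """One plausible reading of the attention fusion mechanism (AFM):
    SE-style channel attention re-weights the feature channel maps of each
    stream, and learned scalar logits adaptively allocate fusion weights
    between the spatial and residual streams."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.se_spatial = self._make_se(channels, reduction)
        self.se_residual = self._make_se(channels, reduction)
        # Logits turned into a convex combination over the two streams.
        self.stream_logits = nn.Parameter(torch.zeros(2))

    @staticmethod
    def _make_se(channels: int, reduction: int) -> nn.Sequential:
        return nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # excitation weights in (0, 1)
        )

    def forward(self, feat_spatial: torch.Tensor,
                feat_residual: torch.Tensor) -> torch.Tensor:
        # Selectively emphasize informative channel maps within each stream.
        s = feat_spatial * self.se_spatial(feat_spatial)
        r = feat_residual * self.se_residual(feat_residual)
        # Adaptively allocate the weights for the two streams.
        w = F.softmax(self.stream_logits, dim=0)
        return w[0] * s + w[1] * r


# Illustrative shapes only: two backbones (not shown) would map the RGB input
# and its residual to feature maps of matching size, e.g. B x 256 x 14 x 14.
x = torch.randn(2, 3, 224, 224)            # a batch of face crops
residual = highpass_residual(x)            # residual-domain input stream
feat_s = torch.randn(2, 256, 14, 14)       # stand-in spatial-stream features
feat_r = torch.randn(2, 256, 14, 14)       # stand-in residual-stream features
fused = AttentionFusion(256)(feat_s, feat_r)   # B x 256 x 14 x 14
```

The softmax over two learned logits keeps the stream weights positive and summing to one, which matches the abstract's description of adaptively allocating weights between the two mutually reinforcing streams; the paper may well compute these weights from the features themselves rather than as free parameters.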
