Paper Title

Spatial-Temporal Frequency Forgery Clue for Video Forgery Detection in VIS and NIR Scenario

Authors

Yukai Wang, Chunlei Peng, Decheng Liu, Nannan Wang, Xinbo Gao

Abstract

In recent years, with the rapid development of face editing and generation, more and more fake videos are circulating on social media, which has raised serious public concern. Existing frequency-domain face forgery detection methods find that GAN-forged images show obvious grid-like visual artifacts in the frequency spectrum compared to real images. However, for synthesized videos, these methods are confined to single frames and pay little attention to the most discriminative parts and the temporal frequency clues among different frames. To take full advantage of the rich information in video sequences, this paper performs video forgery detection on both the spatial and temporal frequency domains and proposes a Discrete Cosine Transform-based Forgery Clue Augmentation Network (FCAN-DCT) to achieve a more comprehensive spatial-temporal feature representation. FCAN-DCT consists of a backbone network and two branches: a Compact Feature Extraction (CFE) module and a Frequency Temporal Attention (FTA) module. We conduct thorough experimental evaluations on two visible light (VIS) datasets, WildDeepfake and Celeb-DF (v2), as well as our self-built video forgery dataset DeepfakeNIR, the first video forgery dataset in the near-infrared modality. The experimental results demonstrate the effectiveness of our method in detecting forged videos in both VIS and NIR scenarios.
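To make the idea of spatial-temporal frequency clues concrete, below is a minimal illustrative sketch, not the authors' FCAN-DCT implementation: it computes a per-frame 2D DCT log-spectrum (where grid-like GAN artifacts tend to appear) and a simple temporal-frequency cue via a 1D DCT along the time axis. The function name, clip shape, and log-magnitude normalization are assumptions made for illustration only.

```python
# Illustrative sketch of DCT-based spatial-temporal frequency features
# (NOT the paper's FCAN-DCT; shapes and normalization are assumed).
import numpy as np
from scipy.fft import dctn, dct


def spatial_temporal_dct_features(clip: np.ndarray):
    """clip: (T, H, W) grayscale video clip with values in [0, 1]."""
    # Spatial frequency clue: log-magnitude 2D DCT spectrum of each frame,
    # where GAN-generated content tends to leave grid-like artifacts.
    spatial_spec = np.log1p(np.abs(dctn(clip, axes=(1, 2), norm="ortho")))
    # Temporal frequency clue: 1D DCT along the time axis per pixel,
    # capturing inconsistencies across consecutive forged frames.
    temporal_spec = np.log1p(np.abs(dct(clip, axis=0, norm="ortho")))
    return spatial_spec, temporal_spec


if __name__ == "__main__":
    # Hypothetical example: a random 8-frame 64x64 clip stands in for real data.
    clip = np.random.rand(8, 64, 64).astype(np.float32)
    s, t = spatial_temporal_dct_features(clip)
    print(s.shape, t.shape)  # (8, 64, 64) (8, 64, 64)
```

In a detector along the lines described in the abstract, such frequency maps would be fed to a backbone and attention modules rather than used directly.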
