Paper Title


How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals

Authors

Umur Aybars Ciftci, Ilke Demir, Lijun Yin

Abstract


Fake portrait video generation techniques have been posing a new threat to society, with photorealistic deep fakes used for political propaganda, celebrity imitation, forged evidence, and other identity-related manipulations. Following these generation techniques, some detection approaches have also proven useful thanks to their high classification accuracy. Nevertheless, almost no effort has been spent on tracking down the source of deep fakes. We propose an approach not only to separate deep fakes from real videos, but also to discover the specific generative model behind a deep fake. Some purely deep-learning-based approaches try to classify deep fakes using CNNs, where they in fact learn the residuals of the generator. We believe that these residuals contain more information, and that we can reveal these manipulation artifacts by disentangling them with biological signals. Our key observation is that the spatiotemporal patterns in biological signals can be conceived as a representative projection of residuals. To justify this observation, we extract PPG cells from real and fake videos and feed them to a state-of-the-art classification network to detect the generative model behind each video. Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy.
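The pipeline the abstract describes (extracting region-wise PPG signals from face video and assembling them into "PPG cells" that a classification network consumes) can be illustrated with a minimal sketch. The grid partition, window length, normalization, and the function names below are hypothetical simplifications for illustration, not the paper's actual implementation; a crude PPG proxy is the mean green-channel intensity per face region, since blood-volume changes modulate skin reflectance most strongly in green.

```python
import numpy as np

def ppg_signals(frames, grid=4):
    """Mean green-channel intensity per frame for each cell of a grid x grid
    partition of a face crop -- a crude stand-in for region-wise PPG.
    frames: (T, H, W, 3) array of face-cropped video frames."""
    T, H, W, _ = frames.shape
    hs, ws = H // grid, W // grid
    sigs = []
    for i in range(grid):
        for j in range(grid):
            patch = frames[:, i * hs:(i + 1) * hs, j * ws:(j + 1) * ws, 1]
            sigs.append(patch.mean(axis=(1, 2)))   # one signal per region
    return np.stack(sigs)                          # shape (grid*grid, T)

def ppg_cell(sigs, window=64):
    """Stack one fixed-length window of every region's normalized signal into
    a 2D 'PPG cell' image that a CNN classifier could consume."""
    win = sigs[:, :window]
    win = (win - win.mean(axis=1, keepdims=True)) / (
        win.std(axis=1, keepdims=True) + 1e-8)
    return win                                     # shape (n_regions, window)

# Synthetic demo: 64 frames of a 32x32 "face" with a faint periodic
# pulse added to the green channel (hypothetical data, not a real video).
rng = np.random.default_rng(0)
t = np.arange(64)
frames = rng.integers(90, 110, size=(64, 32, 32, 3)).astype(float)
frames[..., 1] += 3.0 * np.sin(2 * np.pi * 1.2 * t / 30.0)[:, None, None]

cell = ppg_cell(ppg_signals(frames))
print(cell.shape)  # (16, 64)
```

In the paper's setting the resulting cells are fed to an off-the-shelf image classifier; here the cell is simply a small 2D array, one row per face region, whose spatiotemporal pattern is what distinguishes real pulse signals from generator residuals.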
