Paper Title
Rethinking Gradient Operator for Exposing AI-enabled Face Forgeries
Paper Authors
Paper Abstract
For image forensics, convolutional neural networks (CNNs) tend to learn content features rather than subtle manipulation traces, which limits forensic performance. Existing methods predominantly address this challenge by following a general pipeline, that is, subtracting the original pixel values from the predicted pixel values so that CNNs pay attention to the manipulation traces. However, due to the complicated learning mechanism, these methods may incur some unnecessary performance loss. In this work, we rethink the advantages of the gradient operator in exposing face forgeries, and design two plug-and-play modules that combine the gradient operator with CNNs: the tensor pre-processing (TP) module and the manipulation trace attention (MTA) module. Specifically, the TP module refines the feature tensor of each channel in the network with the gradient operator to highlight manipulation traces and improve the feature representation. Moreover, the MTA module considers two dimensions, namely channels and manipulation traces, to force the network to learn the distribution of manipulation traces. Both modules can be seamlessly integrated into CNNs for end-to-end training. Experiments show that the proposed network achieves better results than prior works on five public datasets. In particular, through simple tensor refinement alone, the TP module improves accuracy by at least 4.60% over existing pre-processing modules. The code is available at: https://github.com/EricGzq/GocNet-pytorch.
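The per-channel gradient refinement that the abstract ascribes to the TP module can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the choice of the Sobel operator, the gradient-magnitude fusion, and the function names are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical sketch of the TP idea: refine each channel of a feature
# tensor with a gradient operator (Sobel assumed here) so that
# high-frequency manipulation traces are emphasized. The exact operator
# and fusion used in the paper may differ.

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T


def conv2d_same(img, kernel):
    """Naive 'same' 2-D cross-correlation with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out


def tensor_preprocess(feat):
    """Per-channel gradient refinement of a (C, H, W) feature tensor."""
    refined = np.empty_like(feat)
    for c in range(feat.shape[0]):
        gx = conv2d_same(feat[c], SOBEL_X)
        gy = conv2d_same(feat[c], SOBEL_Y)
        refined[c] = np.sqrt(gx ** 2 + gy ** 2)  # gradient magnitude
    return refined
```

Applied to a feature map, this suppresses flat (content-dominated) regions and keeps responses only around edges and other high-frequency structures, which is where tampering artifacts tend to concentrate.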