Paper Title

Thinking the Fusion Strategy of Multi-reference Face Reenactment

Paper Authors

Takuya Yashima, Takuya Narihira, Tamaki Kojima

Paper Abstract

In recent advances in deep generative models, face reenactment, that is, manipulating and controlling a human face, including its head movement, has drawn much attention for its wide range of applications. Despite their strong expressiveness, these models inevitably fail to reconstruct or accurately generate the unseen side of the face given a single reference image. Most existing methods alleviate this problem by learning the appearance of human faces from a large amount of data and generating realistic textures at inference time. Rather than relying entirely on what generative models learn, we show that a simple extension, using multiple reference images, significantly improves generation quality. We demonstrate this by 1) conducting the reconstruction task on a publicly available dataset, 2) conducting facial motion transfer on our original dataset, which consists of head movement video sequences of multiple people, and 3) using a newly proposed evaluation metric to validate that our method achieves better quantitative results.
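
To make the core idea concrete, the sketch below illustrates one possible way to fuse several reference images in an encoder-decoder generator: each reference is encoded separately, the feature maps are averaged, and the fused features are combined with features of a driving frame before decoding. This is a minimal, hypothetical PyTorch sketch; the MultiRefFusionGenerator class, the mean-pooling fusion, and all layer choices are assumptions made for illustration only and are not the architecture or fusion strategy proposed in the paper.

```python
# Hypothetical sketch of multi-reference feature fusion (not the paper's model).
# Assumptions: a toy convolutional encoder/decoder and simple mean-pooling fusion.
import torch
import torch.nn as nn


class MultiRefFusionGenerator(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Toy encoder: RGB reference image -> feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Toy encoder for the driving frame that supplies target pose/motion.
        self.pose_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Toy decoder: fused reference features + driving features -> RGB image.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_dim * 2, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, references: torch.Tensor, driving: torch.Tensor) -> torch.Tensor:
        # references: (B, K, 3, H, W) holding K reference images; driving: (B, 3, H, W).
        b, k, c, h, w = references.shape
        ref_feats = self.encoder(references.reshape(b * k, c, h, w))  # (B*K, F, H, W)
        ref_feats = ref_feats.reshape(b, k, -1, h, w).mean(dim=1)     # fuse K refs by averaging
        drv_feats = self.pose_encoder(driving)                        # (B, F, H, W)
        fused = torch.cat([ref_feats, drv_feats], dim=1)              # (B, 2F, H, W)
        return self.decoder(fused)                                    # reenacted frame


if __name__ == "__main__":
    model = MultiRefFusionGenerator()
    refs = torch.randn(2, 4, 3, 64, 64)   # 4 reference images per sample
    drv = torch.randn(2, 3, 64, 64)       # driving frame with the target head pose
    out = model(refs, drv)
    print(out.shape)                      # torch.Size([2, 3, 64, 64])
```

Mean pooling is used here only as the simplest stand-in for a fusion operator: it is order-invariant and accepts a variable number of references. More expressive fusion choices, such as attention over the reference features, could follow the same interface.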
