Paper Title

Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images

Authors

Hang Zhou, Jihao Liu, Ziwei Liu, Yu Liu, Xiaogang Wang

Abstract

Though face rotation has achieved rapid progress in recent years, the lack of high-quality paired training data remains a great hurdle for existing methods. Current generative models heavily rely on datasets with multi-view images of the same person; thus, their results are restricted by the scale and domain of the data source. To overcome these challenges, we propose a novel unsupervised framework that can synthesize photo-realistic rotated faces using only single-view image collections in the wild. Our key insight is that rotating faces back and forth in 3D space and re-rendering them onto the 2D plane can serve as a strong self-supervision. We leverage recent advances in 3D face modeling and high-resolution GANs to constitute our building blocks. Since 3D rotate-and-render on faces can be applied to arbitrary angles without losing details, our approach is extremely suitable for in-the-wild scenarios (i.e. no paired data are available), where existing methods fall short. Extensive experiments demonstrate that our approach has superior synthesis quality as well as identity preservation over state-of-the-art methods, across a wide range of poses and domains. Furthermore, we validate that our rotate-and-render framework can naturally act as an effective data augmentation engine for boosting modern face recognition systems, even on strong baseline models.
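
The self-supervision described above can be illustrated with a short sketch. The code below is a minimal, simplified illustration of the rotate-and-render idea rather than the authors' implementation: fit_3dmm, rotate_mesh, render_to_image, and sample_texture are hypothetical stubs standing in for a real 3D face fitter and renderer. It shows how a single unpaired photo yields a (degraded rendering, original photo) training pair: the fitted mesh is rotated away from its original pose, its texture is re-sampled from that rendering so self-occluded regions are lost, and the mesh is then rotated back and rendered again.

import numpy as np


def fit_3dmm(image):
    # Stub: in a real pipeline, a 3D face model fitter would recover mesh
    # geometry and per-vertex texture from the photo.
    vertices = np.random.rand(1000, 3) * 2.0 - 1.0   # placeholder geometry in [-1, 1]
    colors = np.random.rand(1000, 3)                 # placeholder per-vertex texture
    return vertices, colors


def rotate_mesh(vertices, yaw_deg):
    # Rotate mesh vertices around the vertical (y) axis by yaw_deg degrees.
    t = np.deg2rad(yaw_deg)
    rot_y = np.array([[ np.cos(t), 0.0, np.sin(t)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(t), 0.0, np.cos(t)]])
    return vertices @ rot_y.T


def project(vertices, size):
    # Orthographic projection of vertices onto integer pixel coordinates.
    uv = ((vertices[:, :2] * 0.5 + 0.5) * (size - 1)).astype(int)
    return np.clip(uv, 0, size - 1)


def render_to_image(vertices, colors, size=256):
    # Stub renderer: splat per-vertex colors onto the 2D image plane.
    img = np.zeros((size, size, 3))
    uv = project(vertices, size)
    img[uv[:, 1], uv[:, 0]] = colors
    return img


def sample_texture(image, vertices, size=256):
    # Read per-vertex colors back from an image at each vertex's projection.
    uv = project(vertices, size)
    return image[uv[:, 1], uv[:, 0]]


# Rotate-and-render self-supervision built from one unpaired photo.
input_image = np.random.rand(256, 256, 3)         # stands in for a real single-view photo
verts, tex = fit_3dmm(input_image)

# 1) Rotate the fitted face away from its original pose and render it.
rotated = rotate_mesh(verts, yaw_deg=45.0)
away_render = render_to_image(rotated, tex)

# 2) Re-sample texture from that rendering: regions self-occluded in the
#    rotated view are lost, which is the degradation the generator must fix.
degraded_tex = sample_texture(away_render, rotated)

# 3) Rotate back to the original pose and render with the degraded texture.
back = rotate_mesh(rotated, -45.0)
degraded_render = render_to_image(back, degraded_tex)

# A generator is trained to map degraded_render back to input_image, so no
# multi-view paired data of the same person is ever required.
training_pair = (degraded_render, input_image)

Because the 3D rotation can target any angle, the same single photo can supply training pairs for arbitrary target poses, which is why no multi-view dataset is needed.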
