Paper Title

Expressive Telepresence via Modular Codec Avatars

Paper Authors

Hang Chu, Shugao Ma, Fernando De la Torre, Sanja Fidler, Yaser Sheikh

Paper Abstract

VR telepresence consists of interacting with another human in a virtual space represented by an avatar. Today, most avatars are cartoon-like, but soon the technology will allow video-realistic ones. This paper aims in this direction and presents Modular Codec Avatars (MCA), a method to generate hyper-realistic faces driven by the cameras in the VR headset. MCA extends traditional Codec Avatars (CA) by replacing the holistic model with a learned modular representation. It is important to note that traditional person-specific CAs are learned from only a few training samples, and typically lack robustness and have limited expressiveness when transferring facial expressions. MCAs solve these issues by learning a modulated adaptive blending of different facial components as well as an exemplar-based latent alignment. We demonstrate that MCA achieves improved expressiveness and robustness w.r.t. CA on a variety of real-world datasets and practical scenarios. Finally, we showcase new applications in VR telepresence enabled by the proposed model.
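
To make the abstract's "modulated adaptive blending of different facial components" more concrete, below is a minimal PyTorch sketch of that idea as we read it from the abstract: each facial component (e.g. eyes, mouth) has its own encoder, and a learned gating network predicts adaptive weights for blending the component latent codes before a shared decoder produces the face parameters. All module names, dimensions, and the choice of component split here are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ModularBlendSketch(nn.Module):
    """Hypothetical sketch of modular latent blending (not the authors' code).

    Each facial component has its own encoder; a gating network predicts
    adaptive weights that modulate how the component latent codes are
    combined before a shared decoder produces the avatar parameters.
    """

    def __init__(self, n_components=3, in_dim=256, latent_dim=64, out_dim=512):
        super().__init__()
        # One small encoder per facial component (e.g. left eye, right eye, mouth).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
            for _ in range(n_components)
        )
        # Gating network: predicts one blending weight per component
        # from the concatenated component latent codes.
        self.gate = nn.Sequential(
            nn.Linear(n_components * latent_dim, n_components),
            nn.Softmax(dim=-1),
        )
        # Shared decoder from the blended latent code to avatar parameters.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, out_dim)
        )

    def forward(self, component_feats):
        # component_feats: list of (batch, in_dim) tensors, one per component,
        # e.g. features extracted from the headset-camera crops.
        codes = [enc(f) for enc, f in zip(self.encoders, component_feats)]
        stacked = torch.stack(codes, dim=1)          # (batch, n_components, latent_dim)
        weights = self.gate(stacked.flatten(1))      # (batch, n_components)
        blended = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # adaptive blend
        return self.decoder(blended)


if __name__ == "__main__":
    model = ModularBlendSketch()
    feats = [torch.randn(2, 256) for _ in range(3)]  # stand-in camera features
    print(model(feats).shape)  # torch.Size([2, 512])
```

A real system would extract the per-component features from the headset-camera images and decode to a full codec-avatar representation; the sketch only illustrates where an adaptive blending of component latents could sit in such a pipeline.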
