Paper Title
Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing
Paper Authors
Paper Abstract
Despite recent progress in developing animatable full-body avatars, realistic modeling of clothing - one of the core aspects of human self-expression - remains an open challenge. State-of-the-art physical simulation methods can generate realistically behaving clothing geometry at interactive rates. Modeling photorealistic appearance, however, usually requires physically-based rendering, which is too expensive for interactive applications. On the other hand, data-driven deep appearance models are capable of efficiently producing realistic appearance, but struggle to synthesize the geometry of highly dynamic clothing and to handle challenging body-clothing configurations. To this end, we introduce pose-driven avatars with explicit modeling of clothing that exhibit both photorealistic appearance learned from real-world data and realistic clothing dynamics. The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry. Our core contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations. We conduct a thorough evaluation of our model and demonstrate diverse animation results on several subjects and different types of clothing. Unlike previous work on photorealistic full-body avatars, our approach can produce much richer dynamics and more realistic deformations even for many examples of loose clothing. We also demonstrate that our formulation naturally allows clothing to be used with avatars of different people while staying fully animatable, thus enabling, for the first time, photorealistic avatars with novel clothing.
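The central idea above - an appearance network that consumes explicit garment geometry (tracked at training time, physically simulated at animation time) together with a view direction, and emits view-dependent color - can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the layer sizes, the choice of per-vertex geometry features, and the function names (`make_mlp`, `appearance_net`) are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    # Random-weight MLP parameters; a real model would be trained on
    # multi-view captures of the subject wearing the garment.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def appearance_net(params, geom_feat, view_dir):
    # Per-vertex input: local geometry features (e.g. normals or
    # shadowing cues from the explicit mesh) concatenated with the
    # normalized view direction, so the output can be view-dependent.
    x = np.concatenate([geom_feat, view_dir], axis=-1)
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)        # ReLU on hidden layers
    return 1.0 / (1.0 + np.exp(-x))       # sigmoid -> RGB in [0, 1]

# Toy usage: 5 garment vertices, 6 geometry features each, one camera view.
params = make_mlp([6 + 3, 32, 3])
geom = rng.standard_normal((5, 6))        # stand-in for tracked or simulated geometry
view = np.tile([0.0, 0.0, 1.0], (5, 1))   # camera looking along +z
rgb = appearance_net(params, geom, view)
print(rgb.shape)                          # one RGB color per vertex: (5, 3)
```

Because the network only ever sees geometry features, the same trained appearance model can be driven by any source that produces the garment mesh - which is what lets the paper swap high-fidelity tracking for physical simulation at animation time.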