Paper title
Hair Color Digitization through Imaging and Deep Inverse Graphics
Paper authors
Paper abstract
Hair appearance is a complex phenomenon due to hair geometry and the way light bounces between different hair fibers. For this reason, reproducing a specific hair color in a rendering environment is a challenging task that requires manual work and expert knowledge in computer graphics to visually tune the result. While current hair capture methods focus on hair shape estimation, many applications could benefit from an automated method for capturing the appearance of a physical hair sample, from augmented/virtual reality to hair dye development. Building on recent advances in inverse graphics and material capture using deep neural networks, we introduce a novel method for hair color digitization. Our proposed pipeline captures the color appearance of a physical hair sample and renders synthetic images of hair with a similar appearance, simulating different hairstyles and/or lighting environments. Since rendering realistic hair images requires path tracing, the conventional inverse graphics approach based on differentiable rendering is intractable. Our method combines a controlled imaging device, a path-tracing renderer, and an inverse graphics model based on self-supervised machine learning, which does not require differentiable rendering to be trained. We illustrate the performance of our hair digitization method on both real and synthetic images and show that our approach can accurately capture and render hair color.
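The core idea the abstract describes — training an inverse-graphics model without backpropagating through the renderer — can be sketched in a toy form: sample parameters, render them with a black-box (non-differentiable) renderer, and fit a model that maps images back to parameters, so supervision comes from the sampled parameters rather than from renderer gradients. Everything below is an illustrative assumption: the `render` function, the basis matrix, and a least-squares regressor stand in for the paper's path tracer and deep inverse-graphics network.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(params):
    """Black-box stand-in for a path tracer (hypothetical): maps 3 color
    parameters to a tiny 4-value 'image'. The quantization step makes it
    non-differentiable, like a real path-tracing renderer in practice."""
    basis = np.array([[0.9, 0.1, 0.2],
                      [0.2, 0.8, 0.1],
                      [0.1, 0.3, 0.7],
                      [0.5, 0.5, 0.5]])
    img = basis @ params
    return np.round(img * 32) / 32  # quantization breaks gradients

# 1) Self-supervised data generation: sample parameters, render them.
params = rng.uniform(0.0, 1.0, size=(500, 3))
images = np.stack([render(p) for p in params])

# 2) Fit the inverse model image -> parameters (here: linear least squares;
#    the paper uses a deep network, but the supervision signal is the same).
W, *_ = np.linalg.lstsq(images, params, rcond=None)

# 3) "Capture": invert a new observed image back to rendering parameters,
#    which can then be fed to the renderer under new hairstyles/lighting.
true_p = np.array([0.3, 0.6, 0.2])
pred_p = render(true_p) @ W
print("recovered parameters:", pred_p)
```

The renderer is only ever called in the forward direction, which is why this training scheme sidesteps the need for differentiable rendering.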