Paper Title
Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images
Paper Authors
Paper Abstract
We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs. The learned hair model can be rendered in real-time from any viewpoint with high-fidelity view-dependent effects. Our model achieves intuitive shape and style control unlike volumetric counterparts. To enable these properties, we propose a novel hair representation based on a neural scalp texture that encodes the geometry and appearance of individual strands at each texel location. Furthermore, we introduce a novel neural rendering framework based on rasterization of the learned hair strands. Our neural rendering is strand-accurate and anti-aliased, making the rendering view-consistent and photorealistic. Combining appearance with a multi-view geometric prior, we enable, for the first time, the joint learning of appearance and explicit hair geometry from a multi-view setup. We demonstrate the efficacy of our approach in terms of fidelity and efficiency for various hairstyles.
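The core representation described above, a neural scalp texture whose texels each encode the geometry of an individual strand, can be illustrated with a minimal sketch. This is not the authors' implementation: the class name, texture resolution, code dimension, and the toy linear decoder (standing in for a learned MLP) are all illustrative assumptions.

```python
import numpy as np

class NeuralScalpTexture:
    """Illustrative sketch (not the paper's code): a scalp-space UV texture
    storing one latent code per texel, plus a decoder that maps each code
    to a polyline of 3D strand points rooted at that scalp location."""

    def __init__(self, res=32, code_dim=8, points_per_strand=16, seed=0):
        rng = np.random.default_rng(seed)
        # One latent code per texel of the scalp UV map.
        self.texture = rng.normal(size=(res, res, code_dim))
        # Toy decoder: a fixed random linear map; the paper would use a
        # learned network here.
        self.W = rng.normal(size=(code_dim, points_per_strand * 3))
        self.points_per_strand = points_per_strand

    def decode_strand(self, u, v):
        """Look up the texel at (u, v) in [0, 1)^2 and decode its latent
        code into an array of shape (points_per_strand, 3) of strand
        vertices relative to the root."""
        res = self.texture.shape[0]
        i, j = int(u * res), int(v * res)
        code = self.texture[i, j]
        return (code @ self.W).reshape(self.points_per_strand, 3)

tex = NeuralScalpTexture()
strand = tex.decode_strand(0.5, 0.5)
print(strand.shape)  # (16, 3)
```

Because each texel decodes independently to an explicit strand, editing the texture (e.g. copying codes between scalp regions) directly edits the hairstyle, which is the intuitive shape and style control the abstract contrasts with volumetric representations.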