Paper Title
xCloth: Extracting Template-free Textured 3D Clothes from a Monocular Image
Paper Authors
Paper Abstract
Existing approaches for 3D garment reconstruction either assume a predefined template for the garment geometry (restricting them to fixed clothing styles) or yield vertex-colored meshes (lacking high-frequency textural details). Our novel framework co-learns geometric and semantic information of the garment surface from the input monocular image for template-free textured 3D garment digitization. More specifically, we propose to extend the PeeledHuman representation to predict pixel-aligned, layered depth and semantic maps for extracting 3D garments. The layered representation is further exploited to UV-parametrize the arbitrary surface of the extracted garment without any human intervention, forming a UV atlas. The texture is then imparted onto the UV atlas in a hybrid fashion, first by projecting pixels from the input image to UV space for the visible region, and then by inpainting the occluded regions. Thus, we are able to digitize arbitrarily loose clothing styles while retaining high-frequency textural details from a monocular image. We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and demonstrate generalization to internet images.
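To illustrate the layered (peeled) depth representation mentioned in the abstract, below is a minimal sketch of back-projecting pixel-aligned peeled depth and semantic maps into a labeled 3D point cloud. This is not the paper's released code: the function name `backproject_peeled_maps`, the choice of 4 peeled layers, and the intrinsic matrix `K` are illustrative assumptions, and a standard pinhole camera model is assumed.

```python
import numpy as np

def backproject_peeled_maps(depth_maps, seg_maps, K):
    """Back-project pixel-aligned peeled depth maps into a labeled 3D point cloud.

    depth_maps: (L, H, W) per-layer depth values (0 where the layer has no surface hit).
    seg_maps:   (L, H, W) per-layer semantic labels (e.g. garment class ids).
    K:          (3, 3) camera intrinsic matrix (assumed known; pinhole model).
    """
    L, H, W = depth_maps.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # Pixel coordinate grid shared by all peeled layers (pixel-aligned maps).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    points, labels = [], []
    for l in range(L):
        d = depth_maps[l]
        mask = d > 0                      # valid surface hits on this peeled layer
        z = d[mask]
        x = (u[mask] - cx) * z / fx       # pinhole back-projection
        y = (v[mask] - cy) * z / fy
        points.append(np.stack([x, y, z], axis=-1))
        labels.append(seg_maps[l][mask])
    return np.concatenate(points, axis=0), np.concatenate(labels, axis=0)

# Toy usage with 4 hypothetical peeled layers on a 512x512 image.
if __name__ == "__main__":
    K = np.array([[500.0, 0.0, 256.0],
                  [0.0, 500.0, 256.0],
                  [0.0, 0.0, 1.0]])
    depth = np.random.rand(4, 512, 512) * (np.random.rand(4, 512, 512) > 0.5)
    seg = np.random.randint(0, 3, size=(4, 512, 512))
    pts, lbl = backproject_peeled_maps(depth, seg, K)
    print(pts.shape, lbl.shape)
```

The key property this sketch highlights is that occluded (self-overlapping) garment surfaces are captured by later depth layers at the same pixel locations, which is what lets the method recover geometry beyond the first visible surface from a single view.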