Title
BCNet: Learning Body and Cloth Shape from A Single Image
Authors
Abstract
In this paper, we consider the problem of automatically reconstructing garment and body shapes from a single near-front-view RGB image. To this end, we propose a layered garment representation on top of SMPL and, as a novel design, make the skinning weights of the garment independent of the body mesh, which significantly improves the expressive power of our garment model. Compared with existing methods, ours supports more garment categories and recovers more accurate geometry. To train our model, we construct two large-scale datasets with ground-truth body and garment geometries as well as paired color images. Compared with a single-mesh or non-parametric representation, our method achieves more flexible control with separate meshes, enabling applications such as reposing, garment transfer, and garment texture mapping. Code and some data are available at https://github.com/jby1993/BCNet.
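The key idea of decoupling the garment's skinning weights from the body mesh can be illustrated with standard linear blend skinning: the garment carries its own per-vertex weight matrix but is posed by the same joint transforms as the body. The sketch below is a minimal, hypothetical illustration of this mechanism (all names, shapes, and the toy two-joint rig are our own assumptions, not the paper's implementation):

```python
import numpy as np

def linear_blend_skinning(vertices, weights, joint_transforms):
    """Deform vertices by blending per-joint rigid transforms.

    vertices:         (V, 3) rest-pose vertex positions
    weights:          (V, J) per-vertex skinning weights, rows sum to 1
    joint_transforms: (J, 4, 4) homogeneous transforms of the J joints
    """
    V = vertices.shape[0]
    homo = np.concatenate([vertices, np.ones((V, 1))], axis=1)   # (V, 4)
    # Blend the joint transforms per vertex: (V, 4, 4)
    blended = np.einsum('vj,jab->vab', weights, joint_transforms)
    posed = np.einsum('vab,vb->va', blended, homo)               # (V, 4)
    return posed[:, :3]

# Toy rig: two joints, joint 1 translated up by 0.1 (hypothetical values).
joint_transforms = np.stack([np.eye(4), np.eye(4)])
joint_transforms[1][:3, 3] = [0.0, 0.1, 0.0]

# The garment mesh has its OWN weight matrix, defined independently of the
# body mesh topology, yet it is driven by the same joint transforms.
garment_verts = np.array([[0.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]])
garment_weights = np.array([[1.0, 0.0],
                            [0.2, 0.8]])

posed = linear_blend_skinning(garment_verts, garment_weights, joint_transforms)
# Vertex 1 blends 20% identity and 80% of the 0.1 translation -> y = 1.08.
```

Because the weights live on the garment mesh itself, a skirt or dress can deform plausibly even where no nearby body vertex exists to borrow weights from, which is what the abstract means by improved expressive power.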