Paper Title
TetraTSDF: 3D human reconstruction from a single image with a tetrahedral outer shell
Paper Authors
Paper Abstract
Recovering the 3D shape of a person from its 2D appearance is ill-posed due to ambiguities. Nevertheless, with the help of convolutional neural networks (CNNs) and prior knowledge of the 3D human body, it is possible to overcome such ambiguities and recover detailed 3D shapes of human bodies from single images. Current solutions, however, fail to reconstruct all the details of a person wearing loose clothes. This is because of either (a) huge memory requirements that cannot be met even on modern GPUs or (b) compact 3D representations that cannot encode all the details. In this paper, we propose the tetrahedral outer shell volumetric truncated signed distance function (TetraTSDF) model for the human body, and its corresponding part connection network (PCN) for 3D human body shape regression. Our proposed model is compact, dense, and accurate, yet well suited for CNN-based regression tasks. Our proposed PCN allows us to learn the distribution of the TSDF in the tetrahedral volume from a single image in an end-to-end manner. Results show that our proposed method allows us to reconstruct detailed shapes of humans wearing loose clothes from single RGB images.
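To make the idea of regressing a TSDF over a tetrahedral outer shell concrete, below is a minimal sketch (not the authors' PCN or code): a toy network maps a single-image feature vector to one truncated signed distance value per shell vertex and clamps the output to a truncation band. The names ShellTSDFRegressor, TSDF_TRUNCATION, feat_dim, and num_shell_vertices are hypothetical placeholders chosen for illustration.

```python
# Minimal sketch, assuming PyTorch; not the paper's part connection network.
import torch
import torch.nn as nn

TSDF_TRUNCATION = 0.05  # assumed truncation band (meters)

class ShellTSDFRegressor(nn.Module):
    """Toy stand-in for image-to-shell TSDF regression: maps a global image
    feature to one TSDF value per vertex of the tetrahedral outer shell."""
    def __init__(self, feat_dim: int, num_shell_vertices: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_shell_vertices),
        )

    def forward(self, image_feature: torch.Tensor) -> torch.Tensor:
        # Predict raw signed distances, then clamp to the truncation band so
        # the output is a valid truncated signed distance field on the shell.
        d = self.mlp(image_feature)
        return torch.clamp(d, -TSDF_TRUNCATION, TSDF_TRUNCATION)

if __name__ == "__main__":
    model = ShellTSDFRegressor(feat_dim=512, num_shell_vertices=20000)
    feat = torch.randn(1, 512)   # stand-in for a CNN image encoding
    tsdf = model(feat)           # (1, 20000) TSDF values, one per shell vertex
    print(tsdf.shape, tsdf.abs().max())
```

The point of the sketch is only the output representation: because each prediction is a per-vertex scalar on a body-shaped shell rather than a value in a dense regular voxel grid, the memory footprint stays small while the surface detail that the TSDF can encode remains high.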