Paper Title

SPSG: Self-Supervised Photometric Scene Generation from RGB-D Scans

Authors

Angela Dai, Yawar Siddiqui, Justus Thies, Julien Valentin, Matthias Nießner

Abstract

We present SPSG, a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations by learning to infer unobserved scene geometry and color in a self-supervised fashion. Our self-supervised approach learns to jointly inpaint geometry and color by correlating an incomplete RGB-D scan with a more complete version of that scan. Notably, rather than relying on 3D reconstruction losses to inform our 3D geometry and color reconstruction, we propose adversarial and perceptual losses operating on 2D renderings in order to achieve high-resolution, high-quality colored reconstructions of scenes. This exploits the high-resolution, self-consistent signal from individual raw RGB-D frames, in contrast to fused 3D reconstructions of the frames which exhibit inconsistencies from view-dependent effects, such as color balancing or pose inconsistencies. Thus, by informing our 3D scene generation directly through 2D signal, we produce high-quality colored reconstructions of 3D scenes, outperforming state of the art on both synthetic and real data.
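
The abstract's central idea is that the 3D completion network is supervised not by 3D reconstruction losses but by adversarial and perceptual losses computed on 2D renderings of the predicted scene, compared against raw RGB-D frames. The sketch below illustrates that loss structure only; it is not the authors' implementation. SceneGenerator, DifferentiableRenderer, PatchDiscriminator, and perceptual_loss are hypothetical stand-ins for the paper's actual completion network, differentiable TSDF renderer, discriminator, and VGG-based perceptual loss.

```python
# Minimal, hypothetical sketch of an SPSG-style training step: the 3D output
# is rendered to 2D and supervised there. All module names are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneGenerator(nn.Module):
    """Placeholder 3D completion net: maps a partial TSDF+color grid
    (4 channels: 1 TSDF + 3 RGB) to a completed grid of the same shape."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, channels, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class DifferentiableRenderer(nn.Module):
    """Stand-in for a differentiable TSDF raycaster: as a toy 'rendering',
    average color along the depth axis, weighted toward the surface."""
    def forward(self, grid):
        tsdf, color = grid[:, :1], grid[:, 1:]
        weights = torch.softmax(-tsdf.abs(), dim=2)  # peak near TSDF zero-crossing
        return (color * weights).sum(dim=2)          # (B, 3, H, W) image

class PatchDiscriminator(nn.Module):
    """Toy 2D patch discriminator for the adversarial loss on renderings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, img):
        return self.net(img)

def perceptual_loss(fake, real):
    # Placeholder for a VGG-feature loss; L1 on downsampled images as a stub.
    return F.l1_loss(F.avg_pool2d(fake, 4), F.avg_pool2d(real, 4))

def generator_step(gen, renderer, disc, partial_grid, target_frame):
    """Complete the 3D grid, render it, and supervise the rendering in 2D
    against a raw RGB frame with adversarial + perceptual losses."""
    completed = gen(partial_grid)
    rendering = renderer(completed)          # 2D view of the 3D prediction
    logits = disc(rendering)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + perceptual_loss(rendering, target_frame)

if __name__ == "__main__":
    gen, renderer, disc = SceneGenerator(), DifferentiableRenderer(), PatchDiscriminator()
    partial = torch.randn(1, 4, 16, 64, 64)  # (B, TSDF+RGB, D, H, W)
    frame = torch.rand(1, 3, 64, 64)         # a raw RGB frame as the 2D target
    loss = generator_step(gen, renderer, disc, partial, frame)
    loss.backward()                          # gradients flow into the 3D net
```

The point of this structure is that gradients from the 2D losses flow back through the renderer into the 3D grid, so supervision can come directly from individual, self-consistent RGB-D frames rather than from a fused 3D reconstruction with view-dependent inconsistencies.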
