Paper Title

Adversarial Texture Optimization from RGB-D Scans

Authors

Jingwei Huang, Justus Thies, Angela Dai, Abhijit Kundu, Chiyu Max Jiang, Leonidas Guibas, Matthias Nießner, Thomas Funkhouser

Abstract

Realistic color texture generation is an important step in RGB-D surface reconstruction, but remains challenging in practice due to inaccuracies in reconstructed geometry, misaligned camera poses, and view-dependent imaging artifacts. In this work, we present a novel approach for color texture generation using a conditional adversarial loss obtained from weakly-supervised views. Specifically, we propose an approach to produce photorealistic textures for approximate surfaces, even from misaligned images, by learning an objective function that is robust to these errors. The key idea of our approach is to learn a patch-based conditional discriminator which guides the texture optimization to be tolerant to misalignments. Our discriminator takes a synthesized view and a real image, and evaluates whether the synthesized one is realistic, under a broadened definition of realism. We train the discriminator by providing as `real' examples pairs of input views and their misaligned versions -- so that the learned adversarial loss will tolerate errors from the scans. Experiments on synthetic and real data under quantitative and qualitative evaluation demonstrate the advantages of our approach in comparison to the state of the art. Our code is publicly available, along with a video demonstration.
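The training-data construction described in the abstract (pairing an input view with a misaligned version of itself as a `real' example, so the discriminator learns to tolerate small alignment errors) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the patch size, shift range, and function name are assumptions chosen for the sketch.

```python
import numpy as np

def misaligned_pair(view, max_shift=4, patch=32, rng=None):
    """Crop two overlapping patches from the same view: an anchor
    patch and a randomly shifted copy. Feeding such (anchor, shifted)
    pairs to the discriminator as 'real' examples broadens its notion
    of realism to tolerate small misalignments between synthesized
    and captured views. (Illustrative sketch; parameters are assumed.)"""
    rng = np.random.default_rng() if rng is None else rng
    h, w = view.shape[:2]
    # top-left corner of the anchor patch, leaving room for the shift
    y = int(rng.integers(max_shift, h - patch - max_shift))
    x = int(rng.integers(max_shift, w - patch - max_shift))
    # random offset in [-max_shift, max_shift] for the 'misaligned' copy
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    anchor = view[y:y + patch, x:x + patch]
    shifted = view[y + dy:y + dy + patch, x + dx:x + dx + patch]
    return anchor, shifted

# toy example: a 64x64 single-channel "view"
view = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
a, b = misaligned_pair(view, rng=np.random.default_rng(0))
print(a.shape, b.shape)  # both (32, 32)
```

During adversarial training, these pairs would be labeled real alongside (synthesized view, captured view) pairs labeled fake, so the texture optimization is penalized only for errors larger than the tolerated misalignment.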
