Paper Title

Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion

Paper Authors

Zian Wang, Wenzheng Chen, David Acuna, Jan Kautz, Sanja Fidler

Paper Abstract

We consider the challenging problem of outdoor lighting estimation for the goal of photorealistic virtual object insertion into photographs. Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map which cannot capture the spatially-varying lighting effects in outdoor scenes. In this work, we propose a neural approach that estimates the 5D HDR light field from a single image, and a differentiable object insertion formulation that enables end-to-end training with image-based losses that encourage realism. Specifically, we design a hybrid lighting representation tailored to outdoor scenes, which contains an HDR sky dome that handles the extreme intensity of the sun, and a volumetric lighting representation that models the spatially-varying appearance of the surrounding scene. With the estimated lighting, our shadow-aware object insertion is fully differentiable, which enables adversarial training over the composited image to provide additional supervisory signal to the lighting prediction. We experimentally demonstrate that our hybrid lighting representation is more performant than existing outdoor lighting estimation methods. We further show the benefits of our AR object insertion in an autonomous driving application, where we obtain performance gains for a 3D object detector when trained on our augmented data.
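As a rough illustration of the hybrid lighting representation described in the abstract, the sketch below queries incident radiance at a 3D point and direction by blending a near-field volumetric lighting grid with a far-field HDR sky dome. This is not the authors' implementation: the PyTorch calls are standard, but the function names (`sky_dome_radiance`, `volume_radiance`, `query_light_field`), the grid resolutions, the scene bounds, and the alpha-based blending scheme are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch (not the paper's actual code) of querying a hybrid
# outdoor lighting representation: an HDR sky dome plus a volumetric lighting grid.
# Shapes, resolutions, and the blending scheme are assumptions for illustration.

import torch
import torch.nn.functional as F


def sky_dome_radiance(sky_env: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
    """Look up HDR sky radiance for unit directions dirs (N, 3) in an
    equirectangular environment map sky_env of shape (3, H, W)."""
    x, y, z = dirs.unbind(-1)
    # Spherical coordinates mapped to normalized grid coordinates in [-1, 1].
    phi = torch.atan2(x, z) / torch.pi                    # azimuth
    theta = torch.asin(y.clamp(-1, 1)) / (torch.pi / 2)   # elevation
    grid = torch.stack([phi, -theta], dim=-1).view(1, -1, 1, 2)
    sampled = F.grid_sample(sky_env[None], grid, align_corners=True)
    return sampled.view(3, -1).t()                        # (N, 3) HDR radiance


def volume_radiance(vol: torch.Tensor, pts: torch.Tensor, bound: float) -> torch.Tensor:
    """Trilinearly sample a per-voxel RGB + alpha grid vol of shape (4, D, H, W)
    at 3D points pts (N, 3), given a symmetric scene extent `bound`."""
    grid = (pts / bound).clamp(-1, 1).view(1, -1, 1, 1, 3)
    sampled = F.grid_sample(vol[None], grid, align_corners=True)
    return sampled.view(4, -1).t()                        # (N, 4): RGB + alpha


def query_light_field(pts, dirs, vol, sky_env, bound=50.0):
    """Blend near-field volumetric radiance with the far-field sky dome:
    where the volume is 'empty' (low alpha), radiance falls back to the sky."""
    near = volume_radiance(vol, pts, bound)
    rgb, alpha = near[:, :3], near[:, 3:4].sigmoid()
    sky = sky_dome_radiance(sky_env, dirs)
    return alpha * rgb + (1.0 - alpha) * sky              # (N, 3) incident radiance


if __name__ == "__main__":
    vol = torch.randn(4, 32, 32, 32)        # toy volumetric lighting grid
    sky = torch.rand(3, 64, 128) * 10.0     # toy HDR sky dome
    pts = torch.randn(8, 3) * 10.0
    dirs = F.normalize(torch.randn(8, 3), dim=-1)
    print(query_light_field(pts, dirs, vol, sky).shape)   # torch.Size([8, 3])
```

Because every operation above is differentiable, image-based losses on a rendered composite (e.g. an adversarial realism loss, as the abstract describes) could in principle backpropagate through such a query into both the sky dome and the volumetric grid.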
