Paper Title

Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera

Authors

Jae Shin Yoon, Kihwan Kim, Orazio Gallo, Hyun Soo Park, Jan Kautz

Abstract

This paper presents a new method to synthesize an image from arbitrary views and times given a collection of images of a dynamic scene. A key challenge for the novel view synthesis arises from dynamic scene reconstruction where epipolar geometry does not apply to the local motion of dynamic contents. To address this challenge, we propose to combine the depth from single view (DSV) and the depth from multi-view stereo (DMV), where DSV is complete, i.e., a depth is assigned to every pixel, yet view-variant in its scale, while DMV is view-invariant yet incomplete. Our insight is that although its scale and quality are inconsistent with other views, the depth estimation from a single view can be used to reason about the globally coherent geometry of dynamic contents. We cast this problem as learning to correct the scale of DSV, and to refine each depth with locally consistent motions between views to form a coherent depth estimation. We integrate these tasks into a depth fusion network in a self-supervised fashion. Given the fused depth maps, we synthesize a photorealistic virtual view in a specific location and time with our deep blending network that completes the scene and renders the virtual view. We evaluate our method of depth estimation and view synthesis on diverse real-world dynamic scenes and show the outstanding performance over existing methods.
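The core idea in the abstract, aligning the scale of the complete but view-variant single-view depth (DSV) to the view-invariant but incomplete multi-view depth (DMV), can be illustrated with a simplified, non-learned sketch. The paper itself performs this correction inside a self-supervised depth fusion network; the snippet below only shows the underlying scale-alignment intuition via a least-squares fit on pixels where DMV is available. All function and variable names are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def align_dsv_to_dmv(dsv, dmv, valid_mask):
    """Illustrative scale correction (not the paper's learned network).

    Fit a global scale/shift so the dense single-view depth (DSV), which has
    an arbitrary per-view scale, agrees with the multi-view stereo depth
    (DMV) on the pixels where DMV is defined, then use the rescaled DSV to
    fill in the pixels DMV could not reconstruct (e.g., moving objects).

    dsv:        (H, W) dense depth from a monocular depth network
    dmv:        (H, W) depth from multi-view stereo, incomplete but consistent
    valid_mask: (H, W) boolean, True where DMV has a reliable value
    """
    x = dsv[valid_mask].reshape(-1)
    y = dmv[valid_mask].reshape(-1)

    # Least-squares fit: minimize || s * x + t - y ||^2 over scale s, shift t.
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)

    fused = s * dsv + t
    # Keep the metrically consistent DMV values where they exist; the
    # rescaled DSV only fills the holes.
    fused[valid_mask] = dmv[valid_mask]
    return fused
```

In the paper, this correction is learned per pixel and further refined with locally consistent motion between views inside the depth fusion network; the closed-form fit above is only a minimal stand-in for that step.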
