Paper Title

4D Visualization of Dynamic Events from Unconstrained Multi-View Videos

Paper Authors

Aayush Bansal, Minh Vo, Yaser Sheikh, Deva Ramanan, Srinivasa Narasimhan

Paper Abstract

We present a data-driven approach for 4D space-time visualization of dynamic events from videos captured by multiple hand-held cameras. Key to our approach is the use of scene-specific self-supervised neural networks to compose the static and dynamic aspects of an event. Though captured from discrete viewpoints, this model enables us to move continuously around the space-time of the event. The model allows us to create virtual cameras that facilitate: (1) freezing time and exploring views; (2) freezing a view and moving through time; and (3) changing both time and view simultaneously. We can also edit the videos and reveal objects occluded in a given view if they are visible in any of the other views. We validate our approach on challenging in-the-wild events captured using up to 15 mobile cameras.
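
The abstract only sketches the method, but its core idea, a scene-specific network that composes a static (time-independent) component with a dynamic (time-dependent) component and can be queried at a continuous viewpoint and time, can be illustrated in miniature. The sketch below is a hypothetical simplification, not the authors' implementation: the `SceneComposer` module, its 6-DoF pose input, and the alpha-blend composition are all assumptions made for illustration, and the real system renders full images rather than single color values.

```python
# Minimal sketch (assumed architecture, not the paper's actual model) of
# composing static and dynamic scene branches, queried at a continuous
# (viewpoint, time) coordinate.
import torch
import torch.nn as nn


class SceneComposer(nn.Module):
    """Hypothetical composer: a static branch conditioned on camera pose
    only, a dynamic branch conditioned on pose and time, blended by a
    predicted alpha weight."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        # Inputs: a flattened 6-DoF camera pose, plus a scalar time t in [0, 1].
        self.static_branch = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.dynamic_branch = nn.Sequential(
            nn.Linear(7, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.alpha_head = nn.Sequential(
            nn.Linear(7, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, pose: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # pose: (B, 6) virtual camera pose; t: (B, 1) continuous timestamp.
        static_rgb = self.static_branch(pose)      # time-independent content
        x = torch.cat([pose, t], dim=-1)
        dynamic_rgb = self.dynamic_branch(x)       # time-dependent content
        alpha = self.alpha_head(x)                 # per-query blend weight
        return alpha * dynamic_rgb + (1 - alpha) * static_rgb


model = SceneComposer()
pose = torch.randn(1, 6)                 # an arbitrary virtual camera pose

# "Freeze time and explore views": hold t fixed, sweep the pose.
frozen_t = torch.full((1, 1), 0.5)
rgb_view_sweep = model(pose, frozen_t)

# "Freeze a view and move through time": hold the pose, sweep t.
rgb_time_sweep = model(pose, torch.full((1, 1), 0.9))
print(rgb_view_sweep.shape, rgb_time_sweep.shape)  # torch.Size([1, 3]) twice
```

Because both branches take continuous inputs, nothing restricts queries to the discrete captured viewpoints or frame times, which is the property the abstract highlights; the split into static and dynamic branches is what makes edits such as removing the dynamic layer to reveal occluded background plausible.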
