Paper Title

Future Urban Scenes Generation Through Vehicles Synthesis

Paper Authors

Alessandro Simoni, Luca Bergamini, Andrea Palazzi, Simone Calderara, Rita Cucchiara

Abstract

In this work we propose a deep learning pipeline to predict the visual future appearance of an urban scene. Despite recent advances, generating the entire scene in an end-to-end fashion is still far from being achieved. Instead, here we follow a two-stage approach, where interpretable information is included in the loop and each actor is modelled independently. We leverage a per-object novel view synthesis paradigm, i.e. generating a synthetic representation of an object undergoing a geometrical roto-translation in 3D space. Our model can be easily conditioned with constraints (e.g. input trajectories) provided by state-of-the-art tracking methods or by the user. This allows us to generate a set of diverse realistic futures starting from the same input in a multi-modal fashion. We visually and quantitatively show the superiority of this approach over traditional end-to-end scene-generation methods on CityFlow, a challenging real-world dataset.
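The geometrical roto-translation the abstract refers to is a standard rigid-body transform, p' = Rp + t, applied to an object's points in 3D before rendering its novel view. The sketch below is an illustrative assumption, not the authors' implementation; the function name and the choice of a yaw-only rotation (vehicles turning on the ground plane) are hypothetical.

```python
import numpy as np

def roto_translate(points, yaw, translation):
    """Apply a rigid roto-translation in 3D: p' = R @ p + t.

    points: (N, 3) array of object points.
    yaw: rotation angle in radians about the vertical (z) axis,
         a reasonable simplification for vehicles on a ground plane.
    translation: length-3 displacement vector.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    # Rotation about the z axis.
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T + np.asarray(translation)

# Example: rotate a point 90 degrees counter-clockwise, then shift it 2 units along x.
pts = np.array([[1.0, 0.0, 0.0]])
moved = roto_translate(pts, np.pi / 2, [2.0, 0.0, 0.0])
```

Conditioning on an input trajectory then amounts to supplying a sequence of (yaw, translation) pairs, one per future timestep, and synthesizing the object's appearance at each resulting pose.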
