Paper Title

Sketched RT3D: How to reconstruct billions of photons per second

Paper Authors

Julián Tachella, Michael P. Sheehan, Mike E. Davies

Paper Abstract

Single-photon light detection and ranging (lidar) captures depth and intensity information of a 3D scene. Reconstructing a scene from observed photons is a challenging task due to spurious detections associated with background illumination sources. To tackle this problem, there is a plethora of 3D reconstruction algorithms that exploit the spatial regularity of natural scenes to provide stable reconstructions. However, most existing algorithms have computational and memory complexity proportional to the number of recorded photons. This complexity hinders their real-time deployment on modern lidar arrays, which acquire billions of photons per second. Leveraging a recent lidar sketching framework, we show that it is possible to modify existing reconstruction algorithms such that they only require a small sketch of the photon information. In particular, we propose a sketched version of a recent state-of-the-art algorithm which uses point cloud denoisers to provide spatially regularized reconstructions. A series of experiments performed on real lidar datasets demonstrates a significant reduction in execution time and memory requirements, while achieving the same reconstruction performance as in the full-data case.
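
To illustrate why a sketch removes the per-photon memory cost, the minimal NumPy example below compresses one pixel's photon arrival times into a fixed-size summary. It is not the authors' code: it assumes, in the spirit of the lidar sketching framework cited above, that the sketch is the empirical characteristic function of the timestamps evaluated at a few harmonic frequencies, and all names and parameter values are illustrative.

    import numpy as np

    def photon_sketch(timestamps, num_freqs, max_time):
        """Compress a pixel's photon time-of-arrival data into a small sketch.

        Instead of storing every timestamp (or a full time-of-flight histogram),
        keep only the empirical characteristic function evaluated at a few
        harmonic frequencies. The sketch size is independent of the photon count.
        """
        t = np.asarray(timestamps, dtype=np.float64)
        k = np.arange(1, num_freqs + 1)                    # frequencies 1..m
        # m complex values summarize n photons, with m << n
        return np.exp(2j * np.pi * np.outer(k, t) / max_time).mean(axis=1)

    # Illustrative example: 100k photons for one pixel reduced to 10 complex values.
    rng = np.random.default_rng(0)
    surface = rng.normal(loc=37.0, scale=0.4, size=80_000)   # photons from the target surface
    ambient = rng.uniform(0.0, 100.0, size=20_000)           # spurious background detections
    sketch = photon_sketch(np.concatenate([surface, ambient]),
                           num_freqs=10, max_time=100.0)
    print(sketch.shape)   # (10,)

Because the sketch has a fixed size per pixel, any downstream reconstruction step that operates on it (rather than on the raw photons) scales with the number of pixels instead of the number of detections, which is the property the abstract relies on.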
