Paper Title
Dynamic Point Cloud Denoising via Manifold-to-Manifold Distance
Paper Authors
Paper Abstract
3D dynamic point clouds provide a natural discrete representation of real-world objects or scenes in motion, with a wide range of applications in immersive telepresence, autonomous driving, surveillance, etc. Nevertheless, dynamic point clouds are often perturbed by noise due to hardware, software or other causes. While a plethora of methods have been proposed for static point cloud denoising, few efforts have been made toward the denoising of dynamic point clouds, which is quite challenging due to the irregular sampling patterns both spatially and temporally. In this paper, we represent dynamic point clouds naturally on spatial-temporal graphs, and exploit the temporal consistency with respect to the underlying surface (manifold). In particular, we define a manifold-to-manifold distance and its discrete counterpart on graphs to measure the variation-based intrinsic distance between surface patches in the temporal domain, provided that graph operators are discrete counterparts of functionals on Riemannian manifolds. Then, we construct the spatial-temporal graph connectivity: corresponding surface patches in adjacent frames are connected based on this temporal distance, while points within adjacent patches are connected in the spatial domain. Leveraging the initial graph representation, we formulate dynamic point cloud denoising as the joint optimization of the desired point cloud and the underlying graph representation, regularized by both spatial smoothness and temporal consistency. We reformulate the optimization and present an efficient algorithm. Experimental results show that the proposed method significantly outperforms independent per-frame denoising with state-of-the-art static point cloud denoising approaches, under both Gaussian noise and simulated LiDAR noise.
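For intuition, the joint optimization described in the abstract can be sketched as follows. This is only an illustrative formulation consistent with the stated fidelity term and the two regularizers (spatial smoothness and temporal consistency), not the paper's exact objective; all symbols ($\hat{\mathbf{P}}_t$, $\mathbf{L}_t$, $d_{\mathcal{M}}$, $\lambda_s$, $\lambda_t$) are assumed notation introduced here for illustration.

$$
\min_{\{\hat{\mathbf{P}}_t\},\,\{\mathbf{L}_t\}} \;\; \sum_{t} \Big( \big\|\hat{\mathbf{P}}_t - \mathbf{P}_t\big\|_F^2 \;+\; \lambda_s \,\mathrm{tr}\!\big(\hat{\mathbf{P}}_t^{\top}\mathbf{L}_t\hat{\mathbf{P}}_t\big) \;+\; \lambda_t \, d_{\mathcal{M}}\!\big(\hat{\mathbf{P}}_t,\hat{\mathbf{P}}_{t-1}\big) \Big)
$$

Here $\mathbf{P}_t$ is the noisy point cloud frame at time $t$, $\hat{\mathbf{P}}_t$ its denoised estimate, $\mathbf{L}_t$ a graph Laplacian built from the spatial-temporal connectivity (the trace term promotes spatial smoothness on the graph), and $d_{\mathcal{M}}$ the discrete manifold-to-manifold distance between corresponding surface patches in adjacent frames, which enforces temporal consistency; $\lambda_s$ and $\lambda_t$ weight the two regularizers.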