Title


GRNet: Gridding Residual Network for Dense Point Cloud Completion

Authors

Haozhe Xie, Hongxun Yao, Shangchen Zhou, Jiageng Mao, Shengping Zhang, Wenxiu Sun

Abstract


Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications. Mainstream methods (e.g., PCN and TopNet) use Multi-layer Perceptrons (MLPs) to directly process point clouds, which may cause the loss of details because the structure and context of point clouds are not fully considered. To solve this problem, we introduce 3D grids as intermediate representations to regularize unordered point clouds. We therefore propose a novel Gridding Residual Network (GRNet) for point cloud completion. In particular, we devise two novel differentiable layers, named Gridding and Gridding Reverse, to convert between point clouds and 3D grids without losing structural information. We also present the differentiable Cubic Feature Sampling layer to extract features of neighboring points, which preserves context information. In addition, we design a new loss function, namely Gridding Loss, to calculate the L1 distance between the 3D grids of the predicted and ground truth point clouds, which is helpful to recover details. Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
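To make the Gridding and Gridding Loss ideas above concrete, here is a minimal NumPy sketch (not the authors' implementation, which is a differentiable CUDA layer): each point scatters trilinear weights onto the eight vertices of the grid cell containing it, and the Gridding Loss is then the mean L1 distance between the gridded predicted and ground-truth clouds. The function names, the [-1, 1]^3 coordinate range, and the default resolution are assumptions for illustration.

```python
import numpy as np

def gridding(points, resolution=64):
    """Scatter a point cloud (N, 3) with coordinates in [-1, 1]^3
    onto a regular 3D grid.

    Each point contributes trilinear weights to the 8 vertices of the
    cell containing it -- a simplified, non-CUDA stand-in for the
    paper's differentiable Gridding layer.
    """
    grid = np.zeros((resolution,) * 3, dtype=np.float64)
    # Map coordinates from [-1, 1] to voxel space [0, resolution - 1].
    coords = (points + 1.0) * 0.5 * (resolution - 1)
    base = np.floor(coords).astype(int)   # lower-corner vertex of each cell
    frac = coords - base                  # fractional offset inside the cell
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # Trilinear weight of this corner for every point.
                w = (np.where(dx, frac[:, 0], 1.0 - frac[:, 0])
                     * np.where(dy, frac[:, 1], 1.0 - frac[:, 1])
                     * np.where(dz, frac[:, 2], 1.0 - frac[:, 2]))
                idx = np.clip(base + [dx, dy, dz], 0, resolution - 1)
                np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), w)
    return grid

def gridding_loss(pred_points, gt_points, resolution=64):
    """Mean L1 distance between the gridded representations of two clouds."""
    return np.abs(gridding(pred_points, resolution)
                  - gridding(gt_points, resolution)).mean()
```

Because every point's weights sum to one over its eight cell corners, the grid preserves the cloud's mass while imposing a regular structure, which is what lets a dense-volume loss penalize missing detail that a pointwise Chamfer distance can smooth over.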
