Paper Title
LiDAR Road-Atlas: An Efficient Map Representation for General 3D Urban Environment
Paper Authors
Abstract
In this work, we propose the LiDAR Road-Atlas, a compact and efficient 3D map representation for autonomous robot or vehicle navigation in general urban environments. The LiDAR Road-Atlas can be generated by an online mapping framework that incrementally merges local 2D occupancy grid maps (2D-OGMs). Specifically, the contributions of our LiDAR Road-Atlas representation are threefold. First, we solve the challenging problem of creating local 2D-OGMs in non-structured urban scenes through real-time delimitation of traversable and curb regions in LiDAR point clouds. Second, we achieve accurate 3D mapping in multi-layer urban road scenarios via a probabilistic fusion scheme. Third, we achieve a very efficient 3D map representation of general environments thanks to the automatic local-OGM-induced traversable-region labeling and a sparse probabilistic local point-cloud encoding. Given the LiDAR Road-Atlas, one can achieve accurate vehicle localization, path planning, and other tasks. Our map representation is insensitive to dynamic objects, which are filtered out of the resulting map by the probabilistic fusion. Empirically, we compare our map representation with several popular map representations from the robotics and autonomous driving communities, and ours is more favorable in terms of efficiency, scalability, and compactness. In addition, we extensively evaluate localization accuracy using LiDAR Road-Atlas representations created on several public benchmark datasets. With a 16-channel LiDAR sensor, our method achieves average global localization errors of 0.26 m (translation) and 1.07 degrees (rotation) on the Apollo dataset, and 0.89 m (translation) and 1.29 degrees (rotation) on the MulRan dataset, at 10 Hz, which validates the promising performance of our map representation for autonomous driving.
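The abstract attributes both multi-layer mapping and dynamic-object filtering to a probabilistic fusion scheme, but gives no implementation details. The following is a minimal sketch of the standard log-odds occupancy fusion that such schemes are typically built on (the function names and clamping bounds are illustrative assumptions, not the authors' actual implementation): cells repeatedly observed as occupied converge to high occupancy, while cells hit only transiently by a moving object are pushed back toward free space by subsequent free-space observations.

```python
import numpy as np

def log_odds(p):
    """Convert an occupancy probability to log-odds form."""
    return np.log(p / (1.0 - p))

def fuse_local_ogm(global_log_odds, local_probs, l_min=-4.0, l_max=4.0):
    """Fuse one local occupancy grid (cell probabilities) into the
    global map stored in log-odds form. Log-odds of independent
    observations simply add; clamping keeps the map responsive so
    stale evidence can be overturned by new scans."""
    global_log_odds = global_log_odds + log_odds(local_probs)
    # Clamp to avoid saturation (bounds are illustrative).
    return np.clip(global_log_odds, l_min, l_max)

def to_probability(global_log_odds):
    """Recover occupancy probabilities from the fused log-odds map."""
    return 1.0 - 1.0 / (1.0 + np.exp(global_log_odds))

# Static structure: observed occupied (p=0.7) in three successive scans.
static_cell = np.zeros(1)
for _ in range(3):
    static_cell = fuse_local_ogm(static_cell, np.array([0.7]))

# Dynamic object: one occupied hit, then three free observations (p=0.3).
dynamic_cell = fuse_local_ogm(np.zeros(1), np.array([0.7]))
for _ in range(3):
    dynamic_cell = fuse_local_ogm(dynamic_cell, np.array([0.3]))
```

After fusion, the static cell's occupancy probability is well above 0.5 while the dynamic cell's falls below it, which is the mechanism by which probabilistic fusion removes moving objects from the final map.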