Paper Title
GraNet: Global Relation-aware Attentional Network for ALS Point Cloud Classification
Paper Authors
Paper Abstract
In this work, we propose a novel neural network for semantic labeling of ALS point clouds that investigates the importance of long-range spatial and channel-wise relations, termed the global relation-aware attentional network (GraNet). GraNet first learns local geometric descriptions and local dependencies using a local spatial discrepancy attention convolution module (LoSDA). In LoSDA, orientation information, spatial distribution, and elevation differences are fully considered by stacking several local spatial geometric learning modules, and local dependencies are embedded by an attention pooling module. Then, a global relation-aware attention module (GRA), consisting of a spatial relation-aware attention module (SRA) and a channel relation-aware attention module (CRA), is investigated to further learn the global spatial and channel-wise relationships between any spatial positions and feature vectors. These two modules are embedded in a multi-scale network architecture to further account for scale variations in large urban areas. We conducted comprehensive experiments on two ALS point cloud datasets to evaluate the performance of the proposed framework. The results show that our method achieves higher classification accuracy than other commonly used advanced classification methods. On the ISPRS benchmark dataset, our method reaches an overall accuracy (OA) of 84.5% for classifying nine semantic classes, with an average F1 measure (AvgF1) of 73.5%. In detail, the per-class F1 values are: powerlines: 66.3%, low vegetation: 82.8%, impervious surface: 91.8%, car: 80.7%, fence: 51.2%, roof: 94.6%, facade: 62.1%, shrub: 49.9%, tree: 82.1%. In addition, experiments were conducted on a new ALS point cloud dataset covering a highly dense urban area.
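The spatial relation-aware attention (SRA) described above relates every spatial position to every other. A minimal sketch of such a global attention step over per-point features is shown below, assuming standard dot-product self-attention with a residual connection; the projection matrices and the residual form are illustrative assumptions, not the paper's exact SRA formulation, and learned weights are replaced by random matrices for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_relation_attention(feats, rng=None):
    """Global (non-local) attention over per-point features.

    feats: (N, C) array, one feature vector per point.
    Returns (N, C): each point's feature is augmented with a
    relation-weighted sum over all points' features, so every
    output depends on every spatial position.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n, c = feats.shape
    # Hypothetical learned projections, drawn randomly here for illustration.
    wq = rng.standard_normal((c, c)) / np.sqrt(c)
    wk = rng.standard_normal((c, c)) / np.sqrt(c)
    wv = rng.standard_normal((c, c)) / np.sqrt(c)
    q, k, v = feats @ wq, feats @ wk, feats @ wv
    attn = softmax(q @ k.T / np.sqrt(c), axis=-1)  # (N, N) pairwise relations
    return feats + attn @ v  # residual connection

feats = np.random.default_rng(1).standard_normal((100, 32))
out = spatial_relation_attention(feats)
print(out.shape)  # (100, 32)
```

A channel relation-aware variant (CRA) follows the same pattern with the attention map computed between the C feature channels (a C-by-C relation matrix) instead of between the N points.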