Paper Title


Self-Supervised Feature Learning from Partial Point Clouds via Pose Disentanglement

Paper Authors

Meng-Shiun Tsai, Pei-Ze Chiang, Yi-Hsuan Tsai, Wei-Chen Chiu

Abstract


Self-supervised learning on point clouds has gained a lot of attention recently, since it addresses the label-efficiency and domain-gap problems on point cloud tasks. In this paper, we propose a novel self-supervised framework to learn informative representations from partial point clouds. We leverage partial point clouds scanned by LiDAR that contain both content and pose attributes, and we show that disentangling such two factors from partial point clouds enhances feature representation learning. To this end, our framework consists of three main parts: 1) a completion network to capture holistic semantics of point clouds; 2) a pose regression network to understand the viewing angle where partial data is scanned from; 3) a partial reconstruction network to encourage the model to learn content and pose features. To demonstrate the robustness of the learnt feature representations, we conduct several downstream tasks including classification, part segmentation, and registration, with comparisons against state-of-the-art methods. Our method not only outperforms existing self-supervised methods, but also shows a better generalizability across synthetic and real-world datasets.
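The three-part design described above can be sketched as a shared encoder that splits a partial scan's embedding into a content code and a pose code, with three heads consuming them. This is a minimal illustrative toy, not the paper's implementation: all function names, weight shapes, and the use of linear projections are assumptions.

```python
import numpy as np

def encode(partial, W_content, W_pose):
    """Toy shared encoder: max-pool the partial cloud into a global feature,
    then project it into disentangled content and pose codes (illustrative)."""
    g = partial.max(axis=0)                  # (3,) max-pooled global feature
    return W_content @ g, W_pose @ g         # content code, pose code

def completion_head(content, W_c):
    # 1) completion: decode the holistic (complete) shape from content alone
    return (W_c @ content).reshape(-1, 3)

def pose_head(pose, W_p):
    # 2) pose regression: predict the viewpoint the scan was taken from
    return W_p @ pose                        # e.g. a 3-D viewing direction

def partial_reconstruction_head(content, pose, W_r):
    # 3) partial reconstruction: needs BOTH codes to rebuild the input scan
    z = np.concatenate([content, pose])
    return (W_r @ z).reshape(-1, 3)

# Toy forward pass with random weights and 64-dim codes (all hypothetical).
rng = np.random.default_rng(0)
partial = rng.normal(size=(128, 3))          # a partial scan of 128 points
W_content, W_pose = rng.normal(size=(64, 3)), rng.normal(size=(64, 3))
content, pose = encode(partial, W_content, W_pose)
full = completion_head(content, rng.normal(size=(256 * 3, 64)))     # (256, 3)
view = pose_head(pose, rng.normal(size=(3, 64)))                    # (3,)
recon = partial_reconstruction_head(content, pose,
                                    rng.normal(size=(128 * 3, 128)))  # (128, 3)
```

The key structural point the sketch captures is the disentanglement: the completion head sees only the content code, the pose head only the pose code, while partial reconstruction requires both, which is what pressures the encoder to separate the two factors.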
