Paper Title

Deep Soft Procrustes for Markerless Volumetric Sensor Alignment

Authors

Vladimiros Sterzentsenko, Alexandros Doumanoglou, Spyridon Thermos, Nikolaos Zioulis, Dimitrios Zarpalas, Petros Daras

Abstract

With the advent of consumer grade depth sensors, low-cost volumetric capture systems are easier to deploy. Their wider adoption though depends on their usability and by extension on the practicality of spatially aligning multiple sensors. Most existing alignment approaches employ visual patterns, e.g. checkerboards, or markers and require high user involvement and technical knowledge. More user-friendly and easier-to-use approaches rely on markerless methods that exploit geometric patterns of a physical structure. However, current SoA approaches are bounded by restrictions in the placement and the number of sensors. In this work, we improve markerless data-driven correspondence estimation to achieve more robust and flexible multi-sensor spatial alignment. In particular, we incorporate geometric constraints in an end-to-end manner into a typical segmentation based model and bridge the intermediate dense classification task with the targeted pose estimation one. This is accomplished by a soft, differentiable procrustes analysis that regularizes the segmentation and achieves higher extrinsic calibration performance in expanded sensor placement configurations, while being unrestricted by the number of sensors of the volumetric capture system. Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure. Code and pretrained models are available at https://vcl3d.github.io/StructureNet/.
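The core idea of bridging dense segmentation with pose estimation is a soft, differentiable Procrustes analysis. Below is a minimal, hypothetical sketch of such a weighted, differentiable Procrustes (Kabsch) alignment in PyTorch; the per-correspondence weights `w` stand in for the soft class probabilities produced by the segmentation network. Function and variable names are illustrative only and are not taken from the authors' released code at the StructureNet repository.

```python
import torch

def soft_procrustes(src: torch.Tensor, dst: torch.Tensor, w: torch.Tensor):
    """Estimate a rigid transform (R, t) aligning `src` to `dst`.

    src, dst: (N, 3) corresponding 3D points
    w:        (N,)  soft correspondence weights in [0, 1]
    Returns R (3, 3) and t (3,) such that dst ≈ src @ R.T + t.
    Every operation is differentiable, so gradients can flow back into `w`
    (and hence into the segmentation network that produced it).
    """
    w = w / (w.sum() + 1e-8)                      # normalize the weights
    mu_src = (w[:, None] * src).sum(dim=0)        # weighted centroids
    mu_dst = (w[:, None] * dst).sum(dim=0)
    src_c, dst_c = src - mu_src, dst - mu_dst     # center both point sets

    # Weighted cross-covariance and its SVD
    H = (w[:, None] * src_c).T @ dst_c            # (3, 3)
    U, S, Vh = torch.linalg.svd(H)

    # Reflection correction keeps R a proper rotation (det(R) = +1)
    d = torch.sign(torch.linalg.det(Vh.T @ U.T))
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))

    R = Vh.T @ D @ U.T
    t = mu_dst - R @ mu_src
    return R, t
```

A quick sanity check under these assumptions: sample random source points, transform them by a known rotation and translation, and verify that `soft_procrustes` recovers the transform when the weights are uniform; lowering the weight of a corrupted correspondence should then reduce its influence on the estimate, which is exactly the behavior the soft segmentation probabilities are meant to exploit.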
