Title

PRGFlow: Benchmarking SWAP-Aware Unified Deep Visual Inertial Odometry

Authors

Sanket, Nitin J., Singh, Chahat Deep, Fermüller, Cornelia, Aloimonos, Yiannis

Abstract

Odometry on aerial robots has to be of low latency and high robustness whilst also respecting the Size, Weight, Area and Power (SWAP) constraints as demanded by the size of the robot. A combination of visual sensors coupled with Inertial Measurement Units (IMUs) has proven to be the best combination to obtain robust and low latency odometry on resource-constrained aerial robots. Recently, deep learning approaches for Visual Inertial fusion have gained momentum due to their high accuracy and robustness. However, the remarkable advantages of these techniques are their inherent scalability (adaptation to different sized aerial robots) and unification (same method works on different sized aerial robots) by utilizing compression methods and hardware acceleration, which have been lacking from previous approaches. To this end, we present a deep learning approach for visual translation estimation and loosely fuse it with an Inertial sensor for full 6DoF odometry estimation. We also present a detailed benchmark comparing different architectures, loss functions and compression methods to enable scalability. We evaluate our network on the MSCOCO dataset and evaluate the VI fusion on multiple real-flight trajectories.
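The abstract mentions loosely fusing a learned visual translation estimate with an inertial sensor for full 6DoF odometry. As a minimal illustrative sketch (not the paper's actual pipeline), a complementary-filter-style loose fusion blends IMU dead-reckoned position with per-step visual displacement estimates; the function name, array layouts, and the blend weight `alpha` below are assumptions for illustration:

```python
import numpy as np

def loose_vi_fusion(accel, dt, visual_disp, alpha=0.98):
    """Complementary-filter-style loose VI fusion (illustrative sketch).

    accel:       (N, 3) gravity-compensated body accelerations from the IMU
    dt:          sample period in seconds
    visual_disp: (N, 3) per-step translation estimates from the vision front end
    alpha:       weight on the IMU prediction; (1 - alpha) on the visual update
    Returns an (N, 3) fused position trajectory.
    """
    vel = np.zeros(3)
    pos = np.zeros(3)
    traj = []
    for a, dv in zip(accel, visual_disp):
        vel = vel + a * dt                 # integrate acceleration (IMU prediction)
        pos_imu = pos + vel * dt           # dead-reckoned position
        pos_vis = pos + dv                 # position implied by the visual displacement
        pos = alpha * pos_imu + (1.0 - alpha) * pos_vis  # loose complementary blend
        traj.append(pos.copy())
    return np.array(traj)
```

The high `alpha` trusts the high-rate, low-latency IMU between visual updates, while the visual term bounds the drift of pure inertial integration; a full system would also fuse orientation (e.g. from a gyroscope-driven attitude filter).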
