Paper Title
A Heteroscedastic Likelihood Model for Two-frame Optical Flow
Paper Authors
Paper Abstract
Machine vision is an important sensing technology used in mobile robotic systems. Advancing the autonomy of such systems requires accurate characterisation of sensor uncertainty. Vision includes intrinsic uncertainty due to the camera sensor and extrinsic uncertainty due to environmental lighting and texture, both of which propagate through the image processing algorithms used to produce visual measurements. To faithfully characterise visual measurements, we must take these uncertainties into account. In this paper, we propose a new class of likelihood functions that characterises the error distribution of two-frame optical flow and enables a heteroscedastic dependence on texture. We employ the proposed class to characterise the Farneback and Lucas-Kanade optical flow algorithms and achieve close agreement with their respective empirical error distributions over a wide range of textures in a simulated environment. The utility of the proposed likelihood model is demonstrated in a visual odometry ego-motion study, where it yields performance competitive with contemporary methods. The development of an empirically congruent likelihood model advances the requisite tool-set for vision-based Bayesian inference and enables sensor data fusion with GPS, LiDAR and IMU in support of robust autonomous navigation.
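To make the idea concrete, the sketch below pairs a dense two-frame optical flow estimate with an illustrative per-pixel, texture-dependent noise variance. The abstract does not specify the paper's actual likelihood, so the inverse-texture variance law and the parameters a, b, eps are hypothetical placeholders; cv2.cornerMinEigenVal is used only as one plausible local texture measure, not the measure the authors use.

```python
# Illustrative sketch (not the paper's model): Farneback optical flow plus an
# assumed heteroscedastic, texture-dependent per-pixel flow variance.
import cv2
import numpy as np


def flow_with_heteroscedastic_variance(frame1, frame2, a=0.01, b=0.5, eps=1e-6):
    """Return dense two-frame optical flow and an illustrative per-pixel variance.

    frame1, frame2 : uint8 grayscale images of identical shape.
    a, b, eps      : hypothetical variance-law parameters (assumed, not fitted).
    """
    # Dense two-frame optical flow (Farneback's polynomial-expansion method).
    flow = cv2.calcOpticalFlowFarneback(
        frame1, frame2, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Local texture strength: minimum eigenvalue of the structure tensor,
    # which is small in flat regions where flow is least constrained.
    texture = cv2.cornerMinEigenVal(frame1, blockSize=15)

    # Assumed heteroscedastic variance law: large where texture is weak,
    # approaching the floor `a` where texture is strong.
    variance = a + b / (texture + eps)
    return flow, variance


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f1 = rng.integers(0, 256, (120, 160), dtype=np.uint8)
    f2 = np.roll(f1, shift=2, axis=1)  # synthetic 2-pixel horizontal motion
    flow, var = flow_with_heteroscedastic_variance(f1, f2)
    print(flow.shape, var.shape)  # (120, 160, 2) (120, 160)
```

A fitted model of this kind would supply the per-measurement covariances needed to weight optical-flow observations in Bayesian visual odometry and in sensor fusion with GPS, LiDAR and IMU data.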