Laser & Optoelectronics Progress, 2018, 55(2): 021501. Online publication: 2018-09-10
Point-Line Feature Fusion in Monocular Visual Odometry
Keywords: machine vision; patrol robot; autonomous localization and mapping; point-line feature fusion; visual odometry; depth filter
Abstract
A semi-direct monocular visual odometry (SVO) algorithm based on point-line feature fusion is proposed to solve the localization and mapping problem of patrol robots in underground engineering scenes. The proposed algorithm is divided into three threads: feature extraction, state estimation, and depth filtering. The feature extraction thread extracts point and line features from each image. The state estimation thread obtains the six-degree-of-freedom camera pose using different matching and tracking strategies for point and line features, and further optimizes the pose through the constraint relationships between frames, between features, and among local frames. The depth filter thread describes the depth of three-dimensional landmarks relative to the camera optical center with a probability distribution, which improves the robustness of depth estimation compared with using fixed depth values. The average positioning accuracy of the proposed algorithm is 17.6% higher than that of the LSD-SLAM algorithm on the Euroc dataset, and 6.4% higher than that of the SVO algorithm on the Tum dataset. Tests on a robot platform equipped with a camera show an actual positioning error of about 1.17%, which meets practical requirements.
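To illustrate the depth-filter idea described in the abstract, the following is a minimal, hypothetical Python sketch. It models each landmark's inverse-free depth as a single Gaussian and fuses each new triangulated measurement by a Bayesian (product-of-Gaussians) update; the paper's actual filter is more elaborate (SVO-style filters typically also model outliers with a Gaussian-uniform mixture), and the class, method names, and thresholds here are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class DepthSeed:
    """Per-feature depth hypothesis: depth along the ray from the
    camera optical center, modeled as a Gaussian N(mu, sigma2).
    (Simplified sketch; not the paper's exact filter.)"""
    mu: float      # current depth estimate (mean)
    sigma2: float  # current depth uncertainty (variance)

    def update(self, z: float, tau2: float) -> None:
        """Fuse one triangulated depth measurement z (variance tau2)
        via the product of two Gaussians."""
        self.mu = (self.sigma2 * z + tau2 * self.mu) / (self.sigma2 + tau2)
        self.sigma2 = (self.sigma2 * tau2) / (self.sigma2 + tau2)

    def converged(self, tol: float = 1e-3) -> bool:
        """Declare the landmark depth converged once the variance is
        below a tolerance; it can then be inserted into the map."""
        return self.sigma2 < tol


# Usage: start with a coarse prior, refine with measurements from
# successive frames; the variance shrinks monotonically.
seed = DepthSeed(mu=2.0, sigma2=1.0)
for z in (1.8, 1.9, 1.85, 1.87):
    seed.update(z, tau2=0.05)
```

Because the estimate carries an explicit variance rather than a fixed depth value, downstream pose optimization can weight (or defer) uncertain landmarks, which is the robustness benefit the abstract attributes to the probabilistic depth representation.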
Meng Yuan, Aihua Li, Yong Zheng, Zhigao Cui, Zhengqiang Bao. Point-Line Feature Fusion in Monocular Visual Odometry[J]. Laser & Optoelectronics Progress, 2018, 55(2): 021501.