

Visual Odometer Based on Optical Flow Method and Feature Matching



Abstract

To address the poor positioning accuracy of the optical-flow method and the high time consumption of the feature-point method in traditional visual odometry, we propose a visual odometry model that fuses optical flow with feature matching. The model combines LK optical-flow pose estimation based on inter-frame optimization with an optical-flow/feature-point pose-optimization algorithm based on keyframes. To mitigate the accumulated error of the traditional reference-frame/current-frame tracking scheme, a local optimization algorithm is introduced on top of the optical-flow method to obtain a preliminary estimate of the camera's pose. To address the excessive image-insertion frequency and high time consumption of the feature-based method, a unified optical-flow/feature-point loss function is constructed over the keyframes to optimize the camera's pose. Positioning-accuracy tests on the EuRoC dataset show that in simple environments the proposed algorithm matches the accuracy of the feature-point method, while in scenes with missing feature points it achieves higher accuracy, demonstrating a degree of robustness. Running-time tests show that, while maintaining positioning accuracy, the proposed algorithm reduces the running time by 37.9% compared with the feature-point method, providing a degree of real-time performance.
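The abstract does not give the paper's implementation details. As a minimal illustration of the LK building block it relies on, the sketch below solves a single-window Lucas-Kanade least-squares system in pure NumPy; the function name, window size, and the synthetic Gaussian test image are assumptions, not the authors' code.

```python
import numpy as np

def lk_flow(I0, I1, x, y, win=7):
    """Estimate the displacement (dx, dy) of the patch centered at (x, y)
    between frames I0 and I1 with one Lucas-Kanade least-squares step.
    Illustrative sketch only -- no pyramid, no iteration."""
    h = win // 2
    Ix = np.gradient(I0, axis=1)          # spatial gradient along x (columns)
    Iy = np.gradient(I0, axis=0)          # spatial gradient along y (rows)
    It = I1 - I0                          # temporal difference
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    # Stack per-pixel gradients into the LK system A d = -It
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                              # [dx, dy]

# Synthetic check: a Gaussian blob shifted by (0.5, 0.3) pixels
xx, yy = np.meshgrid(np.arange(40.0), np.arange(40.0))
I0 = np.exp(-((xx - 20.0)**2 + (yy - 20.0)**2) / 50.0)
I1 = np.exp(-((xx - 20.5)**2 + (yy - 20.3)**2) / 50.0)
dx, dy = lk_flow(I0, I1, 23, 18)          # a point off the blob's peak
```

In a full pipeline such as the one the abstract describes, this single step would be wrapped in a coarse-to-fine pyramid and iterated per tracked point before the inter-frame pose optimization.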
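The exact form of the paper's unified optical-flow/feature-point loss is not given in the abstract. As a hedged illustration of the general idea, one plausible form is a weighted sum of the reprojection error of matched feature points and the reprojection error of flow-tracked points; all function names, the weight `lam`, and the test values below are assumptions.

```python
import numpy as np

def project(K, T, pts3d):
    """Project 3D points (N, 3) through a 4x4 pose T and intrinsics K (3x3)."""
    P = T[:3, :3] @ pts3d.T + T[:3, 3:4]   # points in the camera frame, (3, N)
    uv = (K @ P) / P[2]                    # pinhole projection + perspective division
    return uv[:2].T                        # pixel coordinates, (N, 2)

def unified_loss(K, T, pts3d_feat, uv_feat, pts3d_flow, uv_flow, lam=0.5):
    """Assumed unified loss: feature-point reprojection error plus a
    lam-weighted reprojection error of optical-flow-tracked points."""
    r_feat = project(K, T, pts3d_feat) - uv_feat
    r_flow = project(K, T, pts3d_flow) - uv_flow
    return np.sum(r_feat**2) + lam * np.sum(r_flow**2)

# Sanity check: the true pose reproduces the observations, so the loss is zero
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.2, 3.0]])
uv = project(K, T, pts)
loss_true = unified_loss(K, T, pts, uv, pts, uv)
```

Minimizing such a loss over `T` (e.g. with Gauss-Newton on the SE(3) manifold) is the standard way a keyframe-based pose optimization of this kind would be carried out.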

Supplementary Information

CLC Number: TP24

DOI: 10.3788/LOP57.201501

Section: Machine Vision

Funding: National Natural Science Foundation of China

Received: 2020-01-13

Revised: 2020-02-24

Published Online: 2020-10-01

Author Affiliations

Xu Guangfu: School of Instrument Science and Engineering, Southeast University, Nanjing 210096, Jiangsu, China; Key Laboratory of Micro-Inertial Instrument and Advanced Navigation Technology, Ministry of Education, Nanjing 210096, Jiangsu, China
Zeng Jichao: School of Instrument Science and Engineering, Southeast University, Nanjing 210096, Jiangsu, China; Key Laboratory of Micro-Inertial Instrument and Advanced Navigation Technology, Ministry of Education, Nanjing 210096, Jiangsu, China
Liu Xixiang: School of Instrument Science and Engineering, Southeast University, Nanjing 210096, Jiangsu, China; Key Laboratory of Micro-Inertial Instrument and Advanced Navigation Technology, Ministry of Education, Nanjing 210096, Jiangsu, China

Corresponding Author: Liu Xixiang (scliuseu@163.com)


Cite This Article

Xu Guangfu, Zeng Jichao, Liu Xixiang. Visual Odometer Based on Optical Flow Method and Feature Matching[J]. Laser & Optoelectronics Progress, 2020, 57(20): 201501.

