

Global Localization for Indoor Mobile Robot Based on Binocular Vision



Abstract

Current global localization algorithms for indoor mobile robots based on monocular vision suffer from high computational complexity. To address this problem, this study proposes a global localization method for indoor mobile robots based on binocular vision. To ensure stable feature extraction while the robot is in motion under binocular vision, a calibration-board-based global localization scheme is presented, in which the center of the calibration board serves as the localization point of the mobile robot. On this basis, to improve the real-time performance of localization and narrow the extraction range of the calibration-board corner points, the robot's motion area is detected using Gaussian-mixture-model background subtraction and morphological processing. The corner points extracted within the motion area are then screened against an established corner-point criterion for the calibration board, yielding the image coordinates of the board's four corner points. Finally, the coordinates of the localization point are computed by combining the intrinsic and extrinsic parameters of the calibrated binocular camera with the global localization mathematical model. Experiments and analysis verify the feasibility and effectiveness of the proposed method, which offers a new approach to global vision-based localization for indoor mobile robots.
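The last step of the pipeline — recovering the localization point from its image coordinates in the two views using the calibrated camera parameters — amounts to stereo triangulation. Below is a minimal linear (DLT) triangulation sketch, not the paper's actual global localization model: the projection matrices, pixel coordinates, and the idea of taking the board center as the mean of the four screened corners are illustrative assumptions.

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Linear (DLT) triangulation of one point from a calibrated stereo pair.

    P_left, P_right : 3x4 projection matrices (intrinsics @ extrinsics)
    uv_left, uv_right : pixel coordinates of the same point (e.g. the
        calibration-board center) in the left and right images
    Returns the 3D world coordinates of the point.
    """
    u1, v1 = uv_left
    u2, v2 = uv_right
    # Each view contributes two linear constraints A @ X_h = 0 on the
    # homogeneous world point X_h.
    A = np.array([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # The least-squares homogeneous solution is the right singular vector
    # of A with the smallest singular value; dehomogenize to get (X, Y, Z).
    _, _, Vt = np.linalg.svd(A)
    X_h = Vt[-1]
    return X_h[:3] / X_h[3]
```

In a setup like the paper's, the pixel coordinates fed to `triangulate` would be the board center, obtained in each image as the mean of the four screened corner points.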

Supplementary Information

CLC number: TP391

DOI: 10.3788/LOP57.041503

Section: Machine Vision

Funding: National Natural Science Foundation of China; China Postdoctoral Science Foundation; Natural Science Foundation of Liaoning Province; Fundamental Research Funds for the Central Universities

Received: 2019-06-11

Revised: 2019-07-26

Published online: 2020-02-01

Author affiliations:

Li Peng: College of Information Science and Technology, Dalian Maritime University, Dalian 116026, Liaoning, China
Zhang Yangyang: College of Marine Electrical Engineering, Dalian Maritime University, Dalian 116026, Liaoning, China

Corresponding author: Li Peng (lp20131012@dlmu.edu.cn)


Cite This Paper

Li Peng, Zhang Yangyang. Global Localization for Indoor Mobile Robot Based on Binocular Vision[J]. Laser & Optoelectronics Progress, 2020, 57(4): 041503
