红外与激光工程 (Infrared and Laser Engineering), 2015, 44(7): 2231. Published online: 2016-01-26

TOF激光相机六自由度位姿变换估计

Estimating 6 DOF pose transformation of a TOF laser camera
Author affiliation
State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150080, Heilongjiang, China
摘要
相对位姿估计是机器人视觉领域的研究热点。通过两帧数据来估计相机的六自由度位姿变换。充分挖掘TOF相机优势,提出了多个有效算法,用以保证估计精度。采用迭代最近点(ICP)算法估计位姿变换,为了克服ICP算法迭代发散问题,利用尺度不变特征点对估计初始值。为了提取有效特征点,根据统计学原理尺度化灰度图像,提高图像对比度。为了提高相机的测量精度,根据曝光时间越长,测量精度越高的原理,提出了融合多帧数据算法,使得融合后的数据帧中每个像素值均是在最长合理曝光时间下采集得到。同时提出了度量两个六自由度位姿变换差异的算法,并首次利用其跟踪ICP迭代过程。实验证明提出的算法可以有效估计相机六自由度位姿变换。
Abstract
Relative pose estimation is a hot research topic in the robotic vision community. The 6 DOF pose transformation of the camera was estimated from two data frames. Several effective algorithms were proposed to guarantee estimation accuracy by making full use of the advantages of the TOF camera. The Iterative Closest Point (ICP) algorithm was used to estimate the pose transformation; to overcome the divergence problem of ICP iterations, Scale-Invariant Feature Transform (SIFT) feature pairs were employed to compute an initial value for ICP. To extract effective features, the contrast of the image was increased by scaling the original gray image according to statistical principles. To improve the accuracy of depth measurement, multiple frames were fused, based on the fact that the longer the exposure time, the higher the measurement accuracy, so that every pixel in the fused frame was captured with the longest valid exposure time. A method for measuring the difference between two 6 DOF pose transformations was also proposed and, for the first time, applied to track the iterations of ICP. Experiments demonstrate the effectiveness of the proposed algorithms.
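The statistical gray-image scaling described above can be sketched as follows. The abstract does not give the paper's exact scaling rule, so this assumes a common mean/standard-deviation linear stretch; the function name and the parameter `k` are illustrative, not from the paper.

```python
def scale_gray_image(pixels, k=2.0):
    """Contrast enhancement by statistical scaling (assumed variant):
    linearly stretch gray values so that [mean - k*std, mean + k*std]
    maps onto the full 8-bit range [0, 255], clipping the tails."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    lo, hi = mean - k * std, mean + k * std
    span = max(hi - lo, 1e-9)  # guard against a constant image
    out = []
    for p in pixels:
        v = (p - lo) / span * 255.0
        out.append(min(255, max(0, round(v))))
    return out
```

A low-contrast strip such as `[100, 110, 120, 130, 140]` (span 40) is spread across most of the 8-bit range, which makes gradient-based features such as SIFT keypoints easier to detect.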
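The difference measure between two 6 DOF pose transformations, used here to track ICP iterations, can be sketched as below. The abstract does not specify the paper's exact formula, so this assumes one standard choice: the Euclidean gap between translations plus the geodesic angle of the relative rotation.

```python
import math

def pose_difference(T1, T2):
    """Assumed metric for two 6 DOF poses, each given as (R, t) with R a
    3x3 rotation (list of rows) and t a 3-vector. Returns
    (translation distance, rotation angle in radians)."""
    R1, t1 = T1
    R2, t2 = T2
    # Relative rotation R_rel = R1^T R2 (R1^T = R1^-1 for a rotation matrix).
    R_rel = [[sum(R1[k][i] * R2[k][j] for k in range(3)) for j in range(3)]
             for i in range(3)]
    trace = R_rel[0][0] + R_rel[1][1] + R_rel[2][2]
    # Rotation angle from the trace; clamp for numerical safety before acos.
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))
    rot_angle = math.acos(c)
    trans_dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(t1, t2)))
    return trans_dist, rot_angle
```

Tracking ICP would then amount to evaluating this measure between the transforms of successive iterations and stopping once both components fall below thresholds, i.e. once the estimate has effectively converged.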
References

[1] Clemente L A, Davison A J, Reid I D, et al. Mapping large loops with a single hand-held camera[C]//Robotics: Science and Systems Conference. Atlanta, 2007: 1-8.

[2] Stelzer A, Hirschmueller H, Goerner M. Stereo-vision-based navigation of a six-legged walking robot in unknown rough terrain[J]. International Journal of Robotics Research, 2012, 31(4): 381-402.

[3] Konolige K, Agrawal M, Bolles R C, et al. Outdoor Mapping and Navigation Using Stereo Vision[M]. Berlin: Springer, Springer Tracts in Advanced Robotics, 2008: 179-190.

[4] Belter D, Skrzypczynski P. Rough terrain mapping and classification for foothold selection in a walking robot[J]. Journal of Field Robotics, 2011, 28(4): 497-528.

[5] Przemyslaw L, Dawid R, Piotr S. Terrain map building for a walking robot equipped with an active 2D range sensor[J]. Journal of Automation, Mobile Robotics & Intelligent Systems, 2011, 5(3): 67-78.

[6] May S, Droschel D, Holz D, et al. Three-dimensional mapping with time-of-flight cameras[J]. Journal of Field Robotics, 2009, 26(12): 934-965.

[7] Henry P, Krainin M, Herbst E, et al. RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments[J]. International Journal of Robotics Research, 2012, 31(5): 647-663.

[8] Foix S, Alenya G, Andrade-Cetto J, et al. Object modeling using a ToF camera under an uncertainty reduction approach[C]//IEEE International Conference on Robotics and Automation. Piscataway, 2010: 1306-1312.

[9] Xingdong L, Wei G, Mantian L, et al. Generating colored point cloud under the calibration between TOF and RGB cameras[C]//Proceedings of the Fifth International Conference on Information and Automation, 2013: 483-488.

[10] Liang Mingjie, Min Huaqing, Luo Ronghua. Graph-based SLAM: a survey[J]. Robot, 2013, 35(4): 500-512.

[11] Lindner M, Schiller I, Kolb A, et al. Time-of-flight sensor calibration for accurate range sensing[J]. Computer Vision and Image Understanding, 2010, 114(12): 1318-1328.

[12] Besl P J, McKay N D. A method for registration of 3-D shapes[C]//Proceedings of SPIE 1611, Sensor Fusion IV: Control Paradigms and Data Structures. International Society for Optics and Photonics, 1992: 586-606.

[13] Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110.

[14] Fischler M A, Bolles R C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 1981, 24(6): 381-395.

[15] Li Xingdong, Guo Wei, Li Mantian, et al. A closed-form solution for estimating the accuracy of depth camera's relative pose[J]. Robot, 2014, 36(2): 194-202, 209.

[16] Horn B K P. Closed-form solution of absolute orientation using unit quaternions[J]. JOSA A, 1987, 4(4): 629-642.

[17] Rusu R B, Cousins S. 3D is here: Point Cloud Library (PCL)[C]//IEEE International Conference on Robotics and Automation (ICRA), 2011: 1-4.

Li Xingdong, Li Mantian, Guo Wei, Chen Chao, Sun Lining. Estimating 6 DOF pose transformation of a TOF laser camera[J]. Infrared and Laser Engineering, 2015, 44(7): 2231.
