Acta Photonica Sinica, 2017, 46(11): 1112004. Published online: 2017-12-08

Error Analysis of Vision Measurement Based on the Light Field Camera

WANG Yi-zhi, ZHANG Xu-dong, XIONG Wei, DENG Wu
Affiliation
School of Computer and Information, Hefei University of Technology, Hefei 230009, China
Abstract
A light field camera can capture both the position and direction information of a spatial target in a single exposure, which gives it refocusing and multi-view properties that can be exploited for vision measurement. This paper analyzes the measurement principles and error factors of four light-field measurement methods: epipolar plane image (EPI) based, refocus based, binocular, and multi-view. Experiments verify the relationship between the measurement error and the structural parameters, including the baseline length between views, the focal length of the main lens, and the actual distance from the target to the camera. Theoretical analysis and experimental results show that, because the camera baseline is short, long-distance measurements suffer large errors while short-distance measurements achieve high precision; given the limited size of the microlens array, a measurement method combining multiple views achieves higher accuracy.
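The abstract's conclusion — error grows with target distance and shrinks with baseline — follows from the standard pinhole stereo model, where depth is Z = fB/d and a first-order expansion gives |δZ| ≈ Z²·|δd|/(fB). The sketch below illustrates this relationship only; it is not the paper's method, and all parameter values (focal length in pixels, baselines, disparity error) are illustrative assumptions.

```python
# Minimal sketch of binocular depth measurement error under the standard
# pinhole stereo model Z = f * B / d (depth from disparity).
# All numeric values below are assumed for illustration, not from the paper.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth Z = f * B / d, with focal length f in pixels, baseline B in
    meters, and disparity d in pixels."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, depth_m, disparity_err_px):
    """First-order depth error |dZ| = Z^2 * |dd| / (f * B): quadratic in
    distance, inversely proportional to baseline and focal length."""
    return depth_m ** 2 * disparity_err_px / (f_px * baseline_m)

if __name__ == "__main__":
    f_px = 2000.0  # assumed main-lens focal length in pixels
    err = 0.2      # assumed sub-pixel disparity estimation error
    # Compare a short light-field-scale baseline with a longer one.
    for baseline in (0.001, 0.01):
        for depth in (0.5, 2.0, 10.0):
            e = depth_error(f_px, baseline, depth, err)
            print(f"B={baseline*1000:4.0f} mm  Z={depth:5.1f} m  error ~ {e:.4f} m")
```

Running it shows the two effects the paper measures: a 10x shorter baseline gives a 10x larger depth error, and doubling the target distance quadruples the error, which is why a plenoptic camera's millimeter-scale inter-view baseline limits long-range accuracy.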

WANG Yi-zhi, ZHANG Xu-dong, XIONG Wei, DENG Wu. Error analysis of vision measurement based on the light field camera[J]. Acta Photonica Sinica, 2017, 46(11): 1112004.
