Acta Optica Sinica, 2020, 40(2): 0215001; published online 2020-01-02
Binocular Stereo Matching by Combining Multiscale Local and Deep Features
Keywords: machine vision; stereo matching; multiscale local feature fusion; shallow features; Siamese network; convolutional neural network
Abstract
In this study, a stereo matching method based on multiscale local and deep features is proposed to address the difficulty of finding exact matches in the ill-posed regions of stereo pairs. The feature fusion stage comprises two parts. First, shallow features at different scales, combining Log-Gabor features and local binary pattern features, are fused. Second, the multiscale fused shallow features are concatenated with deep features extracted by a convolutional neural network, forming a final feature image that carries both semantic and structural information. In addition, positive and negative training samples are constructed by adding noise of varying strength in the direction perpendicular to the epipolar line, which reduces the error caused by imprecise epipolar alignment. The proposed method is compared on the KITTI datasets with two variants obtained by changing or discarding certain modules; the results confirm that the module settings are reasonable. The method also achieves competitive matching performance compared with several classical methods.
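The sample-construction idea described above can be sketched in a few lines: for a left-image patch with known ground-truth disparity, the positive right-image patch is taken at the correct horizontal match but with a small random vertical shift (simulating imprecise epipolar alignment), while the negative patch is taken at a wrong horizontal offset on the same scanline. This is a minimal illustrative sketch; the function name, patch size, noise range, and negative-offset range are assumptions, not the paper's exact settings.

```python
import numpy as np

def make_training_pair(left, right, x, y, disp, patch=11,
                       v_noise_max=1, neg_offset_range=(4, 10),
                       rng=None):
    """Extract one positive and one negative right-image patch for the
    left-image patch centred at (y, x) with ground-truth disparity `disp`.

    A random vertical shift of up to `v_noise_max` pixels perturbs the
    positive sample, mimicking imperfect rectification.  All parameter
    values here are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = patch // 2

    def crop(img, cy, cx):
        # Square patch of side `patch` centred at (cy, cx).
        return img[cy - r:cy + r + 1, cx - r:cx + r + 1]

    anchor = crop(left, y, x)

    # Positive: correct horizontal match, perturbed vertically.
    dy = rng.integers(-v_noise_max, v_noise_max + 1)
    pos = crop(right, y + dy, x - disp)

    # Negative: wrong horizontal position on the same scanline.
    off = rng.integers(*neg_offset_range) * rng.choice([-1, 1])
    neg = crop(right, y, x - disp + off)

    return anchor, pos, neg
```

Pairs produced this way would feed a Siamese network as (anchor, positive) and (anchor, negative) examples; because the positive patch is already vertically perturbed at training time, the learned matching score becomes more tolerant of small epipolar misalignment at test time.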
Xuchu Wang, Huihuang Liu, Yanmin Niu. Binocular Stereo Matching by Combining Multiscale Local and Deep Features[J]. Acta Optica Sinica, 2020, 40(2): 0215001.