Acta Optica Sinica, 2018, 38(1): 0115002. Published online: 2018-08-31

Variable Weight Cost Aggregation Algorithm for Stereo Matching Based on Horizontal Tree Structure

Author affiliation:
Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, Jiangsu, China
Abstract
Cost aggregation methods based on tree structures select the weight support region from color information only, and therefore tend to produce mismatches in image boundary areas. To address this problem, a variable weight cost aggregation algorithm for stereo matching based on a horizontal tree structure is proposed. Initial disparities are obtained by cost aggregation on the horizontal tree; the horizontal tree is then reconstructed from the initial disparities together with the color information, and cost aggregation on the updated tree yields the final disparity map. In the disparity refinement stage, an improved non-local refinement algorithm is proposed: pixels that fail the left-right consistency check are introduced into the construction of the matching cost volume, which improves the accuracy of the final disparity map. Experiments on all 31 Middlebury stereo pairs show that, without disparity refinement, the proposed algorithm achieves an average error rate of 6.96% in non-occluded areas, and cost aggregation takes 1.52 s on average.

1 Introduction

Stereo matching recovers the three-dimensional depth of a scene from two or more images taken from different viewpoints by finding correspondences between pixels; it is one of the long-standing and still unsolved problems in computer vision [1]. Scharstein et al. [2] classified stereo matching algorithms into global and local methods. Current global methods typically consider local color and structure information, build a global energy cost function over the image, and assign a disparity to every pixel through optimization techniques such as belief propagation [3], graph cuts [4], and dynamic programming [5]. Local methods compute per-pixel disparities through window-based or tree-based cost aggregation, e.g., adaptive windows [6]. Although less accurate than global methods, they have low computational complexity, run efficiently, and are easy to implement. A local method generally consists of four steps: matching cost computation, cost aggregation, disparity computation, and disparity refinement.

Cost aggregation is the most important step of a local method. Its implicit assumption is that disparity is smooth within a local image patch, so aggregation can be viewed as filtering the matching cost volume. The simplest and fastest local filter is the box filter, but it blurs disparities at image boundaries, i.e., foreground fattening. Later, edge-preserving bilateral filtering [7] and guided filtering [8] were introduced into cost aggregation and yield disparity maps comparable to those of global methods. Local methods, however, require a predefined support-window size: a large window over-smooths boundary regions, while a small window causes mismatches in textureless regions. To remove this dependence on a predefined window size, non-local methods based on recursion [9], horizontal trees [10-11], minimum spanning trees [12], and segment trees [13-14] were proposed in succession; the support window of any pixel is the whole image, which effectively avoids an overly small support window in textureless regions. Compared with local methods, non-local methods improve both speed and disparity accuracy in textureless regions. However, most current non-local methods decide whether neighboring pixels share the same disparity from color information only, ignoring real-scene cases where the same color carries different disparities or different colors carry the same disparity, which leads to inaccurate disparity estimates in background regions and at boundaries between regions of the same color.

Disparity refinement is the last step of a stereo matching algorithm, and its accuracy determines the final accuracy of the matching. Weighted median filtering [15-16] is a widely used refinement method: the disparities of a neighboring region are accumulated into a weighted histogram and the disparity at the median is chosen for the current pixel, which effectively suppresses erroneous disparities. Yang [12] proposed a non-local refinement method based on the minimum spanning tree: all pixels are divided into stable and unstable points by a left-right consistency check, a new matching cost volume is built from the stable points, and cost aggregation is performed again to obtain the final disparities. Compared with weighted median filtering, this method is faster and more accurate, but it discards pixels that fail the left-right consistency check yet carry accurate disparity estimates, so such pixels play no role at all in building the new cost volume.

Since color boundaries in an image are not necessarily disparity boundaries, this paper proposes a variable weight cost aggregation algorithm for stereo matching based on a horizontal tree structure. The algorithm improves on traditional non-local methods, which compute the weights between neighboring pixels from color information only, by introducing the initial disparities obtained after cost aggregation; a second round of cost aggregation then yields good matching results in boundary regions, more accurate than those obtained without the initial disparities. In the disparity refinement step, an improved construction of the matching cost volume is proposed, which raises the disparity accuracy without extra computation.

2 Algorithm Description

2.1 Matching Cost Computation

The matching cost measures, for two or more images of the same scene taken from different viewpoints, the similarity between corresponding pixels at different disparities. Matching cost computation is a mapping $f: \mathbb{R}^{W\times H\times 3}\times\mathbb{R}^{W\times H\times 3}\to\mathbb{R}^{W\times H\times L}$, where $W$ and $H$ are the width and height of the image, the exponent 3 refers to the R, G, B channels, and $L$ is the maximum disparity $d_{\max}$ between two corresponding pixels. The matching cost volume of the left and right images $I_L$ and $I_R$ over disparities 0 to $d_{\max}$ can be written as

$$C = f(I_L, I_R). \tag{1}$$

For each pixel $(x, y)$ in the image, the matching cost at disparity $d$ is $C(x, y, d)$, computed by combining the intensity information of the image with the horizontal gradient information:

$$C(x,y,d) = \alpha\cdot\min\!\left(\frac{1}{3}\sum_{k\in\{R,G,B\}}\bigl|I_L^{k}(x, y+d) - I_R^{k}(x, y)\bigr|,\ T_c\right) + (1-\alpha)\cdot\min\!\left(\bigl|\nabla_x I_L(x, y+d) - \nabla_x I_R(x, y)\bigr|,\ T_g\right), \tag{2}$$

where $k$ indexes the R, G, B channels of the image; $\nabla_x I_L$ and $\nabla_x I_R$ are the horizontal gradients of the left and right color images after conversion to grayscale; $T_c$ and $T_g$ are truncation thresholds, $T_c = 7$, $T_g = 2$; and $\alpha$ is a balance factor set to 0.11.

Computing the matching cost of every pixel of $I_L$ at disparities 0 to $d_{\max}$ with Eq. (2) yields a three-dimensional matrix of size $W\times H\times L$, i.e., the final matching cost volume.
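The cost volume of Eq. (2) can be sketched in a few lines of NumPy. This is a minimal illustration rather than the paper's implementation: the function name `cost_volume` is hypothetical, the grayscale conversion is a plain channel mean, and `np.roll` wraps at the image border, so the columns near the border are only approximate.

```python
import numpy as np

def cost_volume(left, right, d_max, alpha=0.11, tc=7.0, tg=2.0):
    """Truncated color-difference / gradient matching cost (Eq. (2) sketch).

    left, right: float arrays of shape (H, W, 3).
    Returns a cost volume of shape (H, W, d_max + 1), indexed on the left view.
    """
    h, w, _ = left.shape
    # Horizontal gradients of the grayscale images (plain channel mean).
    grad_l = np.gradient(left.mean(axis=2), axis=1)
    grad_r = np.gradient(right.mean(axis=2), axis=1)

    cost = np.zeros((h, w, d_max + 1))
    for d in range(d_max + 1):
        # Shift the right image by d so shifted[x] = right[x - d] aligns with left[x].
        shifted = np.roll(right, d, axis=1)
        shifted_grad = np.roll(grad_r, d, axis=1)
        color = np.abs(left - shifted).mean(axis=2)   # (1/3) sum over R, G, B
        grad = np.abs(grad_l - shifted_grad)
        # Truncated color term plus truncated gradient term, balanced by alpha.
        cost[:, :, d] = (alpha * np.minimum(color, tc)
                         + (1 - alpha) * np.minimum(grad, tg))
    return cost
```

A sanity check: if the left image is the right image shifted by 3 pixels, the minimum-cost disparity away from the wrap seam is 3.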

2.2 Cost Aggregation

Since the matching cost of a single pixel is weakly discriminative and easily corrupted by noise, cost aggregation uses the information of neighboring pixels, under the local disparity smoothness assumption, to improve the discriminability of disparities. Cost aggregation can be written as

$$C'_i = \frac{1}{N_i}\sum_{j\in\Omega} W_{i,j}(I)\, C_j, \tag{3}$$

where $N_i = \sum_{j\in\Omega} W_{i,j}(I)$ is a normalization constant, $W_{i,j}(I)$ is the support weight of pixel $j$ for pixel $i$, $I$ is the guidance image (generally obtained by box filtering the left and right images), $\Omega$ is the support region, and $C'$ and $C$ are the cost volumes after and before aggregation, respectively.
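For a single pixel, Eq. (3) is just a normalized weighted sum over the support pixels. A minimal sketch (the helper name `aggregate` is illustrative):

```python
import numpy as np

def aggregate(costs, weights):
    """Eq. (3): normalized weighted sum of the support pixels' costs.

    costs: (n, L) matching costs of the n support pixels over L disparities;
    weights: (n,) support weights W_{i,j}.
    Returns the aggregated (L,) cost vector for pixel i.
    """
    return weights @ costs / weights.sum()
```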

To compute the support weight between every pixel pair, the weight-propagation cost aggregation algorithm based on the horizontal tree structure [10] is adopted. A four-connected weight tree is first built on the guidance image $I$, as shown in Fig. 1. Each pixel is treated as a node, and the support weight between pixels $i$ and $j$ is defined as

$$W_{i,j}(I) = \prod_{(p,q)\in P_{i,j}} T_{p,q}(I), \tag{4}$$

where $(p,q)\in P_{i,j}$ denotes two adjacent nodes on the weight-propagation path between pixels $i$ and $j$, and $T_{p,q}(I)$ is the support weight between two adjacent nodes, defined as

$$T_{p,q}(I) = \exp\!\left(-\frac{\max\bigl(|I_R^p - I_R^q|,\ |I_G^p - I_G^q|,\ |I_B^p - I_B^q|\bigr)}{\sigma}\right), \tag{5}$$

where R, G, B denote the three channels of a pixel, and $\sigma$ is a smoothing parameter that adjusts the support weight between adjacent pixels.
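Eqs. (4) and (5) can be illustrated directly. In this sketch `edge_weight` and `path_weight` are hypothetical helper names, pixels are given as RGB triples, and a path is a list of the pixels visited between the two endpoints:

```python
import numpy as np

def edge_weight(p, q, sigma):
    """Eq. (5): weight between adjacent nodes from the max per-channel difference."""
    diff = np.max(np.abs(np.asarray(p, float) - np.asarray(q, float)))
    return np.exp(-diff / sigma)

def path_weight(pixels, sigma):
    """Eq. (4): endpoint-to-endpoint weight is the product of edge weights
    along the propagation path."""
    w = 1.0
    for p, q in zip(pixels, pixels[1:]):
        w *= edge_weight(p, q, sigma)
    return w
```

Because each edge weight lies in (0, 1], the product decays with path length and with color contrast along the path, which is what confines support to color-coherent regions.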

Fig. 1. Weight propagation based on horizontal tree structure

Most cost aggregation methods filter the matching cost at each disparity independently and do not account for slanted surfaces, which violate the assumption that disparities are similar within a local region. The method of Ref. [11] is therefore adopted: a regularization term is introduced into the cost aggregation, and on the tree structure built above, the matching cost is aggregated along the paths of Fig. 1. The final matching cost is

$$C_p(d_p) = m_p(d_p) + \sum_{q\in\nu(p)} \min_{d_q\in D}\bigl[s(d_p, d_q) + C_q(d_q)\bigr]\, W_{p,q}(I), \tag{6}$$

where $m_p(d_p)$ is the matching cost of pixel $p$ at disparity $d_p$ before aggregation; $\nu(p)$ denotes all leaf nodes of the tree rooted at $p$, i.e., all nodes of the image; $D$ is the disparity search range; $W_{p,q}(I)$ is the support weight between pixels $p$ and $q$ defined in Eq. (4); and $s(d_p, d_q)$ is a smoothness penalty, defined as

$$s(d_p, d_q) = \begin{cases} 0, & |d_p - d_q| = 0 \\ p_{\mathrm{smooth}}, & 0 < |d_p - d_q| \le 1 \\ \infty, & \text{else} \end{cases}, \tag{7}$$

where $p_{\mathrm{smooth}}$ is a constant set to 2. Finally, the winner-takes-all strategy selects, within the disparity range, the disparity with the smallest matching cost for pixel $p$, giving the initial disparity

$$d_0(p) = \arg\min_{d\in D} C_p(d). \tag{8}$$
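The following is a heavily simplified sketch of Eqs. (6)-(8): aggregation in one direction along a single scanline, i.e., one path of the horizontal tree, while the actual algorithm propagates over the full four-connected tree in two passes. It is meant only to show how the smoothness penalty of Eq. (7) enters the inner minimization; the function names are illustrative.

```python
import numpy as np

def scanline_aggregate(costs, weights, p_smooth=2.0):
    """One-direction sketch of Eq. (6) on a single scanline.

    costs[x, d]: raw costs m_x(d); weights[x]: support weight between
    pixels x and x+1.  Aggregates right-to-left, so each pixel absorbs the
    costs of the pixels to its right.  The Eq. (7) penalty restricts the
    inner minimum to d' in {d-1, d, d+1}: 0 for d'=d, p_smooth for |d'-d|=1,
    and infinity (i.e. excluded) otherwise.
    """
    n, num_d = costs.shape
    agg = costs.astype(float).copy()
    for x in range(n - 2, -1, -1):
        nxt = agg[x + 1]
        for d in range(num_d):
            best = nxt[d]                       # |d - d'| = 0: no penalty
            if d > 0:
                best = min(best, nxt[d - 1] + p_smooth)
            if d < num_d - 1:
                best = min(best, nxt[d + 1] + p_smooth)
            agg[x, d] += weights[x] * best
    return agg

def wta(agg):
    """Eq. (8): winner-takes-all disparity selection."""
    return np.argmin(agg, axis=1)
```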

2.3 Iterative Cost Aggregation

Selecting the pixel weight support region from color information only leads to inaccurate disparity estimates at image boundaries, and the initial disparity map still contains many mismatches. The initial disparities are therefore introduced to reconstruct the horizontal tree and improve matching accuracy. Unlike the cost aggregation of Sec. 2.2, in iterative cost aggregation the support weight between two adjacent nodes of the reconstructed horizontal tree is defined as

$$w_{p,q} = (1-k)\max\bigl(|I_R^p - I_R^q|,\ |I_G^p - I_G^q|,\ |I_B^p - I_B^q|\bigr) + k\,|d_p - d_q|, \tag{9}$$
$$T_{p,q}(I) = \exp\!\left(-\frac{w_{p,q}}{\sigma}\right), \tag{10}$$

where $k\in[0,1]$ is the balance factor between the color and disparity weights, and $d_p$, $d_q$ are the initial disparities of pixels $p$ and $q$. Applying Eqs. (4) and (6)-(8) on the reconstructed horizontal tree yields the disparities after iterative cost aggregation. Figure 2 shows the weight support region of a central pixel before and after iterative cost aggregation. As shown in Fig. 2(a), the central pixel (red box) has disparities close to those of its neighboring pixels while the colors differ greatly; computing support weights from color information only gives the support weight map of Fig. 2(b), whose overly small support region easily produces mismatches. Figure 2(c) is the support weight map obtained by introducing the initial disparities; the support region is clearly enlarged, which effectively reduces matching ambiguity.

Fig. 2. Support weights for selected regions (the brightness value of a neighborhood pixel represents its support weight for the central pixel, which is marked with a red box). (a) Image patch; (b) support weight map without iterative cost aggregation; (c) support weight map after iterative cost aggregation
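Eqs. (9)-(10) in code form. This is a sketch with a hypothetical function name; the defaults $k = 0.5$ and $\sigma = 255\times 0.08$ are the values used in the experiments.

```python
import numpy as np

def variable_edge_weight(p, q, dp, dq, k=0.5, sigma=255 * 0.08):
    """Eqs. (9)-(10): edge weight mixing the max per-channel color difference
    with the initial-disparity difference; k in [0, 1] balances the two terms."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    w = (1 - k) * np.max(np.abs(p - q)) + k * abs(dp - dq)
    return np.exp(-w / sigma)
```

With $k = 0$ this reduces to the color-only weight of Eq. (5); with $k > 0$, neighbors that share the same initial disparity keep a large weight even across strong color edges, which is exactly the enlarged support region of Fig. 2(c).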

As shown in Fig. 3(c), without iterative cost aggregation there are many mismatched points (red regions) in disparity-discontinuous areas and at image boundaries, and the resulting disparity map is rather coarse. Figure 3(d) is the disparity map obtained after iterative cost aggregation; compared with Fig. 3(c), the disparity estimates at color boundaries and in background regions are clearly improved (yellow boxes), raising the matching accuracy.

Fig. 3. Disparity maps computed before and after iterative cost aggregation. (a) Reindeer left image; (b) Reindeer right image; (c) left disparity map computed before iterative cost aggregation; (d) left disparity map computed after iterative cost aggregation

2.4 Disparity Refinement

The initial disparities obtained above still contain many mismatches in occluded regions (regions not visible in both images) and need refinement. Outliers are first detected by the left-right consistency check

$$d_L(x, y) = d_R\bigl[x - \max(d_L, 0),\ y\bigr], \tag{11}$$

where $d_L$ and $d_R$ are the left and right initial disparity maps. Pixels that do not satisfy Eq. (11) are regarded as outliers (unstable points); the rest are stable points. A new matching cost volume is then constructed from both the stable and the unstable points as

$$C_d^{\mathrm{new}}(p) = \begin{cases} |d - D(p)|, & p \text{ is a stable point and } D(p) > 0 \\ k_1\,|d - D(p)|, & p \text{ is an unstable point} \end{cases}, \tag{12}$$

where $D(p)$ is the initial left disparity map, $d\in[0, d_{\max}]$, and $k_1\in[0,1]$ is a scale factor that adjusts the contribution of unstable points to the cost volume. Finally, the new cost volume is aggregated on the horizontal tree generated by the iterative cost aggregation of Sec. 2.3, and the winner-takes-all strategy gives the final disparity map.
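The improved cost volume construction of Eqs. (11)-(12) can be sketched as follows. `refine_cost_volume` is a hypothetical name; the key point of the improvement is that unstable pixels are down-weighted by $k_1$ rather than discarded, and stable pixels with $D(p) = 0$ contribute nothing, following the $D(p) > 0$ condition of Eq. (12).

```python
import numpy as np

def refine_cost_volume(disp_l, disp_r, d_max, k1=0.1):
    """Sketch of Eqs. (11)-(12): build the new cost volume for refinement.

    disp_l, disp_r: integer left/right disparity maps of shape (H, W).
    Returns a cost volume of shape (H, W, d_max + 1).
    """
    h, w = disp_l.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    # Eq. (11): p is stable if the right disparity at x - d_L agrees with d_L.
    x_r = np.clip(xs - np.maximum(disp_l, 0), 0, w - 1).astype(int)
    stable = disp_l == disp_r[np.arange(h)[:, None], x_r]

    d = np.arange(d_max + 1)[None, None, :]
    cost = np.abs(d - disp_l[:, :, None]).astype(float)   # |d - D(p)|
    cost[~stable] *= k1                                   # unstable: scaled by k1
    cost[stable & (disp_l == 0)] = 0.0                    # stable needs D(p) > 0
    return cost
```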

Figure 4(c) is the disparity map after the left-right consistency check, in which black pixels are unstable points; Fig. 4(d) is the initial left disparity map before refinement. The refinement stage of Ref. [11] ignores the positive contribution to the new cost volume of the accurate disparities inside the yellow boxes, so its final disparity map still contains many mismatches there. Figure 5 shows the disparity maps of the standard Dolls images obtained with the improved refinement algorithm; the comparison with the refinement method of Ref. [11] uses identical matching cost computation and cost aggregation. Red regions in Fig. 5 are mismatched pixels, with an error threshold of 1. The improved method clearly improves the regions that fail the left-right consistency check and raises the matching accuracy.

Fig. 4. Principle of disparity refinement. (a) Dolls left image; (b) Dolls right image; (c) left-right consistency test; (d) initial left disparity map without disparity refinement

Fig. 5. Disparity maps with different disparity refinement methods. (a) Disparity refinement method in Ref. [11]; (b) improved disparity refinement method

3 Experimental Results and Analysis

To verify the effectiveness of the algorithm, tests are run on the images provided by the Middlebury benchmark. The computer is configured with a Pentium E6700 3.20 GHz CPU and 2 GB of memory. The specific parameters used by the algorithm are listed in Tab. 1.

Table 1. Parameters of the proposed stereo matching algorithm

Parameter | α | T_c | T_g | σ | p_smooth | k | k_1
Value | 0.11 | 7 | 2 | 255×0.08 | 2 | 0.5 | 0.1

3.1 Performance Test of the Cost Aggregation Algorithm

To verify the performance of the proposed cost aggregation algorithm, it is compared with recent state-of-the-art local stereo matching algorithms, including the minimum spanning tree (MST) [12], guided filtering (GF) [8], cross-scale stereo matching [17] [comprising the cross-scale minimum spanning tree (CS-MST) and the cross-scale segment tree (CS-ST)], and the cost aggregation stereo matching algorithm with local smoothness enforced cost volume regularization on a horizontal tree (LSECVR) [11]. The evaluation criterion is the matching accuracy of the initial disparity map after cost aggregation in non-occluded regions, with an error threshold of 1. To show the comparison more clearly, mismatched pixels are marked in red; the experimental results are shown in Fig. 6, where the red regions are visibly reduced. To verify the general applicability of the improved algorithm, tests are run on the 31 image pairs of the Middlebury dataset; the matching errors in non-occluded regions are listed in Tab. 2, where each error is annotated with the rank of the algorithm's matching accuracy on that image pair.

Fig. 6. Disparity maps obtained by different cost aggregation algorithms (mismatched pixels are marked in red). (a) Ground-truth disparity map; (b) minimum spanning tree; (c) cross-scale minimum spanning tree; (d) guided filtering; (e) cross-scale segment tree; (f) cost aggregation algorithm in Ref. [11]; (g) improved cost aggregation algorithm

Table 2. Matching error of different stereo matching methods in non-occluded areas without disparity refinement (unit: %)

Stereo pair | MST | CS-MST | GF | CS-ST | LSECVR | Proposed
Tsukuba | 2.12 (4) | 1.57 (1) | 2.51 (6) | 1.74 (2) | 2.29 (5) | 1.77 (3)
Venus | 0.84 (3) | 1.38 (4) | 2.03 (6) | 1.45 (5) | 0.56 (2) | 0.34 (1)
Teddy | 7.61 (5) | 5.53 (3) | 8.48 (6) | 6.07 (4) | 4.91 (2) | 4.25 (1)
Cones | 4.10 (4) | 4.15 (5) | 3.61 (3) | 4.42 (6) | 3.44 (2) | 3.36 (1)
Aloe | 4.14 (3) | 4.63 (4) | 5.53 (6) | 4.71 (5) | 2.88 (2) | 2.67 (1)
Art | 9.79 (4) | 10.79 (6) | 9.03 (3) | 10.50 (5) | 6.72 (2) | 6.46 (1)
Baby1 | 7.37 (5) | 8.39 (6) | 4.69 (4) | 4.53 (3) | 2.88 (2) | 2.59 (1)
Baby2 | 11.95 (4) | 13.37 (5) | 6.08 (3) | 15.11 (6) | 2.61 (2) | 1.60 (1)
Baby3 | 5.64 (3) | 7.25 (6) | 5.79 (4) | 6.23 (5) | 3.68 (2) | 3.66 (1)
Books | 9.56 (3) | 10.26 (6) | 10.22 (4) | 10.24 (5) | 6.71 (2) | 5.63 (1)
Bowling1 | 16.81 (4) | 20.89 (5) | 14.52 (3) | 21.72 (6) | 8.82 (2) | 6.59 (1)
Bowling2 | 9.31 (4) | 10.15 (5) | 7.08 (3) | 11.18 (6) | 4.88 (2) | 3.40 (1)
Cloth1 | 0.51 (3) | 0.61 (4) | 1.08 (6) | 0.66 (5) | 0.27 (2) | 0.15 (1)
Cloth2 | 2.85 (3) | 4.13 (6) | 3.46 (4) | 4.04 (5) | 1.43 (2) | 1.07 (1)
Cloth3 | 1.77 (3) | 2.66 (5) | 2.15 (4) | 2.72 (6) | 1.41 (2) | 1.06 (1)
Cloth4 | 1.30 (3) | 1.87 (6) | 1.62 (4) | 1.75 (5) | 1.13 (2) | 1.10 (1)
Dolls | 5.00 (3) | 5.95 (6) | 5.04 (4) | 5.52 (5) | 3.11 (2) | 2.90 (1)
Flowerpots | 16.67 (5) | 19.41 (6) | 12.79 (3) | 15.22 (4) | 12.66 (2) | 11.43 (1)
Lampshade1 | 10.43 (3) | 11.99 (6) | 11.57 (5) | 10.61 (4) | 9.00 (2) | 8.22 (1)
Lampshade2 | 20.88 (5) | 18.20 (4) | 21.13 (6) | 12.08 (3) | 7.42 (2) | 5.78 (1)
Laundry | 13.69 (4) | 12.94 (3) | 16.40 (6) | 14.51 (5) | 11.07 (2) | 10.70 (1)
Midd1 | 32.32 (5) | 27.85 (3) | 40.11 (6) | 26.95 (1) | 27.62 (2) | 29.52 (4)
Midd2 | 34.50 (5) | 32.09 (4) | 35.85 (6) | 24.56 (1) | 25.51 (3) | 25.09 (2)
Moebius | 7.67 (1) | 8.69 (5) | 9.25 (6) | 8.55 (4) | 8.11 (2) | 8.16 (3)
Monopoly | 22.51 (1) | 24.21 (2) | 27.99 (6) | 25.50 (3) | 26.37 (4) | 27.14 (5)
Plastic | 42.53 (4) | 47.03 (6) | 39.29 (2) | 42.72 (5) | 40.71 (3) | 34.87 (1)
Reindeer | 9.15 (5) | 9.87 (6) | 7.23 (3) | 8.33 (4) | 5.08 (2) | 3.67 (1)
Rocks1 | 2.23 (3) | 2.83 (6) | 2.70 (5) | 2.64 (4) | 1.14 (2) | 0.91 (1)
Rocks2 | 1.57 (3) | 2.08 (6) | 1.61 (4) | 1.90 (5) | 0.81 (2) | 0.78 (1)
Wood1 | 8.68 (5) | 11.06 (6) | 4.83 (3) | 5.96 (4) | 0.24 (1) | 0.25 (2)
Wood2 | 0.99 (3) | 5.61 (5) | 2.34 (4) | 6.42 (6) | 0.63 (2) | 0.62 (1)
Average rank | 3.65 (3) | 4.87 (6) | 4.45 (5) | 4.42 (4) | 2.19 (2) | 1.42 (1)
Average error | 10.56 | 11.21 | 10.52 | 10.28 | 7.55 | 6.96

Note: the number in parentheses after each error is the algorithm's accuracy rank on that image pair.


As Tab. 2 shows, the improved algorithm outperforms the other algorithms in both average rank and average matching accuracy, with an average matching error of only 6.96%; compared with guided filtering and the minimum spanning tree, the average matching error is reduced by 3.56 and 3.6 percentage points, respectively. In addition, the average running time of cost aggregation is 1.52 s, approximately meeting real-time requirements.

3.2 Performance Test of the Disparity Refinement Algorithm

To verify the performance of the proposed disparity refinement algorithm, it is compared with the refinement method of Ref. [11], using the same matching cost computation and cost aggregation as in this paper, so that differences in those stages do not affect the results. Tests are run on the 31 image pairs of the Middlebury dataset; the results are listed in Tab. 3, where each error is annotated with the accuracy rank, the evaluation criterion is the percentage of matching errors over the whole image, and the error threshold is set to 1. As Tab. 3 shows, compared with the method of Ref. [11], the whole-image average matching error of the proposed refinement method drops by 1.93 percentage points. In addition, the two refinement methods (excluding matching cost computation and cost aggregation) take 0.826 s (Ref. [11]) and 0.839 s (proposed), nearly the same, so the proposed algorithm improves matching accuracy without increasing computation. Figure 7 shows the advantage of the proposed refinement over that of Ref. [11] intuitively, with mismatched pixels in red: the improved method raises the matching accuracy in regions that fail the left-right consistency check, but some mismatched points remain in the disparity maps, and further improvement is needed.

Table 3. Matching error of different disparity refinement methods over the whole image area (unit: %)

Stereo pair | LSECVR | Proposed | Stereo pair | LSECVR | Proposed
Tsukuba | 3.63 (1) | 4.01 (2) | Dolls | 17.81 (2) | 12.81 (1)
Venus | 2.48 (1) | 3.23 (2) | Flowerpots | 20.83 (1) | 22.26 (2)
Teddy | 16.08 (2) | 11.83 (1) | Lampshade1 | 23.84 (2) | 22.29 (1)
Cones | 14.11 (2) | 11.26 (1) | Lampshade2 | 28.79 (2) | 21.03 (1)
Aloe | 11.37 (1) | 11.86 (2) | Laundry | 26.56 (2) | 23.26 (1)
Art | 25.41 (2) | 21.37 (1) | Midd1 | 38.73 (2) | 34.27 (1)
Baby1 | 8.80 (2) | 8.45 (1) | Midd2 | 33.14 (2) | 28.71 (1)
Baby2 | 9.79 (2) | 5.78 (1) | Moebius | 18.66 (1) | 19.08 (2)
Baby3 | 14.48 (1) | 15.61 (2) | Monopoly | 34.03 (2) | 33.65 (1)
Books | 18.93 (2) | 18.01 (1) | Plastic | 41.28 (1) | 41.67 (2)
Bowling1 | 26.19 (2) | 23.44 (1) | Reindeer | 15.93 (2) | 13.92 (1)
Bowling2 | 19.14 (2) | 18.90 (1) | Rocks1 | 13.13 (2) | 10.06 (1)
Cloth1 | 14.55 (2) | 10.34 (1) | Rocks2 | 14.55 (2) | 10.35 (1)
Cloth2 | 17.03 (2) | 12.46 (1) | Wood1 | 9.16 (1) | 9.22 (2)
Cloth3 | 11.62 (2) | 11.41 (1) | Wood2 | 8.80 (1) | 8.82 (2)
Cloth4 | 18.17 (2) | 17.73 (1) | Average rank | 1.71 (2) | 1.29 (1)
Average error | 18.61 | 16.68 | | |

Note: the number in parentheses after each error is the method's accuracy rank on that image pair.


Fig. 7. Experimental results on the Middlebury benchmark images. (a) Left reference images; (b) right reference images; (c) left ground-truth disparity maps; (d) disparity refinement method in Ref. [11]; (e) improved disparity refinement method

4 Conclusion

To address the mismatches of the initial disparities at color boundaries, a variable weight cost aggregation algorithm for stereo matching based on a horizontal tree structure is proposed. Introducing the disparity map obtained by the initial cost aggregation into the construction of the horizontal weight tree effectively alleviates mismatches in regions with the same color but different depths, and in regions with different colors but the same depth. In the disparity refinement stage, a disparity-dependent weight coefficient is introduced into the matching cost volume, effectively reducing the mismatch rate in regions where the left and right disparities are inconsistent. The matching accuracy of the proposed algorithm in non-occluded regions is higher than that of other filter-based methods, while in occluded regions there is still a gap relative to some global algorithms. Future work will focus on improving the matching accuracy in occluded regions.

References

[1] Bleyer M, Breiteneder C. Stereo matching—state-of-the-art and research challenges[M]∥Advanced topics in computer vision. London: Springer, 2013: 143-179.

[2] Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms[J]. International Journal of Computer Vision, 2002, 47: 7-42.

[3] Sun J, Zheng N N, Shum H Y. Stereo matching using belief propagation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(7): 787-800.

[4] Boykov Y, Veksler O, Zabih R. Fast approximate energy minimization via graph cuts[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(11): 1222-1239.

[5] Bleyer M, Gelautz M. Simple but effective tree structures for dynamic programming-based stereo matching[C]∥International Conference on Computer Vision Theory and Applications, Funchal, Portugal, 2008: 415-422.

[6] Zhu S P, Li Z. A stereo matching algorithm using improved gradient and adaptive window[J]. Acta Optica Sinica, 2015, 35(1): 0110003.

[7] Yoon K J, Kweon I S. Adaptive support-weight approach for correspondence search[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(4): 650-656.

[8] Hosni A, Rhemann C, Bleyer M, et al. Fast cost-volume filtering for visual correspondence and beyond[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(2): 504-511.

[9] Cigla C. Recursive edge-aware filters for stereo matching[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015: 27-34.

[10] Yang Q, Li D, Wang L, et al. Full-image guided filtering for fast stereo matching[J]. IEEE Signal Processing Letters, 2013, 20(3): 237-240.

[11] Yang Q. Local smoothness enforced cost volume regularization for fast stereo correspondence[J]. IEEE Signal Processing Letters, 2015, 22(9): 1429-1433.

[12] Yang Q. A non-local cost aggregation method for stereo matching[C]∥IEEE Conference on Computer Vision and Pattern Recognition, 2012: 1402-1409.

[13] Mei X, Sun X, Dong W M, et al. Segment-tree based cost aggregation for stereo matching[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013: 313-320.

[14] Yao P, Zhang H, Xue Y, et al. Segment-tree based cost aggregation for stereo matching with enhanced segmentation advantage[C]∥IEEE International Conference on Acoustics, Speech and Signal Processing, 2017: 2027-2031.

[15] Ma Z Y, He K M, Wei Y C, et al. Constant time weighted median filtering for stereo matching and beyond[C]∥Proceedings of the IEEE International Conference on Computer Vision, 2013: 49-56.

[16] Sun X, Mei X, Jiao S H, et al. Stereo matching with reliable disparity propagation[C]∥IEEE International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, 2011: 132-139.

[17] Zhang K, Fang Y, Min D, et al. Cross-scale cost aggregation for stereo matching[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014: 1590-1597.

Jianjian Peng, Ruilin Bai. Variable Weight Cost Aggregation Algorithm for Stereo Matching Based on Horizontal Tree Structure[J]. Acta Optica Sinica, 2018, 38(1): 0115002.
