Infrared Technology, 2020, 42(8): 775. Published online: 2020-11-06

Fusion of Infrared and Visible Images Based on Gray Energy Difference
ZHAO Lichang, ZHANG Baohui, WU Jie, WU Xudong, JI Li
Author Affiliations:
1. School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, China
2. Kunming Institute of Physics, Kunming 650223, Yunnan, China
Abstract
To highlight the salient target features of infrared images, to extract the important detail information of visible images, and to address the problems of traditional algorithms in which target information is not sufficiently prominent and details and textures are severely lost, this paper proposes an infrared and visible image fusion method based on gray energy difference. First, the target features in the infrared image are detected by a salient-target extraction algorithm based on gray energy difference. Second, the infrared and visible images are decomposed into high- and low-frequency components using the non-subsampled contourlet transform (NSCT). The gray energy difference map is then used as the fusion weight for the low-frequency components of the infrared and visible images, while the high-frequency components are fused using a weighted-variance rule. Finally, the inverse NSCT is applied to the fused high- and low-frequency coefficients to obtain the final fused image. Three groups of classical infrared and visible image pairs are selected for fusion experiments, and the proposed method is compared with several other methods in terms of subjective visual quality and objective metrics. The experimental results show that the algorithm is effective in highlighting target information, improving contrast and sharpness, and preserving texture details.
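The abstract describes the fusion pipeline only at a high level. The Python sketch below is a minimal illustration of the described processing order, assuming a user-supplied NSCT implementation: `nsct_decompose` and `nsct_reconstruct` are hypothetical placeholders, and the gray-energy-difference weight and the weighted-variance rule are simplified approximations of the paper's rules, not its exact formulas.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gray_energy_difference(ir, win=15):
    """Hypothetical saliency weight: difference between local and global gray
    energy of the infrared image, normalized to [0, 1]. Follows the idea in the
    abstract, not the paper's exact formula."""
    ir = ir.astype(np.float64)
    local_energy = uniform_filter(ir ** 2, size=win)   # mean squared gray level in a window
    global_energy = (ir ** 2).mean()                   # mean squared gray level of the image
    diff = np.abs(local_energy - global_energy)
    return (diff - diff.min()) / (diff.max() - diff.min() + 1e-12)

def fuse_low(low_ir, low_vis, weight):
    # Low-frequency fusion: the gray-energy-difference map is the per-pixel weight.
    return weight * low_ir + (1.0 - weight) * low_vis

def fuse_high(high_ir, high_vis, win=7):
    # High-frequency fusion: keep the coefficient with the larger local variance,
    # a simplified stand-in for the paper's weighted-variance rule.
    var_ir = uniform_filter(high_ir ** 2, size=win) - uniform_filter(high_ir, size=win) ** 2
    var_vis = uniform_filter(high_vis ** 2, size=win) - uniform_filter(high_vis, size=win) ** 2
    return np.where(var_ir >= var_vis, high_ir, high_vis)

def fuse_images(ir, vis, nsct_decompose, nsct_reconstruct):
    """Pipeline sketch. Assumed interfaces of the external NSCT routines:
    nsct_decompose(img) -> (low_band, [high_bands]);
    nsct_reconstruct(low_band, [high_bands]) -> image."""
    weight = gray_energy_difference(ir)
    low_ir, highs_ir = nsct_decompose(ir)
    low_vis, highs_vis = nsct_decompose(vis)
    fused_low = fuse_low(low_ir, low_vis, weight)
    fused_highs = [fuse_high(h_ir, h_vis) for h_ir, h_vis in zip(highs_ir, highs_vis)]
    return nsct_reconstruct(fused_low, fused_highs)
```

Because the NSCT is non-subsampled, its subbands keep the input resolution, so the full-resolution weight map can be applied directly to the low-frequency band without resizing.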

Cite this article: ZHAO Lichang, ZHANG Baohui, WU Jie, WU Xudong, JI Li. Fusion of Infrared and Visible Images Based on Gray Energy Difference[J]. Infrared Technology, 2020, 42(8): 775.
