Laser & Optoelectronics Progress, 2019, 56(16): 161004. Published online: 2019-08-05

Multimodal Image Fusion Based on Generative Adversarial Networks
Author Affiliations
1 School of Big Data, North University of China, Taiyuan 030051, Shanxi, China
2 Jiuquan Satellite Launch Center, Jiuquan 735000, Gansu, China
Abstract
To address the difficulty of designing multiscale geometric tools and fusion rules in multimodal image fusion, this study proposes an image fusion method based on generative adversarial networks (GANs) that achieves end-to-end adaptive fusion of multimodal images. The multimodal source images are fed simultaneously into a residual-based convolutional neural network (the generator), which produces the fused image through adaptive learning. The fused image and the label image are then passed to the discriminator, whose feature representation and classification decisions gradually optimize the generator; the final fused image is obtained once the generator and discriminator reach a dynamic balance. Compared with representative existing fusion methods, experimental results show that the proposed method produces cleaner, artifact-free fusion results with better visual quality.
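Since the abstract only outlines the architecture, the following is a minimal PyTorch-style sketch of such a fusion GAN, assuming single-channel infrared/visible inputs, a four-block residual generator, a small convolutional discriminator, and an L1 content term alongside the adversarial loss; these specifics (layer widths, block count, loss weights) are illustrative guesses, not the authors' implementation.

```python
# Hypothetical sketch of a GAN-based multimodal fusion network (not the authors' code).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (assumed block design)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class Generator(nn.Module):
    """Residual CNN: concatenated multimodal inputs -> single fused image."""
    def __init__(self, in_channels=2, features=64, num_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(in_channels, features, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(features) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(features, 1, 3, padding=1)

    def forward(self, sources):  # sources: (N, 2, H, W), e.g. infrared + visible stacked
        return torch.tanh(self.tail(self.blocks(torch.relu(self.head(sources)))))

class Discriminator(nn.Module):
    """Classifies an image as a label (reference) image or a generated fusion."""
    def __init__(self, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, features, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(features, features * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(features * 2, 1, 4, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)  # one raw logit per image

def train_step(G, D, opt_g, opt_d, sources, label):
    """One adversarial update: D separates label/fused images, G learns to fool D."""
    bce = nn.BCEWithLogitsLoss()
    fused = G(sources)
    # Discriminator step: real labels -> 1, generated fusions -> 0.
    opt_d.zero_grad()
    real_logit = D(label)
    fake_logit = D(fused.detach())
    d_loss = bce(real_logit, torch.ones_like(real_logit)) + \
             bce(fake_logit, torch.zeros_like(fake_logit))
    d_loss.backward()
    opt_d.step()
    # Generator step: adversarial term plus an assumed L1 content term toward the label.
    opt_g.zero_grad()
    fake_logit = D(fused)
    g_loss = bce(fake_logit, torch.ones_like(fake_logit)) + \
             100.0 * nn.functional.l1_loss(fused, label)
    g_loss.backward()
    opt_g.step()
    return fused, d_loss.item(), g_loss.item()
```

Iterating `train_step` over batches of paired source and label images drives the generator/discriminator competition described in the abstract toward the dynamic balance at which the fused image is taken.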

Xiaoli Yang, Suzhen Lin, Xiaofei Lu, Lifang Wang, Dawei Li, Bin Wang. Multimodal Image Fusion Based on Generative Adversarial Networks[J]. Laser & Optoelectronics Progress, 2019, 56(16): 161004.
