ACTA PHOTONICA SINICA, 2020, 49(5): 0510002; published online 2020-06-04

Remote Sensing Image Scene Classification Based on Deep Multi-branch Feature Fusion Network
Author affiliations
1 School of Electrical and Control Engineering, Shaanxi University of Science and Technology, Xi'an 710021, China
2 Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China
3 Equipment Technology Cooperation Center, Army Equipment Department, Beijing 100000, China
Abstract
Remote sensing images have complex backgrounds, and the key objects in a scene image can be small and vary greatly in scale, so a model needs strong representational ability to distinguish the scene classes accurately. To address this, a deep multi-branch feature fusion network is proposed for remote sensing image scene classification. A multi-branch network structure is used to extract feature information at three levels (high, middle, and low), and the three levels of features are combined by a grouped fusion scheme based on split-fusion-aggregation. Finally, to focus on hard-to-distinguish samples and on the label-position loss, a new loss function is proposed. Experimental results show that the proposed method is highly effective at improving classification accuracy, surpassing other algorithms on the UCM, AID, and OPTIMAL datasets. Training with 80% of the samples on UCM, the accuracy reaches 99.29%, an improvement of 1.35% over the ARCNet-VGG16 algorithm. Training with 50% of the samples on AID, the accuracy reaches 95.56%, an improvement of 0.98% over the Two-Stream algorithm. Training with 80% of the samples on OPTIMAL, the accuracy reaches 95.43%, an improvement of 2.73% over ARCNet-VGG16.
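The split-fusion-aggregation grouped fusion of the three feature levels can be sketched as follows. This is a minimal NumPy illustration of the general idea only: the page does not give the paper's actual fusion operator, group count, or tensor layout, so the averaging step, the `groups=4` default, and the `(C, H, W)` shapes are all assumptions made for the example.

```python
import numpy as np

def group_fuse(low, mid, high, groups=4):
    """Hypothetical sketch of split-fusion-aggregation group fusion.

    Each input is a (C, H, W) feature map from one branch (low-, middle-,
    and high-level). The channel axis is split into `groups` groups, the
    corresponding groups from the three levels are fused (here simply
    averaged, standing in for the paper's unspecified fusion operator),
    and the fused groups are aggregated back into one feature map.
    """
    fused_groups = []
    for lo, mi, hi in zip(np.array_split(low, groups, axis=0),
                          np.array_split(mid, groups, axis=0),
                          np.array_split(high, groups, axis=0)):
        fused_groups.append((lo + mi + hi) / 3.0)   # fuse one group across the three levels
    return np.concatenate(fused_groups, axis=0)      # aggregate the groups back together

# Example: three 64-channel, 8x8 feature maps fuse into one map of the same shape.
fused = group_fuse(np.ones((64, 8, 8)), np.zeros((64, 8, 8)), np.ones((64, 8, 8)))
print(fused.shape)  # (64, 8, 8)
```

Splitting channels into groups before fusing keeps the per-group computation cheap while still letting every group mix information from all three feature levels.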
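The idea of a loss that concentrates on hard-to-distinguish samples can be illustrated with a focal-loss-style weighting. Note this is an illustrative stand-in, not the paper's loss: the page does not specify the actual formulation (which also involves a label-position term), and the `gamma=2.0` exponent is an assumption.

```python
import numpy as np

def focal_style_loss(probs, labels, gamma=2.0):
    """Focal-loss-style objective that down-weights easy samples.

    `probs` is an (N, K) array of softmax class probabilities and `labels`
    an (N,) array of true class indices. The (1 - p)^gamma factor shrinks
    the loss of well-classified (easy) samples so that hard samples
    dominate the average.
    """
    p = probs[np.arange(len(labels)), labels]          # probability assigned to the true class
    return float(np.mean(-((1.0 - p) ** gamma) * np.log(p)))

# A confidently correct (easy) sample contributes far less than an uncertain (hard) one.
easy = focal_style_loss(np.array([[0.95, 0.05]]), np.array([0]))
hard = focal_style_loss(np.array([[0.55, 0.45]]), np.array([0]))
print(easy < hard)  # True
```

The modulating factor is what makes training "pay attention" to hard samples: as the true-class probability approaches 1, the sample's contribution to the loss vanishes.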

ZHANG Tong, ZHENG En-rang, SHEN Jun-ge, GAO An-tong. Remote Sensing Image Scene Classification Based on Deep Multi-branch Feature Fusion Network[J]. ACTA PHOTONICA SINICA, 2020, 49(5): 0510002.

This paper has been cited by 3 papers (citation statistics from the China Optics Journal Net).

