Spectroscopy and Spectral Analysis(光谱学与光谱分析), 2023, 43(4): 1248. Online publication: 2023-05-03

Segmentation Method for Crop Leaf Spot Based on Semantic Segmentation and Visible Spectral Images(基于语义分割和可见光谱图的作物叶部病斑分割方法)
Author Affiliations
1 College of Information and Electrical Engineering, China Agricultural University, Beijing 100083
2 College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450018
3 Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081
Abstract
Diseases seriously degrade crop quality and cause economic losses. Lesion segmentation is a key step in quantitative disease diagnosis, and its results provide an effective basis for subsequent identification and severity estimation. Because lesions are irregular and complex, and visible spectral images acquired in natural environments are easily affected by changes in illumination, traditional image processing methods suffer from low accuracy, poor universality and weak robustness when segmenting lesion images. This work therefore proposes a crop leaf lesion segmentation method based on semantic segmentation networks and visible spectral images. First, taking peanut brown spot and tobacco brown spot as the research objects, 165 visible spectral images were collected with a Nikon D300s SLR camera. The images were pixel-wise annotated with the Matlab Image Labeler app, labeling peanut brown spot lesions, tobacco brown spot lesions and the background separately. Second, the annotated data were augmented by horizontal flipping, vertical flipping and brightness changes, yielding 1 850 samples that were randomly divided into training, validation and test sets at a ratio of 8∶1∶1; to save computational cost, the images were resized to a resolution of 300×300 pixels. Finally, four lesion segmentation models were built on three semantic segmentation architectures (FCN, SegNet and U-Net), the effects of data augmentation and disease category on the segmentation models were explored, and four segmentation indicators were used to evaluate the models. The results show that, for lesion-only segmentation, data augmentation improves segmentation accuracy: after augmentation, the mean precision (MP) and mean intersection over union (MIoU) of the FCN model reached 95.71% and 93.36%, respectively. The four semantic segmentation models were significantly better than the support vector machine (SVM). Compared with the U-Net, SegNet-2 and SegNet-4 models, FCN effectively avoided the influence of illumination changes, reaching a lesion segmentation precision (P) of 99.25% and an IoU of 97.55%. In the lesion classification-and-segmentation experiment, the per-disease precision (Pd) of FCN reached 97.54% and 90.41%, and the per-disease IoU (IoUd) reached 95.61% and 70.30%, both better than those of the other three segmentation models. FCN accurately identifies the disease category while segmenting lesions, shows good generalization and robustness, realizes the recognition and segmentation of crop leaf lesions in natural scenes, and provides a technical reference for estimating the severity of mixed diseases.
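The data-preparation steps summarized above (horizontal and vertical flips, brightness changes, resizing to 300×300 pixels, and a random 8∶1∶1 train/validation/test split) can be reproduced with standard image tooling. The Python sketch below is illustrative only and is not the authors' implementation: the paper does not state flip probabilities, the brightness range, or the interpolation method, so those values are assumptions; nearest-neighbour resampling is used for the label masks so that class indices are not blended.

```python
import random
from PIL import Image, ImageEnhance

def augment_pair(image: Image.Image, mask: Image.Image, size=(300, 300)):
    """Illustrative augmentation of one image/mask pair: random horizontal and
    vertical flips, a brightness change on the image only, and a resize to
    300x300 (nearest-neighbour for the mask so class labels stay intact)."""
    if random.random() < 0.5:              # horizontal flip (probability assumed)
        image = image.transpose(Image.FLIP_LEFT_RIGHT)
        mask = mask.transpose(Image.FLIP_LEFT_RIGHT)
    if random.random() < 0.5:              # vertical flip (probability assumed)
        image = image.transpose(Image.FLIP_TOP_BOTTOM)
        mask = mask.transpose(Image.FLIP_TOP_BOTTOM)
    factor = random.uniform(0.7, 1.3)      # brightness range is an assumption
    image = ImageEnhance.Brightness(image).enhance(factor)
    return image.resize(size, Image.BILINEAR), mask.resize(size, Image.NEAREST)

def split_8_1_1(samples, seed=42):
    """Random 8:1:1 train/validation/test split of the augmented sample list."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_train = int(0.8 * len(samples))
    n_val = int(0.1 * len(samples))
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```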
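The evaluation indicators reported above, precision (P), intersection over union (IoU) and their class-wise means (MP, MIoU), are conventionally computed from per-class pixel counts of true positives, false positives and false negatives. The paper does not spell out its exact formulas, so the minimal NumPy sketch below assumes the standard definitions (and that the mean is taken over all labeled classes, background included).

```python
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    """Per-class precision (P) and IoU plus their means (MP, MIoU),
    computed from two integer label masks of identical shape."""
    precisions, ious = [], []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (target == c))
        fp = np.sum((pred == c) & (target != c))
        fn = np.sum((pred != c) & (target == c))
        precisions.append(tp / (tp + fp) if tp + fp > 0 else 0.0)
        ious.append(tp / (tp + fp + fn) if tp + fp + fn > 0 else 0.0)
    return {
        "P": precisions,                   # per-class precision
        "IoU": ious,                       # per-class intersection over union
        "MP": float(np.mean(precisions)),  # mean precision over classes
        "MIoU": float(np.mean(ious)),      # mean IoU over classes
    }

# Toy example: a 300x300 ground-truth mask with 3 classes (background,
# peanut brown spot, tobacco brown spot) and a prediction in which about
# 5% of the pixels are overwritten with the background class.
rng = np.random.default_rng(0)
gt = rng.integers(0, 3, size=(300, 300))
pred = gt.copy()
pred[rng.random(gt.shape) < 0.05] = 0
print(segmentation_metrics(pred, gt, num_classes=3))
```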

LI Kai-yu, ZHANG Hui, MA Jun-cheng, ZHANG Ling-xian(李凯雨, 张慧, 马浚诚, 张领先). Segmentation Method for Crop Leaf Spot Based on Semantic Segmentation and Visible Spectral Images[J]. Spectroscopy and Spectral Analysis(光谱学与光谱分析), 2023, 43(4): 1248.
