CN111860330A - Apple leaf disease identification method based on multi-feature fusion and convolutional neural network

Apple leaf disease identification method based on multi-feature fusion and convolutional neural network

Info

Publication number
CN111860330A
CN111860330A
Authority
CN
China
Prior art keywords
image
neural network
convolutional neural
layer
apple leaf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010705693.XA
Other languages
Chinese (zh)
Other versions
CN111860330B (en)
Inventor
李丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Polytechnic Institute
Original Assignee
Shaanxi Polytechnic Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Polytechnic Institute filed Critical Shaanxi Polytechnic Institute
Priority to CN202010705693.XA priority Critical patent/CN111860330B/en
Publication of CN111860330A publication Critical patent/CN111860330A/en
Application granted granted Critical
Publication of CN111860330B publication Critical patent/CN111860330B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/68Food, e.g. fruit or vegetables

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an apple leaf disease identification method based on multi-feature fusion and a convolutional neural network. The original image is first denoised and segmented with a multi-feature fusion method; the segmented images are then used as the original data set of the convolutional neural network and expanded by data augmentation; finally, the network model is trained with the expanded data set and the model weight parameters are optimized by gradient descent. When the disclosed method is used to identify apple leaf diseases, no manual labeling is needed, and apple leaf pests and diseases can be identified accurately against a complex background, with an accuracy of 97.05% and an identification time of 2.7 s, effectively solving the problem of automatic identification of apple leaf pests and diseases.

Description

Apple leaf disease identification method based on multi-feature fusion and convolutional neural network
Technical Field
The invention relates to the field of apple leaf disease identification methods, in particular to an apple leaf disease identification method based on multi-feature fusion and a convolutional neural network.
Background
Disease is one of the important factors affecting apple growth: an infection on a few fruit trees can spread through an entire orchard and seriously reduce apple yield and quality. Apple leaves are the part of the tree where diseases occur most frequently, and leaf disease identification is a key technology in fruit tree cultivation. Accurate identification of apple leaf diseases therefore provides guidance for preventing and controlling diseases during the growth of fruit trees.
Traditional apple leaf disease identification relies mainly on manual inspection, which is time-consuming, labor-intensive and highly subjective, and is unsuited to modern agricultural management. With the development of computer vision and pattern recognition, researchers have carried out many studies on disease recognition with machine vision methods. For example, cucumber pest and disease recognition has been realized by combining a Markov conditional random field with an SVM classifier based on radial basis functions. Taking cotton diseases as the research object, Zhanghua et al. first extracted the color and texture features of the image and then combined rough sets with a neural network, obtaining good recognition results for three different cotton diseases. Qin Lifeng et al. used principal component analysis to reduce the high-dimensional cucumber disease features to several low-dimensional subspaces, trained a BP neural network on those subspaces, and completed the recognition of five different cucumber diseases. For eggplant brown streak disease, the color, texture and shape parameters of the lesions were extracted and the disease was identified with a Fisher discriminant function, reaching an identification accuracy above 95%. Another approach combined statistics with image processing: five different leaf growth-environment features were selected by attribute reduction, 35 statistical feature vectors of the lesions were extracted by image processing, and the lesion type was finally identified with the maximum membership criterion, achieving a recognition rate above 90% for three different cucumber diseases. Wangheng et al. extracted the color and texture features of the image based on color moments and the gray-level co-occurrence matrix and built a classifier with a BP neural network optimized by a genetic algorithm, reaching 94.17% accuracy in identifying tea plant scab. Buchong et al. extracted eight histogram features, normalized them, and trained an SVM classifier to identify strawberry pests and diseases with high accuracy. All of these methods are based on traditional machine vision technology; although they achieve high recognition accuracy, the recognition pipelines are complicated, the models generalize poorly, and they lack universality.
At present, convolutional neural networks are widely applied in fields such as semantic segmentation and target recognition, and many researchers have applied deep convolutional neural networks to plant pest and disease recognition with some success. Yangtdan et al. proposed a convolutional neural network based on mixed pooling, replacing the maximum pooling layers of the original CNN with mixed pooling layers, and achieved good results in identifying powdery mildew on strawberry leaves. Another model can accurately separate the recognition target from the background under a complex background, reaching an average recognition rate above 96.78% in experiments on 5 different rice pests. A CNN-based greenhouse cucumber disease recognition system has also been proposed, in which composite color features are first extracted from the collected data and the resulting images are then fed into a convolutional neural network for training; the system reaches 97.29% accuracy in cucumber disease recognition. Fulongsheng et al. proposed a multi-cluster kiwifruit image recognition method based on a VGG-16 convolutional neural network, reaching 94.78% recognition accuracy for kiwifruit under a complex background in the field. For the recognition of wheat seedlings and weeds, Sunjiang et al. used a convolutional neural network combining dilated convolution with global pooling; the network reached over 90% recognition accuracy after only 4 iterations, greatly reducing the training time of the network model. Although these studies achieve high accuracy in pest and disease recognition, a large amount of time is needed to manually label the data set when constructing the model.
Disclosure of Invention
In view of these problems, the object of the invention is to provide an apple leaf disease identification method based on multi-feature fusion and a convolutional neural network that requires no manual labeling, achieves high identification accuracy, and needs only a short identification time.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
The apple leaf disease identification method based on multi-feature fusion and a convolutional neural network is characterized by comprising the following steps:
s1: carrying out segmentation pretreatment on the apple leaf image by using a multi-feature fusion method;
s2: constructing an apple leaf disease identification convolutional neural network model by using the image data of the apple leaves subjected to segmentation pretreatment;
s3: carrying out network model training on the convolutional neural network model;
s4: and identifying the disease category of the apple leaf according to the final output result of the convolutional neural network model.
Further, the image segmentation preprocessing in step S1 includes:
s11: acquiring the RGB (red, green, blue) features of the original image and extracting the super-green feature with the super-green feature formula $I_{ExG} = 2I_{G} - I_{B} - I_{R}$, where $I_{G}$, $I_{B}$ and $I_{R}$ are the three color components of the RGB color space;
s12: respectively converting an original image from an RGB color space to an HSV color space and a CIELAB color space, and respectively extracting an H component color characteristic and an L component color characteristic of the converted image;
S13: performing a two-dimensional convolution on the three color components (the super-green component, the H component and the L component) with difference-of-Gaussians filtering and circular mean filtering, respectively, and combining the filtered color features into one fused feature; the fusion formula is $MMF = (G_{f} * I_{H}) + (A_{f} * I_{L}) + I_{ExG}$, where $G_{f}$ is the Gaussian filter, $A_{f}$ is the mean filter, $*$ denotes convolution, and $MMF$ is the fused feature;
s14: segmenting the fused feature image with the maximum between-class variance (Otsu) method, refining the segmentation result by morphological processing, and performing a masking operation between the original image and the refined image to obtain the final segmentation result.
Further, the specific operation of step S14 includes,
s141: uniformly scaling the apple leaf images obtained after the multi-feature fusion of step S13 into 125 × 125 × 3 three-channel RGB color images;
s142: equally dividing the three-channel RGB color image into four regions (upper-left, upper-right, lower-left and lower-right) and additionally cropping a central region that is concentric with the input image and covers 1/2 of it in both the horizontal and vertical directions, giving 5 sub-images, each 1/4 the size of the original image;
s143: interpolating the 5 sub-images with bilinear interpolation and enlarging each of them proportionally by a factor of 4;
S144: converting the segmented gray-scale image into a color image, i.e. performing the masking operation.
Further, the convolutional neural network model for identifying apple leaf diseases in step S2 includes 13 convolutional layers, 4 pooling layers and 1 global pooling layer.
Further, the concrete operation of performing network model training on the convolutional neural network model in step S3 includes:
s31: convolving the image segmented in step S1 with the convolution kernels of the different convolutional layers to obtain different features of the input image, and passing the obtained features through an activation function to obtain the output feature map; the feature map is computed as $x^{l} = f(W^{l}x^{l-1} + b^{l})$, where $x^{l-1}$ is the output of the (l-1)-th hidden layer, $x^{l}$ is the output of the convolutional layer in the l-th hidden layer, $x^{0}$ is the input image at the input layer, $W^{l}$ is the weight matrix of the l-th hidden layer, $b^{l}$ is the bias of the l-th hidden layer, and the activation function is $f(x) = \max(0, x)$;
s32: reducing the dimensionality of the feature map output by the convolutional layer through a pooling operation; max pooling is used, computed as
$x^{l} = f(w^{l}\,\mathrm{down}(x^{l-1}) + b_{s})$
where $l$ is the layer number, $\mathrm{down}(\cdot)$ is the down-sampling operation, $w^{l}$ is the pooling weight, and $b_{s}$ is an additional bias;
s33: the global pooling layer performs a weighted summation over the features of the feature map and integrates the class-discriminative local information from the convolutional and pooling layers;
S34: merging the feature maps obtained from each layer, transmitting the merged feature maps to the loss layer, and fusing the detection results of all layers using non-maximum suppression; the parameters of the loss layer are computed by a loss function
$L(z, c, l, g) = L_{conf}(z, c) + \alpha\,L_{loc}(z, l, g)$
where $L_{conf}$ is the confidence loss, $L_{loc}$ is the location loss, $z$ is the matching result between the default categories and the different categories, $c$ is the confidence of the predicted target, $l$ is the position of the predicted object box, $g$ is the position of the ground-truth box, and $\alpha$ is a parameter weighing the confidence loss against the location loss;
s35: and optimizing the network model weight parameters by using a gradient descent method, and repeating the steps S31-S35 until the optimal value of the network weight is obtained.
Further, the specific operation of optimizing the convolutional layer weight parameters by the gradient descent method in step S35 includes:
s361: randomly initializing a weight parameter of the network model;
s362: calculating the error between the output value calculated by the model and the true value;
s363: carrying out weight adjustment on each neuron generating errors to reduce error values;
s364: and repeating the iteration until the optimal value of the network weight is obtained.
Further, the specific operation of identifying the apple leaf disease category from the final output of the convolutional neural network model in step S4 includes: classifying the feature map processed by the global pooling layer with a Softmax classifier, whose output is computed as
$p_{i} = \frac{e^{w_{i}^{T} x}}{\sum_{j} e^{w_{j}^{T} x}}$
where $w_{i}$ denotes the weights connecting the neurons of the fully connected layer to the i-th output neuron of the Softmax classifier.
The invention has the beneficial effects that:
1. and (3) segmenting the disease image on the training set by using a multi-feature fusion mode without manual marking. After the input image is equally divided, each subregion characteristic of the image is extracted, and the detail characteristic of the special lesion is fully extracted. And a full connection layer is changed into a convolution layer, so that the network depth is increased, and the recognition rate of a network model is improved.
2. The method can accurately identify the apple leaf disease and insect pest image under the complex background, has a simple network model structure and strong transportability, and provides a theoretical basis for the development of agricultural intelligent equipment.
3. Compared with the traditional recognition algorithm, the method has good performance in recognition rate and recognition time, and if the recognition accuracy is further improved, the training set can be properly expanded or the network depth can be increased.
Drawings
FIG. 1 is an image of leaf rust of apple in accordance with an embodiment of the present invention;
FIG. 2 is an image of apple leaf scab in an embodiment of the present invention;
FIG. 3 is an image of defoliation of apple leaves according to an embodiment of the present invention;
FIG. 4 is an image of apple leaf virus disease in an embodiment of the present invention;
FIG. 5 is an image of apple leaf silver leaf disease according to an embodiment of the present invention;
FIG. 6 is an image of powdery mildew of apple leaves in an embodiment of the present invention;
FIG. 7 is a network model architecture diagram for apple leaf disease image recognition according to the present invention;
FIG. 8 is a flow chart of an apple leaf disease area detection module of the present invention;
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following further describes the technical solution of the present invention with reference to the drawings and the embodiments.
Example:
Apple leaf images were collected at a fruit tree experiment base of an academy of agricultural sciences in Baoji City, Shaanxi Province, using a Canon digital camera with 22 million effective pixels. The acquisition time was set from 09:00 to 16:00 and covered different illumination conditions. The collected images include 6 common apple diseases: apple leaf rust, scab, defoliation disease, virus disease, silver leaf disease and powdery mildew. A total of 1200 images were collected; some of them are shown in FIGS. 1-6.
The leaf images were cropped with Python 3.5, uniformly resized to 125 × 125, and saved in a uniform storage format. To enhance the learning ability of the network and prevent overfitting, the image data set was expanded: the images of each disease were augmented 10-fold by rotation, translation, scaling, color jittering and similar transformations, and the expanded image data set was divided into a training set, a validation set and a test set at a ratio of 3:1:1.
Specifically, the original data contain 865 diseased apple leaf images; after the 10-fold expansion, the data contain 8650 images, which are divided into a training set, a validation set and a test set at a ratio of 3:1:1, i.e. 5190 training images, 1730 validation images and 1730 test images.
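The augmentation and split just described can be sketched in Python as follows. This is only an illustrative sketch using Pillow: the transformation ranges, file layout and function names (augment_once, expand_and_split) are assumptions rather than details from the patent, and the 3:1:1 split is done here as a simple random split over the expanded images.

```python
import random
from pathlib import Path
from PIL import Image, ImageChops, ImageEnhance

def augment_once(img):
    """Apply one random rotation / translation / scaling / color-jitter combination."""
    img = img.rotate(random.uniform(-30, 30))                      # random rotation
    img = ImageChops.offset(img, random.randint(-10, 10),
                            random.randint(-10, 10))               # random translation (wrap-around)
    s = random.uniform(0.8, 1.2)                                   # random scaling
    img = img.resize((int(125 * s), int(125 * s))).resize((125, 125))
    return ImageEnhance.Color(img).enhance(random.uniform(0.8, 1.2))  # color jitter

def expand_and_split(src_dir, dst_dir, factor=10):
    """Expand every image `factor`-fold and split the result 3:1:1."""
    samples = []
    for path in sorted(Path(src_dir).glob("*.jpg")):
        base = Image.open(path).convert("RGB").resize((125, 125))
        for k in range(factor):
            samples.append((f"{path.stem}_{k}.jpg", augment_once(base)))
    random.shuffle(samples)
    n = len(samples)
    splits = {"train": samples[: n * 3 // 5],
              "val":   samples[n * 3 // 5: n * 4 // 5],
              "test":  samples[n * 4 // 5:]}
    for name, items in splits.items():
        out = Path(dst_dir) / name
        out.mkdir(parents=True, exist_ok=True)
        for fname, img in items:
            img.save(out / fname)
```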
Further, the apple leaf disease identification method based on multi-feature fusion and the convolutional neural network specifically comprises the following steps:
s1: carrying out segmentation pretreatment on the apple leaf image by using a multi-feature fusion method;
Apple leaf diseases are of different types, and the lesion features also differ greatly between types, so the key step in disease identification is extracting the features of the different disease images. Because the images are acquired in natural scenes with complex backgrounds, they contain a large amount of noise; without preprocessing, the accuracy of pest and disease identification would drop. Moreover, the lesion area usually occupies only a small proportion of the whole leaf, so a manually labeled data set would suffer from inaccurate labels or wrongly labeled regions. The invention therefore proposes a multi-feature fusion method to preprocess the image and segment the lesions.
Specifically, the operation steps of preprocessing the image by the multi-feature fusion method are as follows:
s11: acquiring the RGB (red, green, blue) features of the original image and extracting the super-green feature with the super-green feature formula $I_{ExG} = 2I_{G} - I_{B} - I_{R}$, where $I_{G}$, $I_{B}$ and $I_{R}$ are the three color components of the RGB color space;
s12: respectively converting an original image from an RGB color space to an HSV color space and a CIELAB color space, and respectively extracting an H component color characteristic and an L component color characteristic of the converted image;
s13: performing a two-dimensional convolution on the three color components (the super-green component, the H component and the L component) with difference-of-Gaussians filtering and circular mean filtering, respectively, and combining the filtered color features into one fused feature; the fusion formula is $MMF = (G_{f} * I_{H}) + (A_{f} * I_{L}) + I_{ExG}$, where $G_{f}$ is the Gaussian filter, $A_{f}$ is the mean filter, $*$ denotes convolution, and $MMF$ is the fused feature;
s14: segmenting the fused feature image with the maximum between-class variance (Otsu) method, refining the segmentation result by morphological processing, and performing a masking operation between the original image and the refined image to obtain the final segmentation result.
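The fusion-based segmentation of steps S11-S14 can be sketched with OpenCV/NumPy as below. The kernel sizes, the Gaussian sigmas used for the difference-of-Gaussians filter and the elliptical structuring element are illustrative assumptions; only the overall pipeline (super-green + H + L fusion, Otsu thresholding, morphological refinement, masking) follows the description above.

```python
import cv2
import numpy as np

def multi_feature_segmentation(bgr):
    """Segment a leaf image by fusing super-green, H and L features (steps S11-S14)."""
    b, g, r = cv2.split(bgr.astype(np.float32))

    # S11: super-green (excess-green) feature  I_ExG = 2G - B - R
    exg = 2 * g - b - r

    # S12: H component (HSV) and L component (CIELAB)
    h = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float32)
    l = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)[:, :, 0].astype(np.float32)

    # S13: difference-of-Gaussians on H, mean (circular) filter on L, then fuse
    dog_h = cv2.GaussianBlur(h, (5, 5), 1.0) - cv2.GaussianBlur(h, (5, 5), 2.0)
    mean_l = cv2.blur(l, (5, 5))
    fused = dog_h + mean_l + exg

    # S14: Otsu threshold on the fused feature, morphological refinement, masking
    fused_u8 = cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(fused_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Mask the original image: the segmented region keeps its color, the rest is black
    return cv2.bitwise_and(bgr, bgr, mask=mask)
```

Calling multi_feature_segmentation(cv2.imread("leaf.jpg")) returns the segmented regions in color on a black background, matching the masking result described in this embodiment.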
The features of leaf disease images are complex, and the lesion area occupies only a small proportion of the whole leaf, so many small lesions are difficult to identify. To improve the identification accuracy of the network model, the pest-and-disease leaf image obtained after multi-feature fusion is first divided into 4 sub-images of equal size, and a central-region sub-image is additionally cropped to enhance the detection of small lesions.
Specifically, to enhance the detection of lesion details, the multi-feature-fused image is uniformly scaled to a 125 × 125 × 3 three-channel RGB color image, which is then divided equally into four regions (upper-left, upper-right, lower-left and lower-right); a central region, concentric with the input image and covering 1/2 of it in both the horizontal and vertical directions, is also cropped. This gives 5 sub-images, each 1/4 the size of the original image, which are then interpolated with bilinear interpolation, as shown in FIG. 8. During detection, the interpolation enlarges each image proportionally by a factor of 4, so the detected object is not deformed when it is fed into the network model.
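The five-sub-image cropping and bilinear enlargement can be sketched as follows (NumPy/OpenCV). Interpreting the 4x enlargement as a 4x enlargement in area back to the original 125 × 125 size is an assumption; the function name is illustrative.

```python
from typing import List

import cv2
import numpy as np

def crop_five_subimages(img: np.ndarray) -> List[np.ndarray]:
    """Split an image into 4 quadrants plus a concentric centre crop,
    then enlarge each sub-image with bilinear interpolation."""
    h, w = img.shape[:2]
    hh, hw = h // 2, w // 2
    subs = [
        img[:hh, :hw],                                   # upper-left
        img[:hh, hw:],                                   # upper-right
        img[hh:, :hw],                                   # lower-left
        img[hh:, hw:],                                   # lower-right
        img[h // 4: h // 4 + hh, w // 4: w // 4 + hw],   # concentric centre, 1/2 per side
    ]
    # Each sub-image covers 1/4 of the original area; enlarge back to the original size
    return [cv2.resize(s, (w, h), interpolation=cv2.INTER_LINEAR) for s in subs]
```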
The segmentation result of the fused features is a gray-scale image. The masking operation between the original image and the refined image converts this gray-scale result into a color image and gives the final segmentation: the apple disease regions keep their color and the remaining regions are black.
Further, step S2 is: constructing an apple leaf disease identification convolutional neural network model by using the image data of the apple leaves subjected to segmentation pretreatment;
A conventional convolutional neural network mainly consists of convolutional layers, activation layers, pooling layers and fully connected layers. Because the VGG-16 network has a deep structure, good data-processing capability and a high recognition rate in image recognition, a model architecture suited to apple leaf disease image recognition is built on the basis of the traditional VGG-16 structure; the specific structure is shown in FIG. 7.
The apple leaf disease identification network model mainly consists of 13 convolutional layers (Conv1-Conv13), 4 pooling layers and 1 global pooling layer. All convolution kernels are set to 3 × 3 and the sliding stride of the convolutional layers is 1; to keep the output dimensions consistent with those of the input image, the pad parameter is set to 1 so that the convolutional layer edges are completed by boundary extension. The pooling layers (Pooling1-Pooling4) are set to 3 × 3 and use max pooling with a 2 × 2 pooling window and a sliding stride of 2. Since only 6 different apple leaf diseases are identified, the number of Softmax classes is set to 6.
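A rough sketch of such a network in PyTorch is given below for illustration; the patent itself was implemented in MatConvNet, so the framework, the channel widths and the use of global average pooling are assumptions. The sketch keeps the VGG-16 convolutional stack (13 convolutional layers with 3 × 3 kernels, stride 1, padding 1), uses 4 max-pooling stages with a 2 × 2 window and stride 2, and replaces the fully connected layers with a global pooling layer feeding a 6-class Softmax.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, n_convs):
    """n_convs 3x3 convolutions (stride 1, padding 1), each followed by ReLU."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, stride=1, padding=1),
                   nn.ReLU(inplace=True)]
    return layers

class AppleLeafNet(nn.Module):
    """VGG-16-style: 13 conv layers, 4 max-pooling layers, global pooling, 6-way classifier."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            *conv_block(3, 64, 2),    nn.MaxPool2d(2, 2),   # Conv1-2,   Pooling1
            *conv_block(64, 128, 2),  nn.MaxPool2d(2, 2),   # Conv3-4,   Pooling2
            *conv_block(128, 256, 3), nn.MaxPool2d(2, 2),   # Conv5-7,   Pooling3
            *conv_block(256, 512, 3), nn.MaxPool2d(2, 2),   # Conv8-10,  Pooling4
            *conv_block(512, 512, 3),                       # Conv11-13 (no further pooling)
        )
        self.global_pool = nn.AdaptiveAvgPool2d(1)          # global pooling layer
        self.classifier = nn.Linear(512, num_classes)       # Softmax applied at inference

    def forward(self, x):
        x = self.features(x)
        x = self.global_pool(x).flatten(1)
        return self.classifier(x)
```

For example, model = AppleLeafNet(); model(torch.randn(1, 3, 125, 125)) produces a 1 × 6 tensor of class scores, to which a softmax is applied for the final disease probabilities.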
S3: carrying out network model training on the convolutional neural network model;
The preprocessed image is sent into the convolutional neural network for multi-level feature extraction through the convolutional layers, the pooling layers and the global pooling layer, and candidate regions of different sizes and aspect ratios are then selected at each position of the feature map.
Specifically, the operation steps of constructing the apple leaf disease identification convolutional neural network model by using the image data of the apple leaves after the segmentation pretreatment comprise:
s31: convolving the image segmented in step S1 with the convolution kernels of the different convolutional layers to obtain different features of the input image, and passing the obtained features through an activation function to obtain the output feature map; the feature map is computed as $x^{l} = f(W^{l}x^{l-1} + b^{l})$, where $x^{l-1}$ is the output of the (l-1)-th hidden layer, $x^{l}$ is the output of the convolutional layer in the l-th hidden layer, $x^{0}$ is the input image at the input layer, $W^{l}$ is the weight matrix of the l-th hidden layer, $b^{l}$ is the bias of the l-th hidden layer, and the activation function is $f(x) = \max(0, x)$;
s32: reducing the dimensionality of the feature map output by the convolutional layer through a pooling operation; max pooling is used, computed as
$x^{l} = f(w^{l}\,\mathrm{down}(x^{l-1}) + b_{s})$
where $l$ is the layer number, $\mathrm{down}(\cdot)$ is the down-sampling operation, $w^{l}$ is the pooling weight, and $b_{s}$ is an additional bias;
s33: the global pooling layer performs a weighted summation over the features of the feature map and integrates the class-discriminative local information from the convolutional and pooling layers;
s34: merging the feature maps obtained from each layer, transmitting the merged feature maps to the loss layer, and fusing the detection results of all layers using non-maximum suppression; the parameters of the loss layer are computed by a loss function composed of a classification term and a regression term,
$L(z, c, l, g) = L_{conf}(z, c) + \alpha\,L_{loc}(z, l, g)$
where $L_{conf}$ is the confidence loss, $L_{loc}$ is the location loss, $z$ is the matching result between the default categories and the different categories, $c$ is the confidence of the predicted target, $l$ is the position of the predicted object box, $g$ is the position of the ground-truth box, and $\alpha$ is a parameter weighing the confidence loss against the location loss;
s35: and optimizing the network model weight parameters by using a gradient descent method, and repeating the steps S31-S35 until the optimal value of the network weight is obtained.
The test software environment is Ubuntu 16.04 LTS with Matlab as the programming language; the hardware environment is an Intel(R) Core i7-7550K CPU @ 3.60 GHz with 32 GB of RAM and a GTX 1080 Ti GPU. The deep learning framework used is MatConvNet.
During model training, a mini-batch stochastic gradient descent (SGD) algorithm with a momentum factor is used, and the activation functions of the convolutional layers are all ReLU. To accommodate the computer hardware, the training-set images are divided into batches of different sizes (batch size) and fed into the network model. The steps of the gradient descent algorithm are as follows:
s361: randomly initializing a weight parameter of the network model;
s362: calculating the error between the output value calculated by the model and the true value;
S363: carrying out weight adjustment on each neuron generating errors to reduce error values;
s364: and repeating the iteration until the optimal value of the network weight is obtained.
The batch size is set to 64, 128 and 256, respectively; the momentum factor is set to 0.9 and the number of iterations (epochs) to 100. At the start of training, the network weights are randomly initialized from a Gaussian distribution with mean 0 and variance 0.01, the initial learning rate is set to 0.01, and the regularization coefficient to 0.005.
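A training-loop sketch under these hyperparameters is shown below in PyTorch. It is illustrative only: the patent trains with MatConvNet, and the use of cross-entropy as the classification objective, the per-epoch validation pass and the data-loader construction are assumptions; the SGD settings (momentum 0.9, learning rate 0.01, weight decay 0.005 as the regularization coefficient, batch size 64/128/256, 100 epochs, Gaussian initialization) follow the values stated above.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

def train(model, train_set, val_set, batch_size=64, epochs=100):
    """Mini-batch SGD with momentum, following the hyperparameters of this embodiment."""
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=batch_size)

    # Random Gaussian initialization with mean 0 and variance 0.01 (std = 0.1)
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.normal_(p, mean=0.0, std=0.1)

    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.005)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(epochs):
        model.train()
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)   # error between output and ground truth
            loss.backward()
            optimizer.step()                          # adjust the weights that produced the error

        # Simple validation-accuracy check after each epoch
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                correct += (model(images).argmax(1) == labels).sum().item()
                total += labels.numel()
        print(f"epoch {epoch + 1}: validation accuracy {correct / total:.4f}")
```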
Further, the specific operation of identifying the apple leaf disease category from the final output of the convolutional neural network model in step S4 includes: classifying the feature map processed by the global pooling layer with a Softmax classifier, whose output is computed as
$p_{i} = \frac{e^{w_{i}^{T} x}}{\sum_{j} e^{w_{j}^{T} x}}$
where $w_{i}$ denotes the weights connecting the neurons of the fully connected layer to the i-th output neuron of the Softmax classifier.
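For concreteness, a small NumPy version of this Softmax classification step is given below; the feature dimension (512) and the weight-matrix shape are illustrative assumptions.

```python
import numpy as np

def softmax_classify(features, weights):
    """features: (512,) global-pooled feature vector; weights: (6, 512) classifier weights.
    Returns the predicted class index and the probability vector over the 6 disease classes."""
    scores = weights @ features                       # w_i . x for each output neuron i
    scores -= scores.max()                            # subtract max for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()     # Softmax probabilities
    return int(np.argmax(probs)), probs
```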
To verify the identification time and recognition rate of the apple leaf disease identification method of the invention, 5 different methods were compared with the method of the invention on recognition rate, model training time and identification time for the 6 different apple leaf diseases. The compared methods include a disease identification method based on leaf features (LNNF), a disease identification method based on an SVM, a disease identification method based on bag-of-words features (PCAA), and a disease identification method based on rough sets and a BP neural network (CLBP). The results of the different methods are shown in Table 1.
Table 1  Recognition rates, training times and identification times of 5 different apple leaf disease recognition methods
[Table 1 is provided as an image in the original publication.]
As can be seen from Table 1, the apple leaf disease identification method of the invention performs well in both recognition rate and identification time; although its training time is long, it shows good robustness and stability across the different pest and disease types. Once the network model has been trained, different diseases can be identified, and the identification time is short.
The foregoing shows and describes the basic principles, essential features and advantages of the invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of protection of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. An apple leaf disease identification method based on multi-feature fusion and a convolutional neural network, characterized by comprising the following steps:
s1: carrying out segmentation pretreatment on the apple leaf image by using a multi-feature fusion method;
S2: constructing an apple leaf disease identification convolutional neural network model by using the image data of the apple leaves subjected to segmentation pretreatment;
s3: carrying out network model training on the convolutional neural network model;
s4: and identifying the disease category of the apple leaf according to the final output result of the convolutional neural network model.
2. The apple leaf disease identification method based on multi-feature fusion and convolutional neural network of claim 1, wherein the image segmentation preprocessing in step S1 comprises the following specific steps:
s11: acquiring the RGB (red, green, blue) features of the original image and extracting the super-green feature with the super-green feature formula $I_{ExG} = 2I_{G} - I_{B} - I_{R}$, wherein $I_{G}$, $I_{B}$ and $I_{R}$ are the three color components of the RGB color space;
s12: respectively converting an original image from an RGB color space to an HSV color space and a CIELAB color space, and respectively extracting an H component color characteristic and an L component color characteristic of the converted image;
s13: performing a two-dimensional convolution on the three color components (the super-green component, the H component and the L component) with difference-of-Gaussians filtering and circular mean filtering, respectively, and combining the filtered color features into one fused feature; the fusion formula is $MMF = (G_{f} * I_{H}) + (A_{f} * I_{L}) + I_{ExG}$, wherein $G_{f}$ is the Gaussian filter, $A_{f}$ is the mean filter, $*$ denotes convolution, and $MMF$ is the fused feature;
s14: segmenting the fused feature image with the maximum between-class variance (Otsu) method, refining the segmentation result by morphological processing, and performing a masking operation between the original image and the refined image to obtain the final segmentation result.
3. The apple leaf disease identification method based on multi-feature fusion and convolutional neural network of claim 1, wherein the specific operation steps of step S14 include,
s141: uniformly scaling the apple leaf images obtained after the multi-feature fusion of step S13 into 125 × 125 × 3 three-channel RGB color images;
s142: equally dividing the three-channel RGB color image into four regions (upper-left, upper-right, lower-left and lower-right) and additionally cropping a central region that is concentric with the input image and covers 1/2 of it in both the horizontal and vertical directions, giving 5 sub-images, each 1/4 the size of the original image;
s143: interpolating the 5 sub-images with bilinear interpolation and enlarging each of them proportionally by a factor of 4;
s144: converting the segmented gray-scale image into a color image, i.e. performing the masking operation.
4. The apple leaf disease identification method based on multi-feature fusion and convolutional neural network of claim 1, wherein the apple leaf disease identification convolutional neural network model in step S2 includes 13 convolutional layers, 4 pooling layers and 1 global pooling layer.
5. The apple leaf disease identification method based on multi-feature fusion and convolutional neural network of claim 4, wherein the specific operation of performing network model training on the convolutional neural network model in step S3 includes:
s31: convolving the image segmented in step S1 with the convolution kernels of the different convolutional layers to obtain different features of the input image, and passing the obtained features through an activation function to obtain the output feature map; the feature map is computed as $x^{l} = f(W^{l}x^{l-1} + b^{l})$, wherein $x^{l-1}$ is the output of the (l-1)-th hidden layer, $x^{l}$ is the output of the convolutional layer in the l-th hidden layer, $x^{0}$ is the input image at the input layer, $W^{l}$ is the weight matrix of the l-th hidden layer, $b^{l}$ is the bias of the l-th hidden layer, and the activation function is $f(x) = \max(0, x)$;
s32: reducing the dimensionality of the feature map output by the convolutional layer through a pooling operation; max pooling is used, computed as
$x^{l} = f(w^{l}\,\mathrm{down}(x^{l-1}) + b_{s})$
wherein $l$ is the layer number, $\mathrm{down}(\cdot)$ is the down-sampling operation, $w^{l}$ is the pooling weight, and $b_{s}$ is an additional bias;
s33: the global pooling layer performs a weighted summation over the features of the feature map and integrates the class-discriminative local information from the convolutional and pooling layers;
s34: merging the feature maps obtained from each layer, transmitting the merged feature maps to the loss layer, and fusing the detection results of all layers using non-maximum suppression; the parameters of the loss layer are computed by a loss function
$L(z, c, l, g) = L_{conf}(z, c) + \alpha\,L_{loc}(z, l, g)$
wherein $L_{conf}$ is the confidence loss, $L_{loc}$ is the location loss, $z$ is the matching result between the default categories and the different categories, $c$ is the confidence of the predicted target, $l$ is the position of the predicted object box, $g$ is the position of the ground-truth box, and $\alpha$ is a parameter weighing the confidence loss against the location loss;
s35: and optimizing the network model weight parameters by using a gradient descent method, and repeating the steps S31-S35 until the optimal value of the network weight is obtained.
6. The apple leaf disease identification method based on multi-feature fusion and a convolutional neural network of claim 5, wherein the specific operation of optimizing the convolutional layer weight parameters by the gradient descent method in step S35 comprises:
S361: randomly initializing a weight parameter of the network model;
s362: calculating the error between the output value calculated by the model and the true value;
s363: carrying out weight adjustment on each neuron generating errors to reduce error values;
s364: and repeating the iteration until the optimal value of the network weight is obtained.
7. The apple leaf disease identification method based on multi-feature fusion and a convolutional neural network of claim 6, wherein the specific operation of identifying the apple leaf disease category from the final output of the convolutional neural network model in step S4 includes: classifying the feature map processed by the global pooling layer with a Softmax classifier, whose output is computed as
$p_{i} = \frac{e^{w_{i}^{T} x}}{\sum_{j} e^{w_{j}^{T} x}}$
wherein $w_{i}$ denotes the weights connecting the neurons of the fully connected layer to the i-th output neuron of the Softmax classifier.
CN202010705693.XA 2020-07-21 2020-07-21 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network Active CN111860330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010705693.XA CN111860330B (en) 2020-07-21 2020-07-21 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010705693.XA CN111860330B (en) 2020-07-21 2020-07-21 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network

Publications (2)

Publication Number Publication Date
CN111860330A true CN111860330A (en) 2020-10-30
CN111860330B CN111860330B (en) 2023-08-11

Family

ID=73001339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010705693.XA Active CN111860330B (en) 2020-07-21 2020-07-21 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network

Country Status (1)

Country Link
CN (1) CN111860330B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580720A (en) * 2020-12-18 2021-03-30 华为技术有限公司 Model training method and device
CN112766364A (en) * 2021-01-18 2021-05-07 南京信息工程大学 Tomato leaf disease classification method for improving VGG19
CN112884025A (en) * 2021-02-01 2021-06-01 安徽大学 Tea disease classification system based on multi-feature sectional type training
CN113627258A (en) * 2021-07-12 2021-11-09 河南理工大学 Apple leaf pathological detection method
CN113989639A (en) * 2021-10-20 2022-01-28 华南农业大学 Hyperspectral image analysis processing method-based automatic litchi disease identification method and device
CN113989509A (en) * 2021-12-27 2022-01-28 衡水学院 Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition
CN114332087A (en) * 2022-03-15 2022-04-12 杭州电子科技大学 Three-dimensional cortical surface segmentation method and system for OCTA image
CN114494828A (en) * 2022-01-14 2022-05-13 中国农业大学 Grape disease identification method and device, electronic equipment and storage medium
CN114842240A (en) * 2022-04-06 2022-08-02 盐城工学院 Method for classifying images of leaves of MobileNet V2 crops by fusing ghost module and attention mechanism
CN115631417A (en) * 2022-11-11 2023-01-20 生态环境部南京环境科学研究所 Butterfly image identification method based on convolutional neural network

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004007165A (en) * 2002-05-31 2004-01-08 Nikon Corp Image processing method, image processing program, and image processor
US6915432B1 (en) * 1999-01-29 2005-07-05 International Business Machines Corporation Composing a realigned image
CN104061907A (en) * 2014-07-16 2014-09-24 中南大学 Viewing-angle greatly-variable gait recognition method based on gait three-dimensional contour matching synthesis
CN105701513A (en) * 2016-01-14 2016-06-22 深圳市未来媒体技术研究院 Method of rapidly extracting area of interest of palm print
CN108510504A (en) * 2018-03-22 2018-09-07 北京航空航天大学 Image partition method and device
US20190065817A1 (en) * 2017-08-29 2019-02-28 Konica Minolta Laboratory U.S.A., Inc. Method and system for detection and classification of cells using convolutional neural networks
CN109712165A (en) * 2018-12-29 2019-05-03 安徽大学 A kind of similar foreground picture image set dividing method based on convolutional neural networks
CN109961042A (en) * 2019-03-22 2019-07-02 中国人民解放军国防科技大学 Smoke detection method combining deep convolutional neural network and visual change diagram
CN110008912A (en) * 2019-04-10 2019-07-12 东北大学 A kind of social platform matching process and system based on plants identification
US20190295269A1 (en) * 2018-03-22 2019-09-26 Microsoft Technology Licensing, Llc Replicated dot maps for simplified depth computation using machine learning
CN110555383A (en) * 2019-07-31 2019-12-10 中国地质大学(武汉) Gesture recognition method based on convolutional neural network and 3D estimation
CN110633720A (en) * 2018-06-22 2019-12-31 西北农林科技大学 Corn disease identification method
CN111178177A (en) * 2019-12-16 2020-05-19 西京学院 Cucumber disease identification method based on convolutional neural network
CN111415302A (en) * 2020-03-25 2020-07-14 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6915432B1 (en) * 1999-01-29 2005-07-05 International Business Machines Corporation Composing a realigned image
JP2004007165A (en) * 2002-05-31 2004-01-08 Nikon Corp Image processing method, image processing program, and image processor
CN104061907A (en) * 2014-07-16 2014-09-24 中南大学 Viewing-angle greatly-variable gait recognition method based on gait three-dimensional contour matching synthesis
CN105701513A (en) * 2016-01-14 2016-06-22 深圳市未来媒体技术研究院 Method of rapidly extracting area of interest of palm print
US20190065817A1 (en) * 2017-08-29 2019-02-28 Konica Minolta Laboratory U.S.A., Inc. Method and system for detection and classification of cells using convolutional neural networks
US20190295269A1 (en) * 2018-03-22 2019-09-26 Microsoft Technology Licensing, Llc Replicated dot maps for simplified depth computation using machine learning
CN108510504A (en) * 2018-03-22 2018-09-07 北京航空航天大学 Image partition method and device
CN110633720A (en) * 2018-06-22 2019-12-31 西北农林科技大学 Corn disease identification method
CN109712165A (en) * 2018-12-29 2019-05-03 安徽大学 A kind of similar foreground picture image set dividing method based on convolutional neural networks
CN109961042A (en) * 2019-03-22 2019-07-02 中国人民解放军国防科技大学 Smoke detection method combining deep convolutional neural network and visual change diagram
CN110008912A (en) * 2019-04-10 2019-07-12 东北大学 A kind of social platform matching process and system based on plants identification
CN110555383A (en) * 2019-07-31 2019-12-10 中国地质大学(武汉) Gesture recognition method based on convolutional neural network and 3D estimation
CN111178177A (en) * 2019-12-16 2020-05-19 西京学院 Cucumber disease identification method based on convolutional neural network
CN111415302A (en) * 2020-03-25 2020-07-14 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
TANG Zhaoxia; ZHANG Yue: "Feature optimization and recognition of maize disease images based on genetic algorithm", Journal of Anhui Agricultural University, no. 02, pages 174-179 *
LI Dan: "Cucumber leaf disease identification method based on transfer learning and an improved residual neural network", Agricultural Engineering, no. 06, pages 46-50 *
XU Qingyong; JIANG Shunliang; XU Shaoping; GE Yun; TANG Ling: "Tattoo image detection algorithm based on a three-channel convolutional neural network", Journal of Computer Applications, no. 09, pages 279-285 *
HUANG Xiaoyu; LI Guanglin; MA Chi; YANG Shihang: "Recognition of green peaches against a similar-colored background based on an improved discriminative region feature fusion algorithm", Transactions of the Chinese Society of Agricultural Engineering, no. 23, pages 150-156 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580720A (en) * 2020-12-18 2021-03-30 华为技术有限公司 Model training method and device
CN112766364A (en) * 2021-01-18 2021-05-07 南京信息工程大学 Tomato leaf disease classification method for improving VGG19
CN112884025B (en) * 2021-02-01 2022-11-04 安徽大学 Tea disease classification system based on multi-feature sectional type training
CN112884025A (en) * 2021-02-01 2021-06-01 安徽大学 Tea disease classification system based on multi-feature sectional type training
CN113627258A (en) * 2021-07-12 2021-11-09 河南理工大学 Apple leaf pathological detection method
CN113627258B (en) * 2021-07-12 2023-09-26 河南理工大学 Apple leaf pathology detection method
CN113989639A (en) * 2021-10-20 2022-01-28 华南农业大学 Hyperspectral image analysis processing method-based automatic litchi disease identification method and device
CN113989639B (en) * 2021-10-20 2024-04-16 华南农业大学 Automatic litchi disease identification method and device based on hyperspectral image analysis processing method
CN113989509B (en) * 2021-12-27 2022-03-04 衡水学院 Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition
CN113989509A (en) * 2021-12-27 2022-01-28 衡水学院 Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition
CN114494828A (en) * 2022-01-14 2022-05-13 中国农业大学 Grape disease identification method and device, electronic equipment and storage medium
CN114332087A (en) * 2022-03-15 2022-04-12 杭州电子科技大学 Three-dimensional cortical surface segmentation method and system for OCTA image
CN114842240A (en) * 2022-04-06 2022-08-02 盐城工学院 Method for classifying images of leaves of MobileNet V2 crops by fusing ghost module and attention mechanism
CN115631417A (en) * 2022-11-11 2023-01-20 生态环境部南京环境科学研究所 Butterfly image identification method based on convolutional neural network

Also Published As

Publication number Publication date
CN111860330B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN111860330B (en) Apple leaf disease identification method based on multi-feature fusion and convolutional neural network
AU2020102885A4 (en) Disease recognition method of winter jujube based on deep convolutional neural network and disease image
CN109308697B (en) Leaf disease identification method based on machine learning algorithm
WO2022160771A1 (en) Method for classifying hyperspectral images on basis of adaptive multi-scale feature extraction model
CN111369540B (en) Plant leaf disease identification method based on mask convolutional neural network
CN109344883A (en) Fruit tree diseases and pests recognition methods under a kind of complex background based on empty convolution
CN114341948A (en) System and method for plant species identification
CN106845497B (en) Corn early-stage image drought identification method based on multi-feature fusion
CN110222215B (en) Crop pest detection method based on F-SSD-IV3
CN111598001B (en) Identification method for apple tree diseases and insect pests based on image processing
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN107153840A (en) A kind of crop pests image-recognizing method based on convolutional Neural
Zhang et al. Robust image segmentation method for cotton leaf under natural conditions based on immune algorithm and PCNN algorithm
Hu et al. Self-adversarial training and attention for multi-task wheat phenotyping
Patil Pomegranate fruit diseases detection using image processing techniques: a review
Hu et al. Computer vision based method for severity estimation of tea leaf blight in natural scene images
CN114677606A (en) Citrus fine-grained disease identification method based on attention mechanism and double-branch network
CN113344009A (en) Light and small network self-adaptive tomato disease feature extraction method
CN113077452A (en) Apple tree pest and disease detection method based on DNN network and spot detection algorithm
CN111695560A (en) Method for actively positioning and focusing crop diseases and insect pests based on convolutional neural network
Rony et al. BottleNet18: Deep Learning-Based Bottle Gourd Leaf Disease Classification
CN111144464B (en) Fruit automatic identification method based on CNN-Kmeans algorithm
Cao et al. Plant leaf segmentation and phenotypic analysis based on fully convolutional neural network
Farahani et al. Identification of grape leaf diseases using proposed enhanced VGG16
Deng et al. A paddy field segmentation method combining attention mechanism and adaptive feature fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant