CN114067313A - Crop leaf disease identification method of bilinear residual error network model - Google Patents

Crop leaf disease identification method of bilinear residual error network model

Info

Publication number
CN114067313A
CN114067313A
Authority
CN
China
Prior art keywords
image
bilinear
feature
constructing
residual error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111376030.9A
Other languages
Chinese (zh)
Inventor
何云
李彤
马自飞
高泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan Agricultural University
Original Assignee
Yunnan Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan Agricultural University filed Critical Yunnan Agricultural University
Priority to CN202111376030.9A priority Critical patent/CN114067313A/en
Publication of CN114067313A publication Critical patent/CN114067313A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of digital agricultural pest image diagnosis, and particularly relates to a crop leaf disease identification method based on a bilinear residual network model, comprising the following steps: (1) constructing a feature extractor, (2) constructing a bilinear pooling function, and (3) constructing a classifier function. The method fuses two feature extractors in a bilinear manner and models the interaction of local features, so the model can extract fine-grained image features more fully in the disease image identification problem, thereby improving the accuracy of disease image identification. The method trains the residual network model end-to-end and can effectively exploit the local lesion feature information of disease images without image segmentation.

Description

Crop leaf disease identification method of bilinear residual error network model
Technical Field
The invention belongs to the technical field of digital agricultural pest image diagnosis, and particularly relates to a crop leaf disease identification method of a bilinear residual error network model.
Background
The crop leaf disease image identification is one of important support technologies for crop disease identification control, and can provide effective guarantee for the safety of agricultural production. How to rapidly and accurately identify the crop leaf diseases is one of the key research problems in the field of crop leaf disease image identification.
To address this problem, traditional machine learning methods were long applied to crop leaf disease image recognition; representative methods include support vector machines, linear discriminant analysis, K-means, and Bayesian networks. However, such methods require manual feature selection, which is labor-intensive, and the resulting models have limited recognition ability, generally achieving effective identification of only a few disease types of a single crop. In recent years, with the development of deep learning, its application has become a hotspot in crop disease image identification research. Compared with traditional machine learning, the greatest advantages of deep learning are that it can be applied end to end, extracts features automatically and accurately, and has achieved image recognition performance superior to traditional methods on many field datasets. The concept of deep learning derives from research on artificial neural networks; the main difference is that deep learning constructs deeper network models. As early as 2007, researchers applied artificial neural network technology to disease diagnosis of Phalaenopsis seedlings, but that method still required manual feature selection before use. In 2012, researchers proposed using convolutional neural networks (CNN) for image recognition and achieved the best error rate of 15.3% on that year's ImageNet classification dataset. Since then, researchers have explored how to apply CNN-based deep learning models to crop disease image recognition studies.
Representative results include the following. Image recognition of banana leaf diseases was realized based on a LeNet model, verifying the effectiveness of the deep learning method in complex environments. Researchers applied GoogLeNet to image recognition of 5 tomato leaf diseases, achieving 94.33% accuracy. Researchers applied AlexNet to the classification of 10 tomato diseases, obtaining 95.65% accuracy. Researchers applied a deep convolutional neural network (DCNN) model to identify disease images of 13 crops, with accuracy of 91%-98%. Researchers applied the ResNet model to the classification of 59 types of disease images, finally obtaining 85.22% accuracy. LeNet, ResNet, AlexNet and GoogLeNet are all classical convolutional neural network models. These studies fully demonstrate the effectiveness of deep learning for crop disease image identification and its ability to support more crop and disease types. The current research trend is no longer limited to simply judging whether a crop is healthy or diseased; the specific disease type of a diseased crop must be determined, i.e. the same model must be able to classify multiple disease types across different crops. A survey of this research shows that accuracy is difficult to maintain as the number of classification types grows. This finer-grained classification of the class to which an image belongs is generally referred to as a fine-grained image analysis problem. Unlike general image classification, the differences between fine-grained images lie in subtleties, making the extraction of discriminative features more difficult.
For a crop disease image, the main characteristic of disease diagnosis exists in tiny disease spots, and how to effectively analyze and detect the disease image and find important disease spot local area characteristic information is a key problem to be solved by the current fine-grained disease image identification algorithm.
In order to solve the problem, the invention provides a crop leaf disease identification method of a bilinear residual error network model. The method integrates two feature extractors in a bilinear mode, wherein one feature extractor is used for positioning the position of a local lesion spot, and the other feature extractor is used for extracting the features of the positioned lesion spot position. The capacity of the existing end-to-end disease image identification model is improved through the fusion of the two feature extractors.
Disclosure of Invention
In order to achieve the purpose, the invention provides a crop leaf disease identification method of a bilinear residual error network model, which integrates two feature extractors in a bilinear mode, wherein one feature extractor is used for positioning a local scab position, the other feature extractor is used for extracting the features of the positioned scab position, and the capability of the existing end-to-end disease image identification method is improved through the fusion of the two feature extractors.
In order to achieve the purpose, the invention adopts the technical scheme that: a crop leaf disease identification method of a bilinear residual error network model comprises the following steps:
A. constructing the feature extractors: two feature extractors f_RNA and f_RNB are constructed based on residual networks; each feature extractor is an n-layer residual network. For an input image I_t, f_RNA and f_RNB extract features at every position L_s of I_t, denoted f_RNA(I_t, L_s) and f_RNB(I_t, L_s) respectively. The feature extracted by this method for image I_t at position L_s is denoted f_BRM(I_t, L_s):
f_BRM(I_t, L_s) = f_RNA(I_t, L_s)^T f_RNB(I_t, L_s);
B. constructing the bilinear pooling function: the features f_BRM(I_t, L_s) extracted at all positions of the image are bilinearly pooled, yielding the bilinear feature vector vec_t:
vec_t = f_BP(Σ_{s∈t} f_BRM(I_t, L_s));
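The pooling of step B can be sketched in NumPy. This is a toy illustration under stated assumptions, not the patent's PyTorch implementation: channel and position counts are reduced from 512 and 49 for brevity, and the sum of per-position outer products is computed as a single matrix product.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Sum of per-position outer products, flattened to one vector.

    feat_a, feat_b: (C, P) arrays -- C channels, P spatial positions.
    """
    # sum_s a(:, s) b(:, s)^T equals the single matrix product A @ B^T
    m = feat_a @ feat_b.T          # (C, C) bilinear feature matrix
    return m.reshape(-1)           # flatten to a C*C vector

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 9))    # toy stand-ins for the 512 x 49 maps
b = rng.standard_normal((4, 9))
v = bilinear_pool(a, b)
print(v.shape)                     # (16,)
```

The matrix-product form is equivalent to summing the outer products position by position, but runs as one BLAS call.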
C. constructing the classifier function: the feature vector vec_t is input to the classifier function f_c, which computes the classification result. The classifier function f_c is defined as the softmax function:
f_c(z_j) = e^(z_j) / Σ_{i=1}^{n} e^(z_i),
wherein z_j is the output value of the j-th node, and n is the number of classification categories.
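A minimal sketch of the classifier step, assuming f_c is the standard softmax over the n output nodes (the numerically stable max-shift is an implementation convention, not part of the patent text):

```python
import numpy as np

def softmax(z):
    """f_c(z_j) = exp(z_j) / sum_i exp(z_i), computed stably."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())        # shifting by max leaves the ratios unchanged
    return e / e.sum()

p = softmax([1.0, 2.0, 3.0])       # generic node outputs, n = 3 categories
```

The result is a probability distribution: each entry lies in [0, 1] and the entries sum to 1, with larger node outputs receiving larger probabilities.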
Further, the specific steps of constructing the feature extractor include:
A1, constructing the basic network: an n-layer residual network comprising n-1 convolutional layers and 1 max-pooling layer. The input image is a 3-channel color image at 224 × 224 resolution. After the image is input, the first layer is a convolutional layer (7 × 7 kernel, 64 output channels, stride 2, padding 3); the second layer is a max-pooling layer (3 × 3 kernel, stride 2, padding 1); the third and subsequent layers are convolutional layers (3 × 3 kernel, stride 1, padding 1); wherein n is an integer from 18 to 50;
a2, when the number of convolutional layers is at least 3, a shortcut connection is added across every two convolutional layers to form a residual block; stacking multiple residual blocks forms the residual network. For every two layers, the residual is computed by alternately applying:
H(x)=f(x)+x,
H(x)=f(x)+w(x);
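The two residual forms above can be illustrated abstractly; in this sketch f and w are placeholder functions standing in for the trained convolution stacks, not the patent's actual layers:

```python
import numpy as np

def residual_block(x, f, w=None):
    """H(x) = f(x) + x (identity shortcut) or H(x) = f(x) + w(x) (projection)."""
    return f(x) + (x if w is None else w(x))

x = np.ones(3)
h_identity = residual_block(x, f=lambda v: 2.0 * v)    # f(x) + x
h_project = residual_block(x, f=lambda v: 2.0 * v,
                           w=lambda v: 0.5 * v)        # f(x) + w(x)
```

The shortcut lets gradients flow around f, which is what allows the n-layer networks of the method to be trained end-to-end.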
and A3, inputting the image into the feature extractors for calculation; the feature map output by each feature extractor has size 512 × 7 × 7.
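The 512 × 7 × 7 extractor output can be viewed as 7 × 7 = 49 position descriptors of 512 dimensions each; a small sketch of that view, assuming NumPy's row-major layout:

```python
import numpy as np

# 512 x 7 x 7 feature map -> 49 per-position descriptors of 512 dims each
fmap = np.arange(512 * 7 * 7, dtype=float).reshape(512, 7, 7)
per_pos = fmap.reshape(512, -1)    # (512, 49); position s = 7*row + col
print(per_pos.shape)
```

Column s of `per_pos` is the 512-dim descriptor the text denotes f_RN*(I_t, L_s).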
Further, the specific steps of constructing the bilinear pooling function include:
b1, after step A is executed, inputting the image into the network yields two different feature maps A and B, where each position of a feature map holds the image feature at one location, denoted f_RNA(I_t, L_s) and f_RNB(I_t, L_s) respectively; from these, the feature of image I_t at each position L_s is computed as:
f_BRM(I_t, L_s) = f_RNA(I_t, L_s)^T f_RNB(I_t, L_s);
b2, converting (reshape) the pooled bilinear feature matrix into a vector representation by direct tensor expansion:
vec_x = vec(Σ_{s∈t} f_BRM(I_t, L_s));
b3, applying a moment (signed square-root) normalization and then an L2 normalization to vec_x to obtain the fused bilinear feature vec_t, computed as:
vec_y = sign(vec_x) ⊙ sqrt(|vec_x|),
vec_t = vec_y / ||vec_y||_2.
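A sketch of the b3 normalization, assuming "moment normalization" denotes the elementwise signed square-root that is conventionally paired with L2 normalization for bilinear features (the `eps` guard against division by zero is an added assumption):

```python
import numpy as np

def normalize_bilinear(vec_x, eps=1e-12):
    """Signed square-root ('moment') step followed by L2 normalization."""
    vec_y = np.sign(vec_x) * np.sqrt(np.abs(vec_x))   # keeps signs, damps magnitudes
    return vec_y / (np.linalg.norm(vec_y) + eps)      # unit L2 norm

vec_t = normalize_bilinear(np.array([4.0, -9.0, 0.0]))
```

The signed square-root damps the large entries that the outer product produces, and the L2 step puts every image's vector on the unit sphere before classification.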
the beneficial technical effects of the invention are as follows:
(1) the method integrates two feature extractors in a bilinear mode and carries out interactive modeling extraction on local features. The model can more fully extract the fine-grained image features in the disease image recognition problem, so that the disease image recognition accuracy is improved.
(2) The method of the invention is used for training the residual error network model in an end-to-end mode, and can effectively utilize the local characteristic information of the disease spots of the disease image on the basis of not carrying out image segmentation.
(3) Compared with a single residual error network model, the method provided by the invention obtains a more accurate crop leaf disease image identification result.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a frame diagram of a crop leaf disease identification method of a bilinear residual error network model according to the present invention;
FIG. 2 is a schematic diagram of a basic network constructed in step A in an experiment of the crop leaf disease identification method using the bilinear residual error network model according to the present invention;
FIG. 3 is a frame diagram of a residual block of step A2 in an experiment of the method for identifying diseases of leaves of crops using a bilinear residual network model according to the present invention;
FIG. 4 is a graph of experimental comparison results of the crop leaf disease identification method of the bilinear residual error network model on a PlantVillage apple leaf image dataset;
FIG. 5 is a graph of experimental comparison results of the crop leaf disease identification method of the bilinear residual error network model on a PlantVillage corn leaf image dataset;
FIG. 6 is a graph of experimental comparison results of the crop leaf disease identification method of the bilinear residual error network model on a PlantVillage grape leaf image dataset;
FIG. 7 is a graph of experimental comparison results of the crop leaf disease identification method of the bilinear residual error network model on a PlantVillage potato leaf image dataset;
FIG. 8 is a graph of experimental comparison results of the crop leaf disease identification method of the bilinear residual error network model on a PlantVillage tomato leaf image dataset.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A basic framework of the crop disease image identification method based on the bilinear residual network model is shown in FIG. 1. The method consists of two feature extractors constructed from residual networks; their outputs are computed at each position, combined by a bilinear pooling function into a bilinear feature vector, and the disease image is classified and identified on the basis of this feature vector. The method performs interactive modeling and extraction of local features in a translation-invariant manner, extracts image features from two different perspectives, and is well suited to fine-grained classification.
The method specifically comprises the following steps:
step A: the feature extractor is constructed by adopting two residual error networks, namely a network A and a network B, wherein the network A and the network B work in a coordinated manner to complete two most important tasks in the disease image identification process: the method comprises lesion location and feature extraction, the end-to-end characteristic of the deep learning method is reserved, the extraction of local features can be further refined, the two feature extractors are n layers of residual error networks, and n is an integer of 18-50.
Step A constructs an n-layer residual network with n-1 convolutional layers in total and 1 max-pooling layer, where n is an integer from 18 to 50. The input image is a 3-channel color image with a resolution of 224 × 224. A shortcut connection is added across every two convolutional layers to form a residual block, and stacked residual blocks form the residual network. For every two layers, the residual is computed by alternately applying:
H(x)=f(x)+x,
H(x)=f(x)+w(x).
where w (x) is a convolution function that upsamples and downsamples the input x.
The input is a 3-channel color image at 224 × 224 resolution, and feature extraction is performed through the two networks separately. Each network generates a 512 × 7 × 7 feature map; each 512-dimensional slice represents the feature of image I_t at one position L_s, and each image is represented by 7 × 7 = 49 positions, so for the same image I_t the position index s ∈ [1, 49]. For the first position of an image I_t (i.e. s = 1), evaluating f_RNA(I_t, L_1) and f_RNB(I_t, L_1) yields two 512-dimensional vectors, for example:
f_RNA(I_t, L_1) = [0.213, 0.58, 0.11, ..., 0.476],
f_RNB(I_t, L_1) = [0.367, 0.987, 0.224, ..., 0.677].
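The per-position interaction of the two descriptors is their outer product, yielding a C × C matrix. Illustrated here with 3-dimensional truncations of the example vectors above; the truncation is for display only, the model uses 512 dimensions:

```python
import numpy as np

# 3-dim truncations of the example descriptors at position s = 1
f_a = np.array([0.213, 0.58, 0.11])
f_b = np.array([0.367, 0.987, 0.224])
m = np.outer(f_a, f_b)             # (3, 3) here; (512, 512) in the model
print(m.shape)
```

Entry (i, j) of the matrix pairs channel i of network A with channel j of network B, which is the interaction the method sums over all 49 positions.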
each image is input to a feature extractor, which generates feature vectors for 49 positions in total, and these feature vectors are input to the pooling function of step 2.
And B: each image is input into the feature extractor constructed in step a, and each feature extractor generates feature vectors of 49 positions in total.
Matrix multiplication combines these vectors into a fused vector, computed as:
F = Σ_{s∈t} f_RNA(I_t, L_s)^T f_RNB(I_t, L_s),
in which the features extracted by the two ResNet models are bilinearly fused by multiplication to obtain the feature matrix F ∈ R^(512×512).
Then this matrix is converted (reshape) into a vector representation:
vec_x = vec(F), a vector of length 512 × 512 = 262144.
Then a moment (signed square-root) normalization and an L2 normalization are applied to vec_x to obtain the fused bilinear feature vec_t:
vec_y = sign(vec_x) ⊙ sqrt(|vec_x|),
vec_t = vec_y / ||vec_y||_2.
Each image is thus represented by the vector vec_t.
Step C: finally, vec_t is input to the classifier function f_c, which computes the classification result. The classifier function f_c is defined as the softmax function:
f_c(z_j) = e^(z_j) / Σ_{i=1}^{n} e^(z_i),
wherein z_j is the output value of the j-th node, and n is the number of classification categories.
Assuming a total of 5 classes (1 healthy class and 4 disease classes), f_c(z_j) generates a probability distribution over the 5 classes, i.e. f_c(z_j) ∈ [0, 1].
Finally, the image is assigned to the category with the highest probability. Suppose a tomato leaf disease image is input into the method of the invention and the final calculation result is:
fc(z1)=0.28,fc(z2)=0.37,fc(z3)=0.76,fc(z4)=0.55,fc(z5)=0.41
The 3rd category has the highest probability and corresponds to tomato early blight, so the image is a leaf image of tomato early blight.
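Selecting the final class is an argmax over the classifier outputs; a sketch using the example scores above (the shift to a 1-based category index matches the text's numbering):

```python
import numpy as np

scores = np.array([0.28, 0.37, 0.76, 0.55, 0.41])   # f_c outputs from the text
pred = int(np.argmax(scores)) + 1                   # 1-based category index
print(pred)                                          # 3 -> tomato early blight
```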
Application example 1: apple image dataset experiment
To verify the validity of the method, it is implemented on the PyTorch platform. Experimental validation was performed on the public PlantVillage dataset, using the single residual network model approach (only one 18-layer residual network model) as the baseline. Verification is first carried out on the apple image dataset, whose basic information is as follows:
[Table: basic information of the PlantVillage apple leaf image dataset]
In the experiment, step 1 constructs an 18-layer residual network with 17 convolutional layers in total and 1 max-pooling layer. The input image is a 3-channel color image with a resolution of 224 × 224. In fig. 2, the first parameter of each layer denotes the convolution kernel size, conv denotes a convolutional layer, maxpool the max-pooling layer, s the stride, and p the padding. A shortcut connection is added across every two convolutional layers to form a residual block, and stacked residual blocks form the residual network. The solid and dashed arrows in fig. 2 are the added shortcut links: a solid arrow means x itself is used directly as the shortcut, a dashed arrow means x is not used directly. With H(x) denoting the mapping function of a residual block, this can be expressed as:
H(x)=f(x)+x,
H(x)=f(x)+w(x)。
where w (x) is a convolution function that upsamples and downsamples the input x. The specific connection and calculation of the residual block is shown in fig. 3.
The input is a 3-channel 224 × 224 color image, and feature extraction is performed through the two networks separately; each network generates a 512 × 7 × 7 feature map in which each 512-dimensional slice represents the feature of image I_t at one position L_s, and each image is represented by 7 × 7 = 49 positions.
Step 2: each image is input into the feature extractors constructed in step 1, and each feature extractor generates feature vectors for 49 positions in total. Matrix multiplication combines these vectors into a fused vector, computed as:
F = Σ_{s∈t} f_RNA(I_t, L_s)^T f_RNB(I_t, L_s),
in which the features extracted by the two feature extractors are bilinearly fused by multiplication. Then, following the steps of the method, matrix conversion and normalization are performed to obtain the feature vector.
And step 3: and classifying the vector by using a classifier to obtain a result.
In the experiment, 110 rounds of training and testing were performed on the apple image dataset, and the Loss function value (Loss), accuracy, recall, precision, and F1-harmonic mean performance index were recorded for each round. In image classification research, model parameters under the highest accuracy round number in a verification set are usually taken as final model parameters, and the performance of the model is taken as the optimal performance of the model for evaluation. The loss function values and accuracy variations of the 110 training runs are shown in fig. 4, and the optimal performance is shown in the following table.
[Table: optimal performance on the apple image dataset]
Compared with the single-model method, the method improves accuracy by 0.4732 percentage points, and recall, precision and F1 score by 0.61825, 0.36155 and 0.48875 percentage points respectively.
Application example 2: corn image dataset experiments
The basic procedure of this example is the same as in example 1, except that the dataset is replaced with the maize leaf disease image dataset from the PlantVillage dataset. The basic information of the dataset is as follows:
[Table: basic information of the PlantVillage maize leaf image dataset]
in the experiment, 110 training and testing cycles were performed on the corn image data set, the loss function values and the accuracy variation during the process are shown in fig. 5, and the optimal performance is shown in the following table.
[Table: optimal performance on the corn image dataset]
Compared with the single-model method, the method improves accuracy by 0.6494 percentage points, and recall, precision and F1 score by 0.7528, 1.8168 and 1.0917 percentage points respectively.
Application example 3: grape image dataset experiments
The basic procedure of this example is the same as example 1 except that the data set was replaced with a grape leaf lesion image data set in the PlantVillage data set. The basic information is as follows:
[Table: basic information of the PlantVillage grape leaf image dataset]
in the experiment, 110 training and testing cycles were performed on the grape image data set, the loss function values and the accuracy variation during the process are shown in fig. 6, and the optimal performance is shown in the following table.
[Table: optimal performance on the grape image dataset]
For the grape leaf disease image dataset, the method improves accuracy by 0.0764 percentage points over the single-model method, and recall, precision and F1 score by 0.3603, 1.3952 and 0.1488 percentage points respectively.
Application example 4: potato image dataset experiments
The basic procedure of this example is the same as in example 1, except that the dataset is replaced with the potato leaf disease image dataset from the PlantVillage dataset. The basic information is as follows:
[Table: basic information of the PlantVillage potato leaf image dataset]
in the experiment, 110 rounds of training and testing were performed on a potato image data set, the loss function values and the accuracy variation during the process are shown in fig. 7, and the optimal performance is shown in the following table.
[Table: optimal performance on the potato image dataset]
Compared with the single-model method, the method improves accuracy by 0.2325 percentage points, and recall, precision and F1 score by 0.2106, 0.077 and 0.164 percentage points respectively.
Application example 5: tomato image dataset experiments
The basic procedure of this example is the same as in example 1, except that the dataset is replaced with the tomato leaf disease image dataset from the PlantVillage dataset. The basic information is as follows:
[Table: basic information of the PlantVillage tomato leaf image dataset]
In the experiment, 110 rounds of training and testing were performed on the tomato image dataset; the loss function values and accuracy variation during the process are shown in fig. 8, and the optimal performance is shown in the following table.
[Table: optimal performance on the tomato image dataset]
For the tomato leaf disease image dataset, the method improves accuracy by 0.0275 percentage points over the single-model method, and recall, precision and F1 score by 1.0173, 0.4315 and 0.7143 percentage points respectively.
Finally, it should be noted that: although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that: modifications and equivalents may be made thereto without departing from the spirit and scope of the invention and it is intended to cover in the claims the invention as defined in the appended claims.

Claims (3)

1. A crop leaf disease identification method of a bilinear residual error network model is characterized by comprising the following steps:
A. constructing feature extractors: two feature extractors f_RNA and f_RNB are constructed based on residual networks, each being an n-layer residual network; for an input image I_t, f_RNA and f_RNB extract features at every position L_s of I_t, denoted f_RNA(I_t, L_s) and f_RNB(I_t, L_s) respectively; the feature extracted by the method for image I_t at position L_s is denoted f_BRM(I_t, L_s):
f_BRM(I_t, L_s) = f_RNA(I_t, L_s)^T f_RNB(I_t, L_s);
B. constructing a bilinear pooling function: the features f_BRM(I_t, L_s) extracted at all positions of the image are bilinearly pooled, yielding the bilinear feature vector vec_t:
vec_t = f_BP(Σ_{s∈t} f_BRM(I_t, L_s));
C. constructing a classifier function: the feature vector vec_t is input to the classifier function f_c, which computes the classification result; the classifier function f_c is defined as the softmax function:
f_c(z_j) = e^(z_j) / Σ_{i=1}^{n} e^(z_i),
wherein z_j is the output value of the j-th node, and n is the number of classification categories.
2. The method of claim 1, wherein the step of constructing the feature extractor comprises:
a1, constructing the basic network: an n-layer residual network comprising n-1 convolutional layers and 1 max-pooling layer; the input image is a 3-channel color image at 224 × 224 resolution; after the image is input, the first layer is a convolutional layer (7 × 7 kernel, 64 output channels, stride 2, padding 3); the second layer is a max-pooling layer (3 × 3 kernel, stride 2, padding 1); the third and subsequent layers are convolutional layers (3 × 3 kernel, stride 1, padding 1); wherein n is an integer from 18 to 50;
a2, when the number of convolutional layers is at least 3, a shortcut connection is added across every two convolutional layers to form a residual block; stacking multiple residual blocks forms the residual network; for every two layers, the residual is computed by alternately applying:
H(x)=f(x)+x,
H(x)=f(x)+w(x);
and a3, inputting the image into the feature extractors for calculation; the feature map output by each feature extractor has size 512 × 7 × 7.
3. The method of claim 1, wherein the specific steps of constructing the bilinear pooling function comprise:
B1, after step A is executed, two different feature maps A and B are obtained from the input image, wherein each dimension of a feature map is the image feature at one position, denoted f_RNA(I_t, L_s) and f_RNB(I_t, L_s) respectively; based on these, the feature of image I_t at each position L_s is calculated as:
f_BRM(I_t, L_s) = f_RNA(I_t, L_s)^T · f_RNB(I_t, L_s);
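The per-location outer product in step B1, pooled over all spatial locations, is the standard bilinear-CNN operation; a numpy sketch with toy shapes (not the patented code) is:

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear feature: sum over locations L_s of f_A(I,L_s)^T f_B(I,L_s).

    fa, fb: feature maps of shape (C, H, W) from the two extractors.
    Returns a C x C bilinear feature matrix.
    """
    C, H, W = fa.shape
    a = fa.reshape(C, H * W)   # each column is the C-dim feature at one L_s
    b = fb.reshape(C, H * W)
    return a @ b.T

# with the 512 x 7 x 7 extractor outputs this yields a 512 x 512 matrix;
# small shapes are used here to keep the example fast
phi = bilinear_pool(np.random.rand(4, 2, 2), np.random.rand(4, 2, 2))
```

When both feature maps come from the same extractor the result is symmetric, which is a quick sanity check on the implementation.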
B2, converting (reshape) the matrix into a vector representation by directly performing tensor expansion of the matrix:
vec_x = vec( Σ_{L_s} f_BRM(I_t, L_s) );
B3, performing a moment normalization operation and an L2 normalization operation on vec_x, respectively, to obtain the fused bilinear feature vec_t.
The specific calculation method is as follows:
vec_y = sign(vec_x) · sqrt(|vec_x|),
vec_t = vec_y / ||vec_y||_2.
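Steps B2–B3 (flatten, signed square-root "moment" normalization, then L2 normalization) can be sketched as follows; this mirrors the usual bilinear-CNN normalization, assumed here because the claim's formulas are image placeholders in the source:

```python
import numpy as np

def fuse_bilinear(phi):
    """B2-B3: flatten the C x C bilinear matrix, apply signed square-root
    (moment) normalisation, then L2-normalise to unit length."""
    vec_x = phi.reshape(-1)                           # B2: matrix -> vector
    vec_y = np.sign(vec_x) * np.sqrt(np.abs(vec_x))   # B3: moment normalisation
    return vec_y / (np.linalg.norm(vec_y) + 1e-12)    # B3: L2 normalisation

vec_t = fuse_bilinear(np.array([[4.0, -1.0], [0.0, 9.0]]))
```

The output vector has (approximately) unit L2 norm and preserves the sign of each bilinear entry, which is what the two normalization steps require.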
CN202111376030.9A 2021-11-19 2021-11-19 Crop leaf disease identification method of bilinear residual error network model Withdrawn CN114067313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111376030.9A CN114067313A (en) 2021-11-19 2021-11-19 Crop leaf disease identification method of bilinear residual error network model


Publications (1)

Publication Number Publication Date
CN114067313A true CN114067313A (en) 2022-02-18

Family

ID=80278780


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114548190A (en) * 2022-04-27 2022-05-27 西安易诺敬业电子科技有限责任公司 Wind turbine fault diagnosis method based on self-adaptive residual error neural network
CN115937471A (en) * 2023-03-10 2023-04-07 云南农业大学 Shanghai green morphological model and visualization method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111199214A (en) * 2020-01-04 2020-05-26 西安电子科技大学 Residual error network multispectral image ground feature classification method
CN112989912A (en) * 2020-12-14 2021-06-18 北京林业大学 Oil tea fruit variety identification method based on unmanned aerial vehicle image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Pengpeng: "Research on Identification of Tomato Foliar Diseases and Pests Based on Deep Learning", China Master's Theses Full-text Database (Agricultural Science and Technology) *


Similar Documents

Publication Publication Date Title
CN108492297B (en) MRI brain tumor positioning and intratumoral segmentation method based on deep cascade convolution network
Liu et al. A framework of wound segmentation based on deep convolutional networks
CN114067313A (en) Crop leaf disease identification method of bilinear residual error network model
Özbılge et al. Tomato disease recognition using a compact convolutional neural network
CN113344864A (en) Ultrasonic thyroid nodule benign and malignant prediction method based on deep learning
CN112651450B (en) Medical image classification method based on multi-example deep learning
CN111956208B (en) ECG signal classification method based on ultra-lightweight convolutional neural network
CN111598894B (en) Retina blood vessel image segmentation system based on global information convolution neural network
CN113344933B (en) Glandular cell segmentation method based on multi-level feature fusion network
CN107944479A (en) Disease forecasting method for establishing model and device based on semi-supervised learning
CN113808747B (en) Ischemic cerebral apoplexy recurrence prediction method
CN113288157A (en) Arrhythmia classification method based on depth separable convolution and improved loss function
CN110738660A (en) Spine CT image segmentation method and device based on improved U-net
CN115985503B (en) Cancer prediction system based on ensemble learning
CN115294075A (en) OCTA image retinal vessel segmentation method based on attention mechanism
CN113627391B (en) Cross-mode electroencephalogram signal identification method considering individual difference
CN117132849A (en) Cerebral apoplexy hemorrhage transformation prediction method based on CT flat-scan image and graph neural network
CN113408603B (en) Coronary artery stenosis degree identification method based on multi-classifier fusion
Khan et al. GLNET: global–local CNN's-based informed model for detection of breast cancer categories from histopathological slides
Dong et al. Supervised learning-based retinal vascular segmentation by m-unet full convolutional neural network
Xing et al. ZooME: Efficient melanoma detection using zoom-in attention and metadata embedding deep neural network
CN117195027A (en) Cluster weighted clustering integration method based on member selection
CN115937590A (en) Skin disease image classification method with CNN and Transformer fused in parallel
CN116433679A (en) Inner ear labyrinth multi-level labeling pseudo tag generation and segmentation method based on spatial position structure priori
CN116797817A (en) Autism disease prediction technology based on self-supervision graph convolution model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220218