CN115601634A - Image leaf identification method and device based on hierarchical attention mechanism - Google Patents

Image leaf identification method and device based on hierarchical attention mechanism

Info

Publication number
CN115601634A
CN115601634A
Authority
CN
China
Prior art keywords
image
sampling
attention
candidate
input image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110710190.6A
Other languages
Chinese (zh)
Inventor
王蕊
邹聪
操晓春
吕飞霄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS filed Critical Institute of Information Engineering of CAS
Priority to CN202110710190.6A
Publication of CN115601634A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image leaf identification method and device based on a hierarchical attention mechanism. The method comprises: generating class activation maps of an input image at different levels; fusing the class activation maps of the different levels into an attention map, and obtaining a plurality of different sampled images by using the attention map; and concatenating the feature vector of each sampled image with the feature vector of the input image, and obtaining an image leaf identification result based on the resulting concatenated feature vector. The invention constructs a hierarchical attention mechanism based on the classification strategy of human botanists and enlarges the key regions of a picture by combining a sampling transformation with deep learning, thereby achieving high accuracy.

Description

Image leaf identification method and device based on hierarchical attention mechanism
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an image leaf identification method and device based on a hierarchical attention mechanism, which can accurately identify leaf categories in natural scenes.
Background
Plants are widely distributed in natural habitats and human society, provide nutrients required for human life, and play an important role in regulating the carbon-oxygen balance and environmental temperature and humidity. Plant classification is of great significance for species protection, agriculture and the garden industry, but manual classification is costly: it requires classifiers with highly specialized botanical knowledge and long training, so automatic plant classification has greater application prospects and research necessity. Automatic plant classification covers the automatic classification of flowers, fruits, leaves and tissues; compared with flowers and fruits, which vary greatly across growth stages, leaf characteristics are more stable, so leaf classification is one of the main routes to automatic plant classification.
In computer vision, despite many attempts (such as complex computational vision algorithms), plant identification is still considered a challenging unsolved problem because of the large number of plant species in nature and the small shape differences between similar species. The main difficulty of plant leaf identification is that inter-class differences are small while intra-class variance is large. On the one hand, compared with general image recognition, leaves of similar plant species often differ only in a few slight details, which places high demands on the recognition capability of the model: it must be able to locate discriminative regions on the leaf so that the leaf can be further classified correctly. On the other hand, leaf images of the same plant species vary widely; for example, the shape, color and venation pattern of leaves change greatly at different growth stages, which further increases the difficulty of automatic leaf identification.
In general, most plants can be identified well by processing plant leaf images with semi-supervised machine learning methods, but such methods are time-consuming and labor-intensive and are difficult to popularize further. At present, with the development of imaging technology, automatic classification of leaf images after computer-based feature extraction has become the mainstream approach. In related research, leaf features are generally extracted manually; for example, leaves are often classified by the shape differences between different leaves, and the leaf margin, as an important feature, is usually taken as an extraction target. In addition, vein texture detection, Singular Value Decomposition (SVD) combined with Sparse Representation (SR) applied to dimension-reduced plant images, and multi-component shape moment invariants combined with support vector machines and artificial neural networks have also achieved some success. These methods rely on feature selection and manual processing, and handling higher-level and multi-scale features adds significant complexity and effort, which reduces their versatility as the number and variety of identified plants increase.
In recent years, owing to their excellent performance in computer vision, deep convolutional neural networks have become the main means of solving image classification, image recognition, semantic segmentation and similar problems. Deep learning has achieved good results in plant classification, and its overall performance, especially its generalization, is superior to most hand-crafted feature extraction and classification methods. However, most recent leaf identification methods train a deep convolutional neural network end to end with leaf features at a single scale, which limits identification accuracy. In addition, data-driven deep learning requires a large amount of labeled training data, and because fine-grained labels are difficult to obtain in practice, the application of deep-learning-based leaf identification in real scenarios is limited. Therefore, it is necessary to develop a method that accurately identifies leaves using the multi-scale features of leaf images.
Disclosure of Invention
The invention provides an image leaf identification method and device based on a hierarchical attention mechanism, aimed at images in natural scenes. Inspired by the classification strategy of human botanists and based on the inherent hierarchical structure of deep neural networks, a hierarchical attention mechanism is designed to extract leaf features from coarse to fine; selective sampling is then performed according to the attention maps for image enhancement, and finally an S-CNN is trained for leaf image classification. In particular, the hierarchical attention mechanism uses Class Activation Maps (CAMs) to construct an attention map that can locate the regions most relevant to leaf identification. In addition, a trainable Gaussian kernel is used to transform the original attention map, suppressing background regions while enhancing key regions.
The technical scheme adopted by the invention is as follows:
An image leaf identification method based on a hierarchical attention mechanism comprises the following steps:
1) Generating class activation maps of the input image at different levels;
2) Fusing the class activation maps of different levels into an attention map, and obtaining a plurality of different sampled images by using the attention map;
3) Concatenating the feature vector of each sampled image with the feature vector of the input image, and obtaining an image leaf identification result based on the resulting concatenated feature vector.
Further, the class activation maps of the different levels are obtained by the following steps (an illustrative sketch follows these steps):
1) Inputting the input image into a convolutional neural network ResNet-50 to respectively obtain a first-stage output feature of size 8n×8n, a second-stage output feature of size 4n×4n, a third-stage output feature of size 2n×2n and a fourth-stage output feature of size n×n, where n ≥ 1;
2) Generating class activation maps of sizes 4n×4n, 2n×2n and n×n from the second-stage, third-stage and fourth-stage output features, respectively.
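As an illustration only, the following PyTorch-style sketch shows one way these two steps could be realized with a torchvision ResNet-50 and one auxiliary linear classifier per stage; the names stage_features, stage_classifiers and class_activation_map are illustrative and not taken from the patent:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50(num_classes=180)  # 180 leaf classes, as in the embodiment below

def stage_features(x):
    """Return the stage-2/3/4 outputs of ResNet-50 (sizes 4n x 4n, 2n x 2n, n x n)."""
    x = backbone.conv1(x); x = backbone.bn1(x); x = backbone.relu(x); x = backbone.maxpool(x)
    f1 = backbone.layer1(x)   # 56 x 56 for a 224 x 224 input (not used by the attention mechanism)
    f2 = backbone.layer2(f1)  # 28 x 28
    f3 = backbone.layer3(f2)  # 14 x 14
    f4 = backbone.layer4(f3)  # 7 x 7
    return f2, f3, f4

# one auxiliary linear classifier per stage; its weight vectors yield the class activation maps
stage_classifiers = nn.ModuleList([nn.Linear(c, 180) for c in (512, 1024, 2048)])

def class_activation_map(feat, classifier, label):
    """CAM of one image (feat: C x H x W): weighted sum of feature channels for class `label`."""
    w = classifier.weight[label]               # (C,)
    return torch.einsum('c,chw->hw', w, feat)  # (H, W)
```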
Further, the attention map is synthesized by the following steps (see the sketch after these steps):
1) Enlarging the class activation maps of sizes 2n×2n and n×n to size 4n×4n by bilinear interpolation;
2) Performing a weighted fusion of the two enlarged results and the class activation map of size 4n×4n to obtain the attention map.
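A minimal sketch of this fusion, assuming manually chosen fusion weights α_i (the values below are placeholders):

```python
import torch.nn.functional as F

def fuse_attention(cam_2, cam_3, cam_4, alphas=(1.0, 1.0, 1.0)):
    """Enlarge the 2n x 2n and n x n CAMs to 4n x 4n by bilinear interpolation and fuse with weights."""
    size = cam_2.shape[-2:]                        # 4n x 4n, e.g. 28 x 28
    up3 = F.interpolate(cam_3[None, None], size=size, mode='bilinear', align_corners=False)[0, 0]
    up4 = F.interpolate(cam_4[None, None], size=size, mode='bilinear', align_corners=False)[0, 0]
    return alphas[0] * cam_2 + alphas[1] * up3 + alphas[2] * up4
```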
Further, the several different sampled images are obtained by the following steps (a sketch of steps 1)-2) follows this list):
1) Sliding a window over the attention map and taking the local maxima within the different windows as candidate extreme points;
2) Grouping the candidate extreme points using a set threshold;
3) Smoothing each group of candidate points with a learnable Gaussian kernel to obtain a plurality of smoothed sampling maps;
4) Selectively sampling the input image according to each sampling map to obtain a corresponding sampled image.
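Steps 1)-2) can be sketched as follows, using max-pooling as the sliding window; the window size and the threshold δ are illustrative assumptions rather than values from the patent:

```python
import torch
import torch.nn.functional as F

def candidate_extreme_points(attention, window=3, delta=0.5):
    """Find local maxima of the attention map and split them into two groups by the threshold delta."""
    a = attention[None, None]                                   # (1, 1, H, W)
    pooled = F.max_pool2d(a, kernel_size=window, stride=1, padding=window // 2)
    is_peak = (a == pooled)[0, 0]                               # a point equal to its window maximum
    ys, xs = torch.nonzero(is_peak, as_tuple=True)
    discriminative, supplementary = [], []                      # T_d and T_c of the text
    for x, y in zip(xs.tolist(), ys.tolist()):
        (discriminative if attention[y, x] > delta else supplementary).append((x, y))
    return discriminative, supplementary
```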
Further, the candidate points are grouped according to T_d = {(x, y) | (x, y) ∈ T, A_{x,y} > δ} and T_c = {(x, y) | (x, y) ∈ T, A_{x,y} ≤ δ}, where (x, y) are the coordinates of a candidate extreme point, A_{x,y} is the value of the candidate extreme point (x, y) on the attention map, δ is the set threshold, T_d is the discriminative group and T_c is the supplementary group.
Further, the smoothed attention map A_{i,x,y} corresponding to the candidate extreme point (x, y) is given by an equation that appears only as an image in the original filing: a Gaussian function of the distance from (x, y), where β is the parameter of the Gaussian kernel, (x, y) are the coordinates of the candidate extreme point and A_{x,y} is the value of the candidate extreme point (x, y) on the attention map.
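Since the smoothing equation itself appears only as an image, the sketch below assumes one plausible form: a β-parameterized Gaussian centered on each candidate point and scaled by its attention value.

```python
import torch

def smoothed_sampling_map(attention, points, beta):
    """Assumed form: sum over candidate points of A_{x,y} * exp(-beta * ((u - x)^2 + (v - y)^2))."""
    h, w = attention.shape
    vs, us = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    q = torch.zeros_like(attention)
    for x, y in points:                          # (x, y) = (column, row) of a candidate extreme point
        dist2 = ((us - x) ** 2 + (vs - y) ** 2).float()
        q = q + attention[y, x] * torch.exp(-beta * dist2)
    return q
```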
Further, the feature vector of each sampled image and the feature vector of the input image are obtained by the following steps (a sketch follows these steps):
1) Extracting the features of each sampled image and of the input image with the parameter-sharing convolutional neural network S-CNN;
2) Obtaining the feature vector of each sampled image and the feature vector of the input image from the extracted features.
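A sketch of the two-branch, parameter-sharing extraction and concatenation; feature_extractor stands for the shared convolutional trunk and the dimensions are those of ResNet-50, both assumptions made for illustration:

```python
import torch
import torch.nn as nn

class SCNNHead(nn.Module):
    """Applies one shared feature extractor to the input image and the two sampled images."""
    def __init__(self, feature_extractor, feat_dim=2048, num_classes=180):
        super().__init__()
        self.features = feature_extractor              # the same module (shared weights) for every branch
        self.gap = nn.AdaptiveAvgPool2d(1)             # global mean pooling: feature tensor -> vector
        self.fc = nn.Linear(feat_dim * 3, num_classes) # original image + two sampled images

    def forward(self, image, sampled_d, sampled_c):
        vecs = [self.gap(self.features(x)).flatten(1) for x in (image, sampled_d, sampled_c)]
        return self.fc(torch.cat(vecs, dim=1))         # scores from the concatenated feature vector
```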
Further, the S-CNN is trained end to end, and the loss function is a cross-entropy loss function.
Further, the image leaf identification result is obtained by the following steps (see the sketch after these steps):
1) Inputting the concatenated feature vector into a fully connected layer to obtain a score for each category;
2) Normalizing the scores into a probability distribution with the softmax function;
3) Taking the category with the largest normalized value as the image leaf identification result.
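The final decision then reduces to a softmax normalization followed by an argmax, for example:

```python
import torch.nn.functional as F

def predict(logits):
    """Normalize the per-class scores with softmax and return the most probable class index."""
    probs = F.softmax(logits, dim=-1)
    return probs, probs.argmax(dim=-1)
```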
Further, the identification accuracy acc = n/N is used as the evaluation index, where n is the number of correctly identified samples and N is the total number of test samples.
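For reference, the evaluation metric can be computed as:

```python
def accuracy(predictions, labels):
    """acc = n / N: n correctly identified samples out of N test samples."""
    n = sum(int(p == t) for p, t in zip(predictions, labels))
    return n / len(labels)
```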
A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above method when executed.
An electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is arranged to run the computer program to perform the method described above.
Compared with the prior art, the invention has the advantages that:
1. A hierarchical attention mechanism is provided that conforms to the classification strategy of human botanists.
2. Key regions of the picture are enlarged by a sampling transformation, which facilitates leaf identification.
3. Combined with deep learning, high accuracy is obtained in leaf identification.
Drawings
FIG. 1 is a diagram of the image leaf identification network framework based on the hierarchical attention mechanism.
Detailed Description
The present invention will be described in further detail below with reference to specific examples and the accompanying drawings.
The image leaf identification method based on the hierarchical attention mechanism is shown in FIG. 1 and mainly comprises a training stage and a testing stage.
The training phase comprises the following steps:
1) Passing the training pictures through a convolutional neural network and the hierarchical attention mechanism to obtain a plurality of attention maps for the subsequent sampling process;
the treatment process of the step 1) comprises the following steps: firstly, a training picture passes through a convolutional neural network to obtain the multi-level characteristics of the training picture. With the hierarchical structure of the convolutional neural network, a plurality of classifiers are constructed for generating class activation maps of different levels. In the convolutional neural network ResNet-50 used in the present invention, for the leaf image with an input size of 224 × 224, the output characteristic size of the first stage of ResNet-50 is 56 × 56, the output characteristic size of the second stage is reduced to 28 × 28, the output characteristic size of the third stage is 14 × 14, and the output characteristic size of the last stage is 7*7. The resulting three class activation maps were 28 × 28, 14 × 14 and 7*7, respectively.
Because shallow features are not robust and are unsuitable for image recognition, the hierarchical attention mechanism does not use the image features of the first stage. It uses the feature outputs of the second, third and fourth stages of the convolutional neural network ResNet-50, trains three classifiers of different granularities, and feeds the classifier weights and the features of the corresponding stages into the class activation map method to obtain three class activation maps of different granularities.
This step is inspired by the classification strategy of human botanists and closely simulates their classification process. The botanist's strategy has three steps: first observe the outline of the leaf, then observe its shape and color, and finally observe its texture information. The hierarchical attention mechanism provided by the invention focuses on the edge and contour information of the input leaf image at the shallow layers of the network, on the shape and color information of the leaf at the middle layers, and on the texture information of the leaf at the high layers, which accords with the human strategy for classifying plant leaves. Compared with the black-box model of a conventional convolutional neural network, the convolutional neural network model of the invention is more consistent with human cognition and more interpretable.
2) Based on the class activation maps obtained in step 1), fusing them by bilinear interpolation into an attention map of a single size, and obtaining two different sampled images from the attention map through a sampling transformation network to increase the diversity of the images.
In step 2), the 7 × 7 and 14 × 14 class activation maps are bilinearly interpolated to 28 × 28 and fused, with weights, together with the 28 × 28 class activation map to obtain a 28 × 28 attention map. Local maxima are then found on the attention map with a sliding window to obtain candidate extreme points. These candidate extreme points indicate regions with higher response values, which are more likely to be the key regions for classification. The candidate extreme points are then grouped using a set threshold so as to obtain a wider variety of image features. After grouping, the two groups of candidate points are each smoothed with a learnable Gaussian kernel to obtain two smoothed sampling maps, and the original image is selectively sampled according to these sampling maps: important foreground regions are enlarged while unimportant background regions and regions irrelevant to classification are shrunk, yielding two sampled images. Specifically, the method comprises the following steps:
construction of a sample graph Q using a smooth attention graph d And Q c ,Q d And Q c The attention area with two sets of candidate extreme points collected produces a summation effect, which enables each sample graph to focus on a plurality of areas beneficial to classification, without being limited by the artificially defined number:
Figure BDA0003133327450000051
Figure BDA0003133327450000052
the image sampling formula is X new = X (f (v), g (v)), where f and g are sampling functions indicating from which position in the original image the sampling is performed, and the sampling coordinate functions f and g satisfy:
Figure BDA0003133327450000053
the solution satisfies:
Figure BDA0003133327450000054
Figure BDA0003133327450000055
the k is a Gaussian kernel to weight different coordinate positions, the weight of the coordinate values far away from the central point is smaller, and the weight of the coordinate values close to the central point is larger, so that the image area near the candidate extreme point can be sampled more, the image area of the candidate extreme point can be sampled less, and the functions of a key foreground area, a background area and an irrelevant area are realized in an amplifying mode.
After the sampled images I_d and I_c are obtained, they are fed into the two branches of the S-CNN; features are extracted from each, the feature vectors v_d and v_c are obtained through global mean pooling, and these are concatenated with the feature vector v of the original input image and fed into the classifier to obtain the final classification result y.
3) Feeding the sampled images obtained in step 2) into the S-CNN and extracting their features separately for the final identification.
In step 3), feature extraction is first performed on the two sampled images and their feature vectors are obtained through global mean pooling; these are then concatenated with the image feature vector obtained from the original image to form the feature vector used for classification, which is passed through one fully connected layer to obtain per-category scores that are normalized with the softmax function. Finally, the category with the largest output value is taken as the recognition result of the image. In FIG. 1, conv1_1, conv1_2, conv1_3, conv1_4, conv2_1, conv2_2, conv2_3 and conv2_4 all denote convolutional layers of the different stages of ResNet-50. GAP refers to global mean pooling, which converts a feature tensor into a feature vector so that it can be fed into a linear classifier for classification. The upper and lower networks of the S-CNN share parameters, so the parameters of the two networks are identical. The network is trained end to end, and the objective function is a cross-entropy loss function.
The test phase comprises the following steps:
1) Inputting a test picture into the trained network; the output category is the network's identification result.
2) Evaluating the test results with the accuracy acc.
The leaf identification network of the invention is described further below. The image leaf identification method based on the hierarchical attention mechanism builds mainly on the S-CNN. The hierarchical attention mechanism exploits the inherent hierarchical structure of the convolutional neural network and first computes class activation maps from the features of the different levels:
CAM_i = Σ_{c=1}^{C} w_c · f_{i,c},
where CAM_i is the class activation map of the i-th level, f_{i,c} is the c-th channel of the image features of that level, w_c is the classification weight of the correct category for channel c in the corresponding classifier, and C is the total number of channels. By weighting the features with the classification weights of the correct class, an attention map for the correct class can be constructed that shows how important the different regions of the image are for classification.
The multi-level class activation maps are then fused with weights to obtain an attention map that represents the importance of the image regions:
A = Σ_i α_i · CAM_i,
where α_i are the manually determined fusion weights of the different levels and A is the final attention map; the invention uses class activation maps from three levels (the lower-resolution maps being first enlarged by bilinear interpolation, as described above).
For the sampling network, a Gaussian kernel is used for smoothing. The smoothing equation (given only as an image in the original filing) produces A_{i,x,y}, the smoothed attention map corresponding to the candidate extreme point at position (x, y), as a Gaussian function of the distance from (x, y) whose parameter β is obtained by network training. Through the smoothing action of the Gaussian kernel, the image around a candidate extreme point is enlarged while the interference from background and irrelevant regions is suppressed.
The sampled image obtained after sampling is X_new = X(f(v), g(v)), where f and g are sampling functions indicating from which position in the original image each output pixel is taken, and f and g satisfy the constraint described above. Solving it for the two sampling maps Q_d and Q_c yields two different sampled images.
Features are then extracted from the sampled images with the S-CNN and pooled into feature vectors by global mean pooling; the feature vectors of the two sampled images are concatenated with the feature vector of the original image to obtain the aggregated image feature, which is fed into a linear classifier to obtain the classification result. Specifically, the linear classifier of the invention is implemented as a single fully connected layer; its output is turned into a probability distribution by a softmax transformation, and the dimension of the output probability vector equals the number of leaf classes. The network's identification result is the class whose dimension holds the largest probability value.
The network is trained end to end with a cross-entropy loss function. The total loss combines the classifier losses of the three different stages in the hierarchical attention mechanism, the two sampled-image branches and the aggregated image feature:
L = Σ_{j=1}^{6} L_j,
where j is the index of the j-th classifier, L_j is its cross-entropy loss and the total number of classifiers is 6.
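Assuming the six classifier outputs (three stage classifiers, the two sampled-image branches and the aggregated feature) are available as logits, the objective can be sketched as:

```python
import torch.nn.functional as F

def total_loss(logits_list, target):
    """Sum of the cross-entropy losses of all 6 classifiers."""
    assert len(logits_list) == 6
    return sum(F.cross_entropy(logits, target) for logits in logits_list)
```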
The test environment and experimental results of the image leaf identification method based on the hierarchical attention mechanism provided by the invention are as follows:
(1) Test environment:
System environment: Ubuntu 16.04;
Hardware environment: memory: 16 GB, GPU: TITAN Xp, hard disk: 1 TB;
(2) Experimental data:
training data:
and (3) training to converge an objective function by using a Leaf-Snap data set comprising 3300 training sets with 180 different blades in total, so that the performance of the model reaches a stable value.
Data preprocessing: both the training and the test leaf images are resized to 224 × 224 by bilinear interpolation. During training, random cropping and random flipping are used for data augmentation.
Training optimization method: ADAM with an initial learning rate of 0.01, reduced to 0.96 times its previous value every 4 training epochs.
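Read as "multiply the learning rate by 0.96 every 4 epochs", this corresponds roughly to the following PyTorch setup (the interpretation of the decay schedule is an assumption):

```python
import torch

def make_optimizer(model):
    """ADAM with initial learning rate 0.01; the LR is multiplied by 0.96 every 4 epochs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    # call scheduler.step() once per epoch during training
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.96)
    return optimizer, scheduler
```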
Test data: the Leaf-Snap test set (6600 images).
Evaluation method: the identification accuracy on the test set is calculated.
(3) The experimental results are as follows:
to illustrate the effectiveness of the present invention, the same data set was used to test whether the hierarchical attention mechanism proposed by the present invention was used, using the Leaf-Snap test set.
The results are shown in Table 1 below:
TABLE 1 Comparison of test results without and with the hierarchical attention mechanism
Serial number | Method | Acc
1 | The invention (without the hierarchical attention mechanism) | 0.964
2 | The invention (with the hierarchical attention mechanism) | 0.993
Here Acc refers to accuracy. As the table shows, the leaf identification method based on the hierarchical attention mechanism yields a clear improvement and a higher overall level of accuracy.
The above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it; a person skilled in the art may modify the technical solution of the present invention or substitute equivalents without departing from the spirit and scope of the present invention, and the protection scope of the present invention should be determined by the claims.

Claims (10)

1. An image leaf identification method based on a hierarchical attention mechanism, comprising the following steps:
1) Generating class activation maps of the input image at different levels;
2) Fusing the class activation maps of different levels into an attention map, and obtaining a plurality of different sampled images by using the attention map;
3) Concatenating the feature vector of each sampled image with the feature vector of the input image, and obtaining an image leaf identification result based on the resulting concatenated feature vector.
2. The method of claim 1, wherein the class activation maps of different levels are obtained by:
1) Inputting the input image into a convolutional neural network ResNet-50 to respectively obtain a first-stage output feature of size 8n×8n, a second-stage output feature of size 4n×4n, a third-stage output feature of size 2n×2n and a fourth-stage output feature of size n×n, where n ≥ 1;
2) Generating class activation maps of sizes 4n×4n, 2n×2n and n×n from the second-stage, third-stage and fourth-stage output features, respectively.
3. The method of claim 2, wherein the attention map is synthesized by:
1) Enlarging the class activation maps of sizes 2n×2n and n×n to size 4n×4n by bilinear interpolation;
2) Performing a weighted fusion of the two enlarged results and the class activation map of size 4n×4n to obtain the attention map.
4. The method of claim 1, wherein the plurality of different sampled images are obtained by:
1) Sliding a window over the attention map and taking the local maxima within the different windows as candidate extreme points;
2) Grouping the candidate extreme points using a set threshold;
3) Smoothing each group of candidate points with a learnable Gaussian kernel to obtain a plurality of smoothed sampling maps;
4) Selectively sampling the input image according to each sampling map to obtain a corresponding sampled image.
5. The method of claim 4, wherein the candidate points are grouped according to T_d = {(x, y) | (x, y) ∈ T, A_{x,y} > δ} and T_c = {(x, y) | (x, y) ∈ T, A_{x,y} ≤ δ}, where (x, y) are the coordinates of a candidate extreme point, A_{x,y} is the value of the candidate extreme point (x, y) on the attention map, δ is the set threshold, T_d is the discriminative group and T_c is the supplementary group.
6. The method of claim 4, wherein the smoothed attention map A_{i,x,y} corresponding to the candidate extreme point (x, y) is given by an equation that appears only as an image in the original filing: a Gaussian function of the distance from (x, y), where β is the parameter of the Gaussian kernel, (x, y) are the coordinates of the candidate extreme point and A_{x,y} is the value of the candidate extreme point (x, y) on the attention map.
7. The method of claim 1, wherein the feature vector of each sampled image and the feature vector of the input image are obtained by:
1) Extracting the features of each sampled image and of the input image with the parameter-sharing convolutional neural network S-CNN;
2) Obtaining the feature vector of each sampled image and the feature vector of the input image from the extracted features.
8. The method of claim 7, wherein the S-CNN is trained end to end and the loss function is a cross-entropy loss function.
9. The method of claim 1, wherein the image leaf identification result is obtained by:
1) Inputting the concatenated feature vector into a fully connected layer to obtain a score for each category;
2) Normalizing the scores into a probability distribution with the softmax function;
3) Taking the category with the largest normalized value as the image leaf identification result.
10. An electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the method according to any of claims 1-9.
CN202110710190.6A 2021-06-25 2021-06-25 Image leaf identification method and device based on hierarchical attention mechanism Pending CN115601634A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110710190.6A CN115601634A (en) 2021-06-25 2021-06-25 Image leaf identification method and device based on hierarchical attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110710190.6A CN115601634A (en) 2021-06-25 2021-06-25 Image leaf identification method and device based on hierarchical attention mechanism

Publications (1)

Publication Number Publication Date
CN115601634A true CN115601634A (en) 2023-01-13

Family

ID=84841491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110710190.6A Pending CN115601634A (en) 2021-06-25 2021-06-25 Image leaf identification method and device based on hierarchical attention mechanism

Country Status (1)

Country Link
CN (1) CN115601634A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116245855A (en) * 2023-03-15 2023-06-09 云南大学 Crop variety identification method, device, equipment and storage medium
CN116245855B (en) * 2023-03-15 2023-09-01 云南大学 Crop variety identification method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination