CN111695560A - Method for actively positioning and focusing crop diseases and insect pests based on convolutional neural network - Google Patents

Method for actively positioning and focusing crop diseases and insect pests based on convolutional neural network

Info

Publication number
CN111695560A
CN111695560A (application CN202010397400.6A)
Authority
CN
China
Prior art keywords
picture
image
focusing
disease
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010397400.6A
Other languages
Chinese (zh)
Inventor
杨桂玲
王紫艳
孙健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Academy of Agricultural Sciences
Original Assignee
Zhejiang Academy of Agricultural Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Academy of Agricultural Sciences filed Critical Zhejiang Academy of Agricultural Sciences
Priority to CN202010397400.6A priority Critical patent/CN111695560A/en
Publication of CN111695560A publication Critical patent/CN111695560A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for actively positioning and focusing on crop diseases and insect pests based on a convolutional neural network, belonging to the technical field of positioning and focusing on crop disease and pest parts. The aim is to combine current convolutional neural network technology: characteristic parameters of pest and disease parts are obtained by studying a large number of images of such parts, a pest-and-disease part characteristic algorithm is formed, and a part can be judged simply by substituting image parameters into the algorithm. The pest and disease parts are thereby automatically positioned and focused on during photographing and marked, background interference is weakened, and the accuracy and efficiency of pest and disease identification are improved. Because convolutional neural network technology is adopted, the focusing response speed is high. By identifying and classifying each pixel of the image, several pest and disease parts in one image can be identified simultaneously, making identification more comprehensive. Each manual selection and adjustment of the positioning helps the automatic positioning and focusing to self-learn and updates the characteristic parameters, so the positioning and focusing become more and more accurate.

Description

Method for actively positioning and focusing crop diseases and insect pests based on convolutional neural network
Technical Field
The invention relates to a method for actively positioning and focusing crop diseases and insect pests based on a convolutional neural network, and belongs to the technical field of positioning and focusing of crop disease and insect pest parts.
Background
At present, many recognition technologies exist in the field of pest and disease recognition. The image recognition methods they adopt include object-image comparison, SVM (support vector machine) statistical methods, and convolutional neural network technology. The first two are relatively outdated, and image recognition using convolutional neural network technology still operates on the entire image: it can accurately identify plant diseases and insect pests in a relatively ideal identification environment, but the identification rate drops when it is deployed in an actual agricultural planting environment because of interference from a complex and changeable field environment. To maintain the identification rate, the complex image background must be processed and identified after the image is uploaded, which increases the processing pressure on the server, raises the hardware configuration required for image recognition, and greatly lengthens the recognition response time, so recognition efficiency is reduced.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a method for actively positioning and focusing crop diseases and insect pests based on a convolutional neural network.
The technical scheme for solving the technical problems is as follows:
a method for actively positioning and focusing crop diseases and insect pests based on a convolutional neural network comprises the following steps,
step one, preprocessing an image when a user takes a picture: the image picture is analyzed and processed with convolutional neural network technology, the image is segmented, and the characteristics of each pixel are extracted;
step two, according to the gray level changes of the image, computing the contrast among the characteristics through gray level differences;
let (x, y) be a point in the image; the gray difference between this point and a nearby point (x + Δx, y + Δy) a small distance away is
g_Δ(x, y) = g(x, y) − g(x + Δx, y + Δy)
where g_Δ is the gray difference and Δx and Δy are the offsets from (x, y) to the nearby point along the x and y directions;
all possible values of the gray difference are set to m levels; the point (x, y) is moved over the whole image and the number of times each value of g_Δ(x, y) occurs is accumulated, thereby forming a histogram of g_Δ(x, y);
from the histogram, the probability p_Δ(i) that g_Δ(x, y) takes level i can be read off, with i ranging from 1 to m; when p_Δ(i) is larger for smaller values of i, the texture is coarse, and when the values of p_Δ(i) are close to one another, i.e. the probability distribution is flat, the texture is fine.
The contrast is calculated as
CON = Σ_i i²·p_Δ(i)
the angular second moment as
ASM = Σ_i [p_Δ(i)]²
the entropy as
ENT = −Σ_i p_Δ(i)·lg p_Δ(i)
and the mean as
MEAN = (1/m)·Σ_i i·p_Δ(i)
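By way of illustration only (this sketch is not part of the original disclosure), the four statistics above can be computed in NumPy as follows; the displacement (Δx, Δy) = (1, 0), the level count m, and the reading of "lg" as the base-10 logarithm are assumptions:

```python
import numpy as np

def glds_statistics(gray, dx=1, dy=0, m=64):
    """Gray-level difference statistics for one displacement (dx, dy).

    gray: 2-D uint8 array; m: number of quantized difference levels.
    """
    g = gray.astype(np.int32)                       # avoid uint8 wrap-around
    h, w = g.shape
    diff = np.abs(g[:h - dy, :w - dx] - g[dy:, dx:])     # |g(x,y) - g(x+dx, y+dy)|
    levels = np.minimum(diff * m // 256, m - 1)          # quantize differences to m levels
    p = np.bincount(levels.ravel(), minlength=m) / levels.size  # histogram -> p_delta(i)

    i = np.arange(m)
    con = np.sum(i ** 2 * p)                        # contrast
    asm = np.sum(p ** 2)                            # angular second moment
    nz = p > 0
    ent = -np.sum(p[nz] * np.log10(p[nz]))          # entropy, "lg" read as log10
    mean = np.sum(i * p) / m                        # mean, as written in the text
    return con, asm, ent, mean
```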
step three, extracting the original color characteristics: under the blue channel of the color image, the first-order, second-order and third-order color moments are extracted as three color characteristics;
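A minimal sketch of step three's blue-channel color moments, assuming a BGR image array as produced by OpenCV and the usual mean, standard deviation and skewness definitions of the first three color moments (the function name is illustrative):

```python
import numpy as np

def blue_channel_moments(img_bgr):
    """First-, second- and third-order color moments of the blue channel."""
    b = img_bgr[:, :, 0].astype(np.float64)     # OpenCV stores channels as B, G, R
    first = b.mean()                            # first-order moment: mean
    second = b.std()                            # second-order moment: standard deviation
    third = np.cbrt(((b - first) ** 3).mean())  # third-order moment: signed cube root
    return first, second, third
```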
step four, extracting the texture characteristics: several texture parameters are set and characteristic vectors are extracted;
step five, through shape characteristic extraction, constructing roundness, rectangularity, eccentricity, sphericity ratio, compactness, breadth and inscribed-circle radius as shape-recognition characteristic vectors;
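An OpenCV sketch of three of step five's shape parameters (roundness, rectangularity and eccentricity), assuming a binary lesion mask as input; the remaining parameters would be computed from the same contour, and the function name is illustrative:

```python
import cv2
import numpy as np

def shape_features(mask):
    """Roundness, rectangularity and eccentricity of the largest region in a binary mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)            # largest lesion contour
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    x, y, w, h = cv2.boundingRect(c)
    roundness = 4 * np.pi * area / perimeter ** 2     # 1.0 for a perfect circle
    rectangularity = area / (w * h)                   # fill ratio of the bounding box
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(c)     # needs at least 5 contour points
    minor, major = sorted((d1, d2))
    eccentricity = np.sqrt(1 - (minor / major) ** 2)
    return roundness, rectangularity, eccentricity
```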
and step six, finally integrating the characteristics: all characteristic parameters are substituted into the algorithm for characteristic calculation and the pest and disease parts are marked, with the average response of the whole marking process within 2 s.
Preferably, the preprocessing of the image when the user takes a picture in step one is as follows: first, the overall contrast of the picture is enhanced by expanding the dynamic range of its gray values; the picture is then segmented and picture features are extracted region by region; the features of each region are substituted into the pest-and-disease characteristic algorithm for calculation; finally, the calculation result is returned, and the regions of the picture that match the characteristics are marked with boxes in the user's photographing view.
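One plausible reading of "changing the dynamic range of the gray values" is linear contrast stretching; a sketch under that assumption (the function name and output range are illustrative, not from the disclosure):

```python
import numpy as np

def stretch_gray_range(gray, lo=0, hi=255):
    """Linearly expand the dynamic range of gray values to raise overall contrast."""
    gmin, gmax = int(gray.min()), int(gray.max())
    if gmax == gmin:
        return gray.copy()                  # flat image: nothing to stretch
    out = (gray.astype(np.float64) - gmin) * (hi - lo) / (gmax - gmin) + lo
    return out.astype(np.uint8)
```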
Preferably, in step one, analyzing and processing the image picture with convolutional neural network technology means dividing the image into small regional sub-pictures of a specified size; the characteristics of each region are then extracted, the characteristic parameters extracted from each region are substituted into the pest-and-disease characteristic algorithm for calculation, the result is returned, and the matching regions are marked with boxes.
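The divide, score and mark loop just described might be sketched as below; `score_fn` is a hypothetical stand-in for the pest-and-disease characteristic algorithm, and the tile size and box colour are assumptions:

```python
import cv2

def mark_pest_regions(img_bgr, score_fn, tile=64):
    """Slide a fixed grid over the photo, score each tile, and box the matches."""
    out = img_bgr.copy()
    h, w = img_bgr.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            region = img_bgr[y:y + tile, x:x + tile]
            if score_fn(region):            # True when the tile matches the features
                cv2.rectangle(out, (x, y), (x + tile, y + tile), (0, 0, 255), 2)
    return out
```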
Preferably, the regions marked with boxes can also be adjusted manually: double-clicking a selected box cancels the selection, and clicking any position of the picture selects that position.
Preferably, the feature extraction in step one proceeds as follows:
a, performing primary feature extraction on the picture with a convolutional layer: the picture is decomposed into a digital matrix, a convolution kernel for extracting a given feature from the picture is defined, and the kernel is multiplied element by element with the corresponding entries of the matrix and the products are summed to obtain the output of the convolutional layer;
b, feeding the output of the convolutional layer into a pooling layer, which reduces the dimension of the feature vectors output by the convolutional layer, reduces over-fitting, retains only the most useful picture information, and reduces the transmission of noise;
c, after the convolutional and pooling layers have extracted the features, using a fully connected layer to generate classifiers equal in number to the required classes: the tensor output by the pooling layer is cut into vectors again, multiplied by a weight matrix, and a bias is added; a ReLU activation function is applied, and the parameters are then optimized by gradient descent;
d, obtaining through this process characteristic parameters covering the color, texture, shape and other features of pest and disease parts; training is repeated many times and the characteristic parameters are continuously optimized until the best effect is achieved, yielding the final pest-and-disease part characteristic matrix and characteristic matching algorithm.
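Steps a to c could be realized, for example, with a small PyTorch model; the layer sizes, the 64 x 64 tile resolution and the class count below are assumptions, not values from the disclosure:

```python
import torch
import torch.nn as nn

class PestFeatureCNN(nn.Module):
    """Convolution -> pooling -> fully connected, as in steps a to c."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # step a: convolution kernels
            nn.ReLU(),
            nn.MaxPool2d(2),                             # step b: pooling reduces dimension
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # step c: one output per required class (weight matrix plus bias)
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                  # x: (N, 3, 64, 64) picture tiles
        x = self.features(x)
        x = torch.flatten(x, 1)            # cut the pooled tensor back into vectors
        return self.classifier(x)
```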
Compared with the prior art, the invention has the following beneficial effects. The invention combines current convolutional neural network technology: characteristic parameters of pest and disease parts are obtained by studying a large number of images of such parts, a pest-and-disease part characteristic algorithm is formed, and a part can be judged simply by substituting image parameters into the algorithm; the pest and disease parts are automatically positioned and focused on before the image is uploaded and are marked during photographing, which weakens background interference and improves identification accuracy and efficiency. Because convolutional neural network technology is adopted, both the focusing response and the recognition are fast. By identifying and classifying each pixel of the image, several pest and disease parts in one image can be identified simultaneously, so identification is more comprehensive. Users can manually adjust markings of pest and disease parts that are incomplete or wrong, selecting or cancelling positions; meanwhile, the manually marked pictures are transmitted to a feature sample library, which extracts features from the manually marked parts through training and learning and refines the original characteristic parameters through feature comparison, so the feature matching algorithm becomes ever more rigorous and the automatic positioning and focusing become more and more accurate.
Drawings
FIG. 1 is a flow chart of the operation of the present invention.
FIG. 2 is a working principle diagram of the present invention.
FIG. 3 is a feature extraction diagram of the present invention.
FIG. 4 is a diagram of mark selection for diseases according to the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the drawings; the examples are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, the method for actively positioning and focusing on crop diseases and insect pests based on a convolutional neural network comprises the following steps,
step one, preprocessing an image when a user takes a picture: the image picture is analyzed and processed with convolutional neural network technology, the image is segmented, and the characteristics of each pixel, rather than of the whole image, are extracted;
step two, according to the gray level changes of the image, computing the contrast among the characteristics through gray level differences;
let (x, y) be a point in the image; the gray difference between this point and a nearby point (x + Δx, y + Δy) a small distance away is
g_Δ(x, y) = g(x, y) − g(x + Δx, y + Δy)
where g_Δ is the gray difference and Δx and Δy are the offsets from (x, y) to the nearby point along the x and y directions;
all possible values of the gray difference are set to m levels; the point (x, y) is moved over the whole image and the number of times each value of g_Δ(x, y) occurs is accumulated, thereby forming a histogram of g_Δ(x, y);
from the histogram, the probability p_Δ(i) that g_Δ(x, y) takes level i can be read off, with i ranging from 1 to m; when p_Δ(i) is larger for smaller values of i, the texture is coarse, and when the values of p_Δ(i) are close to one another, i.e. the probability distribution is flat, the texture is fine.
The contrast is calculated as
CON = Σ_i i²·p_Δ(i)
the angular second moment as
ASM = Σ_i [p_Δ(i)]²
the entropy as
ENT = −Σ_i p_Δ(i)·lg p_Δ(i)
and the mean as
MEAN = (1/m)·Σ_i i·p_Δ(i)
step three, extracting the original color characteristics: under the blue channel of the color image, the first-order, second-order and third-order color moments are extracted as three color characteristics;
step four, extracting the texture characteristics: several texture parameters are set and characteristic vectors are extracted;
step five, through shape characteristic extraction, constructing roundness, rectangularity, eccentricity, sphericity ratio, compactness, breadth and inscribed-circle radius as shape-recognition characteristic vectors;
and step six, finally integrating the characteristics: all characteristic parameters are substituted into the algorithm for characteristic calculation and the pest and disease parts are marked, with the average response of the whole marking process within 2 s.
The preprocessing of the image when the user takes a picture in step one is as follows: first, the overall contrast of the picture is enhanced by expanding the dynamic range of its gray values; the picture is then segmented and picture features are extracted region by region; the features of each region are substituted into the pest-and-disease characteristic algorithm for calculation; finally, the calculation result is returned, and the regions of the picture that match the characteristics are marked with boxes in the user's photographing view.
In step one, analyzing and processing the image picture with convolutional neural network technology means dividing the image into small regional sub-pictures of a specified size, as shown in fig. 3; the characteristics of each region are then extracted, the characteristic parameters extracted from each region are substituted into the pest-and-disease characteristic algorithm for calculation, the result is returned, and the matching regions are marked with boxes.
The regions marked with boxes can also be adjusted manually: double-clicking a selected box cancels the selection, and clicking any position of the picture selects that position, as shown in fig. 4.
As shown in fig. 2, the feature extraction in step one proceeds as follows:
a, performing primary feature extraction on the picture with a convolutional layer: the picture is decomposed into a digital matrix, a convolution kernel for extracting a given feature from the picture is defined, and the kernel is multiplied element by element with the corresponding entries of the matrix and the products are summed to obtain the output of the convolutional layer;
b, feeding the output of the convolutional layer into a pooling layer, which reduces the dimension of the feature vectors output by the convolutional layer, reduces over-fitting, retains only the most useful picture information, and reduces the transmission of noise;
c, after the convolutional and pooling layers have extracted the features, using a fully connected layer to generate classifiers equal in number to the required classes: the tensor output by the pooling layer is cut into vectors again, multiplied by a weight matrix, and a bias is added; a ReLU activation function is applied, and the parameters are then optimized by gradient descent;
d, obtaining through this process characteristic parameters covering the color, texture, shape and other features of pest and disease parts; training is repeated many times and the characteristic parameters are continuously optimized until the best effect is achieved, yielding the final pest-and-disease part characteristic matrix and characteristic matching algorithm.
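Step d's repeated training could look like the following sketch, reusing the PestFeatureCNN class from the earlier sketch; the random stand-in data, learning rate and epoch count are assumptions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: in practice the tiles would come from the pest and disease image library.
tiles = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 4, (64,))
loader = DataLoader(TensorDataset(tiles, labels), batch_size=16, shuffle=True)

model = PestFeatureCNN(num_classes=4)           # model from the earlier sketch
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # gradient descent, as in step c

for epoch in range(20):                         # "repeatedly training for many times"
    for batch, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch), y)       # classification loss over the classes
        loss.backward()                         # backpropagate
        optimizer.step()                        # update weights and biases
```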
The invention combines current convolutional neural network technology: characteristic parameters of pest and disease parts are obtained by studying a large number of images of such parts, a pest-and-disease part characteristic algorithm is formed, and a part can be judged simply by substituting image parameters into the algorithm; the pest and disease parts are automatically positioned and focused on before the image is uploaded and are marked during photographing, which weakens background interference and improves identification accuracy and efficiency. Because convolutional neural network technology is adopted, both the focusing response and the recognition are fast. By identifying and classifying each pixel of the image, several pest and disease parts in one image can be identified simultaneously, so identification is more comprehensive. The combination of automatic positioning and focusing with manual selection ensures fully accurate positioning of pest and disease parts; meanwhile, the manually marked pictures are transmitted to the sample library, which extracts features from the manually marked parts through training and learning and refines the original characteristic parameters through feature comparison, so the feature matching algorithm becomes more and more accurate.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (5)

1. A method for actively positioning and focusing on crop diseases and insect pests based on a convolutional neural network, characterized by comprising the following steps:
step one, preprocessing an image when a user takes a picture: the image picture is analyzed and processed with convolutional neural network technology, the image is segmented, and the characteristics of each pixel are extracted;
step two, according to the gray level changes of the image, computing the contrast among the characteristics through gray level differences;
let (x, y) be a point in the image; the gray difference between this point and a nearby point (x + Δx, y + Δy) a small distance away is
g_Δ(x, y) = g(x, y) − g(x + Δx, y + Δy)
where g_Δ is the gray difference and Δx and Δy are the offsets from (x, y) to the nearby point along the x and y directions;
all possible values of the gray difference are set to m levels; the point (x, y) is moved over the whole image and the number of times each value of g_Δ(x, y) occurs is accumulated, thereby forming a histogram of g_Δ(x, y);
from the histogram, the probability p_Δ(i) that g_Δ(x, y) takes level i can be read off, with i ranging from 1 to m; when p_Δ(i) is larger for smaller values of i, the texture is coarse, and when the values of p_Δ(i) are close to one another, i.e. the probability distribution is flat, the texture is fine.
The contrast is calculated as
CON = Σ_i i²·p_Δ(i)
the angular second moment as
ASM = Σ_i [p_Δ(i)]²
the entropy as
ENT = −Σ_i p_Δ(i)·lg p_Δ(i)
and the mean as
MEAN = (1/m)·Σ_i i·p_Δ(i)
step three, extracting the original color characteristics: under the blue channel of the color image, the first-order, second-order and third-order color moments are extracted as three color characteristics;
step four, extracting the texture characteristics: several texture parameters are set and characteristic vectors are extracted;
step five, through shape characteristic extraction, constructing roundness, rectangularity, eccentricity, sphericity ratio, compactness, breadth and inscribed-circle radius as shape-recognition characteristic vectors;
step six, finally integrating the characteristics: all characteristic parameters are substituted into the algorithm for characteristic calculation and the pest and disease parts are marked, with the average response of the whole marking process within 2 s;
and step seven, the automatic positioning and focusing can continuously self-learn: a user can manually adjust markings of pest and disease parts that are incomplete or wrong, selecting or cancelling positions; meanwhile, the manually marked pictures are transmitted to a feature sample library, which extracts features from the manually marked parts through training and learning and refines the original characteristic parameters through feature comparison, so the feature matching algorithm becomes ever more rigorous and the automatic positioning and focusing become more and more accurate.
2. The method for actively positioning and focusing on crop diseases and insect pests based on a convolutional neural network as claimed in claim 1, characterized in that the preprocessing of the image when the user takes a picture in step one is as follows: first, the overall contrast of the picture is enhanced by expanding the dynamic range of its gray values; the picture is then segmented and picture features are extracted region by region; the features of each region are substituted into the pest-and-disease characteristic algorithm for calculation; finally, the calculation result is returned, and the regions of the picture that match the characteristics are marked with boxes in the user's photographing view.
3. The method for actively positioning and focusing on crop diseases and insect pests based on a convolutional neural network as claimed in claim 1, characterized in that in step one, analyzing and processing the image picture with convolutional neural network technology means dividing the image into small regional sub-pictures of a specified size; the characteristics of each region are then extracted, the characteristic parameters extracted from each region are substituted into the pest-and-disease characteristic algorithm for calculation, the result is returned, and the matching regions are marked with boxes.
4. The method for actively positioning and focusing on crop diseases and insect pests based on a convolutional neural network as claimed in claim 3, characterized in that the regions marked with boxes can also be adjusted manually: double-clicking a selected box cancels the selection, and clicking any position of the picture selects that position; meanwhile, the manually marked pictures are transmitted to a sample library, which extracts features from the manually marked parts through training and learning and refines the original characteristic parameters through feature comparison.
5. The method for actively positioning and focusing on crop diseases and insect pests based on a convolutional neural network as claimed in claim 1, characterized in that the feature extraction in step one proceeds as follows:
a, performing primary feature extraction on the picture with a convolutional layer: the picture is decomposed into a digital matrix, a convolution kernel for extracting a given feature from the picture is defined, and the kernel is multiplied element by element with the corresponding entries of the matrix and the products are summed to obtain the output of the convolutional layer;
b, feeding the output of the convolutional layer into a pooling layer, which reduces the dimension of the feature vectors output by the convolutional layer, reduces over-fitting, retains only the most useful picture information, and reduces the transmission of noise;
c, after the convolutional and pooling layers have extracted the features, using a fully connected layer to generate classifiers equal in number to the required classes: the tensor output by the pooling layer is cut into vectors again, multiplied by a weight matrix, and a bias is added; a ReLU activation function is applied, and the parameters are then optimized by gradient descent;
d, obtaining through this process characteristic parameters covering the color, texture, shape and other features of pest and disease parts; training is repeated many times and the characteristic parameters are continuously optimized until the best effect is achieved, yielding the final pest-and-disease part characteristic matrix and characteristic matching algorithm.
CN202010397400.6A 2020-05-12 2020-05-12 Method for actively positioning and focusing crop diseases and insect pests based on convolutional neural network Pending CN111695560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010397400.6A CN111695560A (en) 2020-05-12 2020-05-12 Method for actively positioning and focusing crop diseases and insect pests based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010397400.6A CN111695560A (en) 2020-05-12 2020-05-12 Method for actively positioning and focusing crop diseases and insect pests based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN111695560A true CN111695560A (en) 2020-09-22

Family

ID=72477654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010397400.6A Pending CN111695560A (en) 2020-05-12 2020-05-12 Method for actively positioning and focusing crop diseases and insect pests based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN111695560A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101901336A (en) * 2010-06-11 2010-12-01 哈尔滨工程大学 Fingerprint and finger vein bimodal recognition decision level fusion method
CN102842032A (en) * 2012-07-18 2012-12-26 郑州金惠计算机系统工程有限公司 Method for recognizing pornography images on mobile Internet based on multi-mode combinational strategy
CN103514459A (en) * 2013-10-11 2014-01-15 中国科学院合肥物质科学研究院 Method and system for identifying crop diseases and pests based on Android mobile phone platform
CN105787446A (en) * 2016-02-24 2016-07-20 上海劲牛信息技术有限公司 Smart agricultural insect disease remote automatic diagnosis system
CN107067043A (en) * 2017-05-25 2017-08-18 哈尔滨工业大学 A kind of diseases and pests of agronomic crop detection method
CN108510544A (en) * 2018-03-30 2018-09-07 大连理工大学 A kind of striation localization method of feature based cluster
CN110517311A (en) * 2019-08-30 2019-11-29 北京麦飞科技有限公司 Pest and disease monitoring method based on leaf spot lesion area

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI769724B (en) * 2021-03-04 2022-07-01 鴻海精密工業股份有限公司 Image feature extraction method and image feature extraction device, electronic device and storage media
CN116579751A (en) * 2023-07-14 2023-08-11 南京信息工程大学 Crop detection data processing method and system
CN116579751B (en) * 2023-07-14 2023-09-08 南京信息工程大学 Crop detection data processing method and system

Similar Documents

Publication Publication Date Title
CN113449594B (en) Multilayer network combined remote sensing image ground semantic segmentation and area calculation method
CN110322453B (en) 3D point cloud semantic segmentation method based on position attention and auxiliary network
CN110543878A (en) pointer instrument reading identification method based on neural network
CN111783772A (en) Grabbing detection method based on RP-ResNet network
CN111553200A (en) Image detection and identification method and device
CN107239514A (en) A kind of plants identification method and system based on convolutional neural networks
CN110222767B (en) Three-dimensional point cloud classification method based on nested neural network and grid map
CN111860330A (en) Apple leaf disease identification method based on multi-feature fusion and convolutional neural network
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN113221956B (en) Target identification method and device based on improved multi-scale depth model
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN110310305B (en) Target tracking method and device based on BSSD detection and Kalman filtering
CN110288033B (en) Sugarcane top feature identification and positioning method based on convolutional neural network
CN112464983A (en) Small sample learning method for apple tree leaf disease image classification
CN111695560A (en) Method for actively positioning and focusing crop diseases and insect pests based on convolutional neural network
CN113344956A (en) Ground feature contour extraction and classification method based on unmanned aerial vehicle aerial photography three-dimensional modeling
CN111340019A (en) Grain bin pest detection method based on Faster R-CNN
CN116977960A (en) Rice seedling row detection method based on example segmentation
CN109615610B (en) Medical band-aid flaw detection method based on YOLO v2-tiny
CN113344009B (en) Light and small network self-adaptive tomato disease feature extraction method
CN113627240B (en) Unmanned aerial vehicle tree species identification method based on improved SSD learning model
Zhong et al. Identification and depth localization of clustered pod pepper based on improved Faster R-CNN
CN113449622A (en) Image classification, identification and detection method for cotton plants and weeds
CN116206208B (en) Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence
CN116563205A (en) Wheat spike counting detection method based on small target detection and improved YOLOv5

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200922)