CN109977802A - Crop classification and recognition method under strong background noise - Google Patents

Crop classification and recognition method under strong background noise

Info

Publication number
CN109977802A
Authority
CN
China
Prior art keywords
neural network
network model
background noise
recognition methods
strong background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910175557.1A
Other languages
Chinese (zh)
Inventor
邓悦
张洋
史良胜
张宇婷
连泰棋
何昱晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910175557.1A priority Critical patent/CN109977802A/en
Publication of CN109977802A publication Critical patent/CN109977802A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N2021/8466Investigation of vegetal material, e.g. leaves, plants, fruits
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00Features of devices classified in G01N21/00
    • G01N2201/12Circuits of general importance; Signal processing
    • G01N2201/129Using chemometrical methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00Features of devices classified in G01N21/00
    • G01N2201/12Circuits of general importance; Signal processing
    • G01N2201/129Using chemometrical methods
    • G01N2201/1296Using chemometrical methods using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/68Food, e.g. fruit or vegetables

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Immunology (AREA)
  • Biomedical Technology (AREA)
  • Biochemistry (AREA)
  • Evolutionary Computation (AREA)
  • Chemical & Material Sciences (AREA)
  • Software Systems (AREA)
  • Pathology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Marketing (AREA)
  • Animal Husbandry (AREA)
  • Mining & Mineral Resources (AREA)
  • Marine Sciences & Fisheries (AREA)
  • General Business, Economics & Management (AREA)
  • Agronomy & Crop Science (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a crop classification and recognition method under strong background noise. Several pictures of each category of crop are taken with a multispectral camera to form a picture set; the NDVI value of each pixel is obtained and the plant region is segmented out; the non-plant region is replaced with a solid-color background to highlight the plant region, and after picture preprocessing a multispectral data set is formed and divided into three subsets: a training set, a test set and a validation set. By the method of transfer learning, the training set is input into a preset convolutional neural network model for training to obtain a convolutional prediction neural network model; the test set is input into the convolutional prediction neural network model for an accuracy test to obtain a qualified convolutional prediction neural network model. The validation set is then input into the convolutional prediction neural network model, and the crops in it are classified and recognized to obtain the classification results. The method reduces the influence of strong background noise on crop classification and recognition, and improves the recognition efficiency and predictive ability of the model.

Description

Crop classification and recognition method under strong background noise
Technical field
The invention belongs to the field of crop classification and recognition, and in particular relates to a crop classification and recognition method under strong background noise.
Background art
In image classification and retrieval based on deep learning, how feature extraction is carried out on an image and which features of the image (color, texture, shape, etc.) are extracted not only affects the accuracy of image classification, but also plays a vital role in content-based image retrieval. At present, when crop recognition is carried out by deep learning on RGB images, the accuracy of deep learning tends to decrease obviously under the influence of the interfering features and brightness variations caused by strong background noise. The RGB color model classifies and recognizes different crops through differences in color, shape and texture features between images, but it cannot express the spatial distribution of color: it records only the information of the three RGB bands and loses all other bands, which is unfavorable for recognizing crops under strong background noise. In most deep learning work, crop images are selected manually to avoid complex interference, yet in practical applications the images may still be affected by strong background noise, so the accuracy of crop classification and recognition is insufficient.
Summary of the invention
The object of the present invention is to provide a crop classification and recognition method under strong background noise. The method reduces the influence of strong background noise on crop classification and recognition, improves recognition accuracy, can be applied to the recognition of crop types with small sample data, and improves the recognition efficiency and predictive ability of the model.
The technical scheme adopted by the invention is as follows:
A crop classification and recognition method under strong background noise, comprising the steps of:
S1, taking several pictures of each category of crop with a multispectral camera to form a picture set;
S2, completing radiometric calibration and vegetation index calculation with an algorithm, obtaining the NDVI value of each pixel and segmenting out the plant region;
S3, replacing the non-plant region with a solid-color background to highlight the plant region, forming a multispectral data set after picture preprocessing, and dividing the multispectral data set into three subsets: a training set, a test set and a validation set;
S4, by the method of transfer learning, inputting the training set into a preset convolutional neural network model for training to obtain a convolutional prediction neural network model, inputting the test set into the convolutional prediction neural network model for an accuracy test, and optimizing the parameters of the convolutional prediction neural network model according to the test results to obtain a qualified convolutional prediction neural network model;
S5, inputting the validation set into the convolutional prediction neural network model, classifying and recognizing the crops in it, and obtaining the classification results.
In S1, the multispectral camera must shoot under good sunlight conditions, so that a sufficient amount of light is guaranteed.
In S2, the NDVI value of each pixel is calculated as
NDVI = (ρ_nir − ρ_red) / (ρ_nir + ρ_red)
where ρ_nir is the reflectance obtained in the near-infrared band and ρ_red is the reflectance obtained in the red band. The NDVI value of the plant region is obviously higher than that of the ground-object region; the NDVI threshold is calculated automatically by the Otsu algorithm, and the plant region and the ground-object region are divided with this threshold.
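The patent gives no code, but step S2 (and the background replacement of step S3) can be sketched roughly as follows. The array names, the use of OpenCV for the Otsu threshold, and the background fill value are illustrative assumptions, not part of the disclosure.

```python
import numpy as np
import cv2  # OpenCV, used here only for its Otsu thresholding


def segment_plant_region(nir, red):
    """Compute per-pixel NDVI and split plant / ground-object regions with an
    automatically chosen (Otsu) threshold, as described in step S2.

    nir, red : 2-D float arrays of near-infrared and red band reflectance.
    Returns the NDVI map and a boolean mask that is True inside the plant region.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    ndvi = (nir - red) / (nir + red + 1e-6)  # NDVI = (rho_nir - rho_red) / (rho_nir + rho_red)

    # Otsu's method in OpenCV works on 8-bit images, so rescale NDVI from [-1, 1] to [0, 255]
    ndvi_u8 = np.clip((ndvi + 1.0) * 127.5, 0, 255).astype(np.uint8)
    _, mask_u8 = cv2.threshold(ndvi_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    plant_mask = mask_u8 > 0  # plant pixels have NDVI above the Otsu threshold
    return ndvi, plant_mask


def replace_background(image, plant_mask, fill=0):
    """Step S3: replace every non-plant pixel with a solid-color background."""
    out = image.copy()
    out[~plant_mask] = fill
    return out
```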
In S3, the TFRecord format is used to store the data during picture preprocessing.
In S4, the preset convolutional neural network model adopts the Inception_v3 of GoogLeNet.
Further, regarding the parameter setting of Inception_v3, the parameters of all convolutional layers in Inception_v3 are retained and only the last fully connected layer is replaced and retrained; the part before the last layer extracts the feature vector of the image, and a single-layer fully connected neural network model is trained on the extracted feature vectors to produce the output.
The beneficial effects of the present invention are:
A multispectral image is composed of multiple channels, each channel capturing light of a specified wavelength, so the spatial distribution of color in the image can be fully considered; multispectral imaging obtains spectral features as well as image information. The method uses a multispectral camera to shoot crop pictures, avoids the confusion in feature searching caused by background colors in pure RGB images, and improves recognition accuracy.
The NDVI value (normalized difference vegetation index) is used to monitor vegetation growth state and vegetation coverage and to eliminate part of the radiometric error, helping to separate plants from the surrounding environment, including water and soil. The method calculates the NDVI value of each point of the multispectral picture and segments out the plant region with it, so the crop is separated under strong background noise, the influence of strong background noise on crop classification and recognition is reduced, and the crop features are further extracted and enhanced.
The method obtains the convolutional prediction neural network model by transfer learning, so it can be applied to the recognition of crop types with small sample data; the NDVI value accurately locates the crop features and improves the recognition efficiency of the model, while parameter optimization improves the predictive ability of the model. A better crop classification and recognition effect can therefore be achieved with deep learning, and the accuracy of crop classification and recognition with small sample data can be improved.
By performing deep learning on a specified region only after segmenting that region with the NDVI value, the method effectively delimits a region for deep learning, which reduces the workload and the possible errors of searching for feature points in deep learning.
Detailed description of the invention
Fig. 1 is a flow chart of the principle of the present invention.
Fig. 2 is a detailed flow chart of the present invention.
Specific embodiment
The present invention is further illustrated below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1 and Fig. 2, a crop classification and recognition method under strong background noise comprises the following steps:
S1, 500 pictures of wheat and 500 pictures of plants of other categories are shot with a multispectral camera to form a picture set.
S2, radiometric calibration and vegetation index calculation are completed with an algorithm, i.e. the NDVI value of each pixel is obtained and the plant region is segmented out with it. The NDVI value of each pixel in the NDVI image is calculated as follows:
NDVI = (ρ_nir − ρ_red) / (ρ_nir + ρ_red)
where ρ_nir is the reflectance obtained in the near-infrared band and ρ_red is the reflectance obtained in the red band. The NDVI value of the plant region is obviously higher than that of the ground objects; given an NDVI threshold, the plant region and the ground-object region are divided with it.
S3, the area outside the plant region is replaced with a solid-color background to highlight the plant region, and picture preprocessing is then carried out (the TFRecord format is used to store the data during picture preprocessing, so that even when the data sources become more complicated, the sample types increase and the information in each sample grows more complex, the information in the input data can still be recorded effectively), forming a multispectral data set. Code is run to divide the multispectral data set and form the labels: all pictures are divided into three subsets (training, test and validation), and each picture is converted from its raw format into the 299 × 299 × 3 matrix that Inception_v3 requires, as sketched below.
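A minimal sketch of this preprocessing, assuming the multispectral composites are stored as PNG files; the file names, integer label encoding and 80/10/10 split ratio are illustrative assumptions rather than values fixed by the patent.

```python
import random
import tensorflow as tf


def to_example(image_path, label):
    """Read one picture, resize it to the 299 x 299 x 3 input Inception_v3 expects,
    and wrap it as a tf.train.Example for TFRecord storage."""
    img = tf.io.decode_png(tf.io.read_file(image_path), channels=3)
    img = tf.cast(tf.image.resize(img, [299, 299]), tf.uint8)
    return tf.train.Example(features=tf.train.Features(feature={
        "image": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[tf.io.encode_png(img).numpy()])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))


def write_split(records, out_path):
    """Write a list of (path, label) pairs into one TFRecord file."""
    with tf.io.TFRecordWriter(out_path) as writer:
        for path, label in records:
            writer.write(to_example(path, label).SerializeToString())


def split_and_write(labelled, train=0.8, test=0.1):
    """Divide the labelled picture set into training / test / validation TFRecord files."""
    random.shuffle(labelled)
    n = len(labelled)
    n_train, n_test = int(n * train), int(n * test)
    write_split(labelled[:n_train], "train.tfrecord")
    write_split(labelled[n_train:n_train + n_test], "test.tfrecord")
    write_split(labelled[n_train + n_test:], "validation.tfrecord")

# Example: labelled = [("wheat_0001.png", 0), ("other_0001.png", 1), ...]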
S4, by the method of transfer learning, the training set is input into the preset convolutional neural network model for training, and the trained convolutional prediction neural network model is obtained. The detailed process is as follows:
The convolutional prediction neural network model adopts the Inception_v3 of GoogLeNet. Inception_v3 splits a larger two-dimensional convolution kernel into two smaller one-dimensional convolutions (for example, a 3 × 3 convolution is split into a 1 × 3 convolution followed by a 3 × 1 convolution, so 9 weights per channel pair become 3 + 3 = 6), which reduces the number of parameters, speeds up computation and alleviates overfitting, while adding an extra layer of non-linearity that extends the expressive ability of the model. In its structure, convolutional layers with kernels of different sizes are connected in parallel, so features on different scales can be recognized; a minimal sketch of this factorization follows.
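For illustration only (the spatial size and channel counts below are arbitrary and not taken from the patent), the factorization of a 3 × 3 convolution into a 1 × 3 convolution followed by a 3 × 1 convolution can be written in Keras as:

```python
import tensorflow as tf

# A 3 x 3 convolution needs 9 weights per input/output channel pair; the
# 1 x 3 followed by 3 x 1 pair used inside Inception_v3 needs only 3 + 3 = 6,
# and inserts an extra non-linearity between the two convolutions.
inputs = tf.keras.Input(shape=(35, 35, 192))
x = tf.keras.layers.Conv2D(64, (1, 3), padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(64, (3, 1), padding="same", activation="relu")(x)
factorized = tf.keras.Model(inputs, x)
```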
Once the training set has been prepared and the Inception_v3 model is loaded through TensorFlow-Slim, the preset hyperparameters are defined; finally, the weight and bias parameters of the fully connected layer are obtained by training on the new data, and the path of the trained model is saved.
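The patent performs this step with TensorFlow-Slim; an equivalent last-layer-only retraining can be sketched with the tf.keras API as below. The class count, learning rate, epoch count and dataset objects are assumptions made for the sketch, not values taken from the disclosure.

```python
import tensorflow as tf

NUM_CLASSES = 2                      # e.g. wheat vs. other plants; set to the real class count
IMG_SHAPE = (299, 299, 3)

# Load Inception_v3 with pretrained weights, drop its original top layer and
# freeze every convolutional layer, so only the new fully connected layer is trained.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SHAPE, pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,                                                       # frozen feature extractor
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),   # the only trainable layer
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / test_ds would be tf.data.Dataset objects parsed from the TFRecord files above
# model.fit(train_ds, validation_data=test_ds, epochs=20)
# model.save("inception_v3_crops")   # save the path of the trained model, as in step S4
```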
The simplified form of the model, from front to back, is convolutional layer → pooling layer → convolutional layer → pooling layer → fully connected layer → fully connected layer → softmax layer. The size of the picture when it first enters the convolutional layers is 299 × 299. The convolutional layers of the neural network analyze every small block of the input in greater depth to obtain features with a higher level of abstraction. The width and height after a convolution can be calculated with the following formulas:
W2 = (W1 − F + 2P) / S + 1
H2 = (H1 − F + 2P) / S + 1
where W2 is the width of the feature map after convolution, W1 is the width of the image before convolution, F is the width of the filter, P is the amount of zero padding (zero padding means padding a few rings of 0 around the original image; if the value is 1, one ring of 0 is padded), S is the stride, H2 is the height of the feature map after convolution, and H1 is the height of the image before convolution. The output of this layer is the input of the next layer.
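A worked example of these formulas, using the first convolution of Inception_v3 (3 × 3 filter, stride 2, no zero padding) applied to the 299 × 299 input:

```python
def conv_output_size(w1, f, p, s):
    """W2 = (W1 - F + 2P) / S + 1, for one spatial dimension."""
    return (w1 - f + 2 * p) // s + 1

# 299 x 299 input, 3 x 3 filter, zero padding P = 0, stride S = 2
print(conv_output_size(299, 3, 0, 2))   # -> 149, i.e. a 149 x 149 feature map
```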
The pooling layers of the neural network do not change the depth of the three-dimensional matrix, but they reduce the size of the matrix and further reduce the number of nodes in the final fully connected layers, thereby reducing the number of parameters in the whole neural network.
After layer-by-layer processing by the convolutional and pooling layers, the information in the image has been abstracted into features with higher information content. The subsequent fully connected layers are then used to carry out the classification task. The final softmax layer expresses, as a probability, how likely the object to be classified is to belong to each class.
It can be found that the network converges quickly after training on the new data set: the training loss decreases gradually and the accuracy rises to about 92%.
Finally, the test set is input into the convolutional prediction neural network model for an accuracy test, the parameters of the convolutional prediction neural network model are optimized according to the test results, and a qualified convolutional prediction neural network model is obtained.
S5, the validation set is input into the convolutional prediction neural network model, and the crops in it are classified and recognized to obtain the classification results.
It should be understood that those of ordinary skill in the art can make modifications or changes according to the above description, and all such modifications and changes shall fall within the protection scope of the appended claims of the present invention.

Claims (6)

1. A crop classification and recognition method under strong background noise, characterized by comprising the steps of:
S1, taking several pictures of each category of crop with a multispectral camera to form a picture set;
S2, completing radiometric calibration and vegetation index calculation with an algorithm, obtaining the NDVI value of each pixel and segmenting out the plant region;
S3, replacing the non-plant region with a solid-color background to highlight the plant region, forming a multispectral data set after picture preprocessing, and dividing the multispectral data set into three subsets: a training set, a test set and a validation set;
S4, by the method of transfer learning, inputting the training set into a preset convolutional neural network model for training to obtain a convolutional prediction neural network model, inputting the test set into the convolutional prediction neural network model for an accuracy test, and optimizing the parameters of the convolutional prediction neural network model according to the test results to obtain a qualified convolutional prediction neural network model;
S5, inputting the validation set into the convolutional prediction neural network model, classifying and recognizing the crops in it, and obtaining the classification results.
2. The crop classification and recognition method under strong background noise according to claim 1, characterized in that: in S1, the multispectral camera must shoot under good sunlight conditions so that a sufficient amount of light is guaranteed.
3. The crop classification and recognition method under strong background noise according to claim 1, characterized in that: in S2, the NDVI value of each pixel is calculated as
NDVI = (ρ_nir − ρ_red) / (ρ_nir + ρ_red)
where ρ_nir is the reflectance obtained in the near-infrared band and ρ_red is the reflectance obtained in the red band; the NDVI value of the plant region is obviously higher than that of the ground-object region, the NDVI threshold is calculated automatically by the Otsu algorithm, and the plant region and the ground-object region are divided with the NDVI threshold.
4. The crop classification and recognition method under strong background noise according to claim 1, characterized in that: in S3, the TFRecord format is used to store the data during picture preprocessing.
5. The crop classification and recognition method under strong background noise according to claim 1, characterized in that: in S4, the preset convolutional neural network model adopts the Inception_v3 of GoogLeNet.
6. The crop classification and recognition method under strong background noise according to claim 5, characterized in that: regarding the parameter setting of Inception_v3, the parameters of all convolutional layers in Inception_v3 are retained and only the last fully connected layer is replaced and retrained; the part before the last layer extracts the feature vector of the image, and a single-layer fully connected neural network model is trained on the extracted feature vectors to produce the output.
CN201910175557.1A 2019-03-08 2019-03-08 Crops Classification recognition methods under strong background noise Pending CN109977802A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910175557.1A CN109977802A (en) 2019-03-08 2019-03-08 Crops Classification recognition methods under strong background noise

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910175557.1A CN109977802A (en) 2019-03-08 2019-03-08 Crops Classification recognition methods under strong background noise

Publications (1)

Publication Number Publication Date
CN109977802A true CN109977802A (en) 2019-07-05

Family

ID=67078249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910175557.1A Pending CN109977802A (en) 2019-03-08 2019-03-08 Crops Classification recognition methods under strong background noise

Country Status (1)

Country Link
CN (1) CN109977802A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991439A (en) * 2017-03-28 2017-07-28 南京天数信息科技有限公司 Image-recognizing method based on deep learning and transfer learning
CN108710864A (en) * 2018-05-25 2018-10-26 北华航天工业学院 Winter wheat Remotely sensed acquisition method based on various dimensions identification and image noise reduction processing
CN109241817A (en) * 2018-07-02 2019-01-18 广东工业大学 A kind of crops image-recognizing method of unmanned plane shooting

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
卫娇娇 (Wei Jiaojiao): "Remote sensing-based extraction of oasis distribution and spatio-temporal change analysis in the Hexi region of Gansu", China Master's Theses Full-text Database *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619349A (en) * 2019-08-12 2019-12-27 深圳市识农智能科技有限公司 Plant image classification method and device
CN110909820A (en) * 2019-12-02 2020-03-24 齐鲁工业大学 Image classification method and system based on self-supervision learning
CN110991454A (en) * 2019-12-23 2020-04-10 云南大学 Blade image recognition method and device, electronic equipment and storage medium
CN111428798A (en) * 2020-03-30 2020-07-17 北京工业大学 Plant seedling classification method based on convolutional neural network
CN112347894A (en) * 2020-11-02 2021-02-09 东华理工大学 Single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation
CN112347894B (en) * 2020-11-02 2022-05-20 东华理工大学 Single plant vegetation extraction method based on transfer learning and Gaussian mixture model separation
CN113489869A (en) * 2021-07-05 2021-10-08 深圳市威视佰科科技有限公司 Clothing material identification method based on hyperspectral camera
WO2023018387A1 (en) * 2021-08-11 2023-02-16 Agcurate Bilgi Teknolojileri Anonim Sirketi A crop classification method using deep neural networks
CN113673490A (en) * 2021-10-21 2021-11-19 武汉大学 Phenological period self-adaptive crop physiological parameter remote sensing estimation method and system

Similar Documents

Publication Publication Date Title
CN109977802A (en) Crops Classification recognition methods under strong background noise
Wang et al. Identification of tomato disease types and detection of infected areas based on deep convolutional neural networks and object detection techniques
Kong et al. Multi-stream hybrid architecture based on cross-level fusion strategy for fine-grained crop species recognition in precision agriculture
Zhou et al. Wheat ears counting in field conditions based on multi-feature optimization and TWSVM
CN111709379B (en) Remote sensing image-based hilly area citrus planting land plot monitoring method and system
CN110619632B (en) Mango example confrontation segmentation method based on Mask R-CNN
Gong et al. Citrus yield estimation based on images processed by an Android mobile phone
CN110569747A (en) method for rapidly counting rice ears of paddy field rice by using image pyramid and fast-RCNN
Chen et al. Citrus fruits maturity detection in natural environments based on convolutional neural networks and visual saliency map
CN109829425B (en) Farmland landscape small-scale ground feature classification method and system
CN114445785A (en) Internet of things-based litchi insect pest monitoring and early warning method and system and storage medium
CN115311588A (en) Pine wood nematode disease stumpage detection method and device based on unmanned aerial vehicle remote sensing image
CN111814563B (en) Method and device for classifying planting structures
Lv et al. A visual identification method for the apple growth forms in the orchard
CN112560623B (en) Unmanned aerial vehicle-based rapid mangrove plant species identification method
CN116543316B (en) Method for identifying turf in paddy field by utilizing multi-time-phase high-resolution satellite image
Hao et al. Growing period classification of Gynura bicolor DC using GL-CNN
CN110163101A (en) The difference of Chinese medicine seed and grade quick discrimination method
CN113657158A (en) Google Earth Engine-based large-scale soybean planting region extraction algorithm
Yang et al. A comparative evaluation of convolutional neural networks, training image sizes, and deep learning optimizers for weed detection in alfalfa
CN116129260A (en) Forage grass image recognition method based on deep learning
Du et al. DSW-YOLO: A detection method for ground-planted strawberry fruits under different occlusion levels
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
Li et al. Maize leaf disease identification based on WG-MARNet
CN115330833A (en) Fruit yield estimation method with improved multi-target tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190705)