CN114648500A - Crop weed detection method and device based on unmanned aerial vehicle and deep learning

Crop weed detection method and device based on unmanned aerial vehicle and deep learning

Info

Publication number
CN114648500A
CN114648500A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
weed
sampling
weeds
Prior art date
Legal status
Granted
Application number
CN202210267089.2A
Other languages
Chinese (zh)
Other versions
CN114648500B (en)
Inventor
罗强
韩润华
殷志坚
杨贞
余亮
熊朝松
Current Assignee
Jiangxi Science and Technology Normal University
Original Assignee
Jiangxi Science and Technology Normal University
Priority date
Filing date
Publication date
Application filed by Jiangxi Science and Technology Normal University filed Critical Jiangxi Science and Technology Normal University
Priority to CN202210267089.2A
Publication of CN114648500A
Application granted; publication of CN114648500B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 - Determining position
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/048 - Activation functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30181 - Earth observation
    • G06T 2207/30188 - Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a crop weed detection method and device based on an unmanned aerial vehicle (UAV) and deep learning. The device scans and detects in real time through a control module and a vision module fixed on the UAV, and returns weed coordinates through a positioning module. The method comprises the following steps: first, capture images and build a crop weed data set; next, construct a NanoDet recognition model whose FPN part is improved with an attention module, and train the model on a public crop weed data set; then, port the improved NanoDet detection algorithm and the trained model to the UAV, which reads images during flight and detects weeds in real time; finally, locate the coordinates of detected weeds via GPS and transmit them back to the background to await the subsequent weeding task. The method improves the detection precision of the existing algorithm, works with the UAV to return weed coordinates efficiently and accurately, and provides a new approach to scientific farmland management.

Description

Crop weed detection method and device based on unmanned aerial vehicle and deep learning
Technical Field
The invention belongs to the technical field of computer-vision target detection, relates to weed detection using an improved NanoDet model, and particularly relates to a crop weed detection method and device based on an unmanned aerial vehicle and deep learning.
Background
Weeds are one of the main causes of crop yield reduction and can greatly affect growers' income. A weed detection device that accurately detects weeds mixed in with crops helps farmland managers take timely, effective measures to remove them, easing the yield loss caused by weeds competing with crops for nutrients. Traditional manual weeding is inefficient and slow; it does not distinguish weed areas, instead spraying pesticide over the entire field, which wastes pesticide and causes excessive use. It cannot adapt to today's increasingly mature large-farm production mode and reduces both economic benefit and crop yield.
Object detection in images is a challenging task in computer vision. Unlike the simpler upstream task of classification, object detection must precisely classify every object in an image and mark its position. Mainstream object detection is currently based on deep convolutional neural networks, and detectors can be divided into anchor-based and anchor-free categories according to whether anchors are used to extract object candidate boxes. Anchor-based algorithms include Faster R-CNN, YOLOv2 and YOLOv3, and have the advantage of stable training; anchor-free algorithms include CornerNet, CenterNet and FCOS, and, being free of the computation that anchors introduce, offer better real-time performance.
Combining image object detection with agricultural weeding, a convolutional neural network identifies weed positions in the planting area so that fixed-point removal can be carried out, improving weeding efficiency. Li Chenwei et al. proposed an automatic weeding operation method for unmanned aerial vehicles that performs weed detection by generating detection points, but it must repeat detection steps and cannot scan and detect an area in a single pass, so detection takes too long. Existing weed detection methods also suffer from large model size, insufficient real-time performance, complex operation during weed scanning and positioning, and unsuitability for actual production.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a crop weed detection method and device based on an unmanned aerial vehicle and deep learning.
A crop weed detection method based on unmanned aerial vehicles and deep learning specifically comprises the following steps:
Step 1: collect images containing weeds and label the weeds in the images to form a weed data set.
Step 2: construct an improved NanoDet model as the weed recognition model and train it with the weed data set obtained in step 1. The improved NanoDet model is specifically as follows: the backbone network uses ShuffleNetV2-1.5x with CSPNet convolution blocks; the FPN part uses a PAN structure, performs multi-scale feature fusion on the 8x, 16x and 32x down-sampled features output by the backbone network, and performs feature integration with an attention module after each layer's output features.
Step 3: transplant the weed recognition model trained in step 2 into an unmanned aerial vehicle and scan the planting area to be weeded with it. During scanning, the unmanned aerial vehicle performs real-time detection with the weed recognition model; when the model judges that weeds are present in a scanned image, the weed position is located via GPS and the coordinates are returned to the background.
A crop weed detection device based on an unmanned aerial vehicle and deep learning comprises an unmanned aerial vehicle and a control module, a vision module and a positioning module fixed on it. The vision module shoots pictures of the planting area during flight and transmits them to the control module. The control module carries the trained weed recognition model and recognizes received pictures in real time; when a picture contains weeds, the positioning module is called to obtain the position coordinates, which are returned to the control background.
The invention has the following beneficial effects:
1. Automatic weed detection is realized by using an unmanned aerial vehicle combined with visual-image target recognition technology, saving labor cost.
2. An improved NanoDet network is constructed, introducing an attention mechanism that fuses spatial and channel information. This compensates for image detail information lost too early in the down-sampling process and improves target detection performance, maintaining high detection precision and strong real-time performance with an extremely small parameter count.
3. Through the positioning module, the position coordinates of the weeds are returned directly to the background with high positioning precision, which facilitates subsequent fixed-point weeding. The whole detection process requires no repeated scanning and is simple and convenient.
Drawings
FIG. 1 is a flow chart of a crop weed detection method based on unmanned aerial vehicles and deep learning;
FIG. 2 is a schematic structural diagram of the improved NanoDet model;
fig. 3 is a schematic structural diagram of a GSE module used in the embodiment.
Detailed Description
The invention is further explained below with reference to the drawings. It should be noted that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art from the embodiments herein without creative effort shall fall within the protection scope of the present application.
As shown in fig. 1, a crop weed detection method based on unmanned aerial vehicle and deep learning specifically includes the following steps:
Step 1: use an unmanned aerial vehicle to shoot 1000 pictures containing weeds at 416 × 416 pixels, build a weed data set in the Pascal VOC format, randomly select 700 pictures as the training set, 100 as the validation set and 200 as the test set, and label the weeds in the images with the LabelImg tool.
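As a small illustration of this step, the sketch below partitions the 1000 images 700/100/200 in the Pascal VOC directory layout; the weed_dataset paths are illustrative rather than specified by the patent, and the LabelImg XML annotations are assumed to share each image's base name.

```python
import random
from pathlib import Path

images = sorted(Path("weed_dataset/JPEGImages").glob("*.jpg"))
random.seed(0)                      # fixed seed so the split is reproducible
random.shuffle(images)

splits = {"train": images[:700], "val": images[700:800], "test": images[800:]}
for name, files in splits.items():
    # Pascal VOC convention: list image IDs (base names) in ImageSets/Main/<split>.txt
    out = Path("weed_dataset/ImageSets/Main") / f"{name}.txt"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text("\n".join(p.stem for p in files))
```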
Step 2: construct the improved NanoDet model shown in FIG. 2, input the training set built in step 1, and train the model to obtain the weed recognition model.
The improved NanoDet model is specifically as follows: the backbone network uses ShuffleNetV2-1.5x with CSPNet convolution blocks; the FPN part uses the PAN structure, and feature integration is performed by an attention module after each layer's output. After a training image is fed into the improved NanoDet model, four down-sampling operations are performed in the backbone network, yielding 4x, 8x, 16x and 32x down-sampled feature maps. The 8x, 16x and 32x down-sampled feature maps are then input into the PAN structure for multi-scale feature fusion. The fusion process is as follows: first, the 32x down-sampled feature map is up-sampled by bilinear interpolation and added directly to the 16x down-sampled feature map to obtain a fused 16x feature map. The fused 16x feature map is then up-sampled by bilinear interpolation and added directly to the 8x down-sampled feature map, and the sum is output as the 8x result. The 8x result is down-sampled by linear interpolation and added to the fused 16x feature map, and the sum is output as the 16x result; the 16x result is in turn down-sampled by linear interpolation and added to the 32x down-sampled feature map, and the sum is output as the 32x result.
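The fusion just described can be captured in a few lines. The following is a minimal PyTorch sketch, not code from the patent; it assumes the three backbone maps have already been projected to a common channel count, as NanoDet does with 1 × 1 lateral convolutions.

```python
import torch
import torch.nn.functional as F

def pan_fuse(c8, c16, c32):
    """c8/c16/c32: feature maps at stride 8, 16, 32 with equal channel counts."""
    up = lambda x: F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
    down = lambda x: F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)

    f16 = c16 + up(c32)        # top-down: 32x map upsampled into the 16x map
    out8 = c8 + up(f16)        # top-down: fused 16x map upsampled into the 8x map
    out16 = f16 + down(out8)   # bottom-up: 8x result downsampled into the fused 16x map
    out32 = c32 + down(out16)  # bottom-up: 16x result downsampled into the 32x map
    return out8, out16, out32

# e.g. for a 416x416 input (channel count 96 is illustrative):
# c8, c16, c32 = (torch.randn(1, 96, 52, 52), torch.randn(1, 96, 26, 26),
#                 torch.randn(1, 96, 13, 13))
# p8, p16, p32 = pan_fuse(c8, c16, c32)
```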
In the PAN structure, up-sampling and down-sampling are performed by linear interpolation, and fusion is performed by directly adding the multi-scale feature maps; this reduces computation but loses feature information. The 8x, 16x and 32x results are therefore each input into a GSE attention module, which screens the features lost in the linear interpolation process, extracts the regions the network attends to most from the existing features, and reduces the model's loss.
As shown in fig. 3, the GSE attention module applies two Ghost convolutions to the input feature x to extract the regions the network needs to attend to, obtains a weight map through a Sigmoid function, and finally multiplies the weight map element-wise with the input feature x to output the attention feature F(x):
F(x) = x · Sigmoid(Ghost(Ghost(x)))
For a c-channel input feature, the Ghost convolution generates c/2-channel features through a convolution kernel of size 1 × 1 and c/2-channel features through a convolution kernel of size 5 × 5, then concatenates the two c/2-channel feature sets to obtain the c-channel output feature.
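A hedged PyTorch sketch of the GSE module follows. The text above is ambiguous about whether the 5 × 5 branch reads the original input or the 1 × 1 output; this sketch follows the standard GhostNet layout, where the 5 × 5 convolution is a cheap depthwise operation applied to the 1 × 1 output, so treat that wiring as an assumption.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """c -> c Ghost convolution: half the channels from a 1x1 conv, half from a 5x5 conv."""
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.primary = nn.Conv2d(channels, half, kernel_size=1, bias=False)
        self.cheap = nn.Conv2d(half, half, kernel_size=5, padding=2,
                               groups=half, bias=False)   # depthwise 5x5 on the 1x1 output

    def forward(self, x):
        p = self.primary(x)                 # c/2 channels from the 1x1 kernel
        q = self.cheap(p)                   # c/2 channels from the 5x5 kernel
        return torch.cat([p, q], dim=1)     # concat back to c channels

class GSE(nn.Module):
    """F(x) = x * Sigmoid(Ghost(Ghost(x))), as in the formula above."""
    def __init__(self, channels: int):
        super().__init__()
        self.ghost = nn.Sequential(GhostConv(channels), GhostConv(channels))

    def forward(self, x):
        return x * torch.sigmoid(self.ghost(x))
```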
Step 3: port the weed recognition model trained in step 2 into the control module on the unmanned aerial vehicle; during flight, the vision module acquires images of the planting area to be weeded and transmits them to the control module for real-time detection. The shooting and transmission frequency of the vision module is adjusted so that images input to the control module contain no repeated regions, which improves detection efficiency. When the weed recognition model judges that weeds are present in a scanned image, the weed position is located by the positioning module and the coordinates are returned to the background. Fixed-point pesticide spraying can subsequently be carried out by the unmanned aerial vehicle using the returned weed coordinates, which not only reduces labor cost and pesticide usage but also raises the farm's degree of automation.
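A hedged sketch of this onboard loop follows, with an OpenCV video stream standing in for the vision module. Here read_gps() and send_to_backend() are hypothetical stand-ins for the positioning module and the backhaul link, and the model is assumed to return a list of {label, score} detections; none of these interfaces are specified in the patent.

```python
import cv2

def read_gps():
    # hypothetical: query the UAV's GPS module for (latitude, longitude)
    return 28.68, 115.86

def send_to_backend(coords, detections):
    # hypothetical: transmit weed coordinates to the ground-station background
    print(f"weeds at {coords}: {len(detections)} detections")

def detection_loop(model, conf_thresh=0.4):
    cap = cv2.VideoCapture(0)               # vision module video stream
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        detections = [d for d in model(frame)
                      if d["label"] == "weed" and d["score"] >= conf_thresh]
        if detections:                       # weeds found: take a GPS fix and report back
            send_to_backend(read_gps(), detections)
    cap.release()
```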

Claims (6)

1. A crop weed detection method based on an unmanned aerial vehicle and deep learning, characterized in that the method specifically comprises the following steps:
step 1, collecting images containing weeds, and labeling the weeds in the images to form a weed data set;
step 2, constructing an improved NanoDet model as the weed recognition model and training it with the weed data set obtained in step 1; the improved NanoDet model is specifically as follows: the backbone network uses ShuffleNetV2-1.5x with CSPNet convolution blocks; the FPN part uses a PAN structure, performs multi-scale feature fusion on the 8x, 16x and 32x down-sampled feature maps output by the backbone network, and performs feature integration with an attention module after each layer's output features;
step 3, transplanting the weed recognition model trained in step 2 into an unmanned aerial vehicle, and scanning the planting area to be weeded with the unmanned aerial vehicle; during scanning, the unmanned aerial vehicle performs real-time detection with the weed recognition model, and when the model judges that weeds are present in a scanned image, the weed position is located via GPS and the coordinates are returned to the background.
2. The crop weed detection method based on an unmanned aerial vehicle and deep learning of claim 1, characterized in that the multi-scale feature fusion process is as follows: first, the 32x down-sampled feature map is up-sampled by bilinear interpolation and added directly to the 16x down-sampled feature map to obtain a fused 16x feature map; the fused 16x feature map is then up-sampled by bilinear interpolation and added directly to the 8x down-sampled feature map, and the sum is output as the 8x result; the 8x result is down-sampled by linear interpolation and added to the fused 16x feature map, and the sum is output as the 16x result; the 16x result is down-sampled by linear interpolation and added to the 32x down-sampled feature map, and the sum is output as the 32x result.
3. The crop weed detection method based on an unmanned aerial vehicle and deep learning of claim 1, characterized in that the attention module is a GSE attention module in which the input features first pass through two Ghost convolutions and a Sigmoid function and are then multiplied element-wise with the original input features to obtain the attention features.
4. The crop weed detection method based on an unmanned aerial vehicle and deep learning of claim 1, characterized in that pictures containing weeds are taken with an unmanned aerial vehicle in step 1 and the weed data set is made in the Pascal VOC format.
5. The crop weed detection method based on an unmanned aerial vehicle and deep learning of claim 1, characterized in that, in the weed recognition process of step 3, the scanning frequency and image transmission frequency of the unmanned aerial vehicle are adjusted to ensure that images input into the weed recognition model contain no repeated regions.
6. A crop weed detection device based on an unmanned aerial vehicle and deep learning, characterized in that it comprises an unmanned aerial vehicle and a control module, a vision module and a positioning module fixed on the unmanned aerial vehicle; the vision module shoots pictures of the planting area during flight and transmits them to the control module; the control module carries a trained weed recognition model and recognizes received pictures in real time; when weeds are identified in a picture, the positioning module is called to obtain the position coordinates, which are returned to the control background;
the weed recognition model is an improved NanoDet model, specifically: the backbone network uses ShuffleNetV2-1.5x with CSPNet convolution blocks; the FPN part uses a PAN structure, performs multi-scale feature fusion on the 8x, 16x and 32x down-sampled feature maps output by the backbone network, and performs feature integration with an attention module after each layer's output features.
CN202210267089.2A (filed 2022-03-17): Crop weed detection method and device based on unmanned aerial vehicle and deep learning. Active; granted as CN114648500B (en).

Priority Applications (1)

Application Number: CN202210267089.2A (granted as CN114648500B); Priority Date: 2022-03-17; Filing Date: 2022-03-17; Title: Crop weed detection method and device based on unmanned aerial vehicle and deep learning


Publications (2)

CN114648500A: published 2022-06-21
CN114648500B: published 2023-04-07

Family

ID=81996366

Family Applications (1)

Application Number: CN202210267089.2A (Active, granted as CN114648500B); Priority Date: 2022-03-17; Filing Date: 2022-03-17; Title: Crop weed detection method and device based on unmanned aerial vehicle and deep learning

Country Status (1)

CN: CN114648500B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100323779A1 (en) * 2007-06-19 2010-12-23 Wms Gaming Inc. Gaming System Having Graphical Feature Interface
CN108881825A (en) * 2018-06-14 2018-11-23 华南农业大学 Rice weed monitoring unmanned system and its monitoring method based on Jetson TK1
WO2021085743A1 (en) * 2019-10-31 2021-05-06 전자부품연구원 Method and device for high-speed frame rate conversion of high-resolution video
CN111556157A (en) * 2020-05-06 2020-08-18 中南民族大学 Crop distribution monitoring method, equipment, storage medium and device
CN112528879A (en) * 2020-12-15 2021-03-19 杭州电子科技大学 Multi-branch pedestrian re-identification method based on improved GhostNet
CN112819771A (en) * 2021-01-27 2021-05-18 东北林业大学 Wood defect detection method based on improved YOLOv3 model
CN113052876A (en) * 2021-04-25 2021-06-29 合肥中科类脑智能技术有限公司 Video relay tracking method and system based on deep learning
CN113449743A (en) * 2021-07-12 2021-09-28 西安科技大学 Coal dust particle feature extraction method
CN113610040A (en) * 2021-08-16 2021-11-05 华南农业大学 Paddy field weed density real-time statistical method based on improved BiSeNetV2 segmentation network
CN114120036A (en) * 2021-11-23 2022-03-01 中科南京人工智能创新研究院 Lightweight remote sensing image cloud detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WENJU WANG et al.: "Double Ghost Convolution Attention Mechanism Network: A Framework for Hyperspectral Reconstruction of a Single RGB Image" *
SHU Yali et al.: "Field weed recognition method based on a deep-layer connection attention mechanism" *
LUO Qiang et al.: "Deep-learning-based real-time grain-depot pest monitoring and early-warning ***" *

Also Published As

Publication number Publication date
CN114648500B (en) 2023-04-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant