CN110689519B - Fog drop deposition image detection system and method based on yolo network - Google Patents


Info

Publication number
CN110689519B
CN110689519B (application CN201910773409.XA)
Authority
CN
China
Prior art keywords
image, fog drop, yolo, fog
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910773409.XA
Other languages
Chinese (zh)
Other versions
CN110689519A (en)
Inventor
Yue Xuejun
Lu Yang
Wang Linhui
Cen Zhenzhao
Ling Kangjie
Cheng Ziyao
Liu Yongxin
Wang Jian
Lin Yiping
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University
Priority to CN201910773409.XA
Publication of CN110689519A
Application granted
Publication of CN110689519B

Classifications

    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G01N 15/0227: Investigating particle size or size distribution by optical means using imaging; using holography
    • G06T 7/12: Edge-based segmentation
    • G06T 7/60: Analysis of geometric attributes
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20152: Watershed segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Dispersion Chemistry (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fog drop deposition image detection system and method based on the yolo network. The system comprises a plurality of pieces of water-sensitive paper, a CCD (charge-coupled device) camera device, image transmission equipment, a UVC (USB Video Class) receiver, a network server and an upper computer; the upper computer is provided with a Qt interface and a yolo network module. The method comprises the following steps: the CCD camera device collects fog drop deposition images formed on the water-sensitive paper; the images are transmitted through the image transmission equipment to the UVC receiver, which transmits them through the network server to the upper computer for real-time display; the yolo network is trained by a transfer learning method; the Qt interface captures and stores screenshots of the fog drop deposition images; and the trained yolo network module performs target detection on the fog drop deposition images to obtain the size and distribution state of the fog drops. The invention can quickly and accurately measure the size and distribution state of sprayed fog drops, thereby improving the accuracy and reasonableness of pesticide spraying and reducing environmental pollution.

Description

Fog drop deposition image detection system and method based on yolo network
Technical Field
The invention relates to the technical field of monitoring pesticide fog drop spraying by agricultural unmanned aerial vehicles, and in particular to a fog drop deposition image detection system and method based on the yolo network.
Background
In modern agriculture, fog drop size is an important index of the performance of plant protection machinery. Related research shows that different biological targets capture fog drops in different particle-size ranges; only within the optimal particle-size range is the number of fog drops captured by the target largest and the control effect on plant diseases and insect pests best. Today, artificial intelligence is developing rapidly, and computer vision technology is widely used in many fields.
At present, the main methods for measuring fog drop size are the laser method and the mechanical method. Although the laser method is easy to operate, the measuring instrument is expensive and the detection conditions are limited. Mechanical methods, such as the oil-pan method, are performed largely by hand, and the collected fog drops must be measured, calculated and corrected under a microscope; the procedure is cumbersome and the accuracy is low. Both methods are restricted to particular scenarios and cannot rapidly analyze the collected fog drops.
Disclosure of Invention
To overcome the defects and shortcomings of the prior art, the invention provides a fog drop deposition image detection system and method based on the yolo network, which can quickly and accurately measure the size of sprayed fog drops and describe their distribution state, thereby improving the accuracy and reasonableness of pesticide spraying and reducing environmental pollution.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention relates to a fog drop deposition image detection system based on the yolo network, which comprises: a plurality of pieces of water-sensitive paper, a CCD camera device, an image transmission device, a UVC receiver, a network server and an upper computer;
the water-sensitive paper receives fog drops; the CCD camera device collects fog drop deposition images; the image transmission device transmits the images to the UVC receiver; the UVC receiver receives the image data and communicates with the network server; and the network server forwards the image data to the upper computer for real-time display. The upper computer is provided with a Qt interface and a yolo network module: the Qt interface captures and stores screenshots of the fog drop deposition images, and the yolo network module performs target detection on them to obtain the size and distribution state of the fog drops.
As a preferred technical scheme, the UVC receiver is connected to the network server through a USB interface, and the UVC receiver is a pocket FPV/USV/otg/5.8G image transmission UVC receiver.
The invention also provides a fog drop deposition image detection method of the fog drop deposition image detection system based on the yolo network, which comprises the following steps:
the method comprises the following steps that a CCD camera device collects a fogdrop deposition image formed by water-sensitive paper and transmits the fogdrop deposition image to a UVC receiver through an image transmission device, and the UVC receiver transmits the fogdrop deposition image to an upper computer through a network server for real-time display;
a Qt interface of the upper computer calls a screenshot function to screenshot the simulated fogdrop deposition image and store the screenshot as a fogdrop deposition digital image which is discrete in space and brightness;
training the yolo network by adopting a transfer learning method;
the trained yolo network performs target detection on the discretized fog drop deposition digital image to obtain a target frame selection image;
and the fog drop digital image and the target frame selection image are displayed in the Qt interface of the upper computer, and the size, number and coverage rate of the sprayed fog drops are analyzed and calculated.
As a preferred technical scheme, training the yolo network by transfer learning specifically comprises:
improving the yolo network: taking the first 20 convolutional layers and adding an average pooling layer and a fully connected layer; putting the obtained fog drop deposition digital images into the improved yolo network for classification training, with a sigmoid activation function on the output neuron of the last fully connected layer for binary classification;
fine-tuning the pre-trained model, continuing the training process on the original pre-trained model with the target task data;
removing the final average pooling layer and fully connected layer of the trained yolo classification model and replacing them with those of the yolo target detection network prototype. The first 20 convolutional layers have already been trained to convergence on the classification model; since early convolutional layers detect the edges and textures of pictures and generalize well, their learning rate is set below 10⁻⁴ so that their weights are only fine-tuned, while the 4 convolutional layers and 2 fully connected layers added behind them use a somewhat higher learning rate, which makes the target detection network easier to converge.
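The layer surgery and two-rate fine-tuning described above can be sketched as follows. This is a minimal illustration only: it assumes an optimizer that accepts per-group learning rates (as common deep learning frameworks do), and the layer names and the concrete rates 1e-5 / 1e-3 are placeholders, not values from the patent.

```python
import math

def sigmoid(x):
    # Activation of the last fully connected layer's output neuron,
    # used for the binary "fog drop / no fog drop" classification.
    return 1.0 / (1.0 + math.exp(-x))

def build_param_groups(backbone, head):
    # Backbone: the first 20 pretrained convolutional layers. Their learning
    # rate is kept below 1e-4 so the pretrained weights are only fine-tuned.
    # Head: the 4 convolutional + 2 fully connected layers added back for
    # target detection, trained with a higher rate so the network converges.
    return [
        {"params": backbone, "lr": 1e-5},
        {"params": head, "lr": 1e-3},
    ]

backbone = [f"conv{i}" for i in range(1, 21)]                # 20 conv layers
head = [f"conv{i}" for i in range(21, 25)] + ["fc1", "fc2"]  # 4 conv + 2 fc
groups = build_param_groups(backbone, head)
```

In a real framework these dictionaries would be passed directly to the optimizer constructor, so a single optimizer step updates the old and new layers at different rates.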
As a preferred technical scheme, the classification training of the acquired fog drop deposition digital images in the yolo network specifically comprises:
acquiring a large number of water-sensitive paper pictures with fog drops;
performing some preprocessing on the pictures to reduce interference factors in training and improve recognition confidence;
dividing all pictures into a training set, a test set and a validation set;
initializing the yolo classification model with a pre-trained model, the pre-trained model being the yolo target detection network trained on the PASCAL VOC data set;
and inputting fog drop pictures to perform classification training on the yolo classification model.
As a preferred technical scheme, analyzing and calculating the size, number and coverage rate of the sprayed fog drops specifically comprises:
the number of fog drops is obtained by counting the target boxes output by the target detection network, one box per fog drop;
the size of each fog drop is obtained by approximating it as an ellipse inscribed in its target box, whose axes follow from the coordinates of the box's upper-left and lower-right corners, and applying the ellipse area formula;
and the coverage rate of the fog drops is the sum of all fog drop areas divided by the area of the whole picture, multiplied by 100%.
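The three quantities above can be computed directly from the detector's output boxes. A minimal sketch, assuming each box is given as (x1, y1, x2, y2) corner coordinates in pixels (the example boxes and image size are arbitrary):

```python
import math

def droplet_area(x1, y1, x2, y2):
    # Approximate each fog drop as an ellipse inscribed in its target box:
    # the semi-axes are half the box width and half the box height.
    a = (x2 - x1) / 2.0
    b = (y2 - y1) / 2.0
    return math.pi * a * b

def spray_stats(boxes, img_w, img_h):
    # boxes: (x1, y1, x2, y2) corners of each detected fog drop.
    count = len(boxes)                               # number of fog drops
    areas = [droplet_area(*box) for box in boxes]    # size of each fog drop
    coverage = 100.0 * sum(areas) / (img_w * img_h)  # percent of the picture
    return count, areas, coverage

boxes = [(0, 0, 10, 10), (20, 20, 30, 40)]           # hypothetical detections
count, areas, coverage = spray_stats(boxes, 100, 100)
```

Note that summing ellipse areas slightly overstates coverage when boxes overlap; the patent's adhesion-segmentation step reduces such overlaps before counting.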
As a preferred technical scheme, the method further comprises a fog drop image preprocessing step, which processes the fog drop image with morphological dilation and erosion and with the opening and closing operations of the image, specifically:
(1) morphological dilation and erosion are both forms of morphological filtering:
dilation expands the highlighted (bright) parts of the image; a convolution kernel is selected and slid over the whole image, and dilation is the operation of taking the local maximum and assigning it to the pixel specified by the kernel's reference point;
erosion wears away the highlighted regions of the original image; a convolution kernel is selected and slid over the whole image, and erosion is the operation of taking the local minimum and assigning it to the pixel specified by the reference point;
(2) the opening operation erodes and then dilates the picture;
(3) the closing operation dilates and then erodes the picture.
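These four operations can be sketched in pure Python on a grayscale image represented as nested lists; a production system would use an image library's built-in morphology routines, so this is only an illustration of the local-max/local-min definitions above (a 3 × 3 square kernel is assumed):

```python
def dilate(img, k=1):
    # Dilation: local maximum over a (2k+1) x (2k+1) neighborhood,
    # assigned to the reference (center) pixel, so bright regions grow.
    h, w = len(img), len(img[0])
    return [[max(img[j][i]
                 for j in range(max(0, y - k), min(h, y + k + 1))
                 for i in range(max(0, x - k), min(w, x + k + 1)))
             for x in range(w)] for y in range(h)]

def erode(img, k=1):
    # Erosion: local minimum over the same neighborhood; bright regions shrink.
    h, w = len(img), len(img[0])
    return [[min(img[j][i]
                 for j in range(max(0, y - k), min(h, y + k + 1))
                 for i in range(max(0, x - k), min(w, x + k + 1)))
             for x in range(w)] for y in range(h)]

def opening(img):   # erode then dilate: removes small bright specks (noise)
    return dilate(erode(img))

def closing(img):   # dilate then erode: fills small dark holes inside drops
    return erode(dilate(img))

speck = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]  # a single-pixel bright speck
cleaned = opening(speck)                   # opening removes it entirely
```

This is why opening is the right preprocessing for removing speckle noise before fog drop detection, while closing repairs small gaps inside a stained drop.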
As a preferred technical scheme, the method further comprises a step of segmenting overlapping (adhered) fog drops, which uses the watershed segmentation method, specifically:
performing edge extraction on the fog drop image with the Laplacian operator to improve image contrast;
binarizing the image;
performing a distance transform on the fog drop area, i.e., converting the gray value of each point in the fog drop area into its Manhattan distance to the nearest background;
normalizing the distance values of the fog drop area to [0, 1];
binarizing the image again;
obtaining the peak of each eroded region by erosion;
searching for contours of the fog drop areas in the image;
drawing the contours with lines of different colors;
and obtaining the boundaries dividing the adhered fog drops through the watershed transformation.
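The distance transform and normalization steps can be sketched in pure Python; this is a minimal illustration (binary image as nested lists, 1 = fog drop, 0 = background), with the contour search and watershed transform themselves left to an image processing library. Multi-source breadth-first search from all background pixels yields exactly the Manhattan (city-block) distance:

```python
from collections import deque

def manhattan_distance_transform(binary):
    # For each foreground (fog drop) pixel, the Manhattan distance to the
    # nearest background pixel, via BFS seeded with every background pixel.
    h, w = len(binary), len(binary[0])
    INF = h + w
    dist = [[0 if binary[y][x] == 0 else INF for x in range(w)]
            for y in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w) if binary[y][x] == 0)
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def normalize(dist):
    # Scale distances into [0, 1]; the peaks (candidate drop centers) map to 1.
    m = max(max(row) for row in dist) or 1
    return [[d / m for d in row] for row in dist]

binary = [[0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]
dist = manhattan_distance_transform(binary)
norm = normalize(dist)
```

Thresholding the normalized map then isolates one peak per drop, which is what makes the subsequent watershed able to split two touching drops along the valley between their peaks.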
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) The invention performs target detection and analysis on the fog drop image with a deep learning method, which improves the accuracy of the data and rapidly obtains morphological parameters such as the size and distribution state of the fog drop points in the image.
(2) The method filters the fog drop image with morphological preprocessing and segments adhered fog drops with the watershed segmentation method, improving the accuracy of fog drop detection in the picture.
(3) The method takes the first 20 convolutional layers of the yolo network, adds an average pooling layer and a fully connected layer that classifies the images, and, after the classification model is trained, trains the few newly added convolutional and fully connected layers while keeping the learning rate of the earlier convolutional weights low, so as to learn the coordinate values for locating fog drops. The yolo network is trained on the PASCAL VOC2007 data set and then fine-tuned, transferring its learning to the recognition and localization of fog drop images; this reduces the number of training samples to be collected while the model training still converges with high precision.
(4) The invention obtains the distribution state diagram of the fog drop spraying image by capturing a screenshot of the analog fog drop video signal through the Qt interface and storing it as a digital image; the Qt interface then calls the yolo network model to perform target detection on the image and finally displays the result in the Qt interface, which is convenient for operation and observation.
(5) The invention uses a 5.8G analog-signal image transmission device and a pocket FPV/USV/otg/5.8G image transmission UVC receiver for fog drop image transmission; the collected fog drop picture signals are displayed in real time with extremely low delay and a long transmission distance.
Drawings
FIG. 1 is a schematic structural diagram of a yolo network-based droplet deposition image detection system according to the present embodiment;
FIG. 2 is a flow chart of the fogdrop image processing of the present embodiment;
FIG. 3 is a block diagram of the yolo network used as a training classification in the present embodiment;
fig. 4 is a diagram of a yolo network used for training target detection according to the present embodiment.
In the figures: 1. water-sensitive paper; 2. CCD camera; 3. 800 mW adjustable image transmission device; 4. UAV; 5. pocket FPV/USV/otg/5.8G image transmission UVC receiver; 6. network server; 7. upper computer.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Examples
As shown in fig. 1, the present embodiment provides a droplet deposition image detection system based on yolo network, including:
the system comprises water-sensitive paper 1 for receiving a plurality of fog drops, a CCD camera device 2 for collecting fog drop pictures, 800mW adjustable image transmission equipment 3 for image transmission, a pocket FPV/USV/otg/5.8G image transmission UVC receiver 5 for communicating with a network server and receiving image transmission emission data, the network server 6 for sending the data of the receiver equipment to an upper computer for interface display, an upper computer 7 for analyzing and processing image data and a UAV 4;
In this embodiment, the fog drop image is acquired by the CCD camera, and the resulting fog drop video analog signal is sent out through the image transmission device; the UVC receiver receives the video analog signal and is connected to the network server through a USB interface based on the UVC (USB Video Class) protocol standard, and the fog drop video signal is displayed in real time in the camera software of the upper computer. Because CCD acquisition is continuous, the uploaded signal is an analog video signal (the frames form a signal continuous in time). By capturing a screenshot on the computer and storing it, this analog signal is converted into a digital signal, i.e., a digital image discrete in time, space and amplitude, on which the subsequent image processing can be performed.
In this embodiment, the video analog signal is captured via a screenshot through the Qt interface of the upper computer and stored as digital pictures; the acquired fog drop images are then displayed through the Qt interface, and the trained yolo network performs target detection on them to obtain the size and distribution state of the fog drops.
In this embodiment, target detection based on the yolo network first trains a classification network, then converts it into the yolo target detection model, and then performs fine-tuning training.
In this embodiment, target detection of fog drops comprises classifying fog drop images and locating fog drops within them. By means of transfer learning, a yolo model pre-trained on the PASCAL VOC2007 data set is fine-tuned: the first few layers of the yolo network capture general features such as curves and edges, which are also relevant to our target, and fine-tuning the weights makes the network focus on features specific to the fog drop image data set, thereby adapting the subsequent layers.
The embodiment also provides a method for detecting a droplet deposition image based on the yolo network, which comprises the following steps:
as shown in fig. 2, firstly, the model is trained by using the method of transfer learning: the method comprises the steps of firstly acquiring some images with fog drops, then preprocessing the images, and putting the images into a yolo network for classification training, wherein the output of the last full-connection layer of the yolo network is modified into 2 to judge whether the fog drops exist or not because the purpose of the embodiment is to detect the fog drop images.
As shown in fig. 3, the final average pooling layer and fully connected layer of the classification-trained network are removed and replaced with the 4 convolutional layers and 2 fully connected layers of the yolo network; the newly added layers are trained while the already-trained layers are fine-tuned at a low learning rate, and the trained model is saved.
The yolo network prototype has 24 convolutional layers and 2 fully connected layers. First, the first 20 convolutional layers are taken and an average pooling layer and a fully connected layer are added to form a binary classification network (22 layers in total), which is trained. The trailing average pooling layer and fully connected layer are then removed (leaving the 20 trained layers), and the original trailing 4 convolutional layers and 2 fully connected layers are added back (26 layers in total, i.e., the yolo prototype), after which target detection training is performed. The training method is as follows:
(1) Put the obtained fog drop deposition digital images into the improved yolo network for classification training, with a sigmoid activation function on the output neuron of the last fully connected layer for binary classification.
The classification training steps:
First: obtain a large number of water-sensitive paper pictures with fog drops.
Second: perform some preprocessing on the pictures to reduce interference factors in training and improve recognition confidence.
Third: divide all pictures into a training set, a test set and a validation set.
Fourth: initialize the yolo classification model with a pre-trained model (here, the weights of the yolo target detection network trained on the PASCAL VOC data set).
Fifth: input fog drop pictures and perform classification training on the yolo classification model.
(2) Remove the final average pooling layer and fully connected layer of the trained yolo classification model and replace them with those of the yolo target detection network prototype. Since the first 20 convolutional layers have already been trained to convergence on the classification model, and early convolutional layers detect the edges and textures of pictures and generalize well, their learning rate can be set low (below 10⁻⁴) so that their weights are only fine-tuned; the learning rate of the 4 convolutional layers and 2 fully connected layers added behind them is set somewhat higher, which makes the target detection network easier to converge.
(3) Fine-tune the pre-trained model, continuing the training process on the original pre-trained model with the target task data.
In this embodiment, the fog drop image is preprocessed with morphological dilation and erosion and the opening and closing operations of the image, which increases the accuracy of fog drop size detection, specifically:
(1) Morphological dilation and erosion are both forms of morphological filtering:
Dilation: dilation expands the highlighted (bright) parts of the image; a convolution kernel is selected and slid over the whole image, and dilation is the operation of taking the local maximum and assigning it to the pixel specified by the kernel's reference point.
For example, with a 3 × 3 convolution kernel moving across the image from left to right and from top to bottom, the maximum value in the 3 × 3 neighborhood covered by the kernel is assigned each time to the designated reference point at its center.
Erosion: erosion wears away the highlighted regions of the original image; a convolution kernel is selected and slid over the whole image, and erosion is the operation of taking the local minimum and assigning it to the pixel specified by the reference point.
(2) The opening operation erodes and then dilates the picture.
(3) The closing operation dilates and then erodes the picture.
In this embodiment, overlapping and adhered fog drops in the image are segmented with a morphological method, watershed segmentation, which increases the accuracy of the fog drop size and distribution state, specifically:
(1) perform edge extraction on the fog drop image with the Laplacian operator to improve image contrast;
(2) binarize the image;
(3) perform a distance transform on the fog drop area, i.e., convert the gray value of each point in the fog drop area into its Manhattan distance to the nearest background;
(4) normalize all transformed distances to [0, 1];
(5) binarize the image again;
(6) obtain the peak of each eroded region by erosion;
(7) search for contours of the fog drop areas in the image;
(8) draw the contours with lines of different colors;
(9) obtain the boundaries dividing the adhered fog drops through the watershed transformation.
Each of the above nine steps can be implemented by a corresponding function and is not described in detail here.
As shown in fig. 4, crops in the area to be sprayed are sprayed by the unmanned aerial vehicle, and the fog drop image video collected on the terminal water-sensitive paper is transmitted through the image transmission equipment to the receiver and then displayed in real time by the camera software of the upper computer. The screenshot function is called through the Qt interface of the upper computer to capture the fog drop images and store them as digital images; the trained yolo network is then called through the Qt interface to perform target detection on the fog drop images; the original images and the detected target frame selection images are displayed in the Qt interface; and finally the size and distribution of the sprayed fog drops are analyzed and calculated.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (4)

1. A fog drop deposition image detection method of a fog drop deposition image detection system based on a yolo network, characterized in that the system comprises: a plurality of pieces of water-sensitive paper, a CCD camera device, an image transmission device, a UVC receiver, a network server and an upper computer;
the water-sensitive paper receives fog drops, the CCD camera device collects fog drop deposition images, the image transmission device transmits the fog drop deposition images to the UVC receiver, the UVC receiver receives image data of the image transmission device and carries out data communication with the network server, the network server receives image data transmitted by the UVC receiver and transmits the image data to the upper computer for real-time display, the upper computer is provided with a Qt interface and a yolo network module, the Qt interface carries out screenshot storage on the fog drop deposition images, and the yolo network module carries out target detection on the fog drop deposition images to obtain the size and distribution state of the fog drops in the fog drop deposition images;
the fogdrop deposition image detection method comprises the following steps:
the method comprises the following steps that a CCD camera device collects a fogdrop deposition image formed by water-sensitive paper and transmits the fogdrop deposition image to a UVC receiver through an image transmission device, and the UVC receiver transmits the fogdrop deposition image to an upper computer through a network server for real-time display;
a Qt interface of the upper computer calls a screenshot function to capture the analog fog drop deposition video and store it as a fog drop deposition digital image which is discrete in space and brightness;
training the yolo network by adopting a transfer learning method, which specifically comprises the following steps:
improving the yolo network by taking its first 20 convolutional layers and adding an average pooling layer and a fully connected layer;
feeding the obtained fog drop deposition digital images into the improved yolo network for classification training, with a sigmoid activation function on the output neuron of the last fully connected layer performing binary classification;
fine-tuning the pre-trained model by continuing the training process on the target task data;
removing the last average pooling layer and fully connected layer of the trained yolo classification model and replacing them with the yolo target detection network head; since the first 20 convolutional layers have already converged on the classification model and mainly detect image edges and textures, their learning rate is set below 10⁻⁴ so that their weights are only fine-tuned, while the 4 convolutional layers and 2 fully connected layers appended behind them use a slightly higher learning rate, making the target detection network easier to converge;
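The differential learning rates described above can be sketched with PyTorch per-parameter-group options; this is a minimal illustration only, in which the tiny `backbone` and `head` modules stand in for the patent's first 20 convolutional layers and appended detection layers:

```python
import torch
import torch.nn as nn

# Illustrative stand-ins (not the patent's actual architecture):
# `backbone` for the 20 pretrained convolutional layers, `head` for the
# 4 convolutional and 2 fully connected detection layers added after.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU(0.1))
head = nn.Sequential(nn.Conv2d(16, 30, 3, padding=1))

# Converged pretrained layers get a small rate (below 1e-4) so their
# edge/texture filters are only fine-tuned; the fresh detection head
# gets a slightly higher rate so the detector converges more easily.
optimizer = torch.optim.SGD([
    {"params": backbone.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
], momentum=0.9)
```

Each parameter group keeps its own learning rate through every `optimizer.step()`, which is exactly the two-speed fine-tuning the claim describes.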
the trained yolo network carries out target detection on the discretized fog drop deposition digital image to obtain a target frame selection image, which specifically comprises the following steps:
acquiring a large number of water-sensitive paper pictures bearing fog drops;
preprocessing the pictures to reduce training interference factors and improve recognition confidence;
dividing all pictures into a training set, a test set and a validation set;
initializing the yolo classification model with a pre-trained model, the pre-trained model being the yolo target detection network trained on the PASCAL VOC data set;
inputting the fog drop pictures to carry out classification training on the yolo classification model;
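The training/test/validation split in the steps above can be sketched as follows; the 70/20/10 ratio and the file names are illustrative assumptions, not values from the patent:

```python
import random

def split_dataset(paths, train=0.7, test=0.2, seed=42):
    """Shuffle droplet image paths and split them into training,
    test and validation sets (validation gets the remainder)."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle
    n = len(paths)
    n_train = int(n * train)
    n_test = int(n * test)
    return (paths[:n_train],
            paths[n_train:n_train + n_test],
            paths[n_train + n_test:])
```

Shuffling before splitting keeps droplets photographed under the same spraying condition from all landing in one subset.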
displaying the fog drop digital image and the target frame selection image in the Qt interface of the upper computer, and analyzing and calculating the size, number and coverage rate of the sprayed fog drops, which specifically comprises:
the number of fog drops is obtained by counting the target frames, one per fog drop, output by the target detection network;
the size of each fog drop is obtained by approximating the droplet as an ellipse inscribed in its target frame, using the upper-left and lower-right corner coordinates of the rectangle and an ellipse area formula;
and the coverage rate of the fog drops is obtained by summing the sizes of all fog drops, dividing by the area of the whole picture, and multiplying by one hundred percent.
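The count, ellipse-area and coverage calculations described above can be sketched as follows, assuming each target frame is given as (x1, y1, x2, y2) corner coordinates in pixels:

```python
import math

def droplet_metrics(boxes, image_width, image_height):
    """Compute droplet count, per-droplet ellipse areas, and percent
    coverage from detector boxes given as (x1, y1, x2, y2) corners."""
    count = len(boxes)
    areas = []
    for x1, y1, x2, y2 in boxes:
        # Approximate the droplet as the ellipse inscribed in its box:
        # semi-axes are half the box width/height, area = pi * a * b.
        a = (x2 - x1) / 2.0
        b = (y2 - y1) / 2.0
        areas.append(math.pi * a * b)
    coverage = sum(areas) / (image_width * image_height) * 100.0
    return count, areas, coverage
```

For a circular droplet the inscribed ellipse degenerates to a circle, so the approximation is exact; for elongated deposits it follows the box's aspect ratio.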
2. The fog drop deposition image detection method according to claim 1, further comprising a fog drop image preprocessing step, wherein the fog drop image is processed with morphological dilation and erosion and with the opening and closing operations of the image, which specifically comprises:
(1) morphological dilation and erosion are both forms of morphological filtering:
dilation performs "neighborhood expansion" on the highlight parts of the image: a convolution kernel is slid over the whole image, the local maximum is computed, and that maximum is assigned to the pixel specified by the kernel's reference point, so bright regions grow;
erosion "nibbles away" the highlight areas of the original image: a convolution kernel is slid over the whole image, the local minimum is computed, and that minimum is assigned to the pixel specified by the reference point, so bright regions shrink;
(2) the opening operation erodes and then dilates the picture;
(3) the closing operation dilates and then erodes the picture.
3. The fog drop image detection method according to claim 1, further comprising an overlapped fog drop image segmentation step, wherein adhered fog drops are separated by a watershed segmentation method, which specifically comprises the following steps:
performing edge extraction on the fog drop image with a Laplacian operator to improve image contrast;
binarizing the image;
performing a distance transform on the fog drop regions, i.e. converting the gray value of each fog drop pixel into its Manhattan distance to the nearest background pixel;
normalizing the distance values of the fog drop regions to [0, 1];
binarizing the image again;
obtaining the peak of each region by erosion;
finding the contours of the fog drop regions in the image;
drawing the contours with lines of different colors;
and obtaining the dividing boundaries of the adhered fog drops through the watershed transform.
4. The fogdrop image detection method according to claim 1, wherein the UVC receiver is connected with a network server through a USB interface, and the UVC receiver adopts a pocket FPV/USV/otg/5.8G image transmission UVC receiver.
CN201910773409.XA 2019-08-21 2019-08-21 Fog drop deposition image detection system and method based on yolo network Active CN110689519B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910773409.XA CN110689519B (en) 2019-08-21 2019-08-21 Fog drop deposition image detection system and method based on yolo network

Publications (2)

Publication Number Publication Date
CN110689519A CN110689519A (en) 2020-01-14
CN110689519B (en) 2022-06-17

Family

ID=69108396


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381819A (en) * 2020-12-07 2021-02-19 云南省烟草公司昆明市公司 HSV color model-based plant protection droplet detection method
ES2915258B2 (en) * 2020-12-18 2023-03-01 Agrobotix Innovation Tech S L SENSOR, SYSTEM AND METHOD TO MEASURE THE QUALITY OF THE APPLICATION OF AGROCHEMICALS ON CROP LANDS
CN112837345B (en) * 2021-01-29 2023-12-08 北京农业智能装备技术研究中心 Method and system for detecting deposition distribution of plant canopy liquid medicine
CN113008742B (en) * 2021-02-23 2022-08-19 中国农业大学 Method and system for detecting deposition amount of fog drops
CN113222925B (en) * 2021-04-30 2023-01-31 陕西科技大学 ImagePy-based water-sensitive paper fog drop parameter measuring device and measuring method thereof
CN113252522B (en) * 2021-05-12 2022-03-15 中国农业大学 Hyperspectral scanning-based device for measuring deposition amount of fog drops on plant leaves
CN113252523B (en) * 2021-05-12 2022-03-15 中国农业大学 Device and method for measuring deposition amount of plant leaf fog drops based on RGB camera
CN115240015B (en) * 2022-09-23 2023-01-06 中汽数据(天津)有限公司 Training method, device, equipment and storage medium of target detection model
TWI808913B (en) * 2022-10-25 2023-07-11 國立中山大學 Method for deciding optimal spraying parameters

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750502A (en) * 2012-05-30 2012-10-24 山东神思电子技术股份有限公司 Device and method for image data collection
CN108182703A (en) * 2017-12-13 2018-06-19 安徽农业大学 A kind of measuring method and system quantitative based on water-sensitive test paper
CN109580565A (en) * 2018-11-29 2019-04-05 北京农业智能装备技术研究中心 Aerial pesticide medical fluid deposition parameter monitors system and method
CN109977790A (en) * 2019-03-04 2019-07-05 浙江工业大学 A kind of video smoke detection and recognition methods based on transfer learning
CN110084166A (en) * 2019-04-19 2019-08-02 山东大学 Substation's smoke and fire intelligent based on deep learning identifies monitoring method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Design of a Plant Protection UAV Variable Spray System Based on Neural Networks; Sheng Wen et al.; Sensors; 2019-03-05; pp. 1-23 *
You Only Look Once: Unified, Real-Time Object Detection; Joseph Redmon et al.; 2016 IEEE Conference on Computer Vision and Pattern Recognition; 2016; pp. 779-788 *
Improved judgment of adhesion features in droplet images and optimization of the separation counting method; Wu Yalei et al.; Transactions of the Chinese Society for Agricultural Machinery; 2017; vol. 48, pp. 220-227 *

Similar Documents

Publication Publication Date Title
CN110689519B (en) Fog drop deposition image detection system and method based on yolo network
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
Aquino et al. Automated early yield prediction in vineyards from on-the-go image acquisition
CN110210463B (en) Precise ROI-fast R-CNN-based radar target image detection method
CN110059558B (en) Orchard obstacle real-time detection method based on improved SSD network
CN107862705B (en) Unmanned aerial vehicle small target detection method based on motion characteristics and deep learning characteristics
CN108986064B (en) People flow statistical method, equipment and system
CN110135341B (en) Weed identification method and device and terminal equipment
CN108830196A (en) Pedestrian detection method based on feature pyramid network
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
CN110517311A (en) Pest and disease monitoring method based on leaf spot lesion area
CN108491807B (en) Real-time monitoring method and system for oestrus of dairy cows
CN110992378B (en) Dynamic updating vision tracking aerial photographing method and system based on rotor flying robot
CN113822198B (en) Peanut growth monitoring method, system and medium based on UAV-RGB image and deep learning
CN112069985A (en) High-resolution field image rice ear detection and counting method based on deep learning
CN113435355A (en) Multi-target cow identity identification method and system
CN108829762A (en) The Small object recognition methods of view-based access control model and device
CN110503647A (en) Wheat plant real-time counting method based on deep learning image segmentation
CN110781865A (en) Crop growth control system
CN113723833B (en) Method, system, terminal equipment and storage medium for evaluating quality of forestation actual results
Fang et al. Classification system study of soybean leaf disease based on deep learning
CN115457437A (en) Crop identification method, device and system and pesticide spraying robot
WO2022061496A1 (en) Object boundary extraction method and apparatus, mobile platform and storage medium
Chaoying et al. A cross-border detection algorithm for agricultural spraying UAV
Jia et al. Automatic lameness detection in dairy cows based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant