CN112766178B - Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system - Google Patents

Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system

Info

Publication number
CN112766178B
CN112766178B (application CN202110086756.2A)
Authority
CN
China
Prior art keywords
image data
data
variance
pest
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110086756.2A
Other languages
Chinese (zh)
Other versions
CN112766178A (en)
Inventor
李致富
杜佳荣
曾俊海
王明
吴晋宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN202110086756.2A
Publication of CN112766178A
Application granted
Publication of CN112766178B

Classifications

    • G06V 20/188 Vegetation (Scenes; Scene-specific elements; Terrestrial scenes)
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/25 Fusion techniques
    • G06N 3/04 Neural networks; Architecture, e.g. interconnection topology
    • G06N 3/08 Neural networks; Learning methods
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Catching Or Destruction (AREA)

Abstract

The application discloses a pest positioning method, device, equipment and medium based on an intelligent deinsectization system. The method comprises the steps of: acquiring first image data of crops through a visible light digital camera; acquiring second image data of the crops through a multispectral digital camera; when the difference value between the first image data and the second image data is smaller than a first threshold value, acquiring distance data of the unmanned aerial vehicle from the crops; when the distance data is larger than a second threshold value and smaller than a third threshold value, fusing the first image data and the second image data to obtain fused image data; and inputting the fused image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result. The method can effectively improve the positioning precision of pests, provides a basis for pest removal operations, and helps improve crop yield while saving resources. The method and the device can be widely applied in the technical field of artificial intelligence.

Description

Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a disease and pest positioning method, device, equipment and medium based on an intelligent deinsectization system.
Background
In recent years, the rapid development of high-resolution remote sensing, machine vision, control technology and the like has enabled efficient, high-precision and low-cost crop health monitoring. For example, technical schemes have appeared in the related art in which an unmanned vehicle automatically sprays pesticides, which greatly reduces the workload of farmers and can increase crop yield to a certain extent. In practice, however, it was found that although automatic spraying reduces human workload, it causes considerable waste: crops in some places grow well and do not need to be sprayed at all. In view of the above, the technical problems in the related art need to be solved.
Disclosure of Invention
The present application aims to solve at least one of the technical problems existing in the related art to a certain extent.
Therefore, an object of the embodiments of the present application is to provide a pest positioning method based on an intelligent deinsectization system, which can effectively detect pest positions in crops and facilitates targeted pesticide application.
It is another object of embodiments of the present application to provide a pest positioning device based on an intelligent pest killing system.
In order to achieve the technical purpose, the technical scheme adopted by the embodiment of the application comprises the following steps:
in a first aspect, an embodiment of the present application provides a method for positioning a pest based on an intelligent pest killing system, where the intelligent pest killing system includes an unmanned ground vehicle, a laser radar, a pest killing device and an unmanned aerial vehicle, the laser radar and the pest killing device are disposed on the unmanned ground vehicle, and the unmanned aerial vehicle tracks and flies above the unmanned ground vehicle; the unmanned aerial vehicle is provided with a visible light digital camera and a multispectral digital camera;
the method comprises the following steps:
acquiring first image data of crops through the visible light digital camera;
acquiring second image data of the crop through the multispectral digital camera;
when the difference value between the first image data and the second image data is smaller than a first threshold value, obtaining distance data of the unmanned aerial vehicle from the crop;
when the distance data is larger than a second threshold value and smaller than a third threshold value, fusing the first image data and the second image data to obtain fused image data;
and inputting the fused image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result.
In addition, the method according to the above embodiment of the present application may further have the following additional technical features:
further, in an embodiment of the present application, the fusing the first image data and the second image data to obtain fused image data includes:
grouping the first image data to obtain a plurality of groups of first image sub-data, and determining a first variance of each group of first image sub-data;
grouping the second image data to obtain a plurality of groups of second image sub-data, and determining a second variance of each group of second image sub-data;
determining a first optimal variance of the first image data according to the first variance, and determining a second optimal variance of the second image data according to the second variance;
determining a first weight corresponding to the first image data and a second weight corresponding to the second image data according to the first optimal variance and the second optimal variance;
and fusing the first image data and the second image data according to the first weight and the second weight to obtain the fused image data.
Further, in one embodiment of the present application, the first image data is grouped to obtain three groups of first image sub-data;
determining a first optimal variance of the first image data according to the first variance, which is specifically:
determining the first optimal variance of the first image data by the formula

$$\sigma_x^2=\left(\frac{1}{\sigma_{x_1}^2}+\frac{1}{\sigma_{x_2}^2}+\frac{1}{\sigma_{x_3}^2}\right)^{-1},$$

where $\sigma_x^2$ is the first optimal variance of the first image data, $\sigma_{x_1}^2$ is the first variance corresponding to the first group of first image sub-data, $\sigma_{x_2}^2$ is the first variance corresponding to the second group of first image sub-data, and $\sigma_{x_3}^2$ is the first variance corresponding to the third group of first image sub-data.
Further, in one embodiment of the present application, the method further comprises the steps of:
when the pixel difference value between the first image data and the second image data is larger than a first threshold value, obtaining distance data of the unmanned aerial vehicle from the crops;
and when the distance data is smaller than a second threshold value, inputting the first image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result.
Further, in one embodiment of the present application, the method further comprises the steps of:
when the pixel difference value between the first image data and the second image data is larger than a first threshold value, obtaining distance data of the unmanned aerial vehicle from the crops;
and when the distance data is larger than a third threshold value, inputting the second image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result.
Further, in one embodiment of the present application, the disease and pest localization neural network model includes a region candidate network and a classification network;
inputting the fused image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result, which specifically comprises the following steps:
inputting the fused image data into a region candidate network to obtain an identification candidate frame of the target;
classifying the identification candidate frames through the classification network to obtain a classification result of the target; the classification result is used for representing whether the target belongs to a disease and insect image;
and obtaining the prediction result of the positioning of the diseases and insects according to the classification result.
Further, in one embodiment of the present application, the method further comprises the steps of:
and performing Kalman filtering on the first image data and the second image data.
In a second aspect, an embodiment of the present application provides a pest positioning device based on an intelligent pest killing system, where the intelligent pest killing system includes an unmanned ground vehicle, a laser radar, a pest killing device and an unmanned aerial vehicle, the laser radar and the pest killing device are disposed on the unmanned ground vehicle, and the unmanned aerial vehicle tracks and flies above the unmanned ground vehicle; the unmanned aerial vehicle is provided with a visible light digital camera and a multispectral digital camera;
the pest positioning device comprises:
the first acquisition module is used for acquiring first image data of crops through the visible light digital camera;
the second acquisition module is used for acquiring second image data of the crop through the multispectral digital camera;
the processing module is used for acquiring distance data of the unmanned aerial vehicle from the crop when the difference value between the first image data and the second image data is smaller than a first threshold value;
the fusion module is used for fusing the first image data and the second image data to obtain fused image data when the distance data is larger than a second threshold value and smaller than a third threshold value;
and the prediction module is used for inputting the fused image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result.
In a third aspect, embodiments of the present application further provide a computer device, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the pest positioning method based on the intelligent deinsectization system of the first aspect described above.
In a fourth aspect, embodiments of the present application further provide a computer readable storage medium having stored therein a processor executable program, which when executed by a processor, is configured to implement the method for positioning a pest based on the intelligent pest control system of the first aspect.
The advantages and benefits of the present application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the present application.
According to the pest positioning method based on the intelligent deinsectization system provided by the embodiments of the application, first image data of crops are acquired through the visible light digital camera; second image data of the crops are acquired through the multispectral digital camera; when the difference value between the first image data and the second image data is smaller than a first threshold value, distance data of the unmanned aerial vehicle from the crops are acquired; when the distance data is larger than a second threshold value and smaller than a third threshold value, the first image data and the second image data are fused to obtain fused image data; and the fused image data is input into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result. The method can effectively improve the positioning precision of pests, provides a basis for fine-grained operations such as pest elimination, and helps improve crop yield while saving resources.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required for describing the embodiments are briefly introduced below. It should be understood that the drawings described below illustrate only some embodiments of the technical solutions of the present application, and that those skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an intelligent deinsectization system provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a pest positioning method based on an intelligent pest killing system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a pest positioning device based on an intelligent pest killing system according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
Precision agriculture emerged in the late 1980s and is an important component of the modern agricultural revolution; in the future it will serve as a breakthrough for advancing agricultural and rural modernization. It is estimated that 85% of future gains in grain yield will come from crop yield optimization and improved agronomic practices. Specifically, precision-agriculture technology builds on modern means such as 3S technology (remote sensing (RS), geographic information systems (GIS) and global positioning systems (GPS)), sensor technology and Internet-of-Things technology, and aims to achieve accurate control over the cultivation process: accurately monitoring crop growth, disaster and pest conditions and the like, and precisely adjusting inputs according to what is monitored, so that cultivation, irrigation, fertilization, pesticide application, seeding, harvesting and so on are carried out precisely, that is, an equal or higher harvest is achieved with minimum input. Among these, crop health monitoring is one of the foundations of precision agriculture and has received great attention from experts and farmers.
Based on the above application requirements, an intelligent deinsectization system is provided in the embodiments of the present application. Referring to fig. 1, the intelligent deinsectization system in the embodiment of the application includes an unmanned ground vehicle 3, a laser radar, a deinsectization device and an unmanned aerial vehicle 2. The laser radar and the deinsectization device are disposed on the unmanned ground vehicle 3: the laser radar detects the road conditions around the unmanned ground vehicle to guide its travel, and the deinsectization device sprays pesticide on the crops 1; for example, in some embodiments, the deinsectization device may include a pesticide storage device and a spray head. The unmanned aerial vehicle 2 tracks and flies above the unmanned ground vehicle 3; on the one hand, it collects information about the road ahead of the unmanned ground vehicle 3 and feeds it back to the vehicle, and on the other hand, it collects the pest distribution of the crops 1 near the unmanned ground vehicle 3, so that the amount of pesticide sprayed can be controlled and deinsectization can be carried out more accurately and economically. Specifically, the unmanned aerial vehicle 2 in the embodiment of the present application may be a rotary-wing unmanned aerial vehicle, which can take off and land vertically, can hover in the air, is suitable for working in complex environments, and carries a visible light digital camera and a multispectral digital camera. The intelligent deinsectization system in the embodiment of the application further includes a background host computer system for receiving various information and controlling the unmanned aerial vehicle 2, the unmanned ground vehicle 3 and the other components.
Referring to fig. 2, an embodiment of the present application provides a pest positioning method based on an intelligent pest killing system, which mainly includes the following steps:
step 110, acquiring first image data of crops by a visible light digital camera;
step 120, acquiring second image data of crops by a multispectral digital camera;
In this embodiment of the present application, as mentioned above, the unmanned aerial vehicle carries a visible light digital camera and a multispectral digital camera, and pest distribution can be located based on these image acquisition devices. When a crop is attacked by pests and diseases, it grows poorly for lack of nutrients and water, its spongy mesophyll tissue is damaged, and the pigment proportions of its leaves change, so that the two absorption valleys in the visible region become indistinct and the reflectance peak at certain wavelengths decreases in proportion to the degree of leaf damage. This change is more pronounced in the near infrared, where the peak becomes lower and may even vanish, and the wave-like features of the entire reflectance spectrum curve flatten out. Therefore, in the embodiment of the application, the visible light digital camera and the multispectral digital camera mounted on the unmanned aerial vehicle provide an ordinary digital remote-sensing image and a spectral image, namely the first image data and the second image data, respectively. Specifically, the acquired images may be formatted into digital data in the embodiment of the present application to obtain the image data.
Step 130, acquiring distance data of the unmanned aerial vehicle from the crops when a pixel difference value between the first image data and the second image data is smaller than a first threshold value;
step 140, when the distance data is greater than the second threshold value and smaller than the third threshold value, fusing the first image data and the second image data to obtain fused image data;
in this embodiment of the present application, due to the influence of the distance between the unmanned aerial vehicle and the crop, there may be a certain difference between the height and the visible light intensity during image data acquisition, so that the credibility of the first image data and the credibility of the second image data are different. Therefore, the applicable fields of the first image data and the second image data should be tested, respectively, to detect the optimal working environment thereof. Specifically, firstly, the difference between the first image data and the second image data can be judged, if the difference is larger, for example, exceeds a preset first threshold value, the difference of the credibility of the two data is larger, and at the moment, the distance data of the unmanned plane from the crop is continuously acquired: if the distance data is larger, for example, exceeds a preset third threshold, the reliability of the multispectral digital camera is higher, and only the second image data can be used for positioning the plant diseases and insect pests; otherwise, if the distance data is smaller, for example, smaller than a preset second threshold, the reliability of the visible light digital camera is higher at this time, and only the first image data can be used for positioning the plant diseases and insect pests. When the difference between the first image data and the second image data is smaller, for example, the difference is lower than a preset first threshold, the image results acquired by the two cameras are consistent, the distance data of the unmanned aerial vehicle from crops is continuously acquired at the moment, if the distance data is larger than the second threshold and smaller than a third threshold, the first image data and the second image data can be fused to obtain fused image data, diseases and insect pests are positioned according to the fused image data, the fused image data combines and complements the image data acquired by different image acquisition devices, and therefore, the high-precision image data can be acquired, and the unmanned aerial vehicle has the characteristics of strong timeliness, small influence by atmospheric radiation, high spatial resolution, abundant data quantity and the like. In some embodiments, the second threshold may be set to 10m, the third threshold may be set to 20m, and of course, the above values are all exemplified, and the actual threshold sizes of the respective thresholds may be flexibly set and adjusted as required. In some embodiments, the first image data and the second image data may also be kalman filtered to improve accuracy of the data and reduce interference of noise data.
In the embodiment of the application, when the first image data and the second image data are fused to obtain fused image data, the first image data are first grouped to obtain a plurality of groups of first image sub-data, and the first variance of each group of first image sub-data is determined; the second image data are likewise grouped to obtain a plurality of groups of second image sub-data, and the second variance of each group of second image sub-data is determined. A first optimal variance of the first image data is then determined from the first variances, and a second optimal variance of the second image data from the second variances; from the first optimal variance and the second optimal variance, a first weight corresponding to the first image data and a second weight corresponding to the second image data are determined, and the first image data and the second image data are fused according to the first weight and the second weight to obtain the fused image data. Specifically, taking the case where the first image data are divided into three groups, the mean of each group of first image sub-data is computed and the variance of each group is determined from that mean, giving $\sigma_{x_1}^2$ for the first group, $\sigma_{x_2}^2$ for the second group and $\sigma_{x_3}^2$ for the third group. The first optimal variance of the first image data is then determined by the formula

$$\sigma_x^2=\left(\frac{1}{\sigma_{x_1}^2}+\frac{1}{\sigma_{x_2}^2}+\frac{1}{\sigma_{x_3}^2}\right)^{-1}.$$

Similarly, the second image data may be divided into three groups and the second optimal variance $\sigma_z^2$ determined in the same way. From the first optimal variance and the second optimal variance, the first weight corresponding to the first image data can be determined:

$$W_1=\frac{\sigma_z}{\sigma_x+\sigma_z},$$

where $W_1$ is the first weight, $\sigma_z$ is the standard deviation corresponding to the second optimal variance, and $\sigma_x$ is the standard deviation corresponding to the first optimal variance. Subtracting $W_1$ from 1 gives the second weight, $W_2 = 1 - W_1$. The first image data and the second image data are then weighted and summed according to the first weight and the second weight to obtain the fused image data.
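A short Python sketch of the grouping-and-weighting computation follows. It assumes the reconstructed formulas above (an inverse-variance combination for the optimal variances and standard-deviation-based weights), and the grouping by a simple equal split of the flattened data is an illustrative choice.

```python
# Sketch of the variance-based weighted fusion described above. The
# inverse-variance combination and the split-into-equal-thirds grouping
# are assumptions consistent with the reconstructed formulas; group
# variances are assumed nonzero.
import numpy as np

def optimal_variance(image: np.ndarray, groups: int = 3) -> float:
    """Group the flattened image data and combine the group variances."""
    parts = np.array_split(image.ravel().astype(float), groups)
    group_vars = [float(np.var(p)) for p in parts]   # sigma_{x1}^2 ... sigma_{x3}^2
    return 1.0 / sum(1.0 / v for v in group_vars)    # sigma_x^2 (optimal variance)

def fuse_images(rgb: np.ndarray, spectral: np.ndarray) -> np.ndarray:
    """Weighted fusion of co-registered, same-shape first and second image data."""
    sigma_x = np.sqrt(optimal_variance(rgb))       # std of first optimal variance
    sigma_z = np.sqrt(optimal_variance(spectral))  # std of second optimal variance
    w1 = sigma_z / (sigma_x + sigma_z)             # first weight W1
    w2 = 1.0 - w1                                  # second weight W2 = 1 - W1
    return w1 * rgb.astype(float) + w2 * spectral.astype(float)
```

Note that a larger optimal variance for one camera drives its weight down, so the steadier source dominates the fused image.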
Step 150, inputting the fused image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result.
In the embodiment of the application, a Faster R-CNN network model may be chosen as the disease and pest positioning neural network model. The Faster R-CNN network model mainly comprises two parts: the first part is a region candidate network (Region Proposal Network, RPN), and the second part is a classification network based mainly on Fast R-CNN. The region candidate network predicts candidate regions in the image data that may contain targets and outputs identification candidate frames for the targets; the classification network classifies the identification candidate frames and corrects the positions of the candidate regions, and the two networks share the features of the convolution layers. Specifically, the detailed flow of the Faster R-CNN algorithm used in the embodiment of the application may be as follows: the image data is input into the model; a feature extraction network first obtains a feature map of the image data through a series of convolution and pooling operations; the region candidate network then extracts identification candidate frames for the targets; a Softmax classifier judges whether the target in each identification candidate frame belongs to the foreground or the background; and a bounding-box regressor corrects the target position to obtain the final prediction result. When a foreground target is identified, its identification candidate frame is automatically mapped onto the convolution feature map of the last layer to obtain a feature vector, which is used as the input of the ROI pooling layer; ROI pooling turns it into a feature map of fixed size, which then enters two fully connected branches that perform classification and bounding-box regression, so that accurate pest distribution information is obtained and pest positioning is completed. In the embodiment of the present application, the input data may be any one of the first image data, the second image data and the fused image data.
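For orientation only, here is a minimal inference sketch built on torchvision's stock Faster R-CNN as a stand-in for the patent's pest positioning network; the weight file name, the two-class setup (background vs. pest) and the score threshold are assumptions, not specified by the patent.

```python
# Minimal Faster R-CNN inference sketch; torchvision's stock detector stands
# in for the patent's network. Weight file, class count and score threshold
# are illustrative assumptions.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2)                          # classes: background, pest
model.load_state_dict(torch.load("pest_fasterrcnn.pth"))  # hypothetical weights
model.eval()

@torch.no_grad()
def locate_pests(image: torch.Tensor, score_thresh: float = 0.7):
    """image: float tensor (3, H, W) in [0, 1], RGB, spectral or fused data."""
    out = model([image])[0]                 # RPN proposals, then classified boxes
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["scores"][keep]  # (x1, y1, x2, y2) pest locations
```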
It can be understood that, in the embodiment of the application, after the position information of the pests is located by the pest positioning method, the deinsectization device can be controlled to apply pesticide in a targeted manner. For example, in some embodiments, a mechanical arm can be added to the deinsectization device to steer the spray head toward the area where the pests are located; in some embodiments, the density of the pests can be determined from the positioning result and the amount of sprayed pesticide adjusted accordingly, thereby saving pesticide and improving the benefit of spraying.
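As a sketch of this targeted-application step, the snippet below converts the predicted boxes into a pest density and a spray dosage; the linear density-to-dosage mapping and all constants are assumptions for illustration only.

```python
# Sketch of adjusting spray dosage from the localization result. The
# density-to-dosage mapping and the constants are illustrative assumptions.
def spray_dosage(boxes, image_width: int, image_height: int,
                 base_ml: float = 50.0, max_ml: float = 200.0) -> float:
    """Scale pesticide volume with the fraction of the image covered by pests."""
    pest_area = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes)
    density = pest_area / float(image_width * image_height)
    return min(max_ml, base_ml * (1.0 + 10.0 * density))  # linear, capped scaling
```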
The following describes in detail a pest positioning device based on an intelligent deinsectization system according to an embodiment of the present application with reference to the accompanying drawings.
Referring to fig. 3, in the embodiment of the present application, the intelligent deinsectization system on which the pest positioning device is based is the same as described above and is not repeated here. The pest positioning device includes:
a first acquisition module 101, configured to acquire first image data of a crop through a visible light digital camera;
a second obtaining module 102, configured to obtain second image data of the crop through collection by using a multispectral digital camera;
a processing module 103, configured to obtain distance data of the unmanned aerial vehicle from the crop when a difference between the first image data and the second image data is smaller than a first threshold;
a fusion module 104, configured to fuse the first image data and the second image data to obtain fused image data when the distance data is greater than the second threshold and less than the third threshold;
and the prediction module 105 is used for inputting the fused image data into the disease and pest positioning neural network model to obtain a disease and pest positioning prediction result.
It can be understood that the content in the above method embodiment is applicable to the embodiment of the present device, and the specific functions implemented by the embodiment of the present device are the same as those of the embodiment of the above method, and the achieved beneficial effects are the same as those of the embodiment of the above method.
Referring to fig. 4, an embodiment of the present application further provides a computer device, including:
at least one processor 201;
at least one memory 202 for storing at least one program;
the at least one program, when executed by the at least one processor 201, causes the at least one processor 201 to implement the pest positioning method embodiments described above.
Similarly, the content in the above method embodiment is applicable to the embodiment of the present computer device, and the functions specifically implemented by the embodiment of the present computer device are the same as those of the embodiment of the above method, and the achieved beneficial effects are the same as those achieved by the embodiment of the above method.
The present embodiment also provides a computer readable storage medium in which a program executable by the processor 201 is stored, the program executable by the processor 201 being configured to perform the above-described pest positioning method embodiment when executed by the processor 201.
Similarly, the content in the above method embodiment is applicable to the present computer-readable storage medium embodiment, and the functions specifically implemented by the present computer-readable storage medium embodiment are the same as those of the above method embodiment, and the beneficial effects achieved by the above method embodiment are the same as those achieved by the above method embodiment.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of this application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the present application is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features may be integrated in a single physical device and/or software module or one or more of the functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Thus, those of ordinary skill in the art will be able to implement the present application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the application, which is to be defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium may even be paper or other suitable medium upon which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the foregoing description of the present specification, descriptions of the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like, are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the principles and spirit of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present application have been described in detail, the present application is not limited to the embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (8)

1. The pest positioning method based on the intelligent pest killing system is characterized in that the intelligent pest killing system comprises an unmanned ground vehicle, a laser radar, a pest killing device and an unmanned aerial vehicle, wherein the laser radar and the pest killing device are arranged on the unmanned ground vehicle, and the unmanned aerial vehicle tracks and flies above the unmanned ground vehicle; the unmanned aerial vehicle is provided with a visible light digital camera and a multispectral digital camera;
the method comprises the following steps:
acquiring first image data of crops through the visible light digital camera;
acquiring second image data of the crop through the multispectral digital camera;
when the difference value between the first image data and the second image data is smaller than a first threshold value, obtaining distance data of the unmanned aerial vehicle from the crop;
when the distance data is larger than a second threshold value and smaller than a third threshold value, fusing the first image data and the second image data to obtain fused image data;
inputting the fused image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result;
the fusing the first image data and the second image data to obtain fused image data includes:
grouping the first image data to obtain a plurality of groups of first image sub-data, and determining a first variance of each group of first image sub-data;
grouping the second image data to obtain a plurality of groups of second image sub-data, and determining a second variance of each group of second image sub-data;
determining a first optimal variance of the first image data according to the first variance, and determining a second optimal variance of the second image data according to the second variance;
determining a first weight corresponding to the first image data and a second weight corresponding to the second image data according to the first optimal variance and the second optimal variance;
fusing the first image data and the second image data according to the first weight and the second weight to obtain fused image data;
grouping the first image data to obtain three groups of first image sub-data;
determining a first optimal variance of the first image data according to the first variance, which is specifically:
by the formula

$$\sigma_x^2=\left(\frac{1}{\sigma_{x_1}^2}+\frac{1}{\sigma_{x_2}^2}+\frac{1}{\sigma_{x_3}^2}\right)^{-1}$$

determining a first optimal variance of the first image data;

wherein $\sigma_x^2$ is the first optimal variance of the first image data, $\sigma_{x_1}^2$ is the first variance corresponding to the first group of first image sub-data, $\sigma_{x_2}^2$ is the first variance corresponding to the second group of first image sub-data, and $\sigma_{x_3}^2$ is the first variance corresponding to the third group of first image sub-data.
2. The method of claim 1, further comprising the step of:
when the pixel difference value between the first image data and the second image data is larger than a first threshold value, obtaining distance data of the unmanned aerial vehicle from the crops;
and when the distance data is smaller than a second threshold value, inputting the first image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result.
3. The method of claim 1, further comprising the step of:
when the pixel difference value between the first image data and the second image data is larger than a first threshold value, obtaining distance data of the unmanned aerial vehicle from the crops;
and when the distance data is larger than a third threshold value, inputting the second image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result.
4. The method of claim 1, wherein the disease and pest localization neural network model includes a region candidate network and a classification network;
inputting the fused image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result, wherein the prediction result specifically comprises the following steps:
inputting the fused image data into a region candidate network to obtain an identification candidate frame of the target;
classifying the identification candidate frames through the classification network to obtain a classification result of the target; the classification result is used for representing whether the target belongs to a disease and insect image;
and obtaining the prediction result of the positioning of the diseases and insects according to the classification result.
5. The method according to any one of claims 1-4, further comprising the step of:
and performing Kalman filtering on the first image data and the second image data.
6. The utility model provides a sick worm positioner based on intelligent deinsectization system, its characterized in that, intelligent deinsectization system includes unmanned ground vehicle, laser radar, deinsectization device and unmanned aerial vehicle, laser radar with deinsectization device set up in unmanned ground vehicle is last, unmanned aerial vehicle trail flight in unmanned ground vehicle's top; the unmanned aerial vehicle is provided with a visible light digital camera and a multispectral digital camera;
the pest positioning device comprises:
the first acquisition module is used for acquiring first image data of crops through the visible light digital camera;
the second acquisition module is used for acquiring second image data of the crop through the multispectral digital camera;
the processing module is used for acquiring distance data of the unmanned aerial vehicle from the crop when the difference value between the first image data and the second image data is smaller than a first threshold value;
the fusion module is used for fusing the first image data and the second image data to obtain fused image data when the distance data is larger than a second threshold value and smaller than a third threshold value;
the prediction module is used for inputting the fused image data into a disease and pest positioning neural network model to obtain a disease and pest positioning prediction result;
the fusing the first image data and the second image data to obtain fused image data includes:
grouping the first image data to obtain a plurality of groups of first image sub-data, and determining a first variance of each group of first image sub-data;
grouping the second image data to obtain a plurality of groups of second image sub-data, and determining a second variance of each group of second image sub-data;
determining a first optimal variance of the first image data according to the first variance, and determining a second optimal variance of the second image data according to the second variance;
determining a first weight corresponding to the first image data and a second weight corresponding to the second image data according to the first optimal variance and the second optimal variance;
fusing the first image data and the second image data according to the first weight and the second weight to obtain fused image data;
grouping the first image data to obtain three groups of first image sub-data;
determining a first optimal variance of the first image data according to the first variance, which is specifically:
by the formula

$$\sigma_x^2=\left(\frac{1}{\sigma_{x_1}^2}+\frac{1}{\sigma_{x_2}^2}+\frac{1}{\sigma_{x_3}^2}\right)^{-1}$$

determining a first optimal variance of the first image data;

wherein $\sigma_x^2$ is the first optimal variance of the first image data, $\sigma_{x_1}^2$ is the first variance corresponding to the first group of first image sub-data, $\sigma_{x_2}^2$ is the first variance corresponding to the second group of first image sub-data, and $\sigma_{x_3}^2$ is the first variance corresponding to the third group of first image sub-data.
7. A computer device, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any of claims 1-5.
8. A computer readable storage medium having stored therein instructions executable by a processor, characterized by: the processor-executable instructions, when executed by a processor, are for implementing the method of any one of claims 1-5.
CN202110086756.2A 2021-01-22 2021-01-22 Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system Active CN112766178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110086756.2A CN112766178B (en) 2021-01-22 2021-01-22 Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110086756.2A CN112766178B (en) 2021-01-22 2021-01-22 Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system

Publications (2)

Publication Number Publication Date
CN112766178A CN112766178A (en) 2021-05-07
CN112766178B (en) 2023-06-23

Family

ID=75702709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110086756.2A Active CN112766178B (en) 2021-01-22 2021-01-22 Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system

Country Status (1)

Country Link
CN (1) CN112766178B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671094A (en) * 2018-11-09 2019-04-23 杭州电子科技大学 A kind of eye fundus image blood vessel segmentation method based on frequency domain classification
CN110210434A (en) * 2019-06-10 2019-09-06 四川大学 Pest and disease damage recognition methods and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9025910B2 (en) * 2012-12-13 2015-05-05 Futurewei Technologies, Inc. Image retargeting quality assessment
EP3432263B1 (en) * 2017-07-17 2020-09-16 Siemens Healthcare GmbH Semantic segmentation for cancer detection in digital breast tomosynthesis

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671094A (en) * 2018-11-09 2019-04-23 杭州电子科技大学 A kind of eye fundus image blood vessel segmentation method based on frequency domain classification
CN110210434A (en) * 2019-06-10 2019-09-06 四川大学 Pest and disease damage recognition methods and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vehicle image illumination detection model based on improved KNN-SVM; Hao Bei; Yang Dali; Computer Engineering and Applications, (24), pp. 212-217 *

Also Published As

Publication number Publication date
CN112766178A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
Subeesh et al. Automation and digitization of agriculture using artificial intelligence and internet of things
US20230345928A1 (en) Identifying and avoiding obstructions using depth information in a single image
US10721859B2 (en) Monitoring and control implement for crop improvement
US20230240195A1 (en) Plant treatment based on morphological and physiological measurements
US20210360850A1 (en) Automatic driving system for grain processing, automatic driving method, and path planning method
CN109588107A (en) Harvester and its automatic Pilot method
CN204142639U (en) Be positioned at the crop disease and insect detection system on unmanned plane
US20220101554A1 (en) Extracting Feature Values from Point Clouds to Generate Plant Treatments
Jhatial et al. Deep learning-based rice leaf diseases detection using Yolov5
Raman et al. Robotic Weed Control and Biodiversity Preservation: IoT Solutions for Sustainable Farming
US20210365037A1 (en) Automatic driving system for grain processing, automatic driving method, and automatic identification method
CN114723667A (en) Agricultural fine planting and disaster prevention control system
Kumar et al. Role of artificial intelligence, sensor technology, big data in agriculture: next-generation farming
CN112699729A (en) Unmanned aerial vehicle investigation and attack integrated weeding method
CN117496356A (en) Agricultural artificial intelligent crop detection method and system
Zhao et al. Cabbage and weed identification based on machine learning and target spraying system design
US20220100996A1 (en) Ground Plane Compensation in Identifying and Treating Plants
CN112766178B (en) Disease and pest positioning method, device, equipment and medium based on intelligent deinsectization system
Bhusal et al. Bird deterrence in a vineyard using an unmanned aerial system (uas)
Esau et al. Artificial intelligence and deep learning applications for agriculture
Zhang et al. Automatic counting of lettuce using an improved YOLOv5s with multiple lightweight strategies
Ariza-Sentís et al. Object detection and tracking in Precision Farming: a systematic review
Karegowda et al. Deep learning solutions for agricultural and farming activities
Paul et al. Smart agriculture using UAV and deep learning: a systematic review
CN112925310B (en) Control method, device, equipment and storage medium of intelligent deinsectization system

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant