CN115019216A - Real-time ground object detection and positioning counting method, system and computer - Google Patents


Info

Publication number
CN115019216A
Authority
CN
China
Prior art keywords
real-time
ground object
buffer area
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210947300.5A
Other languages
Chinese (zh)
Other versions
CN115019216B (en)
Inventor
黄敏 (Huang Min)
龚道宏 (Gong Daohong)
陈皆红 (Chen Jiehong)
林珲 (Lin Hui)
齐述华 (Qi Shuhua)
肖长江 (Xiao Changjiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Normal University
Original Assignee
Jiangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Normal University filed Critical Jiangxi Normal University
Priority to CN202210947300.5A
Publication of CN115019216A
Application granted
Publication of CN115019216B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/778 Active pattern-learning, e.g. online learning of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a real-time ground object detection and positioning counting method, system and computer. The method comprises: selecting target images and obtaining an optimal weight; arranging a first base station and a second base station in the area to be counted to construct a virtual plane coordinate system; controlling an unmanned aerial vehicle to collect ground object images, receiving the data it transmits, and calculating the coordinates of the unmanned aerial vehicle; predicting on the ground object images in real time and judging whether a target ground object is present; if so, extracting the centroid coordinates of the target ground object, calculating its coordinates in the virtual plane coordinate system, and generating a buffer area; acquiring a first buffer area and a second buffer area separated by a time difference, and judging whether their degree of overlap is smaller than a standard value; if it is, judging that a newly added target ground object exists in the second buffer area, and updating and counting the newly added target ground object. The method saves counting cost, acquires positioning information of target ground objects in real time through deep learning and integrated air-ground equipment, and improves the efficiency of real-time positioning and counting.

Description

Real-time ground object detection and positioning counting method, system and computer
Technical Field
The invention relates to the technical field of data processing, and in particular to a real-time ground object detection and positioning counting method, system and computer.
Background
Ground object target detection and counting is of great significance for ground object statistics and resource surveys. Existing approaches still rely on manual counting or area estimation, which are time-consuming, imprecise and unreliable; these shortcomings are especially pronounced when real-time counts are needed after natural disasters and similar events.
In recent years, unmanned aerial vehicles have emerged with the advantages of high flight speed, wide operating range and a high degree of intelligence, so the prior art has turned to unmanned aerial vehicles for ground object target detection and counting.
Most of the prior art provides accurate navigation and timing services for the unmanned aerial vehicle through a global navigation satellite system. However, a global navigation satellite system depends on space satellites, and in the process its signal is easily affected by obstructions overhead, so inaccurate positioning and drift readily occur and introduce errors into the results of ground object target detection and counting.
Disclosure of Invention
Based on this, the present invention aims to provide a real-time ground object detection and positioning counting method, system and computer, so as to solve the problem in the prior art that the navigation and timing services supplied to the unmanned aerial vehicle by a global navigation satellite system depend on space satellites, are easily affected by obstructions overhead, and therefore suffer from inaccurate positioning and drift, introducing errors into the results of ground object target detection and counting.
A first aspect of the embodiments of the present invention provides a real-time ground object detection and positioning counting method, which comprises the following steps:
selecting a target image of a target ground object required to be acquired by the unmanned aerial vehicle, training and verifying the target image to obtain a corresponding optimal weight, and replacing the initial weight with the optimal weight;
respectively arranging a first base station and a second base station at preset positions of an area to be counted, and constructing a corresponding virtual plane coordinate system based on the first base station and the second base station;
controlling an unmanned aerial vehicle to collect ground feature images in the area to be counted according to a preset air route, receiving height data, length data and angle data transmitted by the unmanned aerial vehicle in the air route, and calculating the coordinate of the unmanned aerial vehicle in the virtual plane coordinate system according to the height data, the length data and the angle data;
predicting the surface feature image in real time and judging whether an effective target surface feature exists in the surface feature image or not;
if the feature image is judged to have the effective target feature, extracting the centroid coordinate of the target feature in the feature image, and calculating the coordinate of the target feature in the virtual plane coordinate system according to the centroid coordinate so as to generate a corresponding buffer area in the virtual plane coordinate system;
acquiring a first buffer area generated by the unmanned aerial vehicle at a first time and a second buffer area generated by the unmanned aerial vehicle at a second time, and judging whether the overlapping degree between the second buffer area and the first buffer area is smaller than a preset standard value or not;
and if the overlapping degree between the second buffer area and the first buffer area is smaller than the preset standard value, judging that a newly added target ground object exists in the second buffer area, and refreshing the positioning information and carrying out quantity iterative statistics on the newly added target ground object.
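The final counting step above can be sketched in a few lines. The following is a minimal illustration (the event format and the 0.3 standard value are assumptions, not the patent's implementation): a detection is counted as a newly added target ground object only when the overlap degree of its buffer with the previously generated buffer falls below the preset standard value.

```python
def count_new_targets(buffer_events, standard=0.3):
    # Each event is (time, overlap_with_previous_buffer); a target is
    # counted as newly added only when the overlap degree is below the
    # preset standard value.
    count = 0
    for _time, overlap in buffer_events:
        if overlap < standard:
            count += 1
    return count

# Three buffers: the second heavily overlaps the first, so only the
# first and third detections are counted as new targets.
events = [(1, 0.0), (2, 0.9), (3, 0.1)]
total = count_new_targets(events)  # → 2
```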
The beneficial effects of the invention are as follows: a target image of the target ground object to be acquired by the unmanned aerial vehicle is selected, the corresponding optimal weight is obtained, and the initial weight is replaced with it; the first base station and the second base station are arranged at preset positions in the area to be counted, and a corresponding virtual plane coordinate system is constructed; the unmanned aerial vehicle is then controlled to collect ground object images in the area to be counted along a preset route, the data it transmits are received, and its current coordinates in the virtual plane coordinate system are calculated from the received data; on this basis, the ground object images are predicted in real time to judge whether an effective target ground object is present; if so, the centroid coordinates of the target ground object in the image are extracted, its coordinates in the virtual plane coordinate system are calculated from them, and a corresponding buffer area is generated in that coordinate system; a first buffer area generated by the unmanned aerial vehicle at a first time and a second buffer area generated at a second time are acquired, and whether the degree of overlap between them is smaller than a preset standard value is judged; if it is, a newly added target ground object is judged to exist in the second buffer area, and its positioning information is refreshed and its quantity iteratively counted.
In this way, the unmanned aerial vehicle and the deep learning algorithm are combined, and the unmanned aerial vehicle only needs to collect the images to be counted and the related surface data, which saves counting cost. The data are transmitted to the server in real time, and separating data acquisition from computing through the server speeds up recognition and statistics by the deep learning algorithm, correspondingly improving the efficiency of ground object recognition and target counting. In addition, positioning information is acquired in real time through communication with the ground base stations; the degree of automation is high, little manual participation is needed, labor cost is saved, and the counting process, results and positioning information for target ground objects in the study area can be checked in real time. Compared with traditional manual counting, the deep-learning-based method is faster, more accurate and more precise, adapts to complex environments, and is suitable for large-scale deployment.
Preferably, the step of respectively arranging the first base station and the second base station at preset positions of an area to be counted and constructing a corresponding virtual plane coordinate system based on the first base station and the second base station includes:
determining a corresponding area to be counted according to the target ground object, and determining the pair of diagonally opposite corners separated by the longest distance in the area to be counted, so as to locate the first base station and the second base station at those corners respectively;
and taking the first base station as the coordinate origin, and correspondingly constructing the virtual plane coordinate system based on the origin and the rectangle whose diagonal extends from the first base station to the second base station.
Preferably, the step of predicting the feature image in real time and determining whether there is an effective target feature in the feature image includes:
predicting the ground object image in real time to determine a real-time confidence corresponding to the ground object image;
judging whether the real-time confidence is greater than a preset threshold;
and if the real-time confidence is judged to be greater than the preset threshold, judging that the target ground object in the ground object image is an effective target.
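The confidence check above amounts to a simple filter over the detector's outputs. A minimal sketch follows; the dictionary layout and the 0.5 default threshold are illustrative assumptions, not values from the patent.

```python
def filter_valid_detections(detections, conf_threshold=0.5):
    # Keep only detections whose real-time confidence exceeds the preset
    # threshold; those are treated as effective target ground objects.
    return [d for d in detections if d["confidence"] > conf_threshold]

detections = [
    {"label": "tree", "confidence": 0.91},
    {"label": "tree", "confidence": 0.32},
]
valid = filter_valid_detections(detections, conf_threshold=0.5)
# only the 0.91 detection is kept as an effective target
```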
Preferably, after the step of determining that the newly added target feature exists in the second buffer, and performing location information refreshing and quantity iterative statistics on the newly added target feature, the method further includes:
and performing real-time ortho-image and coordinate updating processing on the ground object image and the newly added target ground object, and making a corresponding statistical chart.
Preferably, the method further comprises:
and if the degree of overlap between the second buffer area and the first buffer area is judged to be greater than the preset standard value, judging that the second buffer area and the first buffer area correspond to the same target ground object, and regarding the target ground object in the second buffer area as an invalid target ground object.
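One plausible way to compute the degree of overlap between two buffer areas, assuming (as the sketch below does, since the patent does not fix the geometry) that both buffers are circles of equal radius around the target centroids, is the intersection area of the two circles divided by the area of one circle:

```python
import math

def buffer_overlap_degree(c1, c2, r):
    # Overlap degree of two equal-radius circular buffers, defined here
    # (an assumption) as intersection area / area of one buffer.
    d = math.dist(c1, c2)
    if d >= 2 * r:
        return 0.0  # disjoint buffers
    # Standard circle-circle intersection area for equal radii.
    inter = 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)
    return inter / (math.pi * r * r)

def is_new_target(c1, c2, r, standard=0.3):
    # New target when overlap falls below the preset standard value.
    return buffer_overlap_degree(c1, c2, r) < standard
```

Coincident centroids give an overlap degree of 1.0 (same target), while well-separated centroids give 0.0 (newly added target).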
The second aspect of the embodiments of the present invention provides a real-time surface feature detection and location counting system, where the system includes:
the training module is used for selecting a target image of a target ground object required to be acquired by the unmanned aerial vehicle, training and verifying the target image to obtain a corresponding optimal weight, and replacing the initial weight with the optimal weight;
the device comprises a construction module, a calculation module and a calculation module, wherein the construction module is used for respectively arranging a first base station and a second base station at preset positions of an area to be counted and constructing a corresponding virtual plane coordinate system based on the first base station and the second base station;
the transmission module is used for controlling an unmanned aerial vehicle to collect ground feature images in the area to be counted according to a preset air route, receiving height data, length data and angle data transmitted by the unmanned aerial vehicle in the air route, and calculating the coordinate of the unmanned aerial vehicle in the virtual plane coordinate system according to the height data, the length data and the angle data;
the prediction module is used for predicting the surface feature image in real time and judging whether an effective target surface feature exists in the surface feature image;
the extraction module is used for extracting the centroid coordinate of the target ground object in the ground object image if the ground object image is judged to have the effective target ground object, and calculating the coordinate of the target ground object in the virtual plane coordinate system according to the centroid coordinate so as to generate a corresponding buffer area in the virtual plane coordinate system;
the judging module is used for acquiring a first buffer area generated by the unmanned aerial vehicle at a first time and a second buffer area generated by the unmanned aerial vehicle at a second time, and judging whether the overlapping degree between the second buffer area and the first buffer area is smaller than a preset standard value or not;
and the first processing module is used for judging that a newly added target ground object exists in the second buffer area and refreshing the positioning information and carrying out quantity iteration statistics on the newly added target ground object if the overlapping degree between the second buffer area and the first buffer area is smaller than the preset standard value.
In the real-time surface feature detection and positioning counting system, the construction module is specifically configured to:
determining a corresponding area to be counted according to the target ground object, and determining the pair of diagonally opposite corners separated by the longest distance in the area to be counted, so as to locate the first base station and the second base station at those corners respectively;
and taking the first base station as the coordinate origin, and correspondingly constructing the virtual plane coordinate system based on the origin and the rectangle whose diagonal extends from the first base station to the second base station.
In the real-time surface feature detection and location counting system, the prediction module is specifically configured to:
predicting the ground object image in real time to determine a real-time confidence corresponding to the ground object image;
judging whether the real-time confidence is greater than a preset threshold;
and if the real-time confidence is judged to be greater than the preset threshold, judging that the target ground object in the ground object image is an effective target.
In the above real-time surface feature detection and positioning counting system, the real-time surface feature detection and positioning counting system further includes an update module, and the update module is specifically configured to:
and performing real-time ortho-image and coordinate updating processing on the ground object image and the newly added target ground object, and making a corresponding statistical chart.
In the above real-time surface feature detecting and positioning counting system, the real-time surface feature detecting and positioning counting system further includes a second processing module, and the second processing module is specifically configured to:
and if the degree of overlap between the second buffer area and the first buffer area is judged to be greater than the preset standard value, judging that the second buffer area and the first buffer area correspond to the same target ground object, and regarding the target ground object in the second buffer area as an invalid target ground object.
A third aspect of the embodiments of the present invention provides a computer, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the real-time feature detection and location counting method as described above when executing the computer program.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart of a real-time feature detection and location counting method according to a first embodiment of the present invention;
fig. 2 is a schematic view of a virtual plane coordinate system in the real-time feature detection and location counting method according to the first embodiment of the present invention;
fig. 3 is a schematic view of an unmanned aerial vehicle route in the real-time ground object detection and positioning counting method according to the first embodiment of the invention;
fig. 4 is a block diagram of a real-time feature detection and location counting system according to a second embodiment of the present invention.
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Several embodiments of the invention are presented in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In the prior art, a global navigation satellite system is mostly used to provide accurate navigation and timing services for the unmanned aerial vehicle. However, such a system relies on space satellites, is easily affected by obstructions overhead, and therefore readily suffers from inaccurate positioning and drift, introducing errors into the results of ground object target detection and counting.
Referring to fig. 1, which is a flowchart of the real-time ground object detection and positioning counting method according to a first embodiment of the present invention: the method combines an unmanned aerial vehicle with a deep learning algorithm, and only requires the unmanned aerial vehicle to collect the images to be counted and the related surface data, which saves counting cost. The data are transmitted to the server in real time, and separating data acquisition from computing through the server speeds up recognition and statistics by the deep learning algorithm, correspondingly improving the efficiency of ground object recognition and target counting. In addition, positioning information is acquired in real time through communication with the ground base stations; the degree of automation is high, little manual participation is needed, labor cost is saved, and the counting process, results and positioning information for target ground objects in the study area can be checked in real time. Compared with traditional manual counting, the deep-learning-based method is faster, more accurate and more precise, adapts to complex environments, and is suitable for large-scale deployment.
Specifically, the real-time surface feature detection and location counting method provided by this embodiment specifically includes the following steps:
step S10, selecting a target image of a target ground object to be acquired by the unmanned aerial vehicle, training and verifying the target image to obtain a corresponding optimal weight, and replacing the initial weight with the optimal weight;
specifically, in this embodiment, it should be noted that the real-time ground object detecting and positioning counting method provided in this embodiment is specifically applied between an unmanned aerial vehicle, a base station, and a server arranged in the background, and is used for implementing real-time statistics on a target ground object in a to-be-counted area by the unmanned aerial vehicle. Wherein, can realize mutual information interaction between unmanned aerial vehicle, basic station and the server three that sets up at the background to accomplish the detection and the location count of target ground object.
It should be pointed out that the unmanned aerial vehicle is internally provided with a data acquisition module, a communication module, a storage module, an angle measurement module, a height module, a two-way ranging module and a control module; correspondingly, the server is internally provided with a communication module, a calculation module, a data preprocessing module, a deep learning module, a statistics module and a storage module.
Therefore, in this step, before the method provided in this embodiment is carried out, the target image of the target ground object to be acquired by the unmanned aerial vehicle is selected in real time according to the actual demand of the user, and the selected target image is trained and verified by the deep learning module in the server to obtain the corresponding optimal weight; the default initial weight inside the server is then replaced with this optimal weight so as to improve the recognition rate of the target ground object.
Specifically, in this embodiment, navel orange trees are taken as the research object, and the real-time ground object detection and positioning counting method provided in this embodiment is used to position the navel orange trees in a navel orange orchard and count their number.
More specifically, in this embodiment, an unmanned aerial vehicle carrying a camera collects images of the navel orange trees at a fixed overhead shooting angle, and the samples are then labeled and assigned attributes by a preset program. Specifically, the whole crown of each fruit tree is framed, the framed navel orange trees are assigned labels to correspondingly build a navel orange tree sample library, and the library is then randomly divided into a training set and a validation set at a ratio of 8:2, stratified by changes in lighting, tree-overlap and whole-image pixels. The training set is used to build the training model (a YOLOv5 convolutional neural network), and the validation set is used to verify the model's precision and generalization ability.
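The 8:2 random division described above can be sketched as follows; the function name, the fixed seed and the file-name pattern are illustrative assumptions, not part of the patent.

```python
import random

def split_samples(sample_paths, train_ratio=0.8, seed=42):
    # Randomly split the sample library into a training set and a
    # validation set at the stated 8:2 ratio.
    shuffled = sample_paths[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

samples = [f"navel_orange_{i:04d}.jpg" for i in range(100)]
train_set, val_set = split_samples(samples)  # 80 training, 20 validation
```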
Step S20, respectively arranging a first base station and a second base station at preset positions of an area to be counted, and constructing a corresponding virtual plane coordinate system based on the first base station and the second base station;
further, in this step, it should be noted that, taking a navel orange tree as an example, the area to be counted is a plantation area of the navel orange tree, and on this basis, as shown in fig. 2, it should be noted that, in this step, a first base station a is placed at a diagonal position of a maximum boundary line of a current navel orange orchard 1 And a second base station A 2 Wherein the first base station A 1 Main base station, second base station A 2 Is a secondary base station, and further, the server is based on the first base station A 1 And a second base station A 2 And constructing a corresponding virtual plane coordinate system.
In this embodiment, it should be noted that the step of respectively locating the first base station and the second base station at preset positions in the area to be counted and constructing a corresponding virtual plane coordinate system based on the first base station and the second base station includes:
determining a corresponding area to be counted according to the target ground object, and determining the pair of diagonally opposite corners separated by the longest distance in the area to be counted, so as to locate the first base station and the second base station at those corners respectively;
and taking the first base station as the coordinate origin, and correspondingly constructing the virtual plane coordinate system based on the origin and the rectangle whose diagonal extends from the first base station to the second base station.
Taking the navel orange orchard as an example, the processing module in the server takes the first base station A1 as the origin of coordinates and, according to the first base station A1 and the second base station A2, constructs a corresponding rectangle whose diagonal vertices are A1 and A2; the rectangle side forming the larger included angle with the diagonal direction is taken as the X axis and the other side as the Y axis, so as to correspondingly construct a virtual plane coordinate system in which the first base station A1 is located at (0, 0).
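In this coordinate system A1 sits at the origin and A2 at the opposite diagonal vertex of the rectangle. As a minimal sketch (the diagonal length and the angle between the diagonal and the X axis are assumed inputs, since the patent derives them from the orchard boundary), A2's coordinates follow directly from basic trigonometry:

```python
import math

def place_second_base_station(diagonal_length, diagonal_angle_deg):
    # Coordinates of the second base station A2 in the virtual plane
    # coordinate system whose origin is the first base station A1 (0, 0),
    # given the diagonal's length and its angle to the X axis.
    a = math.radians(diagonal_angle_deg)
    return (diagonal_length * math.cos(a), diagonal_length * math.sin(a))

# A 100 m diagonal at 30 degrees above the X axis.
x2, y2 = place_second_base_station(100.0, 30.0)
```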
Step S30, controlling an unmanned aerial vehicle to collect ground feature images in the area to be counted according to a preset air route, receiving height data, length data and angle data transmitted by the unmanned aerial vehicle in the air route, and calculating the coordinates of the unmanned aerial vehicle in the virtual plane coordinate system according to the height data, the length data and the angle data;
specifically, in this step, it should be noted that, after the server constructs a virtual plane coordinate system corresponding to the current region to be counted, that is, after the virtual plane coordinate system is constructed in the navel orange orchard, the unmanned aerial vehicle is enabled in this step, and the unmanned aerial vehicle is further controlled to collect the ground feature images in the region to be counted according to a preset air route, where the parameters of the unmanned aerial vehicle provided in this step are the same as those of the unmanned aerial vehicle in step S10.
Further, as shown in fig. 3, it should be noted that, in this step, the unmanned aerial vehicle is controlled according to the user's requirement to collect the ground feature images in the above-mentioned area to be counted along a pre-planned route, that is, data collection is performed over the whole navel orange orchard area. A fixed viewing angle of the unmanned aerial vehicle's lens (the same as that in step S10) is used to collect images of the navel orange trees in the current orchard, and at the same time it is ensured that adjacent collected images have an overlapping area, so that an orthographic image of the navel orange orchard can be spliced.
Specifically, in this step, it should be further explained that the operator controls the flight of the unmanned aerial vehicle. At each shooting time t, the height module inside the unmanned aerial vehicle records its current height H_t above the ground; the ranging module records the straight-line distances D_1t and D_2t between the unmanned aerial vehicle and the first base station A1 and the second base station A2, respectively; the angle measuring module inside the unmanned aerial vehicle records the included angles θ_1t and θ_2t between the current unmanned aerial vehicle and the first base station A1 and the second base station A2, respectively, and then records the angle β_1t between the current unmanned aerial vehicle and the first base station A1 in the horizontal plane. The data collection module inside the unmanned aerial vehicle then collects the ground surface image in the area to be counted, and finally the communication module transmits the collected information and images to the server; during the flight, each newly collected image is transmitted to the server in real time.
The operator takes off the unmanned aerial vehicle. Suppose that, at time t, the horizontal distance between the unmanned aerial vehicle and the first base station A1 is X_1t; then, by the tangent relation and the ranging module, tan(θ_1t) = X_1t / H_t, so that X_1t = tan(θ_1t) × H_t. In the same way, the horizontal distance X_2t between the unmanned aerial vehicle and the second base station A2 can be obtained. On this basis, the calculation module in the server uses the calculated distance data and angle data to compute the coordinate position, in the virtual plane coordinate system, of the vertical projection of the unmanned aerial vehicle at time t.
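The tangent relation above can be sketched as follows. This is a hedged illustration: it assumes β_1t gives the bearing of the drone's ground projection from A1 in the virtual plane, which is how this embodiment appears to use it.

```python
import math

def uav_projection(h_t: float, theta_1t: float, beta_1t: float):
    """Vertical projection of the drone at time t in the virtual plane.
    h_t: height above ground; theta_1t: angle satisfying tan(theta) = X / H;
    beta_1t: horizontal-plane angle between the drone and base station A1.
    Returns (x, y) of the projection, with A1 at the origin."""
    x_1t = math.tan(theta_1t) * h_t          # horizontal distance to A1
    return (x_1t * math.cos(beta_1t), x_1t * math.sin(beta_1t))
```

For example, at 10 m altitude with θ_1t = 45°, the horizontal distance to A1 equals the altitude, 10 m.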
Step S40, predicting the feature image in real time and judging whether the feature image has an effective target feature;
furthermore, in this step, it should be noted that, after the corresponding ground feature image is acquired by the unmanned aerial vehicle, the server predicts the ground feature image acquired in real time, and further determines whether the target ground object exists in that image, that is, whether the image is an effective image.
In this step, it should be noted that the step of predicting the feature image in real time and determining whether the feature image has the effective target feature includes:
predicting the surface feature image in real time to determine a real-time confidence threshold corresponding to the surface feature image;
judging whether the real-time confidence coefficient threshold is larger than a preset threshold or not;
and if the real-time confidence coefficient threshold is judged to be larger than the preset threshold, judging that the target ground object exists in the ground object image as an effective image.
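The three decision steps above amount to a simple threshold check. A minimal sketch, substituting the 60% IOU value used later in this embodiment for the preset threshold:

```python
def is_effective_image(confidence: float, threshold: float = 0.6) -> bool:
    """Return True when the real-time confidence exceeds the preset
    threshold, i.e. the image is judged to contain the target ground
    object and is kept as an effective image."""
    return confidence > threshold

# Frames whose predicted confidence exceeds 60% are kept as effective images.
frames = [0.85, 0.40, 0.61, 0.59]
effective = [c for c in frames if is_effective_image(c)]
```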
It should be noted that, the deep learning inference submodule in the server may predict, in real time, the surface feature image acquired in real time, specifically:
the server extracts the target centroid coordinates from every ground feature image whose real-time confidence threshold satisfies IOU > 60%, and then transmits the centroid coordinates to the unmanned aerial vehicle through the communication module; the unmanned aerial vehicle records, through its internal laser module, the distance to the current target centroid and the horizontal and vertical angles relative to the current unmanned aerial vehicle.
On this basis, the unmanned aerial vehicle transmits the collected ground feature image through the communication module to the deep learning inference submodule in the server, and this submodule predicts the navel orange trees in the collected image. If the prediction result satisfies IOU > 60%, the target is judged to be a navel orange tree; an effective-target record is made, the target centroid coordinates (that is, the center coordinates of the navel orange tree target) are extracted, and the laser module in the unmanned aerial vehicle records the distance to the effective target centroid and the horizontal and vertical angles relative to the current unmanned aerial vehicle. Meanwhile, the data collection module in the unmanned aerial vehicle transmits the collected images of the navel orange orchard to the deep learning target detection module of the server in real time, that is, each newly shot image is transmitted to the server in real time over a wireless network.
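As one illustration of the centroid-extraction step, assuming detections arrive as axis-aligned bounding boxes (a hypothetical format; the patent does not specify one), the centroid of each effective detection can be taken as the box center:

```python
def extract_centroids(detections, iou_threshold=0.6):
    """detections: list of (iou, (x1, y1, x2, y2)) bounding boxes in image
    coordinates. Returns the centroid of every detection whose confidence
    (IOU) exceeds the threshold -- these are the effective targets."""
    centroids = []
    for iou, (x1, y1, x2, y2) in detections:
        if iou > iou_threshold:
            centroids.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))
    return centroids

# Only the first detection passes the 60% threshold; its centroid is the box center.
dets = [(0.9, (10, 10, 30, 50)), (0.4, (0, 0, 5, 5))]
```

These image-space centroids are what the server would then convert into virtual-plane coordinates using the drone pose and laser measurements.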
Step S50, if the feature image is judged to have an effective target feature, extracting a centroid coordinate of the target feature in the feature image, and calculating a coordinate of the target feature in the virtual plane coordinate system according to the centroid coordinate to generate a corresponding buffer area in the virtual plane coordinate system;
specifically, in this step, it should be noted that, if the server determines that the currently received surface feature image is an effective image, the server immediately extracts the centroid coordinates of the target surface feature in the current surface feature image, and generates a corresponding buffer area in the virtual plane coordinate system according to the centroid coordinates extracted in real time.
Step S60, acquiring a first buffer area generated by the unmanned aerial vehicle at a first time and a second buffer area generated by the unmanned aerial vehicle at a second time, and judging whether the overlapping degree between the second buffer area and the first buffer area is smaller than a preset standard value;
further, in this step, it should be noted that the server acquires a first buffer area generated at a first time and a second buffer area generated at a second time during the flight of the current unmanned aerial vehicle, and determines whether the overlapping degree between the first buffer area and the second buffer area is smaller than a preset standard value in real time.
Step S70, if it is determined that the overlap between the second buffer area and the first buffer area is smaller than the preset standard value, it is determined that a newly added target feature exists in the second buffer area, and location information refreshing and quantity iterative statistics are performed on the newly added target feature.
Finally, in this step, it should be noted that the calculation module in the current server locates the real-time target ground object in the virtual plane coordinate system according to the information obtained in the above steps, and generates an appropriate buffer area in the coordinate system for each target. Specifically, replacing the first time with t1 and the second time with t2: if the second buffer area generated at time t2 overlaps the first buffer area generated at time t1 by more than 60% in area, the server regards the two as the same target ground object, and the calculation module treats the target corresponding to time t2 as an invalid target ground object. If the overlap in area between the second buffer area generated at time t2 and the first buffer area generated at time t1 is less than 60%, the target is regarded as a new one: the calculation module treats the target ground object corresponding to time t2 as an effective target ground object, and the statistics module in the server then refreshes the positioning information of the effective target ground object and iteratively counts the number of effective target ground objects.
The calculation module in the server calculates in real time, from the data acquired in the above steps, the coordinates (X_a, Y_a), (X_b, Y_b), … of the navel orange trees in the virtual plane coordinate system, where (X_a, Y_a) is the navel orange coordinate calculated at t1 and (X_b, Y_b) is the navel orange coordinate calculated at t2. A 4 m buffer area is generated around each of (X_a, Y_a) and (X_b, Y_b). If the buffer generated at time t2 overlaps the buffer generated at time t1 by more than 60% in area, the two are considered the same navel orange tree, and the calculation module treats the t2 target as an invalid navel orange tree. If the overlap in area is less than 60%, a newly added navel orange tree is identified: the calculation module treats the t2 target ground object as an effective navel orange tree, and the statistics module then refreshes the positioning information of the effective target ground object and iteratively counts the number.
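The 4 m buffer test can be sketched with circular buffers; the circle–circle intersection formula below is standard geometry, while the 4 m radius and 60% threshold come from this embodiment:

```python
import math

BUFFER_RADIUS = 4.0   # metres, per this embodiment
OVERLAP_LIMIT = 0.6   # 60% area-overlap threshold

def overlap_degree(p, q, r=BUFFER_RADIUS):
    """Fraction of one buffer circle's area covered by its intersection
    with the other buffer circle (equal radii r, centres p and q)."""
    d = math.hypot(p[0] - q[0], p[1] - q[1])
    if d >= 2 * r:
        return 0.0
    inter = (2 * r * r * math.acos(d / (2 * r))
             - (d / 2) * math.sqrt(4 * r * r - d * d))
    return inter / (math.pi * r * r)

def is_new_target(p_t1, p_t2):
    """True when the t2 buffer overlaps the t1 buffer by less than 60%,
    i.e. the t2 detection is a newly added target ground object."""
    return overlap_degree(p_t1, p_t2) < OVERLAP_LIMIT

# Identical positions overlap 100% (same tree); centres 10 m apart do not
# overlap at all with 4 m buffers (new tree).
```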
In addition, in this embodiment, it should be noted that, after the step of determining that the new target feature exists in the second buffer, and performing location information refreshing and quantity iteration statistics on the new target feature, the method further includes:
and performing real-time ortho-image and coordinate updating processing on the ground object image and the newly added target ground object, and making a corresponding statistical chart.
Specifically, in this step, it should be noted that the processing module in the server produces the orthophoto image of the navel orange orchard in real time and updates the plane coordinates of the navel orange trees in real time, according to the ground feature images and the number of effective navel orange trees obtained in real time. That is, whenever a new navel orange tree image is transmitted, the processing module splices the newly received image onto the previously spliced orthophoto image in real time; meanwhile, a statistical chart of the navel orange ground objects is formed from the real-time-updated coordinates of each effective navel orange tree. Finally, the orthophoto map is used to check the accuracy of the real-time target identification and positioning counting and the reliability of the whole process.
Further, in this embodiment, it should be further noted that the method further includes:
and if the overlapping degree between the second buffer area and the first buffer area is judged to be larger than the preset standard value, judging that the second buffer area and the first buffer area are the same target ground object, and regarding the target ground object in the second buffer area as an invalid target ground object.
When the unmanned aerial vehicle is used, the target image of the target ground object required to be acquired by the unmanned aerial vehicle is selected, the corresponding optimal weight is obtained, and the initial weight is replaced by the optimal weight; respectively arranging the first base station and the second base station at preset positions of an area to be counted, and constructing a corresponding virtual plane coordinate system; further, controlling the unmanned aerial vehicle to collect ground feature images in the area to be counted according to a preset air route, receiving data transmitted by the unmanned aerial vehicle, and calculating the coordinate of the current unmanned aerial vehicle in the virtual plane coordinate system according to the received data; on the basis, the ground feature image is predicted in real time, and whether a target ground feature exists in the ground feature image is judged to be an effective image; if so, extracting the centroid coordinates of the target ground object in the current ground object image, and generating a corresponding buffer area in a virtual plane coordinate system according to the centroid coordinates; acquiring a first buffer area generated by the unmanned aerial vehicle at a first time and a second buffer area generated by the unmanned aerial vehicle at a second time, and judging whether the overlapping degree between the second buffer area and the first buffer area is smaller than a preset standard value or not; if yes, judging that the newly added target ground object exists in the second buffer area, and refreshing the positioning information and carrying out quantity iteration statistics on the newly added target ground object. 
By the mode, the unmanned aerial vehicle and the deep learning algorithm can be combined together, and only the unmanned aerial vehicle is required to collect the image to be counted and the related data of the earth surface, so that the counting cost is saved; meanwhile, the data are transmitted to the server in real time, and the data acquisition and the operational capability are separated through the server, so that the speed of recognition and statistics of a deep learning algorithm can be increased, and the efficiency of ground feature recognition and target statistics is correspondingly improved; in addition, the positioning information is acquired in real time through communication with the ground base station, the intelligent degree is high, a large amount of manual participation is not needed, the labor cost is saved, the process and the result of counting the quantity of the target ground objects in the research area and the positioning information can be checked in real time, and meanwhile, compared with a traditional manual counting method, the method has the advantages of being high in speed, accuracy and precision based on a deep learning algorithm, meanwhile, the method can adapt to a complex environment, and is suitable for large-scale popularization and use.
It should be noted that the above implementation process only illustrates the applicability of the present application; it does not mean that this is the only implementation flow of the real-time ground feature detection and positioning counting method of the present application. On the contrary, any implementation of the method that can be put into practice may be incorporated into a feasible embodiment of the present application.
In conclusion, the real-time ground feature detection and positioning counting method provided by the embodiment of the invention can combine the unmanned aerial vehicle with the deep learning algorithm, and only the unmanned aerial vehicle is required to collect the images to be counted and the related data of the ground surface, so that the counting cost is saved; meanwhile, the data are transmitted to the server in real time, and the data acquisition and the operational capability are separated through the server, so that the speed of recognition and statistics of a deep learning algorithm can be increased, and the efficiency of ground feature recognition and target statistics is correspondingly improved; in addition, the positioning information is acquired in real time through communication with the ground base station, the intelligent degree is high, a large amount of manual participation is not needed, the labor cost is saved, the process and the result of statistics of the quantity of the target ground objects in the research area and the positioning information can be checked in real time, and meanwhile, compared with a traditional manual counting method, the method and the device have the advantages of being high in speed, high in accuracy and high in precision based on a deep learning algorithm, meanwhile, the method and the device can adapt to a complex environment, and are suitable for large-scale popularization and use.
Referring to fig. 4, a real-time feature detecting and positioning counting system according to a second embodiment of the present invention is shown, the system includes:
the training module 12 is configured to select a target image of a target ground object to be acquired by the unmanned aerial vehicle, train and verify the target image to obtain a corresponding optimal weight, and replace the initial weight with the optimal weight;
the building module 22 is configured to respectively locate a first base station and a second base station at preset positions of an area to be counted, and build a corresponding virtual plane coordinate system based on the first base station and the second base station;
the transmission module 32 is used for controlling the unmanned aerial vehicle to collect ground feature images in the area to be counted according to a preset air route, receiving height data, length data and angle data transmitted by the unmanned aerial vehicle in the air route, and calculating the coordinate of the unmanned aerial vehicle in the virtual plane coordinate system according to the height data, the length data and the angle data;
the prediction module 42 is configured to predict the feature image in real time, and determine whether an effective target feature exists in the feature image;
the extracting module 52 is configured to, if it is determined that the feature image has an effective target feature, extract a centroid coordinate of the target feature in the feature image, and calculate a coordinate of the target feature in the virtual plane coordinate system according to the centroid coordinate, so as to generate a corresponding buffer area in the virtual plane coordinate system;
the judging module 62 is configured to obtain a first buffer area generated by the unmanned aerial vehicle at a first time and a second buffer area generated by the unmanned aerial vehicle at a second time, and judge whether an overlapping degree between the second buffer area and the first buffer area is smaller than a preset standard value;
the first processing module 72 is configured to determine that a newly added target feature exists in the second buffer area if it is determined that the overlapping degree between the second buffer area and the first buffer area is smaller than the preset standard value, and perform positioning information refreshing and quantity iteration statistics on the newly added target feature.
In the above real-time surface feature detection and positioning counting system, the building module 22 is specifically configured to:
determining a corresponding region to be counted according to the target ground object, and determining a diagonal angle with the longest distance in the region to be counted so as to respectively locate the first base station and the second base station at the diagonal angle;
and taking the first base station as a coordinate origin, and correspondingly constructing the virtual plane coordinate system based on the coordinate origin and the diagonal line of the plane rectangular coordinate system extended by the second base station.
In the real-time surface feature detection and positioning counting system, the prediction module 42 is specifically configured to:
predicting the surface feature image in real time to determine a real-time confidence threshold corresponding to the surface feature image;
judging whether the real-time confidence coefficient threshold is larger than a preset threshold or not;
and if the real-time confidence coefficient threshold is judged to be larger than the preset threshold, judging that the target ground object exists in the ground object image as an effective image.
In the above real-time surface feature detecting and positioning counting system, the real-time surface feature detecting and positioning counting system further includes an updating module 82, where the updating module 82 is specifically configured to:
and performing real-time ortho-image and coordinate updating processing on the ground object image and the newly added target ground object, and making a corresponding statistical chart.
In the real-time surface feature detection and positioning and counting system, the real-time surface feature detection and positioning and counting system further includes a second processing module 92, and the second processing module 92 is specifically configured to:
and if the overlapping degree between the second buffer area and the first buffer area is judged to be larger than the preset standard value, judging that the second buffer area and the first buffer area are the same target ground object, and regarding the target ground object in the second buffer area as an invalid target ground object.
A third embodiment of the present invention provides a computer, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the real-time feature detection and location counting method as provided in the first embodiment.
In summary, the real-time ground feature detection and positioning counting method, system and computer provided by the embodiments of the present invention can combine the unmanned aerial vehicle and the deep learning algorithm together, and only the unmanned aerial vehicle needs to collect the image to be counted and the related data of the ground surface, so as to save the counting cost; meanwhile, the data are transmitted to the server in real time, and the data acquisition and the operation capability are separated through the server, so that the recognition and statistics speed of a deep learning algorithm can be increased, and the efficiency of ground feature recognition and target statistics is correspondingly improved; in addition, the positioning information is acquired in real time through communication with the ground base station, the intelligent degree is high, a large amount of manual participation is not needed, the labor cost is saved, the process and the result of counting the quantity of the target ground objects in the research area and the positioning information can be checked in real time, and meanwhile, compared with a traditional manual counting method, the method has the advantages of being high in speed, accuracy and precision based on a deep learning algorithm, meanwhile, the method can adapt to a complex environment, and is suitable for large-scale popularization and use.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A real-time feature detection and location counting method, the method comprising:
selecting a target image of a target ground object to be acquired by the unmanned aerial vehicle, training and verifying the target image to obtain a corresponding optimal weight, and replacing the initial weight with the optimal weight;
respectively arranging a first base station and a second base station at preset positions of an area to be counted, and constructing a corresponding virtual plane coordinate system based on the first base station and the second base station;
controlling an unmanned aerial vehicle to collect ground feature images in the area to be counted according to a preset air route, receiving height data, length data and angle data transmitted by the unmanned aerial vehicle in the air route, and calculating the coordinate of the unmanned aerial vehicle in the virtual plane coordinate system according to the height data, the length data and the angle data;
predicting the surface feature image in real time and judging whether an effective target surface feature exists in the surface feature image or not;
if the feature image is judged to have the effective target feature, extracting the centroid coordinate of the target feature in the feature image, and calculating the coordinate of the target feature in the virtual plane coordinate system according to the centroid coordinate so as to generate a corresponding buffer area in the virtual plane coordinate system;
acquiring a first buffer area generated by the unmanned aerial vehicle at a first time and a second buffer area generated by the unmanned aerial vehicle at a second time, and judging whether the overlapping degree between the second buffer area and the first buffer area is smaller than a preset standard value or not;
and if the overlapping degree between the second buffer area and the first buffer area is smaller than the preset standard value, judging that a newly added target ground object exists in the second buffer area, and refreshing the positioning information and carrying out quantity iterative statistics on the newly added target ground object.
2. The real-time feature detection and location counting method of claim 1, wherein: the step of respectively arranging the first base station and the second base station at preset positions of an area to be counted and constructing a corresponding virtual plane coordinate system based on the first base station and the second base station comprises the following steps:
determining a corresponding region to be counted according to the target ground object, and determining a diagonal angle with the longest distance in the region to be counted so as to respectively locate the first base station and the second base station at the diagonal angle;
and taking the first base station as a coordinate origin, and correspondingly constructing the virtual plane coordinate system based on the coordinate origin and the diagonal line of the plane rectangular coordinate system extended by the second base station.
3. The real-time feature detection and location counting method of claim 1, wherein: the step of predicting the surface feature image in real time and judging whether the surface feature image has an effective target surface feature comprises the following steps:
predicting the feature image in real time to determine a real-time confidence coefficient threshold corresponding to the feature image;
judging whether the real-time confidence coefficient threshold is larger than a preset threshold or not;
and if the real-time confidence coefficient threshold is judged to be larger than the preset threshold, judging that the target ground object exists in the ground object image as an effective image.
4. The real-time feature detection and location counting method of claim 1, wherein: after the step of determining that the newly added target feature exists in the second buffer area, and performing location information refreshing and quantity iterative statistics on the newly added target feature, the method further includes:
and performing real-time ortho-image and coordinate updating processing on the ground object image and the newly added target ground object, and making a corresponding statistical chart.
5. The real-time feature detection and location counting method of claim 1, wherein: the method further comprises the following steps:
and if the overlapping degree between the second buffer area and the first buffer area is judged to be larger than the preset standard value, judging that the second buffer area and the first buffer area are the same target ground object, and regarding the target ground object in the second buffer area as an invalid target ground object.
6. A real-time ground object detection and positioning counting system, the system comprising:
a training module, used for selecting a target image of the target ground object to be acquired by an unmanned aerial vehicle, training and verifying on the target image to obtain a corresponding optimal weight, and replacing the initial weight with the optimal weight;
a construction module, used for arranging a first base station and a second base station at preset positions of an area to be counted, and constructing a corresponding virtual plane coordinate system based on the first base station and the second base station;
a transmission module, used for controlling the unmanned aerial vehicle to collect ground object images in the area to be counted along a preset route, receiving height data, length data and angle data transmitted by the unmanned aerial vehicle on the route, and calculating the coordinate of the unmanned aerial vehicle in the virtual plane coordinate system from the height data, the length data and the angle data;
a prediction module, used for predicting the ground object image in real time and judging whether a valid target ground object exists in the ground object image;
an extraction module, used for extracting the centroid coordinate of the target ground object in the ground object image if a valid target ground object is judged to exist, and calculating the coordinate of the target ground object in the virtual plane coordinate system from the centroid coordinate, so as to generate a corresponding buffer area in the virtual plane coordinate system;
a judging module, used for acquiring a first buffer area generated by the unmanned aerial vehicle at a first time and a second buffer area generated at a second time, and judging whether the degree of overlap between the second buffer area and the first buffer area is smaller than a preset standard value; and
a first processing module, used for determining that a newly added target ground object exists in the second buffer area if the degree of overlap between the second buffer area and the first buffer area is smaller than the preset standard value, refreshing the positioning information of the newly added target ground object, and performing iterative quantity statistics on it.
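The transmission module above derives the unmanned aerial vehicle's coordinate from height, length and angle data. The claims do not give the formula; one plausible reading, stated here purely as an assumption, is that the length is a slant range from the origin base station, the angle is a horizontal bearing in the virtual plane, and the height is removed by the Pythagorean theorem:

```python
import math

def uav_plane_coordinate(height, slant_length, bearing_rad):
    """Hypothetical conversion of (height, slant length, bearing) into a
    coordinate in the virtual plane coordinate system."""
    # horizontal ground distance from the origin base station
    ground_range = math.sqrt(slant_length**2 - height**2)
    return (ground_range * math.cos(bearing_rad),
            ground_range * math.sin(bearing_rad))

x, y = uav_plane_coordinate(height=30.0, slant_length=50.0, bearing_rad=0.0)
print(round(x, 6), round(y, 6))  # 40.0 0.0
```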
7. The real-time ground object detection and positioning counting system of claim 6, wherein the construction module is specifically used for:
determining the corresponding area to be counted according to the target ground object, and determining the pair of diagonal corners with the longest distance in the area to be counted, so as to locate the first base station and the second base station at the two diagonal corners respectively; and
taking the first base station as the coordinate origin, and correspondingly constructing the virtual plane coordinate system as a plane rectangular coordinate system based on the coordinate origin and the diagonal line extending toward the second base station.
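Claim 7's construction can be sketched in plane geometry, under the assumption (not stated explicitly in the claims) that the x-axis runs from the first base station, which serves as the origin, toward the second base station on the opposite diagonal corner. The function names are illustrative.

```python
import math

def make_frame(base1, base2):
    """Return origin and unit x/y axes of the virtual plane coordinate system."""
    dx, dy = base2[0] - base1[0], base2[1] - base1[1]
    norm = math.hypot(dx, dy)
    ux = (dx / norm, dy / norm)   # x-axis: origin -> second base station
    uy = (-ux[1], ux[0])          # y-axis: 90 degrees counter-clockwise
    return base1, ux, uy

def to_virtual(point, frame):
    """Express a world-plane point in the virtual plane coordinate system."""
    origin, ux, uy = frame
    px, py = point[0] - origin[0], point[1] - origin[1]
    return (px * ux[0] + py * ux[1], px * uy[0] + py * uy[1])

frame = make_frame((0.0, 0.0), (30.0, 40.0))   # base stations 50 m apart on the diagonal
vx, vy = to_virtual((30.0, 40.0), frame)
print(round(vx, 6), round(vy, 6))  # 50.0 0.0 -> the far corner lies on the x-axis
```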
8. The real-time ground object detection and positioning counting system of claim 6, wherein the prediction module is specifically used for:
predicting the ground object image in real time to determine a real-time confidence threshold corresponding to the ground object image;
judging whether the real-time confidence threshold is greater than a preset threshold; and
if the real-time confidence threshold is judged to be greater than the preset threshold, determining that the target ground object exists in the ground object image and treating the ground object image as a valid image.
9. The real-time ground object detection and positioning counting system of claim 6, wherein the system further comprises an updating module, the updating module being specifically used for:
performing real-time orthoimage and coordinate updating on the ground object image and the newly added target ground object, and generating a corresponding statistical chart.
10. A computer, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the real-time ground object detection and positioning counting method of any one of claims 1 to 5 when executing the computer program.
CN202210947300.5A 2022-08-09 2022-08-09 Real-time ground object detection and positioning counting method, system and computer Active CN115019216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210947300.5A CN115019216B (en) 2022-08-09 2022-08-09 Real-time ground object detection and positioning counting method, system and computer

Publications (2)

Publication Number Publication Date
CN115019216A 2022-09-06
CN115019216B 2022-10-21

Family

ID=83065348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210947300.5A Active CN115019216B (en) 2022-08-09 2022-08-09 Real-time ground object detection and positioning counting method, system and computer

Country Status (1)

Country Link
CN (1) CN115019216B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206431277U (en) * 2016-09-21 2017-08-22 深圳智航无人机有限公司 Unmanned plane alignment system
CN109143257A (en) * 2018-07-11 2019-01-04 中国地质调查局西安地质调查中心 Unmanned aerial vehicle onboard radar mining land change monitors system and method
CN111178148A (en) * 2019-12-06 2020-05-19 天津大学 Ground target geographic coordinate positioning method based on unmanned aerial vehicle vision system
CN113011405A (en) * 2021-05-25 2021-06-22 南京柠瑛智能科技有限公司 Method for solving multi-frame overlapping error of ground object target identification of unmanned aerial vehicle
CN113050137A (en) * 2021-03-09 2021-06-29 江西师范大学 Multi-point cooperative measurement spatial information acquisition method
TWM614576U (en) * 2020-12-18 2021-07-21 台灣電力股份有限公司 UAV patrol aerial photography power transmission line monitoring system
US20210224512A1 (en) * 2020-01-17 2021-07-22 Wuyi University Danet-based drone patrol and inspection system for coastline floating garbage
CN113325868A (en) * 2021-05-31 2021-08-31 南通大学 Crop real-time identification system and method based on unmanned aerial vehicle
AU2021105629A4 (en) * 2021-08-17 2021-11-25 A. H., Srinivasa MR System and Method for Monitoring, Detecting and Counting Fruits in a Field
CN113934232A (en) * 2021-11-02 2022-01-14 山东交通学院 Virtual image control-based plant protection unmanned aerial vehicle air route planning system and method
CN114022771A (en) * 2021-11-15 2022-02-08 安徽农业大学 Corn seedling stage field distribution information statistical method based on deep learning
CN114092814A (en) * 2021-11-26 2022-02-25 江西理工大学 Unmanned plane navel orange tree image target identification and statistics method based on deep learning
JP2022035927A (en) * 2020-08-20 2022-03-04 上海姜歌机器人有限公司 Robot obstacle avoidance processing method, device, and robot
CN114170535A (en) * 2022-02-11 2022-03-11 北京卓翼智能科技有限公司 Target detection positioning method, device, controller, storage medium and unmanned aerial vehicle
CN114565725A (en) * 2022-01-19 2022-05-31 中建一局集团第三建筑有限公司 Reverse modeling method for three-dimensional scanning target area of unmanned aerial vehicle, storage medium and computer equipment
CN114708192A (en) * 2022-03-10 2022-07-05 江西中业智能科技有限公司 Target counting method, system, storage medium and computer equipment

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
HE LI 等: "Comparison of Deep Learning Methods for Detecting and Counting Sorghum Heads in UAV Imagery", 《REMOTE SENSING》 *
HÉCTOR GARCÍA-MARTÍNEZ 等: "Digital Count of Corn Plants Using Images Taken by Unmanned Aerial Vehicles and Cross Correlation of Templates", 《AGRONOMY》 *
ZHIHAO LIU 等: "VisDrone-CC2021: The Vision Meets Drone Crowd Counting Challenge Results", 《ICCVW》 *
WAN ZUYI: "Research on Citrus Fruit Tree Information Extraction and Application Based on UAV Remote Sensing", China Master's Theses Full-text Database, Agricultural Science and Technology *
CHEN WEIDONG: "Automatic Identification and Localization of Dead Pine Trees Based on UAV Aerial Images", China Master's Theses Full-text Database, Agricultural Science and Technology *
TAO JING: "Research on a High-precision Positioning System for Agricultural UAVs", Journal of Chengdu Technological University *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116243353A (en) * 2023-03-14 2023-06-09 广西壮族自治区自然资源遥感院 Forest right investigation and measurement method and system based on Beidou positioning
CN116243353B (en) * 2023-03-14 2024-02-27 广西壮族自治区自然资源遥感院 Forest right investigation and measurement method and system based on Beidou positioning

Also Published As

Publication number Publication date
CN115019216B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN109459734B (en) Laser radar positioning effect evaluation method, device, equipment and storage medium
CN107690840B (en) Unmanned plane vision auxiliary navigation method and system
KR20200121274A (en) Method, apparatus, and computer readable storage medium for updating electronic map
CN109903312A (en) A kind of football sportsman based on video multi-target tracking runs distance statistics method
CN111931565A (en) Photovoltaic power station UAV-based autonomous inspection and hot spot identification method and system
CN111666855B (en) Animal three-dimensional parameter extraction method and system based on unmanned aerial vehicle and electronic equipment
CN109883418A (en) A kind of indoor orientation method and device
CN105608417A (en) Traffic signal lamp detection method and device
CN108763811A (en) Dynamic data drives forest fire appealing prediction technique
CN114252884A (en) Method and device for positioning and monitoring roadside radar, computer equipment and storage medium
CN106845324A (en) The treating method and apparatus of guideboard information
CN113610040B (en) Paddy field weed density real-time statistical method based on improved BiSeNetV2 segmentation network
CN115019216B (en) Real-time ground object detection and positioning counting method, system and computer
CN111856499B (en) Map construction method and device based on laser radar
CN115511878A (en) Side slope earth surface displacement monitoring method, device, medium and equipment
CN114252883B (en) Target detection method, apparatus, computer device and medium
Coradeschi et al. Anchoring symbols to vision data by fuzzy logic
CN114252859A (en) Target area determination method and device, computer equipment and storage medium
CN111104861B (en) Method and apparatus for determining wire position and storage medium
CN116880522A (en) Method and device for adjusting flight direction of flight device in inspection in real time
CN116739739A (en) Loan amount evaluation method and device, electronic equipment and storage medium
CN114719881B (en) Path-free navigation algorithm and system applying satellite positioning
CN113822892B (en) Evaluation method, device and equipment of simulated radar and computer storage medium
CN116311010A (en) Method and system for woodland resource investigation and carbon sink metering
CN115601517A (en) Rock mass structural plane information acquisition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant