CN109063532B - Unmanned aerial vehicle-based method for searching field offline personnel - Google Patents

Unmanned aerial vehicle-based method for searching field offline personnel

Info

Publication number
CN109063532B
CN109063532B
Authority
CN
China
Prior art keywords
image
aerial vehicle
unmanned aerial
infrared
gray level
Prior art date
Legal status
Active
Application number
CN201810382529.2A
Other languages
Chinese (zh)
Other versions
CN109063532A (en)
Inventor
郑恩辉
饶建民
Current Assignee
China Jiliang University
Original Assignee
China Jiliang University
Priority date
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN201810382529.2A
Publication of CN109063532A
Application granted
Publication of CN109063532B

Classifications

    • G06V20/13: Scenes; terrestrial scenes; satellite images
    • G01S19/42: Satellite radio beacon positioning systems (e.g. GPS, GLONASS, GALILEO); determining position
    • G06F18/2411: Pattern recognition; classification based on the proximity to a decision surface, e.g. support vector machines
    • G06T7/11: Image analysis; region-based segmentation
    • G06T7/136: Image analysis; segmentation involving thresholding
    • G06T7/194: Image analysis; segmentation involving foreground-background segmentation
    • G06V10/25: Image preprocessing; determination of region of interest [ROI] or volume of interest [VOI]
    • G06V10/50: Feature extraction using histograms, e.g. histogram of oriented gradients [HoG]
    • G06V40/10: Recognition of human or animal bodies, e.g. pedestrians; body parts
    • G06T2207/20104: Interactive definition of region of interest [ROI]
    • G06T2207/30196: Subject of image: human being; person


Abstract

The invention discloses a method for searching for lost-contact persons in the field based on an unmanned aerial vehicle (UAV). The UAV searches the area in which the lost-contact person was active by aerial photography in automatic cruise mode along a preset route, and returns to base after completing the aerial photography task. During aerial photography, the onboard image processing module performs image segmentation, ROI mapping, and target detection on the acquired images in real time; images in which a target is detected, together with the GPS position recorded for each image, are transmitted to the ground station through the communication module, and search-and-rescue workers carry out the rescue based on the information returned by the UAV. The method sends the position information and related image data of possible lost-contact persons to the ground station through the communication module, helping search-and-rescue personnel determine the position of the lost-contact person.

Description

Unmanned aerial vehicle-based method for searching field offline personnel
Technical Field
The invention relates to a method for searching for lost-contact persons in the field, and in particular to such a method based on an unmanned aerial vehicle (UAV).
Background
In recent years, self-guided, independent travel has become popular among outdoor enthusiasts, and undeveloped, uninhabited areas are the destinations backpackers are most drawn to. Because such trips are self-organised and most participants have no professional training, the pursuit of excitement in exploration carries great risk, and loss of contact occurs frequently. In searching for and rescuing persons who have lost contact in the field, the greatest difficulty search-and-rescue workers face is how to locate the lost-contact person quickly. The conventional approach is a carpet search by large numbers of personnel, which is time-consuming and labour-intensive and relies heavily on subjective human judgment, so search-and-rescue efficiency is low and irreparable consequences often result.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the shortcomings of existing methods for searching for persons in the field, a UAV-based method for searching for lost-contact persons in the field is provided.
The technical solution adopted by the invention comprises the following steps:
The UAV searches the area in which the lost-contact person was active by aerial photography in automatic cruise mode along a preset route, and returns to base after completing the aerial photography task. During aerial photography, the onboard image processing module performs image segmentation, ROI mapping, and target detection on the acquired images in real time; images in which a target is detected, together with the GPS position recorded for each image, are transmitted to the ground station through the communication module, and search-and-rescue workers carry out the rescue based on the information returned by the UAV.
The UAV carries an infrared camera and a visible-light camera, which acquire an infrared grayscale image and a visible-light image respectively and together form the image acquisition unit. The two cameras are arranged side by side at the nose of the aircraft and face the same direction.
The image segmentation performed in real time on the acquired images by the onboard image processing module specifically segments the infrared grayscale image with an improved maximum entropy method, yielding foreground regions that may correspond to a human body and thereby extracting the parts of the image in which a person may be present:
1) Perform gray-level statistics on the infrared grayscale image acquired by the infrared camera and compute its gray-level histogram; then, from the histogram, compute the probability of each gray level using the following formula:
p_i = n_i / N
where p_i is the probability of the i-th gray level, i is the gray-level index, n_i is the number of pixels at the i-th gray level in the image, and N is the total number of pixels in the image;
2) the probability entropies of background B and target O are then calculated using the following formula:
P_B = Σ_{i=0}^{t} p_i
H(B) = −Σ_{i=0}^{t} (p_i / P_B) ln(p_i / P_B)
P_O = Σ_{j=t+1}^{L} p_j
H(O) = −Σ_{j=t+1}^{L} (p_j / P_O) ln(p_j / P_O)
in the formula, H (B), H (O) are probability entropies of a background B and a target O respectively, t represents a preset segmentation threshold of the gray level of the infrared gray level image, i and j both represent the gray level ordinal number of the infrared gray level image, and L represents the maximum gray level;
3) Then, from the probability entropies obtained above, establish the following objective function for the optimal threshold t*:
Figure BDA0001641364430000026
where t_o is the gray value representative of a human body in an infrared grayscale image, and α is the reliability (confidence) assigned to t_o;
Finally, the objective function is solved by a dual optimization method based on Lagrangian duality to obtain the optimal threshold t*, and the infrared grayscale image is segmented with t* into a foreground region and a background region.
The invention specifically designs this objective function, including the reliability α, which improves the accuracy of human-target detection.
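For illustration only, the following is a minimal NumPy sketch of the maximum-entropy threshold selection described in steps 1)-3). It implements the standard criterion of maximizing H(B) + H(O); the patent's improved objective additionally involves t_o and α, whose exact form is reproduced only as an image in the source, so the simple distance penalty below, like the function name and parameters, is an assumption made for illustration.
```python
import numpy as np

def max_entropy_threshold(gray_img, t_o=None, alpha=0.0):
    # Gray-level histogram and probabilities p_i = n_i / N for an 8-bit image.
    hist, _ = np.histogram(gray_img, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()

    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        p_b, p_o = p[:t + 1].sum(), p[t + 1:].sum()
        if p_b == 0.0 or p_o == 0.0:
            continue
        pb = p[:t + 1][p[:t + 1] > 0] / p_b            # background distribution
        po = p[t + 1:][p[t + 1:] > 0] / p_o            # target distribution
        score = -(pb * np.log(pb)).sum() - (po * np.log(po)).sum()  # H(B) + H(O)
        if t_o is not None:
            # Assumed stand-in for the patent's t_o / alpha term (exact form not given in the text).
            score -= alpha * abs(t - t_o) / 255.0
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Usage sketch: foreground_mask = infrared_gray > max_entropy_threshold(infrared_gray, t_o=200, alpha=0.5)
```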
The ROI mapping and target detection performed in real time on the acquired images by the onboard image processing module specifically use an improved target recognition method to determine whether a human target is present in the image, which effectively reduces wasted computation during recognition:
1) Calibrate the infrared grayscale image and the visible-light image acquired by the infrared and visible-light cameras to obtain n feature point pairs between the two images, where n is at least 8;
2) From the n feature point pairs, compute the fundamental matrix f between the infrared grayscale image and the visible-light image using the following formula:
Af=0
Figure BDA0001641364430000031
where A is the n × 9 coefficient matrix built from the point pairs, and (u, v) and (u', v') are the coordinates of a feature point pair in the infrared grayscale image and the visible-light image respectively;
3) Compute the epipolar line l in the visible-light image corresponding to a foreground region segmented from the infrared grayscale image, using the following formula:
l=fm
where m is the coordinate of the geometric center of the foreground region extracted from the infrared grayscale image;
4) Then build a sliding detection window centered on each pixel on the epipolar line l, extract HOG features from each window, feed the HOG features along l into a trained support vector machine (SVM) for classification, and thereby determine whether a human target is present in the visible-light image.
Performing ROI mapping and target detection in this way allows human targets in the image to be detected faster and more accurately.
The trained SVM is obtained by extracting HOG features of pixels representing a human target from images in a standard human image library and feeding these HOG features, together with the known pixel label information of those images, into the SVM for training; an RBF (Gaussian) kernel is used during training.
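As a rough illustration of steps 1)-4), the sketch below maps the center of an infrared foreground region onto the visible-light image via the fundamental matrix and runs a HOG-plus-SVM classifier in windows along the epipolar line, using OpenCV. The function name, window size, sampling stride, and label convention are assumptions for illustration, and the fundamental matrix is obtained with OpenCV's eight-point routine rather than the explicit Af = 0 construction given above.
```python
import cv2
import numpy as np

def detect_on_epipolar_line(ir_pts, vis_pts, m, vis_img, svm, stride=8):
    # ir_pts / vis_pts: n >= 8 matched points as float32 arrays of shape (n, 2).
    # Fundamental matrix from the matched points (Af = 0, eight-point method).
    F, _ = cv2.findFundamentalMat(ir_pts, vis_pts, cv2.FM_8POINT)
    if F is None:
        return None
    # Epipolar line l = F m in the visible-light image (homogeneous coordinates).
    a, b, c = (F @ np.array([m[0], m[1], 1.0])).ravel()
    hog = cv2.HOGDescriptor()                      # default 64 x 128 person window
    win_w, win_h = 64, 128
    h, w = vis_img.shape[:2]
    for x in range(0, w, stride):                  # sample window centres along l
        if abs(b) < 1e-9:
            continue
        y = int(round(-(a * x + c) / b))
        x0, y0 = x - win_w // 2, y - win_h // 2
        if x0 < 0 or y0 < 0 or x0 + win_w > w or y0 + win_h > h:
            continue
        patch = vis_img[y0:y0 + win_h, x0:x0 + win_w]
        feat = hog.compute(patch).reshape(1, -1)
        if svm.predict(feat)[0] == 1:              # label 1 assumed to mean "human target"
            return (x0, y0, win_w, win_h)          # bounding box of the detection
    return None
```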
The UAV is a small solar-powered fixed-wing aircraft whose wings and tail are covered with flexible photovoltaic modules, enabling long-endurance cruising when sunlight is sufficient.
The flexible photovoltaic module is a device that converts solar energy into electrical energy; in this embodiment the generated electricity is sent to the storage battery for storage and later used to drive the load. "Flexible" means the panel can be bent, by up to 30 degrees.
The preset route is planned by an operator in a carpet-search pattern over the area to be searched, and the UAV follows this route once the flight command is executed.
Compared with existing search methods, the invention has the following beneficial effects:
in the existing searching method for field lost contact personnel, no relevant technology based on unmanned aerial vehicle visual search exists, so that the method is a brand new direction.
The invention avoids the high cost and low efficiency of manual searching; at the same time, efficient recognition based on visible light and infrared thermal imaging enables a wide-area search for persons in the field, providing a reliable basis for precise rescue work.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Detailed Description
The invention will be further explained below with reference to the drawings.
As shown in fig. 1, the invention is implemented with a system consisting mainly of a UAV and a corresponding ground station. A small solar-powered fixed-wing aircraft is chosen as the flight platform to meet the long-endurance requirement. The UAV carries a flight control board, a power management module, a barometer, a gyroscope, an accelerometer, and a GPS module for attitude control and navigation, and is externally connected to a communication module and an image acquisition and processing unit for capturing images, recognizing targets, and returning the corresponding data.
The UAV carries an infrared camera and a visible-light camera, which acquire an infrared grayscale image and a visible-light image respectively and together form the image acquisition unit; the two cameras are arranged side by side at the nose of the aircraft and face the same direction.
The UAV is a small solar-powered fixed-wing aircraft whose wings and tail are covered with flexible photovoltaic modules.
The flight control board uses an STM32F407 flight-control processor; attitude data are collected through an MPU-6050 integrated 3-axis gyroscope, 3-axis accelerometer, and geomagnetic sensor to control the aircraft attitude, and navigation is performed through the GPS module.
The image acquisition unit comprises visible-light and infrared thermal-imaging cameras and a controllable three-axis gimbal; the gimbal minimises the effect of in-flight vibration so that the sharpness of the acquired images is preserved as far as possible. The cameras transmit the acquired image data to the onboard image processing module for target recognition.
The image processing module uses an NVIDIA Tegra K1 processor as the processing unit, combining a 4-Plus-1 quad-core ARM Cortex-A15 CPU with an NVIDIA Kepler GPU containing 192 CUDA cores, which meets the image processing requirements.
The communication module uses an Iridium 9602 SBD satellite data transmission module; the Iridium 9602 is a single-board transceiver that sends and receives data packets over a satellite channel, enabling data transfer between the UAV and the ground station in remote uninhabited areas.
The power management module steps the lithium battery voltage down to the 5 V / 2 A supply required by the onboard platform and the flight control unit.
The ground station is used for receiving and displaying the information transmitted by the airborne platform in real time.
The implementation process of the invention is as follows:
1. An undeveloped barren mountain area with forest-land terrain is selected as the test site for this embodiment; the area to be searched is set to 10 hectares, and 10-20 people are distributed at random within it to simulate the lost-contact persons to be rescued. A carpet-search route over the target area is preset through the ground-station host software, and the flight altitude of the UAV is set to about 20-25 m.
The UAV flies autonomously along the preset route at a cruising speed of 30-40 km/h, continuously photographing the ground at a rate of one image every 3 seconds.
2. During aerial photography, the onboard image processing module performs image segmentation on the acquired images in real time.
Specifically, the infrared grayscale image is segmented with an improved maximum entropy method to obtain the foreground region:
1) Perform gray-level statistics on the infrared grayscale image acquired by the infrared camera and compute its gray-level histogram; then, from the histogram, compute the probability of each gray level using the following formula:
p_i = n_i / N
where p_i is the probability of the i-th gray level and i is the gray-level index, which for an 8-bit grayscale image normally runs from 0 to 255; n_i is the number of pixels at the i-th gray level in the image, and N is the total number of pixels in the image;
2) the probability entropies of background B and target O are then calculated using the following formula:
P_B = Σ_{i=0}^{t} p_i
H(B) = −Σ_{i=0}^{t} (p_i / P_B) ln(p_i / P_B)
P_O = Σ_{j=t+1}^{L} p_j
H(O) = −Σ_{j=t+1}^{L} (p_j / P_O) ln(p_j / P_O)
in the formula, H (B), H (O) are probability entropies of a background B and a target O respectively, t represents a preset segmentation threshold of the gray level of the infrared gray level image, i and j both represent the gray level ordinal number of the infrared gray level image, L represents the maximum gray level, and for an 8-bit image, L is 255;
3) Then, from the probability entropies obtained above, establish the following objective function for the optimal threshold t*:
Figure BDA0001641364430000061
where t_o is the gray value representative of a human body in an infrared grayscale image; t_o is a constant determined from a standard infrared grayscale image containing a human body, and α is the reliability (confidence) assigned to t_o;
Finally, the objective function is solved by a dual optimization method based on Lagrangian duality to obtain the optimal threshold t*, and the infrared grayscale image is segmented with t* into a foreground region and a background region.
3. The onboard image processing module performs ROI mapping and target detection on the acquired images in real time, specifically using an improved target recognition method to determine whether a human target is present in the image:
1) Calibrate the infrared grayscale image and the visible-light image acquired by the infrared and visible-light cameras using the camera calibration tool in the MATLAB toolbox, obtaining n feature point pairs between the two images;
2) From the n feature point pairs, compute the fundamental matrix f between the infrared grayscale image and the visible-light image using the following formula:
Af=0
Figure BDA0001641364430000062
where A is the n × 9 coefficient matrix built from the point pairs, and (u, v) and (u', v') are the coordinates of a feature point pair in the infrared grayscale image and the visible-light image respectively;
Specifically, singular value decomposition is performed on the coefficient matrix A, its generalized inverse A+ is computed, and a least-squares solution for the fundamental matrix f is obtained from A+ (a code sketch of this step is given after step 4) below).
3) Compute, using the following formula, the epipolar line l in the visible-light image corresponding to a foreground region segmented from the infrared grayscale image; l describes the possible positions in the visible-light image of the center point of that foreground region:
l=fm
where m is the coordinate of the geometric center of the foreground region extracted from the infrared grayscale image;
4) Then build a sliding detection window centered on each pixel on the epipolar line l, extract HOG features from each window, feed the HOG features along l into the trained support vector machine (SVM) for classification, and identify whether a human target is present in the visible-light image.
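The following is a minimal NumPy sketch of the least-squares step referenced in 2) above. One common way to realise it is through the SVD itself: the solution of Af = 0 (up to scale) is the right singular vector of A associated with its smallest singular value. The row ordering of A is shown only as an image in the patent, so the standard eight-point ordering used here, like the function name, is an assumption.
```python
import numpy as np

def fundamental_matrix_from_pairs(ir_pts, vis_pts):
    # Build the n x 9 coefficient matrix A from the matched (u, v) / (u', v') pairs.
    rows = []
    for (u, v), (u2, v2) in zip(ir_pts, vis_pts):
        rows.append([u2 * u, u2 * v, u2, v2 * u, v2 * v, v2, u, v, 1.0])
    A = np.asarray(rows, dtype=np.float64)
    # Least-squares solution of A f = 0 (up to scale): the right singular vector
    # of A belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

# Usage sketch (hypothetical data): F = fundamental_matrix_from_pairs(ir_pts, vis_pts)
#                                   l = F @ np.array([mx, my, 1.0])   # epipolar line l = f m
```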
The trained SVM is obtained by extracting HOG features of pixels representing a human target from images in a standard human image library and feeding these HOG features, together with the known pixel label information of those images, into the SVM for training; an RBF (Gaussian) kernel is used during training.
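A minimal scikit-learn sketch of this training step is given below, assuming 64 × 128 patches taken from a standard human image library with binary patch labels; the library, patch size, labels, and SVM parameters are illustrative assumptions rather than values taken from the patent.
```python
import cv2
import numpy as np
from sklearn.svm import SVC

def train_human_svm(positive_patches, negative_patches):
    # positive_patches contain a person, negative_patches do not (assumed labelling).
    hog = cv2.HOGDescriptor()                      # default 64 x 128 pedestrian window
    feats, labels = [], []
    for patch, label in [(p, 1) for p in positive_patches] + [(n, 0) for n in negative_patches]:
        patch = cv2.resize(patch, (64, 128))       # HOG window size assumed here
        feats.append(hog.compute(patch).ravel())
        labels.append(label)
    clf = SVC(kernel="rbf", gamma="scale")         # RBF (Gaussian) kernel, as stated in the text
    clf.fit(np.asarray(feats), np.asarray(labels))
    return clf
```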
4. Images in which a target is detected, together with the GPS position information recorded for each image, are transmitted to the ground station through the communication module, and search-and-rescue workers carry out the rescue based on the information returned by the UAV.
When the UAV detects a target, the image and the corresponding GPS information are sent to the ground station through the satellite communication module, and ground-station staff compare the data returned by the UAV and verify that the information is correct.
After finishing the search task over the target area, the UAV returns home in a straight line.
In practical tests, the search method accurately identified and located the persons being searched for, provided their bodies were not hidden by obstructions; the satellite transmission module reliably transmitted the target information to the host computer, and the average processing time per 1920 × 1080 image was under 1 s. Over multiple experiments, the detection accuracy for the randomly distributed test subjects reached 98.2%, meeting the requirements for searching for lost-contact persons in the field.
In summary, the solar-powered UAV can cruise for long periods when sunlight is sufficient. During wide-area, long-duration operation it continuously acquires ground images through the image acquisition unit, uses the image processing unit to judge whether a suspected lost-contact person is present, and sends the position information and related image data of possible lost-contact persons to the ground station through the communication module, helping search-and-rescue personnel determine the position of the lost-contact person.

Claims (5)

1. A method for searching for lost-contact persons in the field based on an unmanned aerial vehicle, characterized by comprising the following steps: the unmanned aerial vehicle searches the area in which the lost-contact person was active by aerial photography in automatic cruise mode along a preset route, and returns to base after completing the aerial photography task; during aerial photography, an onboard image processing module performs image segmentation, ROI mapping, and target detection on the acquired images in real time; images in which a target is detected, together with the GPS position information recorded for each image, are transmitted to a ground station through a communication module, and search-and-rescue workers carry out the rescue based on the information returned by the unmanned aerial vehicle;
the image segmentation performed in real time on the acquired images by the onboard image processing module specifically segments the infrared grayscale image with an improved maximum entropy method to obtain a foreground region:
1) carrying out gray level statistics on an infrared gray level image acquired by an infrared camera, calculating a gray level histogram, and then calculating each gray level probability by adopting the following formula according to the gray level histogram:
p_i = n_i / N
wherein p_i represents the probability of the i-th gray level, i represents the gray-level index, n_i represents the number of pixels at the i-th gray level in the image, and N represents the total number of pixels of the whole image;
2) the probability entropies of background B and target O are then calculated using the following formula:
P_B = Σ_{i=0}^{t} p_i
H(B) = −Σ_{i=0}^{t} (p_i / P_B) ln(p_i / P_B)
P_O = Σ_{j=t+1}^{L} p_j
H(O) = −Σ_{j=t+1}^{L} (p_j / P_O) ln(p_j / P_O)
in the formula, H (B), H (O) are probability entropies of a background B and a target O respectively, t represents a preset segmentation threshold of the gray level of the infrared gray level image, i and j both represent the gray level ordinal number of the infrared gray level image, and L represents the maximum gray level;
3) then, based on the probability entropies obtained above, establishing the following objective function for the optimal threshold t*:
Figure FDA0003132790030000016
wherein t_o represents the gray value representative of a human body in an infrared grayscale image, and α represents the reliability assigned to t_o;
finally, solving the objective function by a dual optimization method based on Lagrangian duality to obtain the optimal threshold t*, and segmenting the infrared grayscale image with t* into a foreground region and a background region.
2. The unmanned aerial vehicle-based method for searching for lost-contact persons in the field according to claim 1, characterized in that: the unmanned aerial vehicle carries an infrared camera and a visible-light camera, arranged side by side at the nose of the aircraft and facing the same direction.
3. The unmanned aerial vehicle-based method for searching for lost-contact persons in the field according to claim 2, wherein: the ROI mapping and target detection performed in real time on the acquired images by the onboard image processing module specifically use an improved target recognition method to obtain a detection result of whether a human target is present in the image:
1) calibrating the infrared grayscale image and the visible-light image respectively acquired by the infrared camera and the visible-light camera, to obtain n feature point pairs between the infrared grayscale image and the visible-light image;
2) calculating, from the n feature point pairs, the fundamental matrix f between the infrared grayscale image and the visible-light image using the following formula:
Af=0
Figure FDA0003132790030000021
wherein A represents the n × 9 coefficient matrix built from the point pairs, and (u, v) and (u', v') are the coordinates of a feature point pair in the infrared grayscale image and the visible-light image respectively;
3) calculating the epipolar line l in the visible-light image corresponding to the foreground region segmented from the infrared grayscale image, using the following formula:
l=fm
wherein m is the coordinate of the geometric center of the foreground region extracted from the infrared grayscale image;
4) then establishing a sliding detection window centered on each pixel on the epipolar line l, extracting HOG features of each sliding detection window, inputting the HOG features along l into a trained support vector machine (SVM) for classification, and identifying whether a human target is present in the visible-light image.
4. The unmanned aerial vehicle-based method for searching for lost-contact persons in the field according to claim 1, characterized in that: the unmanned aerial vehicle is a small solar-powered fixed-wing aircraft whose wings and tail are covered with flexible photovoltaic modules.
5. The unmanned aerial vehicle-based method for searching for lost-contact persons in the field according to claim 1, characterized in that: the preset route is planned by an operator in a carpet-search pattern over the area to be searched, and the unmanned aerial vehicle searches along this route after the flight command is executed.
CN201810382529.2A 2018-04-26 2018-04-26 Unmanned aerial vehicle-based method for searching field offline personnel Active CN109063532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810382529.2A CN109063532B (en) 2018-04-26 2018-04-26 Unmanned aerial vehicle-based method for searching field offline personnel


Publications (2)

Publication Number Publication Date
CN109063532A CN109063532A (en) 2018-12-21
CN109063532B (en) 2021-12-07

Family

ID=64820067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810382529.2A Active CN109063532B (en) 2018-04-26 2018-04-26 Unmanned aerial vehicle-based method for searching field offline personnel

Country Status (1)

Country Link
CN (1) CN109063532B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109696921A (en) * 2018-12-27 2019-04-30 济南大学 A kind of system design for searching and rescuing unmanned plane
CN109787679A (en) * 2019-03-15 2019-05-21 郭欣 Police infrared arrest system and method based on multi-rotor unmanned aerial vehicle
CN111540166A (en) * 2020-05-09 2020-08-14 重庆工程学院 Unmanned aerial vehicle night search system and method based on deep learning
CN111800180B (en) * 2020-05-12 2022-11-15 萧县航迅信息技术有限公司 Rescue target discovery system and method for field unmanned aerial vehicle
CN111701118A (en) * 2020-06-24 2020-09-25 郭中华 Blood vessel developing device for injection of hyaluronic acid
CN115147741B (en) * 2022-06-28 2023-06-30 慧之安信息技术股份有限公司 Auxiliary helicopter search and rescue method based on edge calculation
CN117295009A (en) * 2023-10-07 2023-12-26 广州精天信息科技股份有限公司 Communication equipment deployment method and device, storage medium and intelligent terminal
CN118018104B (en) * 2024-04-09 2024-07-05 中科元境(江苏)文化科技有限公司 Unmanned aerial vehicle-based data transmission method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9665094B1 (en) * 2014-08-15 2017-05-30 X Development Llc Automatically deployed UAVs for disaster response
CN105416584A (en) * 2015-11-12 2016-03-23 广州杰赛科技股份有限公司 Post-disaster life tracking unmanned aerial vehicle system
CN106184753A (en) * 2016-07-13 2016-12-07 京信通信***(中国)有限公司 A kind of unmanned plane and unmanned plane search and rescue localization method
CN106291592A (en) * 2016-07-14 2017-01-04 桂林长海发展有限责任公司 A kind of countermeasure system of SUAV
CN106406343A (en) * 2016-09-23 2017-02-15 北京小米移动软件有限公司 Control method, device and system of unmanned aerial vehicle
CN106741875A (en) * 2016-12-30 2017-05-31 天津市天安博瑞科技有限公司 A kind of flight search and rescue system and method

Also Published As

Publication number Publication date
CN109063532A (en) 2018-12-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant