CN111461013A - Real-time fire scene situation sensing method based on unmanned aerial vehicle - Google Patents

Real-time fire scene situation sensing method based on unmanned aerial vehicle

Info

Publication number
CN111461013A
CN111461013A (application CN202010249010.4A)
Authority
CN
China
Prior art keywords
matching, unmanned aerial vehicle, images, splicing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010249010.4A
Other languages
Chinese (zh)
Other versions
CN111461013B (en)
Inventor
李振宇
王建岭
孙泽华
刘祥勇
孙政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Keweitai Enterprise Development Co ltd
Original Assignee
Shenzhen Keweitai Enterprise Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Keweitai Enterprise Development Co ltd
Priority to CN202010249010.4A
Publication of CN111461013A
Application granted
Publication of CN111461013B
Active legal status
Anticipated expiration legal status

Links

Images

Classifications

    • G06V20/13 — Scenes; terrestrial scenes; satellite images (G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING)
    • G06T3/4038 — Geometric image transformations in the plane of the image; scaling; image mosaicing, e.g. composing plane images from plane sub-images (G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06V10/25 — Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/462 — Extraction of image or video features; salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/751 — Image or video pattern matching; comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06T2200/32 — Indexing scheme for image data processing or generation involving image mosaicing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a real-time fire scene situation awareness method based on an unmanned aerial vehicle (UAV), which comprises the following steps: S1, the UAV automatically plans and generates multiple flight routes and takes off; S2, within each route, frames are extracted from the video and matched, stitched and fused in real time to generate a strip image; S3, the two strip images of adjacent routes are matched, stitched and fused to generate a larger composite image; S4, after the task is finished, the stitched orthorectified image is overlaid onto a satellite map and fire source detection is performed on it; and S5, finally, the location information of the fire source is marked. The invention uses the camera orientation information recorded when the UAV takes each picture, matches, stitches and fuses the images through image processing algorithms, and obtains the situation map and the related fire source information immediately after the flight ends, thereby providing effective information for ground monitoring personnel, realizing rapid on-site situation awareness by the UAV, and improving on-site rescue efficiency.

Description

Real-time fire scene situation sensing method based on unmanned aerial vehicle
Technical Field
The invention belongs to the field of real-time fire scene situation awareness, and particularly relates to a real-time fire scene situation awareness method based on an unmanned aerial vehicle (UAV).
Background
In recent years, with the rapid development of the UAV industry, UAVs have been applied more and more widely to fields such as forestry and firefighting. As UAV technology matures, demand in niche market segments keeps growing, and applications such as aerial photography, crop protection, and replacing power-line workers on inspection routes show that UAVs are promoting development across many fields.
As a new type of low-to-medium-altitude system for rapid real-time video and infrared imaging acquisition, the UAV has unique advantages in forestry resource survey, ecological monitoring, forest fire prevention, and forest pest control in areas that vehicles and personnel cannot reach. In forest fire control, as a powerful supplement to existing forest monitoring means, the UAV can quickly locate ignition points when a forest fire occurs, confirm the fire, and monitor its development.
Therefore, a real-time fire scene situation awareness method based on a UAV is proposed. In the on-site rescue stage, firefighters need to obtain fire scene situation information quickly; a UAV carrying a gimbal camera stitches images in real time, rapidly generates an orthorectified situation image, and automatically analyzes fire source information, which is of great significance for improving rescue efficiency.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a real-time fire scene situation awareness method based on a UAV.
To achieve this purpose, the invention provides the following technical scheme:
A real-time fire scene situation awareness method based on a UAV comprises the following steps:
S1, the UAV automatically plans and generates multiple flight routes and takes off;
S2, within each route, extracting frames from the video stream and matching, stitching and fusing them in real time to generate a strip image;
S3, matching, stitching and fusing the two strip images of adjacent routes to generate a larger composite image;
S4, after the task is finished, overlaying the stitched orthorectified image onto a satellite map and performing fire source detection on it;
S5, finally, marking the location information of the fire source.
Preferably, in step S5 the fire source is marked: image stitching is used to generate the fire scene situation map, and image processing is used to mark and monitor the fire source.
Preferably, within each route, the method for matching, stitching and fusing adjacent images to generate the strip image comprises (see the sketch after this list):
1) performing SURF feature point detection on two adjacent frames, denoising the matches with the RANSAC method, and generating a homography matrix;
2) performing matching quality analysis:
if the matching quality is poor, re-matching the two frames using multiple feature blocks to obtain the homography matrix;
if the two images still cannot be matched, calculating the homography matrix from the POS (position and orientation system) information of the two images;
fine-tuning the homography matrix so that it is suitable for stitching long image sequences;
3) stitching according to the homography matrix, and fusing the images near the seam.
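As a non-authoritative illustration of steps 1) to 3), the following Python/OpenCV sketch detects SURF features, filters matches with a ratio test, and estimates the homography with RANSAC. The ratio value (0.7), the RANSAC reprojection threshold (5.0) and the Hessian threshold (400) are assumed example settings, not values disclosed by the patent; SURF requires an opencv-contrib build with the nonfree modules enabled.

import cv2
import numpy as np

def estimate_homography(img1, img2):
    # 1) SURF feature point detection on the two adjacent frames
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    k1, d1 = surf.detectAndCompute(img1, None)
    k2, d2 = surf.detectAndCompute(img2, None)

    # brute-force matching plus Lowe's ratio test to drop ambiguous pairs
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [p[0] for p in raw if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) < 4:
        return None, None   # caller falls back to feature blocks / POS data

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC denoises the matches and yields the homography matrix H
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inliers

Stitching (step 3) would then warp one frame into the other's image plane, e.g. with cv2.warpPerspective, before fusing the seam region.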
Preferably, in adjacent routes, the method for matching, stitching and fusing the two strip images to generate the composite image comprises the following steps (a code sketch follows this list):
1) in the overlapping region of strip image 1 and strip image 2, extracting regions of interest (ROIs) in a fixed layout and fixed number (n) as a template set T₁~Tₙ;
2) matching each template T₁~Tₙ against strip image 2 with the normalized correlation coefficient method, obtaining the set of match center coordinates C₁~Cₙ in strip image 2;
3) performing cluster analysis on C₁~Cₙ to remove outliers and retain the m inliers C₁~Cₘ;
4) performing a least-squares line fit on C₁~Cₘ to obtain the line equation L₁;
5) calculating the distance from each of C₁~Cₘ to L₁ and screening out the coordinates whose distance exceeds a threshold, leaving the p denoised coordinates C₁~Cₚ;
6) performing a second least-squares line fit on C₁~Cₚ to obtain the line equation L₂;
7) calculating, from the coordinate set C₁~Cₚ and the slope of L₂, the rotation and translation parameters of strip image 2 relative to strip image 1;
8) stitching strip image 2 to strip image 1 according to the rotation and translation parameters, and fusing the images near the seam.
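A hedged sketch of steps 1) to 7) is given below: ROIs from strip image 1's overlap band are matched into strip image 2 by normalized correlation coefficient (cv2.TM_CCOEFF_NORMED), the match centers are denoised, and two successive least-squares line fits give the slope used for the rotation estimate. The ROI layout, the centroid-based outlier rejection (standing in for the unspecified cluster analysis) and the distance threshold are illustrative assumptions.

import cv2
import numpy as np

def register_strips(strip1, strip2, rois, dist_thresh=5.0):
    # 1)-2) template matching of each ROI of strip 1 inside strip 2
    centers = []
    for (x, y, w, h) in rois:                      # fixed layout, fixed count n
        tmpl = strip1[y:y + h, x:x + w]
        res = cv2.matchTemplate(strip2, tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        centers.append((max_loc[0] + w / 2.0, max_loc[1] + h / 2.0))
    C = np.array(centers)                          # C1..Cn

    # 3) outlier removal (a stand-in for the patent's cluster analysis)
    d = np.linalg.norm(C - C.mean(axis=0), axis=1)
    C = C[d < d.mean() + 2 * d.std()]              # C1..Cm

    # 4)-5) first line fit L1, then screen by point-to-line distance
    k, b = np.polyfit(C[:, 0], C[:, 1], 1)
    dist = np.abs(k * C[:, 0] - C[:, 1] + b) / np.hypot(k, 1.0)
    C = C[dist < dist_thresh]                      # C1..Cp

    # 6)-7) refit to get L2; its slope gives the relative rotation angle
    k2, _ = np.polyfit(C[:, 0], C[:, 1], 1)
    angle = np.degrees(np.arctan(k2))
    return angle, C    # the translation then follows from the matched centers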
Preferably, the matching quality analysis method comprises (see the sketch after this list):
1) counting the matched feature point pairs remaining after denoising; when the count is below a threshold, the matching quality is considered poor;
2) if there are enough feature point pairs, mapping the 4 vertices of the image through the homography matrix to form a quadrilateral and judging its shape: whether the matching quality is too poor is decided from characteristics such as the edge lengths, the aspect ratio, and the ratio of the top edge length to the bottom edge length.
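A minimal sketch of this two-stage check follows; every threshold (minimum pair count, aspect ratio limit, top/bottom edge ratio limit) is an assumed example value, since the patent does not publish its limits.

import cv2
import numpy as np

def match_quality_ok(H, n_pairs, img_shape,
                     min_pairs=15, max_aspect=3.0, max_edge_ratio=2.0):
    # 1) too few denoised feature point pairs means poor matching quality
    if H is None or n_pairs < min_pairs:
        return False

    # 2) map the 4 image corners through H and inspect the quadrilateral
    h, w = img_shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    q = cv2.perspectiveTransform(corners, H).reshape(4, 2)
    top = np.linalg.norm(q[1] - q[0])
    bottom = np.linalg.norm(q[2] - q[3])
    left = np.linalg.norm(q[3] - q[0])
    right = np.linalg.norm(q[2] - q[1])

    width, height = (top + bottom) / 2.0, (left + right) / 2.0
    aspect = max(width, height) / max(min(width, height), 1e-6)
    edge_ratio = max(top, bottom) / max(min(top, bottom), 1e-6)
    return aspect <= max_aspect and edge_ratio <= max_edge_ratio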
Preferably, the fusion method adopts a feathering algorithm: in the overlapping area near the seam, the pixel values of the two images are added with certain weights to synthesize a new image.
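For illustration only, a minimal feathering sketch is shown below, assuming both images have already been warped onto a common canvas and the seam band is horizontal; the per-row weight ramps linearly from 1 to 0 across the band.

import numpy as np

def feather_blend(img_a, img_b, band_top, band_height):
    # above the band: img_a only; inside the band: weighted sum; below: img_b
    out = img_a.copy()
    for i in range(band_height):
        w = 1.0 - i / float(band_height)       # weight of img_a fades out
        row = band_top + i
        out[row] = (w * img_a[row] + (1.0 - w) * img_b[row]).astype(img_a.dtype)
    out[band_top + band_height:] = img_b[band_top + band_height:]
    return out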
The technical effects and advantages of the invention are as follows. With this UAV-based real-time fire scene situation awareness method, before takeoff only simple mouse operations are needed: the polygon vertices of the area to be surveyed are dragged on the ground-station map, after which the flight routes are generated automatically. During flight, the gimbal is kept pointing vertically downward, and images and POS information are transmitted to the ground station in real time over the data link; meanwhile, the ground station processes and stitches the images in real time to generate an orthorectified image. The image stitching process adopts SURF + RANSAC together with a multi-feature-block matching algorithm, so a good matching result is still achieved in feature-scarce areas, and matching failures are handled by forced stitching, ensuring that the algorithm keeps running throughout the flight phase. To guarantee real-time performance, every stage of image processing is optimized for speed, and runtime memory usage is optimized separately. In addition, since the color characteristics of a fire source are distinctive, the fire-source extraction algorithm combines features of the HSV and RGB color spaces to extract fire sources reliably. The invention realizes real-time, reliable situation awareness of disaster areas such as fire scenes by UAV and improves rescue efficiency.
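As a hedged sketch of the HSV + RGB fire-source extraction mentioned above: the patent does not disclose its thresholds, so the ranges below are common literature values chosen for illustration (red-to-orange hue with high saturation and value in HSV, and R ≥ G ≥ B channel dominance in RGB).

import cv2
import numpy as np

def detect_fire_mask(img_bgr):
    # HSV cue: red-to-orange hue, strong saturation, high brightness
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hsv_mask = cv2.inRange(hsv, (0, 80, 180), (35, 255, 255))

    # RGB cue: red channel dominant and above a brightness floor
    b, g, r = cv2.split(img_bgr.astype(np.int16))
    rgb_mask = ((r > 180) & (r >= g) & (g >= b)).astype(np.uint8) * 255

    # combine both color-space cues, then remove isolated noise pixels
    mask = cv2.bitwise_and(hsv_mask, rgb_mask)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

Connected regions of the resulting mask can then be measured to estimate the burning area and fire line length reported to the operator.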
Drawings
FIG. 1 is a schematic diagram of the fire source detection and marking effect of the UAV-based real-time fire scene situation awareness method;
FIG. 2 is a flow chart of the UAV-based real-time fire scene situation awareness method.
In the figures: 1. strip image; 2. satellite map; 3. extracted frame image; 4. fire source.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
A real-time fire scene situation awareness method based on a UAV comprises the following steps:
S1, the UAV automatically plans and generates multiple flight routes and takes off;
S2, within each route, extracting frames from the video stream and matching, stitching and fusing them in real time to generate a strip image;
S3, matching, stitching and fusing the two strip images of adjacent routes to generate a larger composite image;
S4, after the task is finished, overlaying the stitched orthorectified image onto a satellite map and performing fire source detection on it;
S5, finally, marking the location information of the fire source.
In step S5 the fire source is marked: image stitching is used to generate the fire scene situation map, and image processing is used to mark and monitor the fire source.
Within each route, the method for matching, stitching and fusing adjacent images to generate the strip image comprises the following steps:
1) performing SURF feature point detection on two adjacent frames, denoising the matches with the RANSAC method, and generating a homography matrix;
2) performing matching quality analysis:
if the matching quality is poor, re-matching the two frames using multiple feature blocks to obtain the homography matrix;
if the two images still cannot be matched, calculating the homography matrix from the POS information of the two images;
fine-tuning the homography matrix so that it is suitable for stitching long image sequences;
3) stitching according to the homography matrix, and fusing the images near the seam.
In adjacent routes, the method for matching, stitching and fusing the two strip images to generate the composite image comprises the following steps:
1) in the overlapping region of strip image 1 and strip image 2, extracting ROIs in a fixed layout and fixed number (n) as a template set T₁~Tₙ;
2) matching each template T₁~Tₙ against strip image 2 with the normalized correlation coefficient method, obtaining the set of match center coordinates C₁~Cₙ in strip image 2;
3) performing cluster analysis on C₁~Cₙ to remove outliers and retain the m inliers C₁~Cₘ;
4) performing a least-squares line fit on C₁~Cₘ to obtain the line equation L₁;
5) calculating the distance from each of C₁~Cₘ to L₁ and screening out the coordinates whose distance exceeds a threshold, leaving the p denoised coordinates C₁~Cₚ;
6) performing a second least-squares line fit on C₁~Cₚ to obtain the line equation L₂;
7) calculating, from the coordinate set C₁~Cₚ and the slope of L₂, the rotation and translation parameters of strip image 2 relative to strip image 1;
8) stitching strip image 2 to strip image 1 according to the rotation and translation parameters, and fusing the images near the seam.
The matching quality analysis method comprises:
1) counting the matched feature point pairs remaining after denoising; when the count is below a threshold, the matching quality is considered poor;
2) if there are enough feature point pairs, mapping the 4 vertices of the image through the homography matrix to form a quadrilateral and judging its shape: whether the matching quality is too poor is decided from characteristics such as the edge lengths, the aspect ratio, and the ratio of the top edge length to the bottom edge length.
The fusion method adopts a feathering algorithm: in the overlapping area near the seam, the pixel values of the two images are added with certain weights to synthesize a new image.
Example 2
Referring to fig. 1-2, the present invention provides a real-time fire scene situation awareness method based on an unmanned aerial vehicle, comprising the following steps:
firstly, automatically planning the routes for the area to be observed through the ground station;
secondly, stitching adjacent images during flight along each route: features are extracted with the SURF algorithm and noise points are filtered out with RANSAC to obtain accurate, reliable matching points; if a feature-scarce image appears, a multi-feature-block matching algorithm is used to achieve the match. To avoid errors caused by imperfect synchronization between POS data and images while the aircraft turns at route corners, the stitching of two strips is given sufficient time, so no stitching is performed on the corner segments. After each route is flown, a strip image is generated;
thirdly, stitching the strips across the corner routes using the multi-feature-block matching algorithm;
finally, after the task is finished, generating the orthorectified map and overlaying it onto the satellite map; fire in the image is then automatically detected and marked via a ground-station button, yielding the burning area and the length of the fire line. In the ground station, the situation map can be viewed from various angles using the mouse wheel and keys.
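A small sketch of this overlay step, assuming the orthoimage has already been georeferenced to pixel offsets (x, y) on the satellite basemap; the alpha value is an illustrative choice, not a disclosed parameter.

import cv2

def overlay_ortho(sat_map, ortho, x, y, alpha=0.7):
    # blend the orthoimage into the matching window of the satellite map
    h, w = ortho.shape[:2]
    roi = sat_map[y:y + h, x:x + w]
    sat_map[y:y + h, x:x + w] = cv2.addWeighted(ortho, alpha, roi, 1.0 - alpha, 0.0)
    return sat_map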
In summary: the invention uses the camera orientation information recorded when the UAV takes pictures, matches, stitches and fuses the images through image processing algorithms, and obtains a nearly seamless situation map together with the related fire source information immediately after the flight ends, providing effective information for ground monitoring personnel. Compared with the prior art, the invention realizes rapid on-site situation awareness by the UAV and improves on-site rescue efficiency.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or portions thereof without departing from the spirit and scope of the invention.

Claims (6)

1. A real-time fire scene situation awareness method based on an unmanned aerial vehicle, characterized in that the method comprises the following steps:
S1, the UAV automatically plans and generates multiple flight routes and takes off;
S2, within each route, extracting frames from the video stream and matching, stitching and fusing them in real time to generate a strip image;
S3, matching, stitching and fusing the two strip images of adjacent routes to generate a larger composite image;
S4, after the task is finished, overlaying the stitched orthorectified image onto a satellite map and performing fire source detection on it;
S5, finally, marking the location information of the fire source.
2. The real-time fire scene situation awareness method based on the unmanned aerial vehicle as claimed in claim 1, characterized in that: in step S5 the fire source is marked: image stitching is used to generate the fire scene situation map, and image processing is used to mark and monitor the fire source.
3. The real-time fire scene situation awareness method based on the unmanned aerial vehicle as claimed in claim 1, characterized in that: within each route, the method for matching, stitching and fusing adjacent images to generate the strip image comprises:
1) performing SURF feature point detection on two adjacent frames, denoising the matches with the RANSAC method, and generating a homography matrix;
2) performing matching quality analysis:
if the matching quality is poor, re-matching the two frames using multiple feature blocks to obtain the homography matrix;
if the two images still cannot be matched, calculating the homography matrix from the POS information of the two images;
fine-tuning the homography matrix so that it is suitable for stitching long image sequences;
3) stitching according to the homography matrix, and fusing the images near the seam.
4. The real-time fire scene situation awareness method based on the unmanned aerial vehicle as claimed in claim 3, characterized in that: in adjacent routes, the method for matching, stitching and fusing the two strip images to generate the composite image comprises the following steps:
1) in the overlapping region of strip image 1 and strip image 2, extracting ROIs in a fixed layout and fixed number (n) as a template set T₁~Tₙ;
2) matching each template T₁~Tₙ against strip image 2 with the normalized correlation coefficient method, obtaining the set of match center coordinates C₁~Cₙ in strip image 2;
3) performing cluster analysis on C₁~Cₙ to remove outliers and retain the m inliers C₁~Cₘ;
4) performing a least-squares line fit on C₁~Cₘ to obtain the line equation L₁;
5) calculating the distance from each of C₁~Cₘ to L₁ and screening out the coordinates whose distance exceeds a threshold, leaving the p denoised coordinates C₁~Cₚ;
6) performing a second least-squares line fit on C₁~Cₚ to obtain the line equation L₂;
7) calculating, from the coordinate set C₁~Cₚ and the slope of L₂, the rotation and translation parameters of strip image 2 relative to strip image 1;
8) stitching strip image 2 to strip image 1 according to the rotation and translation parameters, and fusing the images near the seam.
5. The real-time fire scene situation awareness method based on the unmanned aerial vehicle as claimed in claim 3, characterized in that: the matching quality analysis method comprises:
1) counting the matched feature point pairs remaining after denoising; when the count is below a threshold, the matching quality is considered poor;
2) if there are enough feature point pairs, mapping the 4 vertices of the image through the homography matrix to form a quadrilateral and judging its shape: whether the matching quality is too poor is decided from characteristics such as the edge lengths, the aspect ratio, and the ratio of the top edge length to the bottom edge length.
6. The real-time fire scene situation awareness method based on the unmanned aerial vehicle as claimed in claim 4, characterized in that: the fusion method adopts a feathering algorithm: in the overlapping area near the seam, the pixel values of the two images are added with certain weights to synthesize a new image.
CN202010249010.4A — filed 2020-04-01 (priority 2020-04-01) — Unmanned aerial vehicle-based real-time fire scene situation awareness method — Active — granted as CN111461013B

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010249010.4A (granted as CN111461013B) | 2020-04-01 | 2020-04-01 | Unmanned aerial vehicle-based real-time fire scene situation awareness method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010249010.4A (granted as CN111461013B) | 2020-04-01 | 2020-04-01 | Unmanned aerial vehicle-based real-time fire scene situation awareness method

Publications (2)

Publication Number | Publication Date
CN111461013A | 2020-07-28
CN111461013B | 2023-11-03

Family

ID=71684344

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010249010.4A (Active, CN111461013B) | Unmanned aerial vehicle-based real-time fire scene situation awareness method | 2020-04-01 | 2020-04-01

Country Status (1)

Country Link
CN (1) CN111461013B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170147815A1 (en) * 2015-11-25 2017-05-25 Lockheed Martin Corporation Method for detecting a threat and threat detecting apparatus
CN106127690A (en) * 2016-07-06 2016-11-16 *** A kind of quick joining method of unmanned aerial vehicle remote sensing image
CN106446815A (en) * 2016-09-14 2017-02-22 浙江大学 Simultaneous positioning and map building method
CN108600607A (en) * 2018-03-13 2018-09-28 上海网罗电子科技有限公司 A kind of fire-fighting panoramic information methods of exhibiting based on unmanned plane
CN109658366A (en) * 2018-10-23 2019-04-19 平顶山天安煤业股份有限公司 Based on the real-time video joining method for improving RANSAC and dynamic fusion
CN110084743A (en) * 2019-01-25 2019-08-02 电子科技大学 Image mosaic and localization method based on more air strips starting track constraint
CN110648363A (en) * 2019-09-16 2020-01-03 腾讯科技(深圳)有限公司 Camera posture determining method and device, storage medium and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张俊前 (Zhang Junqian): "Research on Fast Stitching Methods for UAV Remote Sensing Images", no. 5, pages 73-75 *
陈武 (Chen Wu): "Research on Key Technologies of UAV Video Image Stitching", no. 8, page 2 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361434A (en) * 2021-06-16 2021-09-07 广东电网有限责任公司 Disaster exploration method and device based on unmanned aerial vehicle remote control device
CN114495416A (en) * 2021-12-29 2022-05-13 北京辰安科技股份有限公司 Fire monitoring method and device based on unmanned aerial vehicle and terminal equipment
CN117036666A (en) * 2023-06-14 2023-11-10 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117036666B (en) * 2023-06-14 2024-05-07 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching

Also Published As

Publication number Publication date
CN111461013B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN112435207B (en) Forest fire monitoring and early warning method based on sky-ground integration
CN111461013A (en) Real-time fire scene situation sensing method based on unmanned aerial vehicle
CN105898216B (en) A kind of number method of counting carried out using unmanned plane
CN110142785A (en) A kind of crusing robot visual servo method based on target detection
CN110009530A (en) A kind of nerve network system and method suitable for portable power inspection
KR20170101516A (en) Apparatus and method for fire monitoring using unmanned aerial vehicle
EP2124194B1 (en) Method of detecting objects
WO2011093751A1 (en) A three dimensional model method based on combination of ground based images and images taken from above
CN107221006A (en) A kind of communication single pipe tower slant detection method based on unmanned plane imaging platform
Li et al. An early forest fire detection system based on dji m300 drone and h20t camera
Dang-Ngoc et al. Evaluation of forest fire detection model using video captured by UAVs
Ma et al. A cloud-edge-terminal collaborative system for temperature measurement in COVID-19 prevention
CN113378754B (en) Bare soil monitoring method for construction site
CN113345084A (en) Three-dimensional modeling system and three-dimensional modeling method
Dawdi et al. Locating victims in hot environments using combined thermal and optical imaging
CN111860378A (en) Market fire-fighting equipment inspection method based on gun-ball linkage and video event perception
KR102265291B1 (en) Real time fire detection system and fire detection method using the same
CN110944154A (en) Method for marking and identifying fixed object in high-altitude lookout camera image
WO2023060405A1 (en) Unmanned aerial vehicle monitoring method and apparatus, and unmanned aerial vehicle and monitoring device
Rong et al. A joint faster RCNN and stereovision algorithm for vegetation encroachment detection in power line corridors
CN111461986B (en) Night real-time two-dimensional image stitching method for unmanned aerial vehicle
CN114049580A (en) Airport apron aircraft positioning system
Nguyen et al. Neural network‐based optical flow versus traditional optical flow techniques with thermal aerial imaging in real‐world settings
CN112016498A (en) Shopping cart scattered in shopping mall positioning and recycling method based on computer vision
CN111783676A (en) Intelligent urban road video continuous covering method based on key area perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant