CN116958519A - Unmanned aerial vehicle video image and unmanned aerial vehicle position data alignment method - Google Patents

Unmanned aerial vehicle video image and unmanned aerial vehicle position data alignment method Download PDF

Info

Publication number
CN116958519A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
time
video image
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311188952.6A
Other languages
Chinese (zh)
Other versions
CN116958519B (en)
Inventor
刘云川
马云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Hongbao Technology Co ltd
Sichuan Hongbaorunye Engineering Technology Co ltd
Original Assignee
Chongqing Hongbao Technology Co ltd
Sichuan Hongbaorunye Engineering Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Hongbao Technology Co ltd, Sichuan Hongbaorunye Engineering Technology Co ltd filed Critical Chongqing Hongbao Technology Co ltd
Priority to CN202311188952.6A
Publication of CN116958519A
Application granted
Publication of CN116958519B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/49Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for aligning unmanned aerial vehicle video images with unmanned aerial vehicle position data comprises the following steps: acquiring a video stream of unmanned aerial vehicle video images and unmanned aerial vehicle fixed frequency data from a server, wherein the unmanned aerial vehicle fixed frequency data comprise unmanned aerial vehicle position data and unmanned aerial vehicle time data; parsing the video stream with reference to the unmanned aerial vehicle time data to finally obtain the position data of each frame of video image, and calculating corner point information based on the video stream: a first corner point position Po1 and a first included angle θ1; calculating corner point information based on the unmanned aerial vehicle position data in the fixed frequency data: a second corner point position Po2 and a second included angle θ2; and aligning the unmanned aerial vehicle video images with the unmanned aerial vehicle position data by matching the corner point information based on the video stream with the corner point information based on the unmanned aerial vehicle fixed frequency data. The method can improve the time-matching precision of the video and the position data, thereby improving the accuracy of video annotation.

Description

Unmanned aerial vehicle video image and unmanned aerial vehicle position data alignment method
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a method for aligning an unmanned aerial vehicle video image with unmanned aerial vehicle position data.
Background
In recent years, with the development of unmanned aerial vehicle technology, unmanned aerial vehicles have been widely used in various fields, including mapping, electric power, oil and gas, forestry, security, emergency rescue, and the like. Abroad, unmanned aerial vehicles are applied in the oil and gas pipeline industry, where replacing manual patrol and protection has become the norm. There are many application cases of unmanned aerial vehicles in the field of oil and gas pipeline inspection and protection, and automatic unmanned aerial vehicle inspection has become, or will soon become, the norm in China as well.
In current unmanned aerial vehicle inspection, inspection based on video images accounts for a considerable proportion. In some application fields, users wish to display certain geographic elements on the image. For example, in the field of oil and gas pipeline inspection, the pipeline is normally buried underground and not visible, but users hope to mark the position of the pipeline on the video, so that they can conveniently judge on the video whether risks exist within a certain range around the pipeline.
In current unmanned aerial vehicle systems, the video is usually sent to a video server as a stream, and the client then pulls the video stream for display. For example, the unmanned aerial vehicle and automatic airport system released by a Shenzhen company in 2022 supports several live broadcast types, all of which are standard video streams and do not contain any position information. The position data of the unmanned aerial vehicle is sent to the server through another network channel at a fixed frequency (hereinafter referred to as fixed frequency data), and the client reads the data from the server for use. Because of uncertainty in network transmission, the acquisition time of the latest video frame received by the client is not necessarily aligned with the time of the position data; the time deviation between the two is influenced by the network state and is usually not fixed, so the labeling precision of geographic elements on the video image is difficult to guarantee.
To implement the above pipeline-marking application, the client needs to time-align each frame of image in the video stream with the unmanned aerial vehicle position data, so as to compute the position of each video frame, then calculate the position of the pipeline on the video image through a specific algorithm, and finally draw the annotation. The existing technical scheme is generally as follows: the client continuously requests the video stream and the position data from the server at the same time. The frequencies of the video images and the position data do not match; for example, the video is sent at a frame rate of 30 FPS, but the position data is typically sent one frame every N seconds (e.g., one frame every 2 seconds for an airport system). The video stream contains no time information, while the position data typically does. Because the frequency of the position data is low, the latest frame of position data received is treated as time-aligned, i.e. matched, with the latest frame of image data. The positions of the intermediate images are then interpolated in time from the position data of the preceding and following frames, yielding position data for every frame of image. Because of uncertainty in network transmission, the acquisition time of the latest video frame received by the client is not necessarily aligned with the time of the position data; this deviation is influenced by the network state and the resulting time error is relatively large, so the labeling precision of geographic elements on the video image is difficult to guarantee.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method for aligning an unmanned aerial vehicle video image with unmanned aerial vehicle position data, characterized by comprising the following steps:
S100: acquiring a video stream of unmanned aerial vehicle video images and unmanned aerial vehicle fixed frequency data from a server, wherein the unmanned aerial vehicle fixed frequency data comprise unmanned aerial vehicle position data and unmanned aerial vehicle time data;
S200: parsing the video stream with reference to the unmanned aerial vehicle time data, and acquiring the time of each frame of video image and the successive video images;
S300: processing each frame of video image obtained by parsing with a simultaneous localization and mapping (SLAM) algorithm to obtain the position data of each frame of video image;
S400: segmenting the flight track of the unmanned aerial vehicle according to the position data of each frame of video image, and calculating corner point information based on the video stream, wherein the corner point information based on the video stream comprises a first corner point position Po1 and a first included angle θ1;
S500: segmenting the unmanned aerial vehicle position data in the unmanned aerial vehicle fixed frequency data, and calculating corner point information based on the unmanned aerial vehicle fixed frequency data, wherein the corner point information of the unmanned aerial vehicle fixed frequency data comprises a second corner point position Po2 and a second included angle θ2;
S600: aligning the unmanned aerial vehicle video image with the unmanned aerial vehicle position data by matching the corner point information based on the video stream with the corner point information based on the unmanned aerial vehicle fixed frequency data.
Preferably, step S200 further includes:
when the first frame of unmanned aerial vehicle fixed frequency data is acquired, defining the latest frame of video image as the first frame image, extracting the unmanned aerial vehicle time data in the fixed frequency data as the time T1 of the first frame of video image, and then obtaining the time Tn = T1 + (N − 1)·dt of each subsequent frame of video image, wherein N represents the sequence number of the video image and dt represents the interval between successive video frames in the video stream.
Preferably, dt = 1/FPS, wherein FPS represents the frame rate obtained by parsing the video stream.
Preferably, the position data of each frame of video image in step S300 is three-dimensional position coordinates in a cartesian coordinate system, and includes X, Y, Z data.
Preferably, the segmentation in step S400 refers to division into a plurality of straight line segments.
Preferably, the specific steps of the segmentation are as follows:
S401: selecting the position data of one video image as a starting point, denoted as Pm, and calculating the angle Ai of the line connecting each subsequent video image position point Pi with Pm;
S402: if the deviation between the angle Ai of point Pi with the starting point Pm and the angle Ai−1 of the previous point Pi−1 with the starting point Pm is greater than a threshold, taking the points Pm, Pm+1, ..., Pi−1 as the point set of straight line Lm, and continuing to search for the next straight line with point Pi as the new starting point.
Preferably, the threshold is 5 degrees or 10 degrees.
Preferably, in step S400, the first corner point position Po1 based on the video stream is calculated as follows: after the preceding and following straight line segments are obtained by segmentation, the equations of the two straight lines are calculated respectively by least-squares fitting, and the intersection point of the two straight lines, namely Po1, is then calculated.
Preferably, the step S600 further includes the steps of:
S601: if the difference between the first included angle θ1 and the second included angle θ2 is smaller than the threshold da, calculating the time T1 corresponding to the corner point position Po1 according to the times of the video images;
S602: calculating the time T2 corresponding to Po2;
S603: calculating the time deviation ΔT = T2 − T1 between the video images and the position data;
S604: correcting the time of each frame of video image as T′n = Tn + ΔT, then matching the new times against the position data times and performing the subsequent position-data interpolation, where Tn is the time of each frame of video image.
Preferably, T2 = T3 + (T4 − T3)·l1/(l1 + l2), wherein T3 represents the time corresponding to the point P3 before the turning point of any two line segments, T4 represents the time corresponding to the point P4 after the turning point of any two line segments, l1 represents the length of line segment P3Po2, and l2 represents the length of line segment Po2P4.
Through the above technical scheme, the time-matching precision between unmanned aerial vehicle video images and position data is improved: the position of each image is estimated with a SLAM algorithm, the track is segmented, and the time of the corner point is calculated; the corner point derived from the images is then matched with the corner point derived from the unmanned aerial vehicle fixed frequency data, so that the time offset between image and position data is corrected, effectively improving the accuracy of the image times.
Drawings
FIG. 1 is a flow chart of a method for aligning a video image of a drone with position data of the drone according to one embodiment of the present invention;
FIG. 2 is a position map of a drone estimated from an image in one embodiment of the invention;
FIG. 3 is a graph of the results after segmentation of the position of the drone in one embodiment of the present invention;
FIG. 4 is a graph of the intersection and angle of straight lines in one embodiment of the invention;
fig. 5 is a graph of dividing straight lines and calculating intersection points and angles by the drone fixed frequency position data in one embodiment of the present invention.
Detailed Description
In order that those skilled in the art may understand the technical solutions disclosed in the present invention, the technical solutions of the various embodiments will be described below with reference to the embodiments and the related figs. 1 to 5; the described embodiments are some, but not all, embodiments of the present invention. The terms "first", "second", and the like, as used herein, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to such process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will appreciate that the embodiments described herein may be combined with other embodiments.
In one embodiment, as shown in fig. 1, a method for aligning an unmanned aerial vehicle video image with unmanned aerial vehicle position data is disclosed, comprising the following steps:
S100: acquiring a video stream of unmanned aerial vehicle video images and unmanned aerial vehicle fixed frequency data from a server, wherein the unmanned aerial vehicle fixed frequency data comprise unmanned aerial vehicle position data and unmanned aerial vehicle time data;
S200: parsing the video stream with reference to the unmanned aerial vehicle time data, and acquiring the time of each frame of video image and the successive video images;
S300: processing each frame of video image obtained by parsing with a simultaneous localization and mapping (SLAM) algorithm to obtain the position data of each frame of video image;
S400: segmenting the flight track of the unmanned aerial vehicle according to the position data of each frame of video image, and calculating corner point information based on the video stream, wherein the corner point information based on the video stream comprises a first corner point position Po1 and a first included angle θ1;
S500: segmenting the unmanned aerial vehicle position data in the unmanned aerial vehicle fixed frequency data, and calculating corner point information based on the unmanned aerial vehicle fixed frequency data, wherein the corner point information of the unmanned aerial vehicle fixed frequency data comprises a second corner point position Po2 and a second included angle θ2;
S600: aligning the unmanned aerial vehicle video image with the unmanned aerial vehicle position data by matching the corner point information based on the video stream with the corner point information based on the unmanned aerial vehicle fixed frequency data.
For this embodiment, the method requires no modification of the existing hardware system; it improves the matching precision of unmanned aerial vehicle video and position data purely through an algorithm, thereby improving the labeling precision of geographic elements on the image. User satisfaction is improved at almost no added cost. Other solutions typically require modification of software or hardware on the unmanned aerial vehicle, which usually incurs additional cost.
In another embodiment, step S200 further comprises:
when the first frame of unmanned aerial vehicle fixed frequency data is acquired, defining the latest frame of video image as the first frame image, extracting the unmanned aerial vehicle time data in the fixed frequency data as the time T1 of the first frame of video image, and then obtaining the time Tn = T1 + (N − 1)·dt of each subsequent frame of video image, wherein N represents the sequence number of the video image and dt represents the interval between successive video frames in the video stream.
For this embodiment, the video stream is parsed to obtain successive video images. When the first frame of fixed frequency data is acquired, the latest frame of image is defined as the first frame image, the time in the fixed frequency data is extracted as the time T1 of the first frame image, and the time of each subsequent frame of image (with sequence number N) is then Tn = T1 + (N − 1)·dt.
In another embodiment, dt = 1/FPS, where FPS represents the frame rate obtained by parsing the video stream.
For this embodiment, after receiving the video stream, the frame rate FPS (typically an integer such as 30 or 60) is first parsed from the video stream, and the inter-frame interval dt = 1/FPS is calculated.
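As an illustration only, the following Python sketch computes the per-frame times under the assumptions above; the function name and example values are hypothetical, not from the patent.

    def frame_times(t1, fps, num_frames):
        """Return Tn = T1 + (N - 1) * dt for N = 1..num_frames, with dt = 1/FPS."""
        dt = 1.0 / fps
        return [t1 + (n - 1) * dt for n in range(1, num_frames + 1)]

    # Example: a 30 FPS stream whose first frame carries the UAV time 1694760000.0 s
    times = frame_times(t1=1694760000.0, fps=30.0, num_frames=4)
    # -> [1694760000.0, 1694760000.0333, 1694760000.0667, 1694760000.1]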
In another embodiment, the position data of each frame of video image in step S300 is three-dimensional position coordinates in a cartesian coordinate system, including X, Y, Z data.
For this embodiment, each frame of video image is processed with a SLAM (Simultaneous Localization and Mapping) algorithm: the image data is input, and the position Pn of the image (three-dimensional coordinates in a Cartesian system, comprising X, Y and Z data) is output. This position is taken as the position of the unmanned aerial vehicle. Since the coordinate system of SLAM is a local coordinate system, the position cannot be used directly as a geographic position.
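The patent does not specify a particular SLAM implementation. Purely as a hedged illustration of step S300, the following Python sketch estimates an up-to-scale position per frame with OpenCV monocular visual odometry (ORB features plus essential-matrix pose recovery); the function name and parameters are assumptions, and its output lives in exactly the kind of local, up-to-scale coordinate system that, as noted above, cannot be used directly.

    import cv2
    import numpy as np

    def estimate_positions(frames, K):
        """frames: grayscale images in order; K: 3x3 camera intrinsics.
        Returns an Nx3 array of camera positions in a local, up-to-scale frame."""
        orb = cv2.ORB_create(2000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        R_cur, t_cur = np.eye(3), np.zeros((3, 1))
        positions = [t_cur.ravel().copy()]
        kp_prev, des_prev = orb.detectAndCompute(frames[0], None)
        for img in frames[1:]:
            kp, des = orb.detectAndCompute(img, None)
            matches = matcher.match(des_prev, des)
            p1 = np.float32([kp_prev[m.queryIdx].pt for m in matches])
            p2 = np.float32([kp[m.trainIdx].pt for m in matches])
            E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
            # Accumulate relative motion; monocular translation is only known up to scale.
            t_cur = t_cur + R_cur @ t
            R_cur = R @ R_cur
            positions.append(t_cur.ravel().copy())
            kp_prev, des_prev = kp, des
        return np.array(positions)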
In another embodiment, the segmentation in step S400 refers to dividing into a plurality of straight line segments.
For this embodiment, the flight trajectory is segmented according to the above-described position data, i.e., the flight trajectory is divided into a plurality of straight line segments according to the position coordinates of the images. Fig. 2 shows the position points corresponding to the images, and fig. 3 shows the segmented result. The track of an unmanned aerial vehicle generally consists of several straight line segments; straight lines are convenient to fit, and the calculated angles are convenient for the subsequent matching.
In another embodiment, the specific steps of the segmentation are as follows:
S401: selecting the position data of one video image as a starting point, denoted as Pm, and calculating the angle Ai of the line connecting each subsequent video image position point Pi with Pm;
S402: if the deviation between the angle Ai of point Pi with the starting point Pm and the angle Ai−1 of the previous point Pi−1 with the starting point Pm is greater than a threshold, taking the points Pm, Pm+1, ..., Pi−1 as the point set of straight line Lm, and continuing to search for the next straight line with point Pi as the new starting point.
For this embodiment, the segmentation method is as follows: select the position of one image as the starting point, denoted Pm (e.g., point P1 in fig. 2); calculate the angle Ai of the line connecting each subsequent image position point Pi (the positions of the images after P1 in fig. 2) with Pm (equivalently, the slope Ki = tan(Ai) of the line connecting Pi and Pm can be used); if the angle of point Pi with the starting point Pm deviates from that of the previous point Pi−1 by more than a threshold, take the previous points (Pm, Pm+1, ..., Pi−1) as the point set of straight line Lm, fit these points by least squares to obtain the line Lm, and continue searching for the next straight line with the current point Pi as the new starting point.
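As an illustration only, the following Python sketch implements this angle-threshold segmentation (names are assumptions; angle wrap-around at ±180 degrees is ignored for brevity):

    import math

    def segment_track(points, threshold_deg=5.0):
        """points: list of (x, y) image positions in order; returns straight segments."""
        segments, m = [], 0
        while m < len(points) - 1:
            seg = [points[m]]
            prev_angle = None
            i = m + 1
            while i < len(points):
                dx = points[i][0] - points[m][0]
                dy = points[i][1] - points[m][1]
                angle = math.degrees(math.atan2(dy, dx))  # Ai: direction from Pm to Pi
                if prev_angle is not None and abs(angle - prev_angle) > threshold_deg:
                    break  # Pm..Pi-1 form straight line Lm; Pi starts the next one
                seg.append(points[i])
                prev_angle = angle
                i += 1
            segments.append(seg)
            m = i  # per S402, continue searching with Pi as the new starting point
        return segments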
In another embodiment, the threshold is 5 degrees or 10 degrees.
In another embodiment, the first corner point position Po1 based on the video stream in step S400 is calculated as follows: after the preceding and following straight line segments are obtained by segmentation, the equations of the two straight lines are calculated respectively by least-squares fitting, and the intersection point of the two straight lines, namely Po1, is then calculated.
For this embodiment, after the preceding and following straight line segments are obtained by segmentation, the equations of the two straight lines can be calculated respectively by least-squares fitting, and the intersection point of the two lines, namely Po1 (this point is the corner position on the flight track of the unmanned aerial vehicle), together with the included angle θ1, is then calculated, as shown in fig. 4.
Similarly, the positions in the unmanned aerial vehicle fixed frequency data are segmented, and the intersection point Po2 and the second included angle θ2 of the preceding and following straight line segments are calculated, as shown in fig. 5.
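Purely as an illustration, a minimal numpy sketch of this corner computation, assuming both segments can be fitted as y = k·x + b (a vertical segment would need a parametric fit, which is omitted here):

    import numpy as np

    def corner_from_segments(seg1, seg2):
        """seg1, seg2: arrays of (x, y) points on two adjacent straight segments.
        Returns the intersection of the fitted lines and their included angle in degrees."""
        k1, b1 = np.polyfit(*np.asarray(seg1, float).T, deg=1)  # least-squares y = k*x + b
        k2, b2 = np.polyfit(*np.asarray(seg2, float).T, deg=1)
        x = (b2 - b1) / (k1 - k2)  # intersection Po of the two fitted lines
        y = k1 * x + b1
        angle = np.degrees(abs(np.arctan(k1) - np.arctan(k2)))  # included angle
        return (x, y), angle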
In another embodiment, step S600 further comprises the steps of:
S601: if the difference between the first included angle θ1 and the second included angle θ2 is smaller than the threshold da, calculating the time T1 corresponding to the corner point position Po1 according to the times of the video images;
S602: calculating the time T2 corresponding to Po2;
S603: calculating the time deviation ΔT = T2 − T1 between the video images and the position data;
S604: correcting the time of each frame of video image as T′n = Tn + ΔT, then matching the new times against the position data times and performing the subsequent position-data interpolation, where Tn is the time of each frame of video image.
For this embodiment, if the difference between θ1 and θ2 is smaller than the angle threshold da (da is usually 5 to 10 degrees), the time T1 corresponding to the corner point position Po1 is calculated from the times of the images, and the time T2 corresponding to Po2 is calculated from the unmanned aerial vehicle fixed frequency data. From the corner times, the time deviation ΔT = T2 − T1 between the video images and the position data is calculated, the time of each video frame is corrected as T′n = Tn + ΔT, and the new times are then matched against the position data times for the subsequent position-data interpolation. The fixed frequency data, at one frame every two seconds, is too sparse, while the images, at 30 frames per second, are very dense, so a position must be interpolated for every frame of image: with the new time T′, the position is interpolated from within the fixed frequency data. Simple linear interpolation over time suffices, since the flight speed of the unmanned aerial vehicle is constant.
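As an illustration of steps S601 to S604 under the assumptions above (function and parameter names are hypothetical), a minimal numpy sketch:

    import numpy as np

    def align_and_interpolate(frame_times, t1_corner, t2_corner, fix_times, fix_xyz):
        """frame_times: per-frame times Tn; t1_corner, t2_corner: corner times from the
        video track and the fixed frequency track; fix_times: (N,), fix_xyz: (N, 3)."""
        delta_t = t2_corner - t1_corner                # S603: deltaT = T2 - T1
        corrected = np.asarray(frame_times) + delta_t  # S604: T'n = Tn + deltaT
        fix_xyz = np.asarray(fix_xyz, float)
        # Simple linear interpolation per coordinate, valid under the constant-speed
        # assumption stated above.
        return np.column_stack([np.interp(corrected, fix_times, fix_xyz[:, k])
                                for k in range(3)])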
In another embodiment, T2 = T3 + (T4 − T3)·l1/(l1 + l2), wherein T3 represents the time corresponding to the point P3 before the turning point of any two line segments, T4 represents the time corresponding to the point P4 after the turning point of any two line segments, l1 represents the length of line segment P3Po2, and l2 represents the length of line segment Po2P4.
For this embodiment, the calculation of T2 proceeds as follows: taking fig. 5 as an example, the time corresponding to P3 is T3 and the time corresponding to P4 is T4; the length l1 of line segment P3Po2 and the length l2 of line segment Po2P4 are calculated, and the time corresponding to Po2 is then T2 = T3 + (T4 − T3)·l1/(l1 + l2).
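A worked form of this path-length interpolation (the point and time values are hypothetical):

    import math

    def corner_time(p3, p4, po2, t3, t4):
        """Time at the corner Po2, interpolated along P3 -> Po2 -> P4 by path length."""
        l1 = math.dist(p3, po2)  # |P3 Po2|
        l2 = math.dist(po2, p4)  # |Po2 P4|
        return t3 + (t4 - t3) * l1 / (l1 + l2)

    # Example: P3 = (0, 0) at t3 = 10 s, P4 = (4, 0) at t4 = 12 s, corner at (3, 0)
    # -> T2 = 10 + 2 * 3/4 = 11.5 s
    print(corner_time((0, 0), (4, 0), (3, 0), 10.0, 12.0))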
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, the present invention is not limited to the specific embodiments and application fields described above; the specific embodiments are merely illustrative and not restrictive. Those skilled in the art, under the teaching of this disclosure, may devise many other forms of the invention without departing from the scope of the invention as claimed.

Claims (10)

1. A method for aligning a video image of an unmanned aerial vehicle with position data of the unmanned aerial vehicle, comprising the following steps:
S100: acquiring a video stream of unmanned aerial vehicle video images and unmanned aerial vehicle fixed frequency data from a server, wherein the unmanned aerial vehicle fixed frequency data comprise unmanned aerial vehicle position data and unmanned aerial vehicle time data;
S200: parsing the video stream with reference to the unmanned aerial vehicle time data, and acquiring the time of each frame of video image and the successive video images;
S300: processing each frame of video image obtained by parsing with a simultaneous localization and mapping (SLAM) algorithm to obtain the position data of each frame of video image;
S400: segmenting the flight track of the unmanned aerial vehicle according to the position data of each frame of video image, and calculating corner point information based on the video stream, wherein the corner point information based on the video stream comprises a first corner point position Po1 and a first included angle θ1;
S500: segmenting the unmanned aerial vehicle position data in the unmanned aerial vehicle fixed frequency data, and calculating corner point information based on the unmanned aerial vehicle fixed frequency data, wherein the corner point information of the unmanned aerial vehicle fixed frequency data comprises a second corner point position Po2 and a second included angle θ2;
S600: aligning the unmanned aerial vehicle video image with the unmanned aerial vehicle position data by matching the corner point information based on the video stream with the corner point information based on the unmanned aerial vehicle fixed frequency data.
2. The method according to claim 1, wherein step S200 further comprises:
when the first frame of unmanned aerial vehicle fixed frequency data is acquired, defining the last frame of video image as the first frame of image, extracting unmanned aerial vehicle time data in the unmanned aerial vehicle fixed frequency data as time T1 of the first frame of video image, and then obtaining time Tn=T1+ (N-1) dt of each subsequent frame of video image, wherein N represents the sequence number of the video image, and dt represents the interval time between each frame of video image in the video stream.
3. The method of claim 2, wherein dt = 1/FPS, wherein FPS represents the frame rate obtained by parsing the video stream.
4. The method of claim 1, wherein the position data of each frame of video image in step S300 is three-dimensional position coordinates in a cartesian coordinate system, including X, Y, Z data.
5. The method of claim 1, wherein the segmentation in step S400 is divided into a plurality of straight line segments.
6. The method according to claim 5, characterized in that the specific steps of the segmentation are as follows:
S401: selecting the position data of one video image as a starting point, denoted as Pm, and calculating the angle Ai of the line connecting each subsequent video image position point Pi with Pm;
S402: if the deviation between the angle Ai of point Pi with the starting point Pm and the angle Ai−1 of the previous point Pi−1 with the starting point Pm is greater than a threshold, taking the points Pm, Pm+1, ..., Pi−1 as the point set of straight line Lm, and continuing to search for the next straight line with point Pi as the new starting point.
7. The method of claim 6, wherein the threshold is 5 degrees or 10 degrees.
8. The method according to claim 1, wherein the calculation method based on the first corner point position Po1 of the video stream in step S400 is: after the front and rear straight line segments are segmented, equations of the two straight lines are calculated respectively through a least square fitting method, and then an intersection point of the two straight lines, namely Po1, is calculated.
9. The method of claim 1, wherein step S600 further comprises the steps of:
S601: if the difference between the first included angle θ1 and the second included angle θ2 is smaller than the threshold da, calculating the time T1 corresponding to the corner point position Po1 according to the times of the video images;
S602: calculating the time T2 corresponding to Po2;
S603: calculating the time deviation ΔT = T2 − T1 between the video images and the position data;
S604: correcting the time of each frame of video image as T′n = Tn + ΔT, then matching the new times against the position data times and performing the subsequent position-data interpolation, where Tn is the time of each frame of video image.
10. The method of claim 9, wherein T2 = T3 + (T4 − T3)·l1/(l1 + l2), wherein T3 represents the time corresponding to the point P3 before the turning point of any two line segments, T4 represents the time corresponding to the point P4 after the turning point of any two line segments, l1 represents the length of line segment P3Po2, and l2 represents the length of line segment Po2P4.
CN202311188952.6A 2023-09-15 2023-09-15 Unmanned aerial vehicle video image and unmanned aerial vehicle position data alignment method Active CN116958519B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311188952.6A CN116958519B (en) 2023-09-15 2023-09-15 Unmanned aerial vehicle video image and unmanned aerial vehicle position data alignment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311188952.6A CN116958519B (en) 2023-09-15 2023-09-15 Unmanned aerial vehicle video image and unmanned aerial vehicle position data alignment method

Publications (2)

Publication Number Publication Date
CN116958519A (en) 2023-10-27
CN116958519B (en) 2023-12-08

Family

ID=88453240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311188952.6A Active CN116958519B (en) 2023-09-15 2023-09-15 Unmanned aerial vehicle video image and unmanned aerial vehicle position data alignment method

Country Status (1)

Country Link
CN (1) CN116958519B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130027555A1 (en) * 2011-07-31 2013-01-31 Meadow William D Method and Apparatus for Processing Aerial Imagery with Camera Location and Orientation for Simulating Smooth Video Flyby
WO2016069497A1 (en) * 2014-10-26 2016-05-06 Galileo Group, Inc. Methods and systems for remote sensing with airborne drones and mounted sensor devices
WO2017066904A1 (en) * 2015-10-19 2017-04-27 Nokia Technologies Oy A navigation apparatus and associated methods
RU2016145621A (en) * 2016-11-22 2018-05-22 Федеральное государственное унитарное предприятие Государственный научно-исследовательский институт авиационных систем Method for simultaneous measurement of aircraft velocity vector and range to a ground object
CN110648398A (en) * 2019-08-07 2020-01-03 武汉九州位讯科技有限公司 Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data
WO2020119140A1 (en) * 2018-12-13 2020-06-18 歌尔股份有限公司 Method, apparatus and smart device for extracting keyframe in simultaneous localization and mapping
CN112650301A (en) * 2021-01-11 2021-04-13 四川泓宝润业工程技术有限公司 Control method for guiding unmanned aerial vehicle to accurately land
US20220012910A1 (en) * 2018-11-12 2022-01-13 Forsberg Services Ltd Locating system
CN114820768A (en) * 2022-04-15 2022-07-29 中国电子科技集团公司第五十四研究所 Method for aligning geodetic coordinate system and slam coordinate system
CN115615436A (en) * 2022-09-17 2023-01-17 黄广东 Multi-machine repositioning unmanned aerial vehicle positioning method
CN115731100A (en) * 2021-08-30 2023-03-03 成都纵横自动化技术股份有限公司 Image splicing method and system based on multiple unmanned aerial vehicles
US20230081472A1 (en) * 2020-02-13 2023-03-16 Fengyu Wang Method, apparatus, and system for wireless vital monitoring using high frequency signals
CN116109930A (en) * 2023-02-22 2023-05-12 上海电力大学 Cross-view geographic view positioning method based on dynamic observation

Also Published As

Publication number Publication date
CN116958519B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
US9996936B2 (en) Predictor-corrector based pose detection
US8970694B2 (en) Video processing system providing overlay of selected geospatially-tagged metadata relating to a geolocation outside viewable area and related methods
US20230175862A1 (en) Distributed Device Mapping
CN103162682B (en) Based on the indoor path navigation method of mixed reality
US8717436B2 (en) Video processing system providing correlation between objects in different georeferenced video feeds and related methods
US8363109B2 (en) Video processing system providing enhanced tracking features for moving objects outside of a viewable window and related methods
US8933961B2 (en) Video processing system generating corrected geospatial metadata for a plurality of georeferenced video feeds and related methods
WO2021125578A1 (en) Position recognition method and system based on visual information processing
CN110599522A (en) Method for detecting and removing dynamic target in video sequence
KR20180015961A (en) Method of estimating the location of object image-based and apparatus therefor
CN111829532A (en) Aircraft repositioning system and method
CN111272181B (en) Method, device, equipment and computer readable medium for constructing map
CN115908489A (en) Target tracking method and device
CN116958519B (en) Unmanned aerial vehicle video image and unmanned aerial vehicle position data alignment method
CN109816726B (en) Visual odometer map updating method and system based on depth filter
CN113781567B (en) Aerial image target geographic positioning method based on three-dimensional map generation
US11741631B2 (en) Real-time alignment of multiple point clouds to video capture
Wang et al. Automatic positioning data correction for sensor-annotated mobile videos
CN112991388A (en) Line segment feature tracking method based on optical flow tracking prediction and convex geometric distance
Kogut et al. A wide area tracking system for vision sensor networks
KR101635599B1 (en) Method and apparatus for providing update service location of object based location based service
Hwang et al. Object Tracking for a Video Sequence from a Moving Vehicle: A Multi‐modal Approach
Xu et al. Research on Multi-Source Fusion Based Seamless Indoor/Outdoor Positioning Technology
WO2024103708A1 (en) Positioning method, terminal device, server, and storage medium
Arslan Accuracy assessment of single viewing techniques for metric measurements on single images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant