CN112529936B - Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle - Google Patents


Info

Publication number
CN112529936B
CN112529936B (application CN202011291009.4A)
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
point set
model
speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011291009.4A
Other languages
Chinese (zh)
Other versions
CN112529936A (en)
Inventor
欧阳子臻
杨煜基
成慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN202011291009.4A
Publication of CN112529936A
Application granted
Publication of CN112529936B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/269: Analysis of motion using gradient-based methods
    • G06T 7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of target tracking, and in particular to a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle, comprising the following steps. S1: extract and track feature points over two adjacent frames of gray-scale images and record them as the original sample point set. S2: remove invalid feature points from the original sample point set to obtain a sampled point set, and obtain the optimal model H_best from the sampled point set. S3: obtain the speed of the unmanned aerial vehicle in the image coordinate system from the optimal model H_best. S4: convert the speed in the image coordinate system into the speed in the camera coordinate system. Because invalid feature points are removed from the original sample point set, the scheme eliminates the influence of feature points generated by the shadow of the unmanned aerial vehicle during flight, making the speed estimation of the unmanned aerial vehicle in outdoor flight more accurate.

Description

Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle
Technical Field
The invention relates to the technical field of target tracking, in particular to a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle.
Background
The concept of optical flow was first proposed by James J. Gibson in the 1940s and refers to the velocity of pattern motion in time-varying images. When an object moves, the brightness pattern of its corresponding points on the image moves with it; this apparent motion of image brightness is the optical flow. Because it carries information about the target's motion, an observer can use it to determine how the target is moving.
Optical flow methods are often used for moving-image analysis. They use the instantaneous velocity of a space object's pixels on the imaging plane, together with the temporal change of pixels in the image sequence and the correlation between adjacent frames, to find the correspondence between the previous frame and the current frame in two adjacent gray-scale images, and from it compute the object's motion between the frames. By solution scale, optical flow algorithms divide into dense and sparse optical flow. Dense optical flow must be solved for every pixel in the image, so its computational cost is large; sparse optical flow only considers feature points such as corner points or edge points, so its cost is small and it suits embedded devices. It is currently the more common choice in unmanned aerial vehicle flight control systems, where it is used to compute the speed of the vehicle. The classical sparse optical flow algorithm was first proposed by Lucas and Kanade in 1981 and is therefore also called L-K optical flow. The L-K algorithm relies on three basic assumptions. The first is constant brightness: points representing the same object keep the same brightness across two consecutive frames. The second is temporal continuity: between two consecutive frames, the displacement of the object is relatively small. The third is spatial consistency: adjacent points in the image belong to the same surface and move similarly.
The basic optical flow equation is then solved for all pixels in a neighborhood by the least-squares method.
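As a minimal illustration of this least-squares step, the sketch below estimates the flow at a single feature point from two synthetic gray-scale frames; the window size, the synthetic pattern, and the function name are all assumptions for demonstration, not taken from the patent.

```python
import numpy as np

def lk_flow_at(prev, curr, x, y, win=7):
    """Estimate the flow (u, v) at one feature point by solving the basic
    optical-flow equation Ix*u + Iy*v + It = 0 over a window with least
    squares, as in the Lucas-Kanade formulation."""
    h = win // 2
    # take a (win+2) x (win+2) patch so central differences fit inside it
    p = prev[y - h - 1:y + h + 2, x - h - 1:x + h + 2].astype(float)
    c = curr[y - h - 1:y + h + 2, x - h - 1:x + h + 2].astype(float)
    Ix = (p[1:-1, 2:] - p[1:-1, :-2]) / 2.0      # spatial gradient in x
    Iy = (p[2:, 1:-1] - p[:-2, 1:-1]) / 2.0      # spatial gradient in y
    It = c[1:-1, 1:-1] - p[1:-1, 1:-1]           # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    sol, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return sol[0], sol[1]

# synthetic frame pair: a smooth pattern translated by exactly 1 pixel in x
xs, ys = np.meshgrid(np.arange(64), np.arange(64))
prev = np.sin(0.3 * xs) * np.cos(0.2 * ys) * 100.0 + 128.0
curr = np.sin(0.3 * (xs - 1)) * np.cos(0.2 * ys) * 100.0 + 128.0
u, v = lk_flow_at(prev, curr, 32, 32)            # expect u near 1, v near 0
```

The constant-brightness and small-displacement assumptions from the description are exactly what make this linearized solve valid: a 1-pixel shift of a slowly varying pattern.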
Chinese patent CN106989744A discloses a method for obtaining the optical flow speed. It applies Shi-Tomasi corner detection to each gray-scale image and selects the 100 feature points with the most distinctive texture; around each feature point it takes a 3x3 pixel window as a pixel unit. For two adjacent gray-scale images, the window position in the previous frame serves as the initial window position in the next frame and a search area is established; using the Lucas-Kanade inverse compositional algorithm with a five-level optical flow pyramid, the previous frame's pixel window is matched in the next frame's search area by minimizing the gray-scale difference with least squares. The displacement between the two pixel windows is taken as the optical flow vector, and the optical flow speed is obtained by differencing.
However, when the unmanned aerial vehicle flies outdoors it casts a shadow under sunlight, and when edge points or corner points of that shadow are selected as feature points, they cannot accurately reflect the motion speed of the vehicle relative to the ground. This method is therefore unsuitable for outdoor unmanned aerial vehicle flight control.
Disclosure of Invention
To overcome the above defects in the prior art, the invention provides a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle that eliminates the influence of the vehicle's shadow during speed estimation and improves the accuracy of the estimated speed.
The technical scheme provides a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle, comprising the following steps:
S1: extract and track feature points over two adjacent frames of gray-scale images and record them as the original sample point set;
S2: remove invalid feature points from the original sample point set to obtain a sampled point set, and obtain the optimal model H_best from the sampled point set;
S3: obtain the speed of the unmanned aerial vehicle in the image coordinate system from the optimal model H_best;
S4: convert the speed in the image coordinate system into the speed in the camera coordinate system.
Because invalid feature points are removed from the original sample point set, the scheme eliminates the influence of feature points generated by the shadow of the unmanned aerial vehicle during flight, making the speed estimation of the unmanned aerial vehicle in outdoor flight more accurate.
Preferably, in step S1 the relative motion of the feature points between two adjacent frames is expressed as:
p′_[i] = H p_[i]
wherein p′_[i] = [x′_[i], y′_[i], 1] is the i-th feature point of the current frame in the two adjacent gray-scale images, p_[i] = [x_[i], y_[i], 1] is the i-th feature point of the previous frame, and H is the model that fits the most feature points in the gray-scale images, a homography matrix. Its entries h_13 and h_23 represent the translation of the unmanned aerial vehicle in the x-direction and y-direction in the image coordinate system, so solving for h_13 and h_23 yields the speed of the vehicle in image coordinates.
Preferably, in step S2 the optimal model H_best is obtained with a random sample consensus (RANSAC) algorithm, specifically comprising the following steps:
S21: randomly pick m_i sample points from the original sample point set;
S22: obtain a model H from the m_i sample points;
S23: compute the inlier set I_i from the model H, and from the inlier set I_i obtain the current optimal inlier set I_best;
S24: update the model H from the current optimal inlier set I_best and record it as the current optimal model H_best;
S25: compute the total error J_sum of all sample points in the original sample point set against the current optimal model H_best; if J_sum is smaller than the total model error threshold E, output the current optimal model H_best, otherwise go to step S26;
S26: set a maximum number of iterations K and execute steps S21 to S25 in a loop to obtain and output the current optimal model H_best.
Preferably, in step S22 the model H is obtained from the homogeneous equation set built from the sample pairs; with h_33 fixed to 1, each sample pair contributes the two equations
x′_[n] = h_11·x_[n] + h_12·y_[n] + h_13 − x′_[n]·(h_31·x_[n] + h_32·y_[n]),
y′_[n] = h_21·x_[n] + h_22·y_[n] + h_23 − y′_[n]·(h_31·x_[n] + h_32·y_[n]),
wherein (x_[n], y_[n]) and (x′_[n], y′_[n]) are the n-th sample pair: x_[n] and y_[n] are the x-axis and y-axis positions in the image of the n-th feature point of the previous frame in the two adjacent gray-scale images, and x′_[n] and y′_[n] are the x-axis and y-axis positions in the image of the corresponding n-th feature point of the current frame.
Preferably, h_33 in the model H is set to 1, because the model H is computed from homogeneous equations and is therefore determined only up to scale.
Preferably, m is 4 or more: with h_33 set to 1, the model H has 8 unknowns, and since each sample pair supplies two equations, at least 4 point pairs (8 equations) are needed to solve it.
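Under the stated assumption h_33 = 1, the 8-unknown system can be sketched as a stacked least-squares solve; the function name, the equation layout, and the test values below are illustrative, not taken from the patent:

```python
import numpy as np

def fit_homography(src, dst):
    """Fit H with h33 fixed to 1 from n >= 4 point pairs: stack two
    equations per pair into A h = b and solve with least squares."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)   # re-attach h33 = 1

# sanity check against a known near-translation model
H_true = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
dst = src @ H_true[:2, :2].T + H_true[:2, 2]   # apply the translation
H_est = fit_homography(src, dst)
```

With exactly 4 non-collinear pairs the 8x8 system is fully determined, which is why m must be at least 4.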
Preferably, step S23 specifically comprises the following steps:
S231: set a maximum number of iterations N;
S232: compute the error J_[n] between every sample pair in the original sample point set and the model H, with the formula:
J_[n] = ||p′_[n] − H p_[n]||²
wherein p′_[n] is the n-th feature point of the current frame in the two adjacent gray-scale images and p_[n] is the n-th feature point of the previous frame;
S233: if J_[n] is smaller than the error threshold e, classify the n-th feature point into the inlier set I_i; otherwise classify it into the outlier set and discard it;
S234: repeat steps S232 to S233 to obtain the inlier set I_i;
S235: count the numbers of elements S_i and S_best of the inlier set I_i and of the optimal inlier set I_best respectively; if S_i is smaller than S_best, keep I_best unchanged, otherwise set I_best = I_i;
wherein i is the loop iteration index, i ≤ N.
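Steps S21 to S26 together with the inner loop above can be sketched as one compact RANSAC routine. The iteration count, the error threshold e, and the synthetic "shadow" outliers below are assumptions for illustration, not values from the patent:

```python
import numpy as np

def fit_h(src, dst):
    """Least-squares homography with h33 fixed to 1 (step S22)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def reproj_err(H, src, dst):
    """Per-pair error J_[n] = ||p'_[n] - H p_[n]||^2 (step S232)."""
    q = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    q = q[:, :2] / q[:, 2:3]
    return np.sum((q - dst) ** 2, axis=1)

def ransac_h(src, dst, iters=200, e=1.0, seed=0):
    """RANSAC over the sample pairs, following steps S21 to S26: sample 4
    pairs, fit H, keep the largest inlier set, then refit on it."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_h(src[idx], dst[idx])
        inliers = reproj_err(H, src, dst) < e
        if inliers.sum() > best.sum():    # keep the larger inlier set (S235)
            best = inliers
    return fit_h(src[best], dst[best]), best

# 20 points translated by (4, 1) plus 5 stationary "shadow" outliers
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 200.0, size=(25, 2))
dst = src + np.array([4.0, 1.0])
dst[20:] = src[20:]            # shadow features show zero apparent motion
H_best, inliers = ransac_h(src, dst)
```

On this synthetic data the stationary shadow points are rejected as outliers, and the refit model recovers the true translation in h_13 and h_23, which is exactly the shadow-removal effect the scheme claims.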
Preferably, since the optical flow camera of an unmanned aerial vehicle is generally mounted at the bottom with the lens pointing vertically downward, and the vehicle maintains small pitch and roll angles in medium- and low-speed flight, the speed in the camera coordinate system can be regarded as the speed of the vehicle relative to the ground below. The conversion in step S4 is:
v_c = K v_p
wherein v_p is the speed of the unmanned aerial vehicle in the image coordinate system, v_c is its speed in the camera coordinate system, and K is the intrinsic matrix of the camera.
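A numeric check of the stated conversion v_c = K v_p; the intrinsic parameters below are placeholder values, not from the patent, and the third component of the velocity vector is 0 because a velocity is a difference of homogeneous points:

```python
import numpy as np

# Assumed intrinsic matrix K (focal lengths and principal point are
# illustrative values only):
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])

v_p = np.array([0.05, -0.02, 0.0])   # image-coordinate velocity, homogeneous
v_c = K @ v_p                        # the patent's conversion: v_c = K v_p
```

Because the third component of v_p is zero, the principal-point column of K drops out and only the focal lengths scale the velocity.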
To improve the reliability of the speed estimate, the method preferably further comprises step S5: estimate the optimal speed of the unmanned aerial vehicle by Kalman-filter fusion.
Preferably, step S5 specifically comprises the following steps:
S51: since an unmanned aerial vehicle generally carries an inertial measurement unit, obtain the acceleration from the onboard inertial measurement unit and integrate it to obtain the vehicle speed v_i;
S52: use this velocity model in the prediction stage of the Kalman filter and the speed v_c obtained in step S4 in the update stage, yielding the optimal speed estimate v of the unmanned aerial vehicle.
Compared with the prior art, the beneficial effects are as follows. The invention removes invalid feature points from the original sample point set with a random sample consensus algorithm, which eliminates the influence of feature points generated by the shadow of the unmanned aerial vehicle during flight and makes the speed estimation in outdoor flight more accurate. In addition, after the vehicle speed is obtained, an inertial measurement unit is used to build a velocity model, and a Kalman filter fuses the measured speed with this model to estimate the vehicle speed, further improving the reliability and accuracy of the speed estimate and benefiting the unmanned aerial vehicle flight control system.
Drawings
FIG. 1 is a flow chart of a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of the present invention;
FIG. 2 is a schematic diagram of extracting and tracking feature points in the monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of the present invention, in which the feature points generated by the shadow of the unmanned aerial vehicle are enclosed in the rectangular frame;
FIG. 3 is a graph of the visual effect calculated for FIG. 2 using a classical optical flow algorithm;
fig. 4 is a visual effect diagram calculated from fig. 2 by using the monocular sparse optical flow algorithm for the outdoor unmanned aerial vehicle of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent. For the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged, or reduced, and do not represent the actual product dimensions. It will be appreciated by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent.
The same or similar reference numbers in the drawings of the embodiments of the invention correspond to the same or similar components. In the description of the invention, orientations or positional relationships indicated by terms such as "upper", "lower", "left", "right", "long", and "short" are based on the orientations shown in the drawings; they merely serve convenience and simplicity of description and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore exemplary only and are not to be construed as limiting the patent; those of ordinary skill in the art can understand their specific meaning according to the specific circumstances.
The technical scheme of the invention is further specifically described by the following specific embodiments with reference to the accompanying drawings:
example 1
Fig. 1 shows an embodiment of a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle, comprising the following steps:
S1: extract and track feature points over two adjacent frames of gray-scale images with the Shi-Tomasi corner detection method and record them as the original sample point set; of course, other methods may also be used to detect the feature points in the gray-scale images, and no limitation is imposed here;
S2: remove invalid feature points from the original sample point set to obtain a sampled point set, and obtain the optimal model H_best from the sampled point set;
S3: obtain the speed of the unmanned aerial vehicle in the image coordinate system from the optimal model H_best;
S4: convert the speed in the image coordinate system into the speed in the camera coordinate system.
In step S1 of this embodiment, the relative motion of the feature points between two adjacent frames is expressed as:
p′_[i] = H p_[i]
wherein p′_[i] = [x′_[i], y′_[i], 1] is the i-th feature point of the current frame in the two adjacent gray-scale images, p_[i] = [x_[i], y_[i], 1] is the i-th feature point of the previous frame, and H is the model that fits the most feature points in the gray-scale images, a homography matrix. Because the pitch and roll angles of the unmanned aerial vehicle are close to zero, the rotational and projective entries of H deviate little from those of a pure translation, so h_13 and h_23 represent the translation of the vehicle in the x-direction and y-direction, and solving for h_13 and h_23 yields the speed of the vehicle in image coordinates.
In step S2 of this embodiment, the optimal model H_best is obtained with a random sample consensus algorithm, specifically comprising the following steps:
S21: randomly pick m_i sample points from the original sample point set;
S22: obtain a model H from the m_i sample points;
S23: compute the inlier set I_i from the model H, and from the inlier set I_i obtain the current optimal inlier set I_best;
S24: update the model H from the current optimal inlier set I_best and record it as the current optimal model H_best;
S25: compute the total error J_sum of all sample points in the original sample point set against the current optimal model H_best; if J_sum is smaller than the total model error threshold E, output the current optimal model H_best, otherwise go to step S26;
S26: set a maximum number of iterations K and execute steps S21 to S25 in a loop to obtain and output the current optimal model H_best.
In step S22 of this embodiment, the model H is obtained from the homogeneous equation set built from the sample pairs; with h_33 fixed to 1, each sample pair contributes the two equations
x′_[n] = h_11·x_[n] + h_12·y_[n] + h_13 − x′_[n]·(h_31·x_[n] + h_32·y_[n]),
y′_[n] = h_21·x_[n] + h_22·y_[n] + h_23 − y′_[n]·(h_31·x_[n] + h_32·y_[n]),
wherein (x_[n], y_[n]) and (x′_[n], y′_[n]) are the n-th sample pair: x_[n] and y_[n] are the x-axis and y-axis positions in the image of the n-th feature point of the previous frame in the two adjacent gray-scale images, and x′_[n] and y′_[n] are the x-axis and y-axis positions in the image of the corresponding n-th feature point of the current frame.
In this embodiment, h_33 in the model H is set to 1, because the model H is computed from homogeneous equations and is therefore determined only up to scale.
In this embodiment, m is 4 or more: with h_33 set to 1, the model H has 8 unknowns, and 8 equations, i.e. at least 4 pairs of sample points, are needed to solve it.
Step S23 in this embodiment specifically comprises the following steps:
S231: set a maximum number of iterations N;
S232: compute the error J_[n] between every sample pair in the original sample point set and the model H, with the formula:
J_[n] = ||p′_[n] − H p_[n]||²
wherein p′_[n] is the n-th feature point of the current frame in the two adjacent gray-scale images and p_[n] is the n-th feature point of the previous frame;
S233: if J_[n] is smaller than the error threshold e, classify the n-th feature point into the inlier set I_i; otherwise classify it into the outlier set and discard it;
S234: repeat steps S232 to S233 to obtain the inlier set I_i;
S235: count the numbers of elements S_i and S_best of the inlier set I_i and of the optimal inlier set I_best respectively; if S_i is smaller than S_best, keep I_best unchanged, otherwise set I_best = I_i;
wherein i is the loop iteration index, i ≤ N.
In this embodiment, since the optical flow camera of the unmanned aerial vehicle is generally mounted at the bottom with the lens pointing vertically downward, and the pitch and roll angles are regarded as zero in medium- and low-speed flight, the speed in the camera coordinate system is taken as the speed of the vehicle relative to the ground below. The conversion in step S4 is:
v_c = K v_p
wherein v_p is the speed of the unmanned aerial vehicle in the image coordinate system, v_c is its speed in the camera coordinate system, and K is the intrinsic matrix of the camera.
Example 2
This embodiment differs from Embodiment 1 only in that it further comprises step S5: estimate the optimal speed of the unmanned aerial vehicle by Kalman-filter fusion, which further improves the reliability of the speed estimate.
Since an unmanned aerial vehicle generally carries an inertial measurement unit, step S5 in this embodiment specifically comprises the following steps:
S51: obtain the acceleration from the onboard inertial measurement unit and integrate it to obtain the vehicle speed v_i;
S52: use this velocity model in the prediction stage of the Kalman filter and the speed v_c obtained in step S4 in the update stage, yielding the optimal speed estimate v of the unmanned aerial vehicle.
As shown in fig. 2 to fig. 4, the monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle processes the pictures collected by the vehicle's optical flow camera. The invalid feature points in the original sample point set, namely the feature points generated by the shadow of the unmanned aerial vehicle in the gray-scale images collected by the optical flow camera, are removed with the random sample consensus algorithm, so their influence is eliminated and the speed estimation of the vehicle in outdoor flight is more accurate.
It is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.

Claims (7)

1. A monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle, comprising the following steps:
S1: extract and track feature points over two adjacent frames of gray-scale images and record them as the original sample point set;
S2: remove invalid feature points from the original sample point set to obtain a sampled point set, and obtain the optimal model H_best from the sampled point set; the optimal model H_best is obtained with a random sample consensus algorithm, specifically comprising the following steps:
S21: randomly pick m_i sample points from the original sample point set;
S22: obtain a model H from the m_i sample points, using the homogeneous equation set; with h_33 fixed to 1, each sample pair contributes the two equations
x′_[n] = h_11·x_[n] + h_12·y_[n] + h_13 − x′_[n]·(h_31·x_[n] + h_32·y_[n]),
y′_[n] = h_21·x_[n] + h_22·y_[n] + h_23 − y′_[n]·(h_31·x_[n] + h_32·y_[n]),
wherein (x_[n], y_[n]) and (x′_[n], y′_[n]) are the n-th sample pair: x_[n] and y_[n] are the x-axis and y-axis positions in the image of the n-th feature point of the previous frame in the two adjacent gray-scale images, and x′_[n] and y′_[n] are the x-axis and y-axis positions in the image of the corresponding n-th feature point of the current frame;
s23: calculating according to the model H to obtain an intra-local point set I i And according to the intra-office point set I i Obtaining a current optimal local point set I best The method comprises the steps of carrying out a first treatment on the surface of the The method specifically comprises the following steps:
s231: setting the maximum cycle number N;
s232: calculating the error J between all pairs of samples in the original sample point set and the model H [n] The specific formula is as follows:
J [n] =||p [n] ′-Hp [n] || 2
wherein p is [n] ' is the nth characteristic point of the previous frame in two adjacent frames of gray level images, p [n] The n-th feature point of the current frame in the gray level images of two adjacent frames;
S233: judging whether J[n] is smaller than the error threshold e; if yes, classifying the n-th feature point into the inlier set I_i; if not, classifying the n-th feature point into the outlier set and eliminating it;
S234: repeatedly executing steps S232 to S233 to obtain the inlier set I_i;
S235: counting the numbers of elements s_i and s_best in the inlier set I_i and the current optimal inlier set I_best respectively, and judging whether s_i is smaller than s_best; if yes, keeping I_best = I_best; if not, letting I_best = I_i;
wherein i is the number of loop executions, and i ≤ N;
S24: updating the model H according to the current optimal inlier set I_best, and recording it as the current optimal model H_best;
S25: calculating the error sum J_sum of all sample points in the original sample point set with respect to the current optimal model H_best, and judging whether J_sum is smaller than the total model error threshold E; if yes, outputting the current optimal model H_best; if not, proceeding to step S26;
S26: setting the maximum number of outer loops K, and cyclically executing steps S21 to S25 until the condition in step S25 is satisfied or K loops have been performed, then outputting the current optimal model H_best;
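Taken together, steps S21 to S26 describe a RANSAC-style homography fit. The following Python sketch is one possible realization of that loop, not the patent's implementation: the iteration count, error threshold, and the linear least-squares fit with h_33 fixed to 1 are illustrative assumptions, and the outer loop over K with the total-error check of S25/S26 is omitted for brevity.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography with h_33 fixed to 1 (cf. claim 3).

    src, dst: (n, 2) arrays of matched points, n >= 4 (cf. claim 4)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def transfer_errors(H, src, dst):
    """J[n] = ||p'[n] - H p[n]||^2 for every sample pair (step S232)."""
    P = np.hstack([src, np.ones((len(src), 1))])
    Q = (H @ P.T).T
    return np.sum((dst - Q[:, :2] / Q[:, 2:3]) ** 2, axis=1)

def ransac_homography(src, dst, n_iter=200, err_thresh=4.0, rng=None):
    """Sample 4 pairs, fit H, keep the largest inlier set, then refit (S21-S26)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):                                  # at most N iterations
        idx = rng.choice(len(src), size=4, replace=False)    # step S21: m = 4 pairs
        H = fit_homography(src[idx], dst[idx])               # step S22: fit model H
        inliers = transfer_errors(H, src, dst) < err_thresh  # step S233: J[n] < e
        if inliers.sum() > best_inliers.sum():               # step S235: keep larger set
            best_inliers = inliers
    # step S24: update the model from the optimal inlier set I_best
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

Under a pure image-plane translation the refitted H_best reduces to identity plus the per-frame pixel offsets h_13 and h_23, which is what step S3 reads out as the image-space velocity.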
S3: obtaining the speed of the unmanned aerial vehicle in the image coordinate system according to the optimal model H_best;
S4: converting the speed in the image coordinate system into the speed in the camera coordinate system.
2. The monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of claim 1, wherein the step S1 represents the relative motion equation of the feature points of two adjacent frames as:
p′[i] = Hp[i],
wherein p′[i] = [x′[i], y′[i], 1]^T is the i-th feature point of the current frame of the two adjacent gray-scale frames, p[i] = [x[i], y[i], 1]^T is the i-th feature point of the previous frame, H is a homography matrix representing the model that fits the largest number of feature points in the gray-scale images, and h_13 and h_23 respectively represent the translation of the unmanned aerial vehicle in the x and y directions of the image coordinate system.
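Written out entry by entry (a reconstruction from the claim's notation, with h_ij denoting the (i, j) entry of H):

```latex
\begin{pmatrix} x'_{[i]} \\ y'_{[i]} \\ 1 \end{pmatrix}
=
\begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}
\begin{pmatrix} x_{[i]} \\ y_{[i]} \\ 1 \end{pmatrix}
```

When the inter-frame rotation and scale change are negligible (h_11 ≈ h_22 ≈ 1 and h_12, h_21, h_31, h_32 ≈ 0, with h_33 = 1 as in claim 3), this reduces to x′[i] ≈ x[i] + h_13 and y′[i] ≈ y[i] + h_23, which is why h_13 and h_23 read directly as the per-frame translation in the image coordinate system.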
3. The monocular sparse optical flow algorithm for an outdoor drone of claim 1, wherein the element h_33 of the model H is set to 1.
4. The monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of claim 1, wherein m in step S21 is 4 or more.
5. The monocular sparse optical flow algorithm for an outdoor drone of claim 1, wherein the specific formula of the conversion in step S4 is:
v_c = K⁻¹v_p,
wherein v_p is the speed of the unmanned aerial vehicle in the image coordinate system, v_c is the speed of the unmanned aerial vehicle in the camera coordinate system, and K is the internal reference (intrinsic) matrix of the camera.
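Under the pinhole model this conversion amounts to dividing the pixel velocity by the focal lengths. The sketch below assumes the reconstructed form v_c = K⁻¹v_p; recovering metric scale would additionally require the scene depth, which claim 5 does not address.

```python
import numpy as np

def pixel_to_camera_velocity(v_p, K):
    """Map an image-plane velocity v_p (pixels per second) into the camera
    frame via the intrinsic matrix K. A velocity is a difference of
    homogeneous pixel positions, so its third component is 0 and the
    principal-point offset in K cancels out."""
    v = np.array([v_p[0], v_p[1], 0.0])
    return (np.linalg.inv(K) @ v)[:2]   # (vx/fx, vy/fy) on the normalized plane

# Example intrinsics (hypothetical values): fx = fy = 400, principal point (320, 240)
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])
v_c = pixel_to_camera_velocity((40.0, -20.0), K)   # -> (0.1, -0.05)
```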
6. The monocular sparse optical flow algorithm for an outdoor drone of claim 5, further comprising step S5: and estimating the optimal speed of the unmanned aerial vehicle by using Kalman filtering fusion.
7. The monocular sparse optical flow algorithm for an outdoor drone of claim 6, wherein step S5 specifically comprises the steps of:
S51: obtaining the acceleration from the inertial measurement unit on board the unmanned aerial vehicle, and integrating the acceleration to obtain the unmanned aerial vehicle speed v_i;
S52: using the speed v_i obtained in step S51 as the velocity model in the prediction stage of the Kalman filter, and using the speed v_c obtained in step S4 in the update stage of the Kalman filter, so as to obtain the optimal speed estimate v of the unmanned aerial vehicle.
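A minimal per-axis sketch of this fusion (the process and measurement noise values q and r below are illustrative assumptions, not parameters from the patent):

```python
class VelocityKalman1D:
    """Fuse IMU-integrated velocity (prediction) with the optical-flow
    velocity v_c from step S4 (update), one axis at a time."""

    def __init__(self, q=0.05, r=0.2):
        self.v = 0.0   # velocity estimate
        self.p = 1.0   # estimate variance
        self.q = q     # process noise: drift of the integrated IMU velocity
        self.r = r     # measurement noise of the optical-flow velocity

    def predict(self, accel, dt):
        """Step S51: propagate the velocity with the accelerometer reading."""
        self.v += accel * dt
        self.p += self.q

    def update(self, v_c):
        """Step S52: correct the prediction with the optical-flow measurement."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.v += k * (v_c - self.v)
        self.p *= 1.0 - k
        return self.v
```

In use, predict() would typically run at the IMU rate and update() at the camera frame rate, so several predictions separate consecutive updates.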
CN202011291009.4A 2020-11-17 2020-11-17 Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle Active CN112529936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011291009.4A CN112529936B (en) 2020-11-17 2020-11-17 Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN112529936A CN112529936A (en) 2021-03-19
CN112529936B true CN112529936B (en) 2023-09-05

Family

ID=74981117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011291009.4A Active CN112529936B (en) 2020-11-17 2020-11-17 Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112529936B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789125A (en) * 2010-01-26 2010-07-28 北京航空航天大学 Method for tracking human skeleton motion in unmarked monocular video
CN102156991A (en) * 2011-04-11 2011-08-17 上海交通大学 Quaternion based object optical flow tracking method
CN103426008A (en) * 2013-08-29 2013-12-04 北京大学深圳研究生院 Vision human hand tracking method and system based on on-line machine learning
CN105261042A (en) * 2015-10-19 2016-01-20 华为技术有限公司 Optical flow estimation method and apparatus
CN106989744A (en) * 2017-02-24 2017-07-28 中山大学 A kind of rotor wing unmanned aerial vehicle autonomic positioning method for merging onboard multi-sensor
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera
CN107341814A (en) * 2017-06-14 2017-11-10 宁波大学 The four rotor wing unmanned aerial vehicle monocular vision ranging methods based on sparse direct method
CN108204812A (en) * 2016-12-16 2018-06-26 中国航天科工飞航技术研究院 A kind of unmanned plane speed estimation method
CN111462210A (en) * 2020-03-31 2020-07-28 华南理工大学 Monocular line feature map construction method based on epipolar constraint

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Visual-inertial odometry implementation based on improved clone Kalman filtering; Xu Zhihang; Ouyang Wei; Wu Yuanxin; Information Technology, No. 05, pp. 1-4 *

Similar Documents

Publication Publication Date Title
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
TWI420906B (en) Tracking system and method for regions of interest and computer program product thereof
CN110287826B (en) Video target detection method based on attention mechanism
CN106529538A (en) Method and device for positioning aircraft
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
US11430199B2 (en) Feature recognition assisted super-resolution method
CN110390685B (en) Feature point tracking method based on event camera
CN105447881B Doppler-based radar image segmentation and optical flow
US10215851B2 (en) Doppler-based segmentation and optical flow in radar images
CN106504274A (en) A kind of visual tracking method and system based under infrared camera
CN112907557A (en) Road detection method, road detection device, computing equipment and storage medium
CN113686314A (en) Monocular water surface target segmentation and monocular distance measurement method of shipborne camera
Braut et al. Estimating OD matrices at intersections in airborne video-a pilot study
Tsoukalas et al. Deep learning assisted visual tracking of evader-UAV
El Bouazzaoui et al. Enhancing RGB-D SLAM performances considering sensor specifications for indoor localization
CN116883897A (en) Low-resolution target identification method
CN112529936B (en) Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle
CN113920254B (en) Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof
Loktev et al. Image Blur Simulation for the Estimation of the Behavior of Real Objects by Monitoring Systems.
CN116067374A (en) Dynamic scene SLAM positioning method based on target detection algorithm YOLOv4 and geometric constraint
CN116151320A (en) Visual odometer method and device for resisting dynamic target interference
Liu et al. A method for restraining gyroscope drift using horizon detection in infrared video
Nie et al. LFC-SSD: Multiscale aircraft detection based on local feature correlation
JP2018116147A (en) Map creation device, map creation method and map creation computer program
Xing et al. Computationally efficient RGB-t UAV detection and tracking system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant