CN112529936A - Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle - Google Patents

Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle

Info

Publication number
CN112529936A
CN112529936A
Authority
CN
China
Prior art keywords
unmanned aerial
point set
aerial vehicle
model
optical flow
Prior art date
Legal status
Granted
Application number
CN202011291009.4A
Other languages
Chinese (zh)
Other versions
CN112529936B (en)
Inventor
欧阳子臻
杨煜基
成慧
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202011291009.4A
Publication of CN112529936A
Application granted
Publication of CN112529936B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/20 Analysis of motion
              • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
              • G06T 7/269 Analysis of motion using gradient-based methods
              • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10016 Video; Image sequence
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
          • Y02T 10/00 Road transport of goods or passengers
            • Y02T 10/10 Internal combustion engine [ICE] based vehicles
              • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of target tracking, in particular to a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle, which comprises the following steps: S1: extracting and tracking feature points across two adjacent frames of grayscale images, and recording them as an original sample point set; S2: removing invalid feature points from the original sample point set to obtain a sampled point set, and obtaining an optimal model H_best from the sampled point set; S3: obtaining the speed of the unmanned aerial vehicle in the image coordinate system according to the optimal model H_best; S4: converting the speed in the image coordinate system into the speed in the camera coordinate system. In this scheme, removing the invalid feature points from the original sample point set eliminates the influence of feature points generated by the shadow of the unmanned aerial vehicle during flight, making the speed estimation of the unmanned aerial vehicle in outdoor flight more accurate.

Description

Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle
Technical Field
The invention relates to the technical field of target tracking, in particular to a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle.
Background
The concept of optical flow was first proposed by James J. Gibson in the 1940s and refers to the velocity of pattern motion in time-varying images. When an object moves, the brightness pattern of its corresponding points on the image moves as well; this apparent motion of image brightness is the optical flow. It carries information about the object's motion, which allows an observer to determine how the object is moving.
Optical flow methods are often used for motion image analysis. They use the instantaneous velocity of a moving object's pixels on the imaging plane, exploiting the temporal changes of pixels in an image sequence and the correlation between adjacent frames, to find the correspondence between the previous and current frames of two adjacent grayscale images and thereby compute the object's motion between those frames. By the density of points solved, optical flow algorithms divide into dense and sparse optical flow. Dense optical flow solves for every pixel in the image, at a high computational cost; sparse optical flow only tracks feature points such as corners or edge points, at a low computational cost, which makes it suitable for embedded devices, and it is currently applied in unmanned aerial vehicle flight control systems to compute the vehicle's speed. The classical sparse optical flow algorithm was first proposed by Lucas and Kanade in 1981 and is therefore also called L-K optical flow. The L-K algorithm rests on three basic assumptions. The first is brightness constancy: points belonging to the same object keep the same brightness across two consecutive frames. The second is temporal continuity: the displacement of the object between two consecutive frames is small. The third is spatial consistency: neighboring points in the image belong to the same surface and move similarly. Under these assumptions, the basic optical flow equation is solved for all pixels in a neighborhood by least squares, as in the sketch below.
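As an illustration of that least-squares solve, the following is a minimal sketch (ours, not from the patent) of the classical single-window L-K computation, assuming float32 grayscale numpy arrays prev and curr and an integer feature position safely inside the image border:

```python
import numpy as np

def lk_flow_at(prev, curr, x, y, win=7):
    """Classical Lucas-Kanade at one feature: solve [Ix Iy]·[u v]^T = -It
    by least squares over a win x win window centred on integer (x, y)."""
    h = win // 2
    # Spatial gradients from central differences inside the window
    patch = prev[y - h - 1:y + h + 2, x - h - 1:x + h + 2]
    Ix = (patch[1:-1, 2:] - patch[1:-1, :-2]) / 2.0
    Iy = (patch[2:, 1:-1] - patch[:-2, 1:-1]) / 2.0
    # Temporal gradient between the two adjacent frames
    It = (curr[y - h:y + h + 1, x - h:x + h + 1]
          - prev[y - h:y + h + 1, x - h:x + h + 1])
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)  # flow (u, v) in pixels
    return u, v
```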
Chinese patent CN106989744A discloses a method for obtaining the optical flow velocity. For each frame of grayscale image, Shi-Tomasi corner detection selects the 100 feature points with the most salient texture information, and a 3 x 3 pixel window centered on each feature point is taken as a pixel unit. The window position in the earlier of two adjacent grayscale frames serves as the initial window position for the later frame, establishing a search area. Using the Lucas-Kanade inverse compositional algorithm with a five-layer optical flow pyramid and least squares, the method searches the search areas of the two adjacent frames for the minimum sum of grayscale differences to locate the pixel window in the later frame. The displacement between the two windows is the optical flow vector, and differencing over time yields the optical flow velocity.
However, when the unmanned aerial vehicle flies outdoors, it casts a shadow under sunlight, and when edge points and corner points of that shadow are selected as feature points, they cannot correctly reflect the vehicle's motion relative to the ground. This method is therefore unsuitable for outdoor drone flight control.
Disclosure of Invention
The invention provides a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle to overcome the defects in the prior art, so that the influence of the shadow of the unmanned aerial vehicle in the speed identification process can be eliminated, and the estimation precision of the speed of the unmanned aerial vehicle is improved.
To this end, the present technical scheme provides a monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle, comprising the following steps:
S1: extracting and tracking feature points across two adjacent frames of grayscale images, and recording them as an original sample point set;
S2: removing invalid feature points from the original sample point set to obtain a sampled point set, and obtaining an optimal model H_best from the sampled point set;
S3: obtaining the speed of the unmanned aerial vehicle in the image coordinate system according to the optimal model H_best;
S4: converting the speed in the image coordinate system into the speed in the camera coordinate system.
In this scheme, removing the invalid feature points from the original sample point set eliminates the influence of feature points generated by the shadow of the unmanned aerial vehicle during flight, making the speed estimation of the unmanned aerial vehicle in outdoor flight more accurate.
Preferably, in the step S1, the relative motion equation of the feature points of two adjacent frames is expressed as:
p′[i]=Hp[i]
H = [ h11  h12  h13
      h21  h22  h23
      h31  h32  h33 ]
wherein p'[i] = [x'[i], y'[i], 1] is the i-th feature point of the current frame and p[i] = [x[i], y[i], 1] is the i-th feature point of the previous frame of the two adjacent grayscale frames; H, a homography matrix, is the model that fits the most feature points in the grayscale images, and h13 and h23 represent the translation of the unmanned aerial vehicle in the x and y directions in the camera coordinate system. Solving for h13 and h23 therefore yields the velocity of the drone in image coordinates, as sketched below.
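As a small sketch of step S3 under this model (our illustration; the function name and the frame interval dt are assumptions), the image-coordinate velocity falls directly out of the translation terms of H:

```python
import numpy as np

def image_velocity_from_H(H, dt):
    """Step S3 sketch: with h33 normalized to 1, h13 and h23 are the
    inter-frame translation in image coordinates; dividing by the frame
    interval dt gives the velocity v_p in the image coordinate system."""
    H = H / H[2, 2]                          # enforce h33 = 1
    return np.array([H[0, 2], H[1, 2]]) / dt
```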
Preferably, in step S2, the optimal model H_best is obtained with a random sample consensus (RANSAC) algorithm, specifically comprising the following steps:
S21: randomly selecting m_i pairs of sample points from the original sample point set;
S22: solving a model H from the m_i pairs of sample points;
S23: computing the inlier point set I_i according to the model H, and obtaining the current optimal inlier set I_best from I_i;
S24: updating the model H according to the current optimal inlier set I_best, and recording it as the current optimal model H_best;
S25: computing the error sum J_sum of all sample point pairs in the original sample point set under the current optimal model H_best, and judging whether J_sum is less than the total model error threshold E; if so, outputting the current optimal model H_best; if not, proceeding to step S26;
S26: setting a maximum number of iterations K and executing steps S21 to S25 in a loop, then obtaining and outputting the current optimal model H_best.
Preferably, in step S22, the model H is obtained by using a homogeneous equation set, and the specific formula is as follows:
x[n]·h11 + y[n]·h12 + h13 - x[n]·x'[n]·h31 - y[n]·x'[n]·h32 = x'[n]
x[n]·h21 + y[n]·h22 + h23 - x[n]·y'[n]·h31 - y[n]·y'[n]·h32 = y'[n]
(one such pair of equations for each sample pair n = 1, ..., m, stacked into a linear system in the eight unknowns h11, h12, h13, h21, h22, h23, h31, h32, with h33 = 1)
wherein (x[n], y[n]) and (x'[n], y'[n]) are the n-th pair of sample points: (x[n], y[n]) is the image position of the n-th feature point of the previous frame, and (x'[n], y'[n]) is the image position of the n-th feature point of the current frame, of the two adjacent grayscale frames.
Preferably, h33 in the model H is set to 1, because the model H is computed from a homogeneous system of equations.
Preferably, m is 4 or more: after h33 is set to 1, the model H has 8 unknowns, so 8 equations, i.e. at least 4 pairs of sample points, are needed to solve it; a solver sketch follows.
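A minimal numpy sketch of this solve (our illustration, with an assumed function name), stacking the two equations per sample pair shown above and solving the eight unknowns by least squares:

```python
import numpy as np

def solve_h_dlt(src, dst):
    """Solve the model H with h33 = 1 from m >= 4 sample pairs.
    src, dst: (n, 2) arrays of (x, y) and (x', y') image positions."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)   # row-major H with h33 = 1
```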
Preferably, step S23 specifically comprises the following steps:
S231: setting a maximum number of iterations N;
S232: calculating the error J[n] between every sample pair in the original sample point set and the model H, specifically:
J[n] = ||p'[n] - Hp[n]||^2
wherein p[n] is the n-th feature point of the previous frame and p'[n] is the n-th feature point of the current frame of the two adjacent grayscale frames;
S233: judging whether J[n] is less than the error threshold e; if so, classifying the n-th feature point into the inlier set I_i; if not, classifying it into the outlier set for removal;
S234: repeating steps S232 to S233 to obtain the inlier set I_i;
S235: counting the numbers of elements S_i and S_best of the inlier set I_i and the optimal inlier set I_best respectively, and judging whether S_i is less than S_best: if so, I_best remains unchanged; if not, I_best = I_i;
wherein i is the number of loop iterations executed, and i is less than or equal to N. A sketch of the full consensus loop follows.
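Putting steps S21 to S26 and the S231-S235 inlier test together, a hedged sketch of the consensus loop (reusing the solve_h_dlt sketch above; the thresholds e and E and the loop limit K are placeholders, not values taken from the patent):

```python
import numpy as np

def reproj_err(H, src, dst):
    """S232: J[n] = ||p'[n] - H p[n]||^2 for every sample pair."""
    proj = (H @ np.hstack([src, np.ones((len(src), 1))]).T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.sum((dst - proj) ** 2, axis=1)

def ransac_h(src, dst, m=4, K=100, e=4.0, E=1e4):
    """Steps S21-S26: find the optimal model H_best and inlier set I_best."""
    H_best, I_best = None, np.zeros(len(src), dtype=bool)
    for _ in range(K):                                     # S26: at most K loops
        idx = np.random.choice(len(src), m, replace=False)       # S21
        H = solve_h_dlt(src[idx], dst[idx])                      # S22
        I_i = reproj_err(H, src, dst) < e                        # S232-S233
        if I_i.sum() > max(I_best.sum(), m - 1):                 # S235
            I_best = I_i
            H_best = solve_h_dlt(src[I_best], dst[I_best])       # S24: refit
        if H_best is not None and reproj_err(H_best, src, dst).sum() < E:
            break                                                # S25
    return H_best, I_best      # outliers correspond to e.g. shadow points
```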
Preferably, since the optical flow camera of the drone is generally mounted at the bottom with the lens pointing vertically downward, and since the drone maintains small pitch and roll angles when flying at medium or low speed, the velocity in the camera coordinate system can be regarded as the drone's velocity relative to the ground. The conversion in step S4 is:
v_c = K v_p
wherein v_p is the drone's velocity in the image coordinate system, v_c is the drone's velocity in the camera coordinate system, and K is the camera's intrinsic matrix.
To improve the reliability of the drone's speed estimation, the method preferably further comprises step S5: estimating the optimal speed of the drone by Kalman filtering fusion.
Preferably, step S5 specifically comprises the following steps:
S51: since the drone generally carries an inertial measurement unit (IMU), obtaining the acceleration from the onboard IMU and integrating it to obtain the drone velocity v_i;
S52: applying this velocity model in the prediction stage of the Kalman filter, and using v_c obtained in step S4 in the update stage of the Kalman filter, to obtain the optimal velocity estimate v of the drone.
Compared with the prior art, the beneficial effects are: the method removes invalid feature points from the original sample point set with a random sample consensus algorithm, which eliminates the influence of feature points generated by the drone's shadow during flight and makes the speed estimation of the drone in outdoor flight more accurate; in addition, after the drone's speed is solved, a velocity model is established from the inertial measurement unit and fused with the measured speed by Kalman filtering, further improving the reliability and accuracy of the speed estimation and benefiting its application in a drone flight control system.
Drawings
FIG. 1 is a schematic flow diagram of a monocular sparse optical flow algorithm for an outdoor drone according to the present invention;
FIG. 2 is a schematic diagram of tracking feature points extracted in the monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle according to the present invention, wherein feature points generated by shadows of the unmanned aerial vehicle are within a rectangular frame;
FIG. 3 is a graph of the visualization effect calculated for FIG. 2 using a classical optical flow algorithm;
fig. 4 is a visualization effect graph calculated from fig. 2 by using the monocular sparse optical flow algorithm for the outdoor unmanned aerial vehicle of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, it should be understood that terms such as "upper", "lower", "left", "right", "long", and "short", indicating orientations or positional relationships based on the drawings, are used only for convenience and simplicity of description; they do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. The terms describing positional relationships in the drawings are therefore illustrative only and are not to be construed as limitations of this patent; their specific meanings can be understood by those skilled in the art according to the specific situation.
The technical scheme of the invention is further described in detail by the following specific embodiments in combination with the attached drawings:
example 1
Fig. 1 shows an embodiment of a monocular sparse optical flow algorithm for an outdoor drone, comprising the following steps (an end-to-end sketch follows this list):
S1: extracting and tracking feature points across two adjacent grayscale frames using the Shi-Tomasi corner detection method, and recording them as an original sample point set; of course, other methods may be used to detect the feature points in the grayscale images, which is not limited here;
S2: removing invalid feature points from the original sample point set to obtain a sampled point set, and obtaining an optimal model H_best from the sampled point set;
S3: obtaining the speed of the drone in the image coordinate system according to the optimal model H_best;
S4: converting the speed in the image coordinate system into the speed in the camera coordinate system.
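A compact end-to-end sketch of steps S1 to S4 (our illustration, assuming OpenCV; cv2.findHomography's built-in RANSAC stands in here for the S21-S26 loop described in this document, and K_mat and dt, the camera intrinsics and frame interval, are assumed inputs). The last line applies the relation v_c = K v_p from step S4 literally, with the homogeneous third component set to 0 since constant offsets cancel for velocities:

```python
import cv2
import numpy as np

def drone_velocity(prev_gray, cur_gray, K_mat, dt):
    # S1: Shi-Tomasi corners in the previous frame, tracked by pyramidal L-K
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    ok = status.ravel() == 1
    src = p0[ok].reshape(-1, 2)
    dst = p1[ok].reshape(-1, 2)
    # S2: RANSAC homography; shadow-generated feature points do not follow
    # the dominant ground motion and are rejected as outliers
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    H = H / H[2, 2]
    # S3: translation terms of H over dt = velocity in image coordinates
    vp = np.array([H[0, 2], H[1, 2]]) / dt
    # S4: convert to the camera coordinate system via v_c = K v_p
    vc = K_mat @ np.array([vp[0], vp[1], 0.0])
    return vc[:2]
```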
In step S1 in this embodiment, the relative motion equation of the feature points of two adjacent frames is expressed as:
p′[i]=Hp[i]
H = [ h11  h12  h13
      h21  h22  h23
      h31  h32  h33 ]
wherein p'[i] = [x'[i], y'[i], 1] is the i-th feature point of the current frame and p[i] = [x[i], y[i], 1] is the i-th feature point of the previous frame of the two adjacent grayscale frames, and H, a homography matrix, is the model that fits the most feature points in the grayscale images. Because the drone's pitch and roll angles are close to zero, the rotation and perspective terms h12, h21, h31 and h32 are all close to 0, so h13 and h23 represent the translation of the drone in the x and y directions in the camera coordinate system; solving for h13 and h23 yields the drone's velocity in image coordinates.
In step S2 of this embodiment, the optimal model H_best is obtained with a random sample consensus (RANSAC) algorithm, specifically comprising the following steps:
S21: randomly selecting m_i pairs of sample points from the original sample point set;
S22: solving a model H from the m_i pairs of sample points;
S23: computing the inlier point set I_i according to the model H, and obtaining the current optimal inlier set I_best from I_i;
S24: updating the model H according to the current optimal inlier set I_best, and recording it as the current optimal model H_best;
S25: computing the error sum J_sum of all sample point pairs in the original sample point set under the current optimal model H_best, and judging whether J_sum is less than the total model error threshold E; if so, outputting the current optimal model H_best; if not, proceeding to step S26;
S26: setting a maximum number of iterations K and executing steps S21 to S25 in a loop, then obtaining and outputting the current optimal model H_best.
In step S22 in this embodiment, a homogeneous equation set is used to obtain a model H, and the specific formula is:
x[n]·h11 + y[n]·h12 + h13 - x[n]·x'[n]·h31 - y[n]·x'[n]·h32 = x'[n]
x[n]·h21 + y[n]·h22 + h23 - x[n]·y'[n]·h31 - y[n]·y'[n]·h32 = y'[n]
(one such pair of equations for each sample pair n = 1, ..., m, stacked into a linear system in the eight unknowns h11, h12, h13, h21, h22, h23, h31, h32, with h33 = 1)
wherein (x[n], y[n]) and (x'[n], y'[n]) are the n-th pair of sample points: (x[n], y[n]) is the image position of the n-th feature point of the previous frame, and (x'[n], y'[n]) is the image position of the n-th feature point of the current frame, of the two adjacent grayscale frames.
In this embodiment, h33 in the model H is set to 1, because the model H is computed from a homogeneous system of equations.
In this embodiment, m is 4 or more: after h33 is set to 1, the model H has 8 unknowns, so 8 equations, that is, at least 4 pairs of sample points, are needed to solve it.
Step S23 in this embodiment specifically comprises the following steps:
S231: setting a maximum number of iterations N;
S232: calculating the error J[n] between every sample pair in the original sample point set and the model H, specifically:
J[n] = ||p'[n] - Hp[n]||^2
wherein p[n] is the n-th feature point of the previous frame and p'[n] is the n-th feature point of the current frame of the two adjacent grayscale frames;
S233: judging whether J[n] is less than the error threshold e; if so, classifying the n-th feature point into the inlier set I_i; if not, classifying it into the outlier set for removal;
S234: repeating steps S232 to S233 to obtain the inlier set I_i;
S235: counting the numbers of elements S_i and S_best of the inlier set I_i and the optimal inlier set I_best respectively, and judging whether S_i is less than S_best: if so, I_best remains unchanged; if not, I_best = I_i;
wherein i is the number of loop iterations executed, and i is less than or equal to N.
In this embodiment, because the drone's optical flow camera is generally mounted at the bottom with the lens pointing vertically downward, the drone's pitch and roll angles are taken as zero when it flies at low speed, so the velocity in the camera coordinate system is regarded as the drone's velocity relative to the ground. The conversion formula in step S4 is:
v_c = K v_p
wherein v_p is the drone's velocity in the image coordinate system, v_c is the drone's velocity in the camera coordinate system, and K is the camera's intrinsic matrix.
Example 2
This embodiment differs from Embodiment 1 only in that it further comprises step S5: estimating the optimal speed of the drone by Kalman filtering fusion, which further improves the reliability of the drone's speed estimation.
Since the drone generally carries an inertial measurement unit, step S5 in this embodiment specifically comprises the following steps:
S51: obtaining the acceleration from the drone's onboard inertial measurement unit and integrating it to obtain the drone velocity v_i;
S52: applying this velocity model in the prediction stage of the Kalman filter, and using v_c obtained in step S4 in the update stage, to obtain the optimal velocity estimate v of the drone (a fusion sketch follows).
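A minimal sketch of this fusion (ours; the state layout, the noise values q and r, and the class name are assumptions, not the patent's tuning): the IMU acceleration drives the prediction stage and the optical-flow velocity v_c drives the update stage.

```python
import numpy as np

class VelocityKF:
    """S5 sketch: Kalman fusion of the IMU velocity model (predict)
    with the optical-flow velocity measurement v_c (update)."""
    def __init__(self, q=0.05, r=0.5):
        self.v = np.zeros(2)        # velocity estimate (x, y)
        self.P = np.eye(2)          # estimate covariance
        self.Q = q * np.eye(2)      # process noise: IMU integration drift
        self.R = r * np.eye(2)      # measurement noise: optical flow

    def predict(self, accel, dt):
        # S51: velocity model v <- v + a*dt, integrating IMU acceleration
        self.v = self.v + accel * dt
        self.P = self.P + self.Q

    def update(self, vc):
        # S52: fuse the optical-flow velocity from step S4
        K_gain = self.P @ np.linalg.inv(self.P + self.R)
        self.v = self.v + K_gain @ (vc - self.v)
        self.P = (np.eye(2) - K_gain) @ self.P
        return self.v               # optimal velocity estimate v
```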
As shown in Figs. 2 to 4, the monocular sparse optical flow algorithm for an outdoor drone of the present invention is used to process the pictures acquired by the drone's optical flow camera. The random sample consensus algorithm removes the invalid feature points from the original sample point set, namely the feature points generated by the drone's shadow in the grayscale images acquired by the optical flow camera, thereby avoiding their influence and making the drone's speed estimation during outdoor flight more accurate.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A monocular sparse optical flow algorithm for outdoor unmanned aerial vehicles, comprising the steps of:
S1: extracting and tracking feature points across two adjacent frames of grayscale images, and recording them as an original sample point set;
S2: removing invalid feature points from the original sample point set to obtain a sampled point set, and obtaining an optimal model H_best from the sampled point set;
S3: obtaining the speed of the unmanned aerial vehicle in the image coordinate system according to the optimal model H_best;
S4: converting the speed in the image coordinate system into the speed in the camera coordinate system.
2. The monocular sparse optical flow algorithm for outdoor drones according to claim 1, wherein the relative motion equation of two adjacent frame feature points is expressed in step S1 as:
p′[i]=Hp[i]
H = [ h11  h12  h13
      h21  h22  h23
      h31  h32  h33 ]
wherein p'[i] = [x'[i], y'[i], 1] is the i-th feature point of the current frame and p[i] = [x[i], y[i], 1] is the i-th feature point of the previous frame of the two adjacent grayscale frames; H, a homography matrix, is the model that fits the most feature points in the grayscale images, and h13 and h23 represent the translation of the unmanned aerial vehicle in the x and y directions in the image coordinate system.
3. The monocular sparse optical flow algorithm of claim 2, wherein in step S2 the optimal model H_best is obtained with a random sample consensus algorithm, specifically comprising the following steps:
S21: randomly selecting m_i pairs of sample points from the original sample point set;
S22: solving a model H from the m_i pairs of sample points;
S23: computing the inlier point set I_i according to the model H, and obtaining the current optimal inlier set I_best from I_i;
S24: updating the model H according to the current optimal inlier set I_best, and recording it as the current optimal model H_best;
S25: computing the error sum J_sum of all sample point pairs in the original sample point set under the current optimal model H_best, and judging whether J_sum is less than the total model error threshold E; if so, outputting the current optimal model H_best; if not, proceeding to step S26;
S26: setting a maximum number of iterations K and executing steps S21 to S25 in a loop, then obtaining and outputting the current optimal model H_best.
4. The monocular sparse optical flow algorithm for outdoor drones according to claim 3, wherein in step S22, a homogeneous equation system is used to obtain a model H, and the specific formula is:
x[n]·h11 + y[n]·h12 + h13 - x[n]·x'[n]·h31 - y[n]·x'[n]·h32 = x'[n]
x[n]·h21 + y[n]·h22 + h23 - x[n]·y'[n]·h31 - y[n]·y'[n]·h32 = y'[n]
(one such pair of equations for each sample pair n = 1, ..., m, stacked into a linear system in the eight unknowns h11, h12, h13, h21, h22, h23, h31, h32, with h33 = 1)
wherein (x[n], y[n]) and (x'[n], y'[n]) are the n-th pair of sample points: (x[n], y[n]) is the image position of the n-th feature point of the previous frame, and (x'[n], y'[n]) is the image position of the n-th feature point of the current frame, of the two adjacent grayscale frames.
5. The monocular sparse optical flow algorithm for outdoor drones of claim 4, wherein h33 of the model H is set to 1.
6. The monocular sparse optical flow algorithm for outdoor drones of claim 5, wherein m is greater than or equal to 4 in step S21.
7. The monocular sparse optical flow algorithm for outdoor drones according to claim 6, wherein the step S23 specifically comprises the following steps:
s231: setting a maximum cycle number N;
S232: calculating the error J[n] between every sample pair in the original sample point set and the model H, specifically:
J[n] = ||p'[n] - Hp[n]||^2
wherein p[n] is the n-th feature point of the previous frame and p'[n] is the n-th feature point of the current frame of the two adjacent grayscale frames;
S233: judging whether J[n] is less than the error threshold e; if so, classifying the n-th feature point into the inlier set I_i; if not, classifying it into the outlier set for removal;
S234: repeating steps S232 to S233 to obtain the inlier set I_i;
S235: counting the numbers of elements S_i and S_best of the inlier set I_i and the optimal inlier set I_best respectively, and judging whether S_i is less than S_best: if so, I_best remains unchanged; if not, I_best = I_i;
wherein i is the number of loop iterations executed, and i is less than or equal to N.
8. The monocular sparse optical flow algorithm for outdoor drones of claim 1, wherein the specific formula converted in step S4 is:
v_c = K v_p
wherein v_p is the drone's velocity in the image coordinate system, v_c is the drone's velocity in the camera coordinate system, and K is the camera's intrinsic matrix.
9. The monocular sparse optical flow algorithm for outdoor drones of claim 8, further comprising step S5: estimating the optimal speed of the unmanned aerial vehicle by Kalman filtering fusion.
10. The monocular sparse optical flow algorithm for outdoor drones of claim 9, wherein said step S5 specifically comprises the steps of:
S51: obtaining the acceleration from the drone's onboard inertial measurement unit and integrating it to obtain the drone velocity v_i;
S52: applying this velocity model in the prediction stage of the Kalman filter, and using v_c obtained in step S4 in the update stage, to obtain the optimal velocity estimate v of the drone.
CN202011291009.4A 2020-11-17 2020-11-17 Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle Active CN112529936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011291009.4A CN112529936B (en) 2020-11-17 2020-11-17 Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011291009.4A CN112529936B (en) 2020-11-17 2020-11-17 Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN112529936A (en) 2021-03-19
CN112529936B (en) 2023-09-05

Family

ID=74981117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011291009.4A Active CN112529936B (en) 2020-11-17 2020-11-17 Monocular sparse optical flow algorithm for outdoor unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112529936B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789125A (en) * 2010-01-26 2010-07-28 北京航空航天大学 Method for tracking human skeleton motion in unmarked monocular video
CN102156991A (en) * 2011-04-11 2011-08-17 上海交通大学 Quaternion based object optical flow tracking method
CN103426008A (en) * 2013-08-29 2013-12-04 北京大学深圳研究生院 Vision human hand tracking method and system based on on-line machine learning
CN105261042A (en) * 2015-10-19 2016-01-20 华为技术有限公司 Optical flow estimation method and apparatus
CN108204812A (en) * 2016-12-16 2018-06-26 中国航天科工飞航技术研究院 A kind of unmanned plane speed estimation method
CN106989744A (en) * 2017-02-24 2017-07-28 中山大学 A kind of rotor wing unmanned aerial vehicle autonomic positioning method for merging onboard multi-sensor
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera
CN107341814A (en) * 2017-06-14 2017-11-10 宁波大学 The four rotor wing unmanned aerial vehicle monocular vision ranging methods based on sparse direct method
CN111462210A (en) * 2020-03-31 2020-07-28 华南理工大学 Monocular line feature map construction method based on epipolar constraint

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
徐之航; 欧阳威; 武元新: "Implementation of a visual-inertial odometer based on improved clone Kalman filtering", Information Technology, No. 05, pages 1-4 *

Also Published As

Publication number Publication date
CN112529936B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN110221603B (en) Remote obstacle detection method based on laser radar multi-frame point cloud fusion
TWI420906B (en) Tracking system and method for regions of interest and computer program product thereof
Yang et al. Concrete defects inspection and 3D mapping using CityFlyer quadrotor robot
CN110569704A (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN109949361A (en) A kind of rotor wing unmanned aerial vehicle Attitude estimation method based on monocular vision positioning
CN111209825B (en) Method and device for dynamic target 3D detection
CN106529538A (en) Method and device for positioning aircraft
CN104794737B (en) A kind of depth information Auxiliary Particle Filter tracking
US11430199B2 (en) Feature recognition assisted super-resolution method
US8417062B2 (en) System and method for stabilization of fisheye video imagery
CN105447881B (en) Radar image segmentation and light stream based on Doppler
CN113686314B (en) Monocular water surface target segmentation and monocular distance measurement method for shipborne camera
US10215851B2 (en) Doppler-based segmentation and optical flow in radar images
WO2023134114A1 (en) Moving target detection method and detection device, and storage medium
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
US11069071B1 (en) System and method for egomotion estimation
CN106504274A (en) A kind of visual tracking method and system based under infrared camera
CN112907557A (en) Road detection method, road detection device, computing equipment and storage medium
Braut et al. Estimating OD matrices at intersections in airborne video-a pilot study
CN107093187B (en) A kind of measurement method and device of unmanned plane during flying speed
CN115717867A (en) Bridge deformation measurement method based on airborne double cameras and target tracking
US20220148314A1 (en) Method, system and computer readable media for object detection coverage estimation
CN116977806A (en) Airport target detection method and system based on millimeter wave radar, laser radar and high-definition array camera
CN116883897A (en) Low-resolution target identification method
Li et al. Driver drowsiness behavior detection and analysis using vision-based multimodal features for driving safety

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant