CN106570820B - Monocular vision three-dimensional feature extraction method based on a quadrotor drone - Google Patents

Monocular vision three-dimensional feature extraction method based on a quadrotor drone

Info

Publication number
CN106570820B
CN106570820B (application CN201610901957.2A)
Authority
CN
China
Prior art keywords
image
point
coordinate system
camera
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610901957.2A
Other languages
Chinese (zh)
Other versions
CN106570820A (en)
Inventor
陈朋
陈志祥
党源杰
朱威
梁荣华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201610901957.2A priority Critical patent/CN106570820B/en
Publication of CN106570820A publication Critical patent/CN106570820A/en
Application granted granted Critical
Publication of CN106570820B publication Critical patent/CN106570820B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A monocular vision three-dimensional feature extraction method based on a quadrotor drone, comprising the following steps: 1) acquiring an image and preprocessing the image; 2) extracting two-dimensional image feature points and building feature descriptors; 3) obtaining onboard GPS coordinates, altitude data and IMU sensor parameters; 4) constructing coordinate systems for the two-dimensional feature descriptors from the airframe parameters to obtain three-dimensional coordinate information. For the motion-tracking problem of quadrotors, the present invention proposes a simple, low-computation monocular-camera three-dimensional feature extraction method that greatly simplifies the implementation of quadrotor motion tracking.

Description

Monocular vision three-dimensional feature extraction method based on a quadrotor drone
Technical field
The present invention relates to the field of monocular vision for quadrotor drones, and in particular to a three-dimensional object feature extraction method for monocular-vision moving-object recognition and tracking on a quadrotor drone.
Background art
In recent years, the rapid development of computer technology, automatic control theory, embedded development, chip design and sensor technology has allowed unmanned aerial vehicles to become more miniaturized while gaining stronger processing capability, and UAV-related technologies have received more and more attention. Small drones offer flexible handling and strong endurance, so they can carry out complex tasks in confined environments. Militarily, they can perform strikes, search under adverse conditions and gather information, substituting for soldiers in high-risk environments; on the civil side, they provide aerial photography, remote equipment inspection, environmental monitoring, disaster relief and other services for practitioners in all walks of life.
The quadrotor is a common rotary-wing unmanned aircraft that performs pitch, roll and yaw maneuvers by adjusting its motor speeds. Compared with fixed-wing UAVs, rotary-wing UAVs have clear advantages: first, the airframe structure is simple and compact, producing greater lift per unit volume; second, the propulsion system is simple, since aerial attitude control is completed merely by adjusting the speed of each rotor's drive motor, enabling vertical take-off and landing, hovering and other distinctive flight modes; in addition, the system is highly intelligent and the aircraft maintains its aerial attitude robustly.
Carrying a high-definition camera on a UAV and running machine vision algorithms in real time has become a hot research field in recent years. A UAV has a flexible viewpoint and can help people capture images that ground-based moving cameras find difficult to obtain; embedding a lightweight camera into a small quadrotor drone can also provide rich, inexpensive information. Target tracking means that a UAV in low-altitude flight computes the relative displacement between the target and itself from the visual information acquired by the camera, then automatically adjusts its attitude and position so that the tracked ground moving target stays near the center of the camera's field of view, allowing the UAV to follow the target and complete the tracking task. Owing to the technical limitations of a monocular camera, however, obtaining the three-dimensional coordinate information of a moving object is very difficult, so realizing moving-target tracking calls for a simple and efficient three-dimensional feature extraction method.
Summary of the invention
In order to overcome the inability of existing monocular-vision feature extraction methods on quadrotor-drone platforms to effectively extract three-dimensional features, and in order to realize tracking of ground moving objects with a monocular camera, the motion of the aircraft can be reduced to two-dimensional planar motion at a certain height, and the two-dimensional feature plane captured by the monocular camera can be regarded as perpendicular to the plane of motion. Realizing motion tracking therefore additionally requires the relative distance between the two-dimensional feature plane and the aircraft, i.e. the depth-of-field information of the feature plane; a two-dimensional feature augmented with depth information can be approximated as three-dimensional feature information. Based on this idea, the present invention proposes a monocular vision three-dimensional feature extraction method based on a quadrotor-drone platform.
The technical solution adopted by the present invention to solve this technical problem is as follows:
A monocular vision three-dimensional feature extraction method based on a quadrotor drone, comprising the following steps:
1) acquiring an image and preprocessing the image;
2) extracting two-dimensional image feature points and building feature descriptors;
3) obtaining onboard GPS coordinates, altitude data and IMU sensor parameters;
4) constructing coordinate systems for the two-dimensional feature descriptors from the airframe parameters to obtain three-dimensional coordinate information, the process being as follows:
First, an intrinsic matrix is established from the camera parameters, and with this matrix the two-dimensional feature coordinate information obtained in step 2) is transformed into the image coordinate system I and then, using the known focal-length information, into the camera coordinate system C. Second, the camera coordinate system is further converted to the body coordinate system B using the fixed mounting-error angles and the relative position between camera and body. Finally, according to the IMU attitude angles and by fusing the aircraft's GPS coordinate information and altitude information, the two-dimensional feature descriptor with depth-of-field information is obtained in the world coordinate system E.
Further, in step 4), the three-dimensional coordinate information of the two-dimensional features is obtained from the airframe parameters, comprising the following steps:
4.1) Conversion between the pixel coordinate system and the image coordinate system
The pixel coordinate system [u, v]^T takes the top-left corner of the image as its origin and has no physical units, so an image coordinate system I = [x_I, y_I]^T whose origin O_I lies on the optical axis is introduced; the image plane is the physically meaningful plane constructed by the camera according to the pinhole imaging model. Let dx and dy be the physical size of one pixel along the u and v axes, i.e. the actual pixel size on the sensor chip; they bridge the pixel coordinate system and physical coordinates and are related to the camera focal length f. A point (x_1, y_1) in the image coordinate system and the corresponding point (u_1, v_1) in the pixel coordinate system are then related as follows:

$$\begin{bmatrix}u_1\\ v_1\\ 1\end{bmatrix}=\begin{bmatrix}1/dx&0&u_0\\ 0&1/dy&v_0\\ 0&0&1\end{bmatrix}\begin{bmatrix}x_1\\ y_1\\ 1\end{bmatrix}\tag{1}$$

where (u_0, v_0) is the principal point in the pixel coordinate system, i.e. the pixel corresponding to the origin of the image coordinate system; the 3 × 3 matrix above contains four parameters related to the camera's internal structure and is called the camera intrinsic matrix;
4.2) Conversion between the image coordinate system and the camera coordinate system
Suppose a point P_C1 = (x_C, y_C, z_C) in the camera coordinate system projects through the optical center to the point P_I1 = (x_I, y_I) in the image coordinate system; the coordinate transformation between the two points is then:

$$x_I=f\,\frac{x_C}{z_C},\qquad y_I=f\,\frac{y_C}{z_C}\tag{2}$$

which converts to matrix form as:

$$z_C\begin{bmatrix}x_I\\ y_I\\ 1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}\tag{3}$$

where f is the camera focal length;
4.3) Conversion between the camera coordinate system and the world coordinate system
First, since there are mounting errors between the aircraft and the camera, [α, β, γ]^T is used to denote the fixed three-dimensional mounting-error angles and [x_e, y_e, z_e]^T the spatial offset from the camera to the origin of the body coordinate system; the relationship between the camera coordinate system and the body coordinate system is then expressed by the homogeneous transformation

$$T=\begin{bmatrix}R(\alpha,\beta,\gamma)&[x_e,y_e,z_e]^{\mathsf T}\\ 0&1\end{bmatrix}$$

i.e.

C = TB (4)

where C denotes the camera coordinate system and B the body coordinate system;
Second, for a point P_E = (x_E, y_E, z_E) in space, the corresponding camera coordinates depend on the attitude angles and position of the camera, and the UAV's attitude angles and position information are obtained in real time during flight. A quadrotor drone is a system with six degrees of freedom; its attitude angles are the pitch angle φ, the roll angle θ and the yaw angle ψ, whose rotation axes are defined as the X, Y and Z axes respectively, with the coordinate origin at the aircraft's center of gravity. Multiplying the rotation matrices obtained for the three axes gives the rotation matrix of the body:

$$R=R_z(\psi)\,R_y(\theta)\,R_x(\varphi)\tag{5}$$

The attitude angles are resolved by quaternions from the x-, y- and z-axis acceleration components and gyroscope components measured by the IMU sensor on the quadrotor fuselage. Let M = [x, y, z]^T, where (x, y, z) is the spatial position of the UAV and z is the flight altitude; the UAV position (x, y, z) can be obtained from GPS and the barometer. The point (x_C, y_C, z_C) in the camera coordinate system corresponding to P_E can then be calculated from the following relationship:

$$\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}=T\begin{bmatrix}R^{\mathsf T}&-R^{\mathsf T}M\\ 0&1\end{bmatrix}\begin{bmatrix}x_E\\ y_E\\ z_E\\ 1\end{bmatrix}\tag{6}$$

where T is the camera-to-body transformation matrix, R is the body rotation matrix, M is the world coordinate point of the aircraft, and [x_E, y_E, z_E]^T is the three-dimensional coordinate of the desired feature point.
Further, in step 1), the image is acquired and preprocessed as follows:
1.1) Image acquisition
Based on the Linux development environment of the quadrotor platform, images are obtained by subscribing to the image topic with the robot operating system ROS; the camera driver is implemented through ROS and OpenCV;
1.2) Image preprocessing
The captured color image must first be converted to grayscale to discard unneeded color information; the method used here takes the weighted average of the R, G and B components of each pixel as that pixel's gray value, with the channel weights optimized for computational efficiency so that floating-point operations are avoided:
Gray = (R × 30 + G × 59 + B × 11 + 50) / 100 (7)
where Gray is the gray value of the pixel, and R, G and B are the values of the red, green and blue channels respectively.
Further, in step 2), the process of extracting two-dimensional image feature points and building feature descriptors is as follows:
2.1) ORB feature-point extraction
ORB first detects corners with the Harris corner detector and then measures the rotation direction with the intensity centroid. Assuming that a corner's intensity is offset from its center, the intensities around the point are combined to compute the corner's orientation; the following moment is defined:

$$m_{pq}=\sum_{x,y}x^{p}y^{q}I(x,y)\tag{8}$$

where x, y are coordinates within the image patch relative to its center, I(x, y) is the gray value at (x, y), and the powers x^p y^q weight each point's offset from the center; the centroid of the patch is then:

$$C=\left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)\tag{9}$$

Constructing the vector from the corner's center to this centroid, the orientation θ of the image patch can be expressed as:

θ = atan2(m_01, m_10) (10)

Since the extracted ORB keypoints carry an orientation, the features extracted with ORB are rotation invariant;
2.2) Building the LDB feature descriptor
After the keypoints of the image have been obtained, the feature descriptor of the image is built with LDB; the LDB pipeline consists, in order, of building a Gaussian pyramid, building integral images, binary tests, and bit selection and concatenation;
To give LDB scale invariance, a Gaussian pyramid is constructed and the LDB descriptor of each feature point is computed at the corresponding pyramid level:

$$\mathrm{Pyr}_i=G(x,y,\sigma_i)*I(x,y),\qquad i=1,\dots,L\tag{11}$$

where I(x, y) is the given image and G(x, y, σ_i) is a Gaussian filter whose scale σ_i increases gradually, used to build layers 1 to L of the Gaussian pyramid Pyr_i; for feature points without a significant scale estimate, as with ORB, the LDB description must be computed for each feature point at every pyramid layer;
LDB computes rotated coordinates and uses nearest-neighbor interpolation to generate an oriented patch on the fly;
After the upright or rotated integral image has been built and the intensity and gradient information extracted, the binary test τ is performed between pairs of grid cells:

$$\tau\big(\mathrm{Func}(i),\mathrm{Func}(j)\big)=\begin{cases}1,&\mathrm{Func}(i)-\mathrm{Func}(j)>0\\ 0,&\text{otherwise}\end{cases}\qquad i\neq j\tag{12}$$

where Func(·) ∈ {I_avg, d_x, d_y} extracts the description information of each grid cell;
Given an image patch, LDB first divides it into n × n equally sized grid cells, extracts the average intensity and gradient information of each cell, and compares intensity and gradient information between pairs of cells, setting the corresponding bit to 1 when the difference is greater than 0; the average intensity and the gradients along the x and y directions of the different grid cells discriminate images effectively, so Func(i) is defined as:

Func(i) ∈ {I_intensity(i), d_x(i), d_y(i)} (13)

where I_intensity(i) = (1/m) Σ_{p∈cell i} I(p) is the average intensity of grid cell i, d_x(i) = Gradient_x(i) and d_y(i) = Gradient_y(i), m is the total number of pixels in grid cell i (since LDB uses equally sized grid cells, m is constant within one layer of the Gaussian pyramid), and Gradient_x(i) and Gradient_y(i) are the gradients of grid cell i along the x and y directions respectively;
2.3) Feature-descriptor matching
After the LDB descriptors of two images have been obtained, the descriptors are matched using the K-nearest-neighbor method. For each feature point in the target template image, its two nearest-neighbor matches are searched in the input image and the two match distances compared; if the best match distance is less than 0.8 times the second-best match distance, the template point and the corresponding input-image point are considered a valid match, and the corresponding coordinate values are recorded. When there are more than 4 matches between the two images, the target object is considered found in the input image, and the corresponding coordinate information is the two-dimensional feature information.
Further, in step 3), the process of obtaining the onboard GPS coordinates, altitude data and IMU sensor parameters is as follows:
MAVROS is a ROS package developed by a third-party team for MAVLink; after MAVROS is started and connected to the aircraft's flight controller, it begins publishing the aircraft's sensor parameters and flight data, and by subscribing to the messages of the aircraft's GPS-coordinate topic, GPS-altitude topic and IMU-attitude-angle topic, the corresponding data can be obtained.
The technical concept of the invention is as follows: with the maturing of quadrotor technology and its large-scale promotion on the civil market, more and more people are turning to vision systems that can be carried on quadrotors, and the present invention is proposed against the research background of realizing moving-target tracking on a quadrotor.
For a quadrotor to track a moving target, the target's three-dimensional feature information must first be extracted, and three-dimensional feature information is difficult to extract with a monocular camera. However, if the tracking motion of the aircraft is reduced to two-dimensional planar motion at a certain height, the required three-dimensional feature information can be reduced to two-dimensional feature information carrying depth-of-field information. The present invention therefore proposes to add depth information to the two-dimensional features according to the aircraft's spatial coordinates, so as to realize approximate three-dimensional feature extraction.
The monocular vision three-dimensional feature extraction method based on a quadrotor drone mainly comprises: acquiring an image and converting it to grayscale, extracting the two-dimensional feature information in the image, obtaining the aircraft's spatial coordinates and IMU angle information, and finally constructing coordinate systems for the two-dimensional features from the airframe parameters to obtain three-dimensional feature information.
The beneficial effect of this method is mainly that, for the motion-tracking problem of quadrotors, a simple, low-computation monocular-camera three-dimensional feature extraction method is proposed that greatly simplifies the implementation of quadrotor motion tracking.
Brief description of the drawings
Fig. 1 is a flow chart of the monocular vision three-dimensional feature extraction method based on a quadrotor drone;
Fig. 2 shows the relationships between the coordinate systems in the three-dimensional feature extraction process, where [x_c, y_c, z_c]^T is the camera coordinate system, [x_I, y_I, z_I]^T is the image coordinate system, and [x_E, y_E, z_E]^T is the world coordinate system.
Detailed description of the embodiments
The present invention is described further below with reference to the accompanying drawings:
Referring to Fig. 1 and Fig. 2, a monocular vision three-dimensional feature extraction method based on a quadrotor drone comprises the following steps:
1) Acquire an image and preprocess it:
1.1) Image acquisition
In general there are many ways to acquire an image; the present invention, based on the Linux development environment of the quadrotor platform, obtains images by subscribing to the image topic with the robot operating system ROS, and the camera driver is implemented through ROS and OpenCV.
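For illustration, a minimal ROS image-subscriber sketch in Python follows; the topic name /camera/image_raw and the use of rospy with cv_bridge are assumptions of the sketch, not prescribed by the method.

#!/usr/bin/env python
# Minimal sketch: subscribe to a camera topic and convert frames to OpenCV arrays.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    # Convert the ROS image message to an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo("received frame %dx%d", frame.shape[1], frame.shape[0])

rospy.init_node("quadrotor_camera_listener")
rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)  # assumed topic name
rospy.spin()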
1.2) Image preprocessing
Since the feature extraction method used in the present invention is based on the texture intensity and gradient information of the image, the captured color image must first be converted to grayscale to discard unneeded color information. The method used here takes the weighted average of the R, G and B components of each pixel as that pixel's gray value, and the channel weights can be optimized for computational efficiency so that floating-point operations are avoided:
Gray = (R × 30 + G × 59 + B × 11 + 50) / 100 (7)
where Gray is the gray value of the pixel, and R, G and B are the values of the red, green and blue channels respectively.
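A minimal sketch of the integer-only conversion of equation (7), assuming a BGR uint8 image as delivered by OpenCV:

import numpy as np

def to_gray(bgr):
    # Split channels; OpenCV stores images in B, G, R order.
    b = bgr[:, :, 0].astype(np.uint32)
    g = bgr[:, :, 1].astype(np.uint32)
    r = bgr[:, :, 2].astype(np.uint32)
    # Gray = (R*30 + G*59 + B*11 + 50) / 100 in pure integer arithmetic;
    # the +50 rounds to the nearest integer before the division by 100.
    return ((r * 30 + g * 59 + b * 11 + 50) // 100).astype(np.uint8)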
2) Extract two-dimensional image feature points and build feature descriptors:
2.1) ORB feature-point extraction
ORB, also referred to as rBRIEF, extracts locally invariant features and is an improvement on the BRIEF algorithm: BRIEF is fast to compute but has no rotation invariance and is rather sensitive to noise, and ORB resolves both of these shortcomings. To make the algorithm rotation invariant, ORB first detects corners with the Harris corner detector and then measures the rotation direction with the intensity centroid (Intensity Centroid). Assuming that a corner's intensity is offset from its center, the intensities around the point are combined to compute the corner's orientation; the following moment is defined:

$$m_{pq}=\sum_{x,y}x^{p}y^{q}I(x,y)\tag{8}$$

where x, y are coordinates within the image patch relative to its center, I(x, y) is the gray value at (x, y), and the powers x^p y^q weight each point's offset from the center; the centroid of the patch is then:

$$C=\left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)\tag{9}$$

Constructing the vector from the corner's center to this centroid, the orientation θ of the image patch can be expressed as:

θ = atan2(m_01, m_10) (10)

Since the extracted ORB keypoints carry an orientation, the features extracted with ORB are rotation invariant;
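The intensity-centroid orientation of equations (8)-(10) can be sketched as follows for a single grayscale patch centered on a detected corner; patch extraction and the Harris detector itself are assumed to be handled elsewhere:

import numpy as np

def patch_orientation(patch):
    # patch: 2-D grayscale array centered on the corner; returns theta in radians.
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs - (w - 1) / 2.0            # offsets relative to the patch center
    ys = ys - (h - 1) / 2.0
    m10 = np.sum(xs * patch)           # m_10 = sum of x * I(x, y), equation (8)
    m01 = np.sum(ys * patch)           # m_01 = sum of y * I(x, y)
    return np.arctan2(m01, m10)        # theta = atan2(m_01, m_10), equation (10)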
2.2) Building the LDB feature descriptor
After the keypoints of the image have been obtained, the feature descriptor of the image can be built with LDB. LDB has five main steps, performed in order: building a Gaussian pyramid, principal-direction estimation, building integral images, binary tests, and bit selection and concatenation; since ORB is used here to extract the feature points, which already carry an orientation, the principal-direction estimation step can be skipped.
To give LDB scale invariance, a Gaussian pyramid is constructed and the LDB descriptor of each feature point is computed at the corresponding pyramid level:

$$\mathrm{Pyr}_i=G(x,y,\sigma_i)*I(x,y),\qquad i=1,\dots,L\tag{11}$$

where I(x, y) is the given image and G(x, y, σ_i) is a Gaussian filter whose scale σ_i increases gradually, used to build layers 1 to L of the Gaussian pyramid Pyr_i; for feature points without a significant scale estimate, as with ORB, the LDB description must be computed for each feature point at every pyramid layer.
LDB computes the average intensity and gradient information of the grid cells efficiently using integral images. If the image is rotated, the upright integral image cannot simply be used and a rotated integral image must be built; the rotated integral image of a patch is constructed by accumulating pixels along the principal direction. The two main computational costs of the rotated integral image are computing the rotated coordinates and interpolating the oriented patch; quantizing the orientation and pre-building a rotated-coordinate lookup table would reduce these costs, but fine orientation quantization requires a large lookup table, whose slow memory reads in turn lead to longer running times. LDB therefore computes the rotated coordinates directly and uses nearest-neighbor interpolation to generate an oriented patch on the fly.
After the upright or rotated integral image has been built and the intensity and gradient information extracted, the binary test τ can be performed between pairs of grid cells:

$$\tau\big(\mathrm{Func}(i),\mathrm{Func}(j)\big)=\begin{cases}1,&\mathrm{Func}(i)-\mathrm{Func}(j)>0\\ 0,&\text{otherwise}\end{cases}\qquad i\neq j\tag{12}$$

where Func(·) ∈ {I_avg, d_x, d_y} extracts the description information of each grid cell.
Given an image patch, LDB first divides it into n × n equally sized grid cells, extracts the average intensity and gradient information of each cell, and compares intensity and gradient information between pairs of cells, setting the corresponding bit to 1 when the difference is greater than 0; combining intensity and gradient in the comparisons gives the matching process a significantly higher matching accuracy. The average intensity and the gradients along the x and y directions of the different grid cells discriminate images effectively, so Func(i) is defined as:

Func(i) ∈ {I_intensity(i), d_x(i), d_y(i)} (13)

where I_intensity(i) = (1/m) Σ_{p∈cell i} I(p) is the average intensity of grid cell i, d_x(i) = Gradient_x(i) and d_y(i) = Gradient_y(i), m is the total number of pixels in grid cell i (since LDB uses equally sized grid cells, m is constant within one layer of the Gaussian pyramid), and Gradient_x(i) and Gradient_y(i) are the gradients of grid cell i along the x and y directions respectively.
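A simplified, illustrative sketch of an LDB-style descriptor follows: the patch is split into n × n cells, each summarized by its average intensity and mean gradients, and every cell pair is compared per equation (12); the integral images, rotation handling and bit selection of the full LDB pipeline are deliberately omitted here:

import numpy as np
from itertools import combinations

def ldb_like_descriptor(patch, n=3):
    # patch: 2-D grayscale array; returns a binary descriptor as a uint8 vector.
    h, w = patch.shape
    gy, gx = np.gradient(patch.astype(np.float32))   # per-pixel gradients
    feats = []
    for i in range(n):
        for j in range(n):
            cell = (slice(i * h // n, (i + 1) * h // n),
                    slice(j * w // n, (j + 1) * w // n))
            # Each cell is summarized by (average intensity, mean dx, mean dy),
            # mirroring Func(i) of equation (13).
            feats.append((patch[cell].mean(), gx[cell].mean(), gy[cell].mean()))
    bits = []
    for a, b in combinations(range(n * n), 2):       # all pairs of grid cells
        for k in range(3):                           # intensity, dx, dy channels
            bits.append(1 if feats[a][k] - feats[b][k] > 0 else 0)   # equation (12)
    return np.array(bits, dtype=np.uint8)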
2.3) Feature-descriptor matching
After the LDB descriptors of two images have been obtained, the descriptors can be matched. The present invention matches the two sets of descriptors with the K-nearest-neighbors method (k Nearest Neighbors). The idea of KNN is that each class contains multiple samples, each carrying a unique class label indicating which class the sample belongs to; the distance from every sample to the item to be classified is computed, the K samples nearest to that item are taken, and the item is assigned to the class that holds the majority among those K samples. For each feature point in the target template image, its two nearest-neighbor matches are searched in the input image and the two match distances compared; if the best match distance is less than 0.8 times the second-best match distance, the template point and the corresponding input-image point are considered a valid match, and the corresponding coordinate values are recorded. When there are more than 4 matches between the two images, the target object is considered found in the input image, and the corresponding coordinate information is the two-dimensional feature information.
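The two-nearest-neighbor ratio test described above can be sketched as follows; OpenCV's ORB descriptors and brute-force Hamming matching serve as a stand-in here, since LDB does not ship with OpenCV:

import cv2

def match_template(template_gray, input_gray, ratio=0.8, min_matches=4):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(template_gray, None)
    kp2, des2 = orb.detectAndCompute(input_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for m, n in matcher.knnMatch(des1, des2, k=2):    # two nearest neighbors per point
        if m.distance < ratio * n.distance:           # best clearly better than second best
            good.append(kp2[m.trainIdx].pt)           # 2-D feature coordinate in the input image
    return good if len(good) > min_matches else None  # "more than 4 matches" criterion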
3) The process of obtaining the onboard GPS coordinates, altitude data and IMU sensor parameters is as follows:
MAVROS is a ROS package developed by a third-party team for MAVLink; after MAVROS is started and connected to the aircraft's flight controller, it begins publishing the aircraft's sensor parameters and flight data, and by subscribing to the messages of the aircraft's GPS-coordinate topic, GPS-altitude topic and IMU-attitude-angle topic, the corresponding data can be obtained.
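A minimal subscriber sketch follows; the topic names below are common MAVROS defaults but should be treated as assumptions to be checked against the running MAVROS instance:

import rospy
from sensor_msgs.msg import NavSatFix, Imu

state = {}

def on_gps(msg):
    # Global position fix: latitude, longitude and altitude.
    state["lat"], state["lon"], state["alt"] = msg.latitude, msg.longitude, msg.altitude

def on_imu(msg):
    # Attitude as a quaternion, to be converted to pitch/roll/yaw as needed.
    q = msg.orientation
    state["quat"] = (q.x, q.y, q.z, q.w)

rospy.init_node("mavros_listener")
rospy.Subscriber("/mavros/global_position/global", NavSatFix, on_gps)  # assumed topic
rospy.Subscriber("/mavros/imu/data", Imu, on_imu)                      # assumed topic
rospy.spin()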
4) Obtain the three-dimensional coordinate information of the two-dimensional features from the airframe parameters, as follows:
4.1) Conversion between the pixel coordinate system and the image coordinate system
The pixel coordinate system [u, v]^T takes the top-left corner of the image as its origin and has no physical units, so an image coordinate system I = [x_I, y_I]^T whose origin O_I lies on the optical axis is introduced; the image plane is the physically meaningful plane constructed by the camera according to the pinhole imaging model. Let dx and dy be the physical size of one pixel along the u and v axes, i.e. the actual pixel size on the sensor chip; they bridge the pixel coordinate system and physical coordinates and are related to the camera focal length f. A point (x_1, y_1) in the image coordinate system and the corresponding point (u_1, v_1) in the pixel coordinate system are then related as follows:

$$\begin{bmatrix}u_1\\ v_1\\ 1\end{bmatrix}=\begin{bmatrix}1/dx&0&u_0\\ 0&1/dy&v_0\\ 0&0&1\end{bmatrix}\begin{bmatrix}x_1\\ y_1\\ 1\end{bmatrix}\tag{1}$$

where (u_0, v_0) is the principal point in the pixel coordinate system, i.e. the pixel corresponding to the origin of the image coordinate system; the 3 × 3 matrix above contains four parameters related to the camera's internal structure and is called the camera intrinsic matrix;
4.2) Conversion between the image coordinate system and the camera coordinate system
Suppose a point P_C1 = (x_C, y_C, z_C) in the camera coordinate system projects through the optical center to the point P_I1 = (x_I, y_I) in the image coordinate system; the coordinate transformation between the two points is then:

$$x_I=f\,\frac{x_C}{z_C},\qquad y_I=f\,\frac{y_C}{z_C}\tag{2}$$

which can be converted to matrix form as:

$$z_C\begin{bmatrix}x_I\\ y_I\\ 1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}\tag{3}$$

where f is the camera focal length;
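A back-projection sketch of equations (1)-(3) follows: a pixel is first mapped to the image plane with the intrinsic parameters and then lifted into the camera frame; the depth z_C is not observable from a single pixel and must be supplied externally, here by the flight-height reasoning of the method:

import numpy as np

def pixel_to_camera(u, v, u0, v0, dx, dy, f, z_c):
    x_i = (u - u0) * dx                 # invert equation (1)
    y_i = (v - v0) * dy
    # Invert equation (2): x_C = x_I * z_C / f and y_C = y_I * z_C / f.
    return np.array([x_i * z_c / f, y_i * z_c / f, z_c])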
4.3) Conversion between the camera coordinate system and the world coordinate system
First, since there are mounting errors between the aircraft and the camera, [α, β, γ]^T is used here to denote the fixed three-dimensional mounting-error angles and [x_e, y_e, z_e]^T the spatial offset from the camera to the origin of the body coordinate system; the relationship between the camera coordinate system and the body coordinate system can then be expressed by the homogeneous transformation

$$T=\begin{bmatrix}R(\alpha,\beta,\gamma)&[x_e,y_e,z_e]^{\mathsf T}\\ 0&1\end{bmatrix}$$

i.e.

C = TB (4)

where C denotes the camera coordinate system and B the body coordinate system;
Second, for a point P_E = (x_E, y_E, z_E) in space, the corresponding camera coordinates depend on the attitude angles and position of the camera, and the UAV's attitude angles and position information can be obtained in real time during flight. A quadrotor drone is a system with six degrees of freedom; its attitude angles can be divided into the pitch angle φ, the roll angle θ and the yaw angle ψ, whose rotation axes are defined as the X, Y and Z axes respectively, with the coordinate origin at the aircraft's center of gravity. Multiplying the rotation matrices obtained for the three axes gives the rotation matrix of the body:

$$R=R_z(\psi)\,R_y(\theta)\,R_x(\varphi)\tag{5}$$

The attitude angles are resolved by quaternions from the x-, y- and z-axis acceleration components and gyroscope components measured by the IMU sensor on the quadrotor fuselage. Let M = [x, y, z]^T, where (x, y, z) is the spatial position of the UAV and z is the flight altitude; the UAV position (x, y, z) can be obtained from GPS and the barometer. The point (x_C, y_C, z_C) in the camera coordinate system corresponding to P_E can then be calculated from the following relationship:

$$\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}=T\begin{bmatrix}R^{\mathsf T}&-R^{\mathsf T}M\\ 0&1\end{bmatrix}\begin{bmatrix}x_E\\ y_E\\ z_E\\ 1\end{bmatrix}\tag{6}$$

where T is the camera-to-body transformation matrix, R is the body rotation matrix, M is the world coordinate point of the aircraft, and [x_E, y_E, z_E]^T is the three-dimensional coordinate of the desired feature point.
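The chain of equations (4)-(6) can be sketched as follows; the composition order of the axis rotations and the sign conventions are assumptions of this sketch and must be matched to the flight controller's conventions in a real implementation:

import numpy as np

def body_rotation(pitch, roll, yaw):
    # Following the text, pitch rotates about X, roll about Y and yaw about Z.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx                                  # equation (5)

def camera_to_world(p_cam, T_cam_body, R_body, M):
    # p_cam: 3-vector in the camera frame; T_cam_body: 4x4 transform of equation (4);
    # R_body: body rotation of equation (5); M: UAV position from GPS and barometer.
    p_body = np.linalg.inv(T_cam_body) @ np.append(p_cam, 1.0)   # camera -> body, C = TB
    return R_body @ p_body[:3] + M                               # body -> world, cf. equation (6)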

Claims (5)

1. A monocular vision three-dimensional feature extraction method based on a quadrotor drone, characterized in that the method comprises the following steps:
1) acquiring an image and preprocessing the image;
2) extracting two-dimensional image feature points and building feature descriptors;
3) obtaining onboard GPS coordinates, altitude data and IMU sensor parameters;
4) constructing coordinate systems for the two-dimensional feature descriptors from the airframe parameters to obtain three-dimensional coordinate information, the process being as follows:
first, an intrinsic matrix is established from the camera parameters, and with this matrix the two-dimensional feature coordinate information obtained in step 2) is transformed into the image coordinate system I and then, using the known focal-length information, into the camera coordinate system C; second, the coordinate system is further converted to the body coordinate system B using the fixed mounting-error angles and the relative position between camera and body; finally, according to the IMU attitude angles and by fusing the aircraft's GPS coordinate information and altitude information, the two-dimensional feature descriptor with depth-of-field information is obtained in the world coordinate system E.
2. The monocular vision three-dimensional feature extraction method based on a quadrotor drone according to claim 1, characterized in that in step 4) the three-dimensional coordinate information of the two-dimensional features is obtained from the airframe parameters, comprising the following steps:
4.1) conversion between the pixel coordinate system and the image coordinate system
the pixel coordinate system [u, v]^T takes the top-left corner of the image as its origin and has no physical units, so an image coordinate system I = [x_I, y_I]^T whose origin O_I lies on the optical axis is introduced; the image plane is the physically meaningful plane constructed by the camera according to the pinhole imaging model; dx and dy denote the physical size of one pixel along the u and v axes, i.e. the actual pixel size on the sensor chip, bridge the pixel coordinate system and physical coordinates, and are related to the camera focal length f; a point (x_1, y_1) in the image coordinate system and the corresponding point (u_1, v_1) in the pixel coordinate system are then related as follows:

$$\begin{bmatrix}u_1\\ v_1\\ 1\end{bmatrix}=\begin{bmatrix}1/dx&0&u_0\\ 0&1/dy&v_0\\ 0&0&1\end{bmatrix}\begin{bmatrix}x_1\\ y_1\\ 1\end{bmatrix}\tag{1}$$

wherein (u_0, v_0) is the principal point in the pixel coordinate system, i.e. the pixel corresponding to the origin of the image coordinate system, and the 3 × 3 matrix above contains four parameters related to the camera's internal structure and is called the camera intrinsic matrix;
4.2) conversion between the image coordinate system and the camera coordinate system
suppose a point P_C1 = (x_C, y_C, z_C) in the camera coordinate system projects through the optical center to the point P_I1 = (x_I, y_I) in the image coordinate system; the coordinate transformation between the two points is then:

$$x_I=f\,\frac{x_C}{z_C},\qquad y_I=f\,\frac{y_C}{z_C}\tag{2}$$

which converts to matrix form as:

$$z_C\begin{bmatrix}x_I\\ y_I\\ 1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}\tag{3}$$

wherein f is the camera focal length;
4.3) conversion between the camera coordinate system and the world coordinate system
first, since there are mounting errors between the aircraft and the camera, [α, β, γ]^T denotes the fixed three-dimensional mounting-error angles and [x_e, y_e, z_e]^T the spatial offset from the camera to the origin of the body coordinate system; the relationship between the camera coordinate system and the body coordinate system is then expressed by the homogeneous transformation

$$T=\begin{bmatrix}R(\alpha,\beta,\gamma)&[x_e,y_e,z_e]^{\mathsf T}\\ 0&1\end{bmatrix}$$

i.e.

C = TB (4)

wherein C denotes the camera coordinate system and B the body coordinate system;
second, for a point P_E = (x_E, y_E, z_E) in space, the corresponding camera coordinates depend on the attitude angles and position of the camera, and the UAV's attitude angles and position information are obtained in real time during flight; a quadrotor drone is a system with six degrees of freedom, whose attitude angles are divided into the pitch angle φ, the roll angle θ and the yaw angle ψ, with rotation axes defined as the X, Y and Z axes respectively and the coordinate origin at the aircraft's center of gravity; multiplying the rotation matrices obtained for the three axes gives the rotation matrix of the body:

$$R=R_z(\psi)\,R_y(\theta)\,R_x(\varphi)\tag{5}$$

the attitude angles are resolved by quaternions from the x-, y- and z-axis acceleration components and gyroscope components measured by the IMU sensor on the quadrotor fuselage; let M = [x, y, z]^T, wherein (x, y, z) is the spatial position of the UAV, z is the flight altitude, and the UAV position (x, y, z) can be obtained from GPS and the barometer; the point (x_C, y_C, z_C) in the camera coordinate system corresponding to P_E can then be calculated from the following relationship:

$$\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}=T\begin{bmatrix}R^{\mathsf T}&-R^{\mathsf T}M\\ 0&1\end{bmatrix}\begin{bmatrix}x_E\\ y_E\\ z_E\\ 1\end{bmatrix}\tag{6}$$

wherein T is the camera-to-body transformation matrix, R is the body rotation matrix, M is the world coordinate point of the aircraft, and [x_E, y_E, z_E]^T is the three-dimensional coordinate of the desired feature point.
3. The monocular vision three-dimensional feature extraction method based on a quadrotor drone according to claim 1 or 2, characterized in that in step 1) the image is acquired and preprocessed as follows:
1.1) image acquisition
based on the Linux development environment of the quadrotor platform, images are obtained by subscribing to the image topic with the robot operating system ROS, and the camera driver is implemented through ROS and OpenCV;
1.2) image preprocessing
the captured color image is first converted to grayscale to discard unneeded color information; the method used takes the weighted average of the R, G and B components of each pixel as that pixel's gray value, with the channel weights optimized for computational efficiency so that floating-point operations are avoided:
Gray = (R × 30 + G × 59 + B × 11 + 50) / 100 (7)
wherein Gray is the gray value of the pixel, and R, G and B are the values of the red, green and blue channels respectively.
4. The monocular vision three-dimensional feature extraction method based on a quadrotor drone according to claim 1 or 2, characterized in that in step 2) the process of extracting two-dimensional image feature points and building feature descriptors is:
2.1) ORB feature-point extraction
ORB first detects corners with the Harris corner detector and then measures the rotation direction with the intensity centroid; assuming that a corner's intensity is offset from its center, the intensities around the point are combined to compute the corner's orientation, and the following moment is defined:

$$m_{pq}=\sum_{x,y}x^{p}y^{q}I(x,y)\tag{8}$$

wherein x, y are coordinates within the image patch relative to its center, I(x, y) is the gray value at (x, y), and the powers x^p y^q weight each point's offset from the center; the centroid of the patch is then:

$$C=\left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)\tag{9}$$

constructing the vector from the corner's center to this centroid, the orientation θ of the image patch can be expressed as:

θ = atan2(m_01, m_10) (10)

since the extracted ORB keypoints carry an orientation, the features extracted with ORB are rotation invariant;
2.2) building the LDB feature descriptor
after the keypoints of the image have been obtained, the feature descriptor of the image is built with LDB; the LDB pipeline consists, in order, of building a Gaussian pyramid, building integral images, binary tests, and bit selection and concatenation;
to give LDB scale invariance, a Gaussian pyramid is constructed and the LDB descriptor of each feature point is computed at the corresponding pyramid level:

$$\mathrm{Pyr}_i=G(x,y,\sigma_i)*I(x,y),\qquad i=1,\dots,L\tag{11}$$

wherein I(x, y) is the given image and G(x, y, σ_i) is a Gaussian filter whose scale σ_i increases gradually, used to build layers 1 to L of the Gaussian pyramid Pyr_i; for feature points without a significant scale estimate, as with ORB, the LDB description must be computed for each feature point at every pyramid layer;
LDB computes rotated coordinates and uses nearest-neighbor interpolation to generate an oriented patch on the fly;
after the upright or rotated integral image has been built and the intensity and gradient information extracted, the binary test τ is performed between pairs of grid cells:

$$\tau\big(\mathrm{Func}(i),\mathrm{Func}(j)\big)=\begin{cases}1,&\mathrm{Func}(i)-\mathrm{Func}(j)>0\\ 0,&\text{otherwise}\end{cases}\qquad i\neq j\tag{12}$$

wherein Func(·) extracts the description information of each grid cell;
given an image patch, LDB first divides it into n × n equally sized grid cells, extracts the average intensity and gradient information of each cell, and compares intensity and gradient information between pairs of cells, setting the corresponding bit to 1 when the difference is greater than 0; the average intensity and the gradients along the x and y directions of the different grid cells discriminate images effectively, so Func(i) is defined as:

Func(i) ∈ {I_intensity(i), d_x(i), d_y(i)} (13)

wherein I_intensity(i) = (1/m) Σ_{p∈cell i} I(p) is the average intensity of grid cell i, d_x(i) = Gradient_x(i) and d_y(i) = Gradient_y(i), m is the total number of pixels in grid cell i (since LDB uses equally sized grid cells, m is constant within one layer of the Gaussian pyramid), and Gradient_x(i) and Gradient_y(i) are the gradients of grid cell i along the x and y directions respectively;
2.3) feature-descriptor matching
after the LDB descriptors of two images have been obtained, the descriptors are matched using the K-nearest-neighbor method; for each feature point in the target template image, its two nearest-neighbor matches are searched in the input image and the two match distances compared; if the best match distance is less than 0.8 times the second-best match distance, the template point and the corresponding input-image point are considered a valid match, and the corresponding coordinate values are recorded; when there are more than 4 matches between the two images, the target object is considered found in the input image, and the corresponding coordinate information is the two-dimensional feature information.
5. The monocular vision three-dimensional feature extraction method based on a quadrotor drone according to claim 1 or 2, characterized in that in step 3) the method of obtaining the onboard GPS coordinates, altitude data and IMU sensor parameters is:
MAVROS is a ROS package developed by a third-party team for MAVLink; after MAVROS is started and connected to the aircraft's flight controller, MAVROS begins publishing the aircraft's sensor parameters and flight data, and by subscribing to the aircraft's GPS-coordinate topic, GPS-altitude topic and IMU-attitude-angle topic, the corresponding data are obtained.
CN201610901957.2A 2016-10-18 2016-10-18 A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone Active CN106570820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610901957.2A CN106570820B (en) 2016-10-18 2016-10-18 A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610901957.2A CN106570820B (en) 2016-10-18 2016-10-18 A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone

Publications (2)

Publication Number Publication Date
CN106570820A CN106570820A (en) 2017-04-19
CN106570820B true CN106570820B (en) 2019-12-03

Family

ID=58532962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610901957.2A Active CN106570820B (en) 2016-10-18 2016-10-18 A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone

Country Status (1)

Country Link
CN (1) CN106570820B (en)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117690A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Drivable region detection method, device, equipment and storage medium
CN109709977B (en) * 2017-10-26 2022-08-16 广州极飞科技股份有限公司 Method and device for planning movement track and moving object
CN109753079A (en) * 2017-11-03 2019-05-14 南京奇蛙智能科技有限公司 A kind of unmanned plane precisely lands in mobile platform method
CN109753076B (en) * 2017-11-03 2022-01-11 南京奇蛙智能科技有限公司 Unmanned aerial vehicle visual tracking implementation method
CN109839945B (en) * 2017-11-27 2022-04-26 北京京东乾石科技有限公司 Unmanned aerial vehicle landing method, unmanned aerial vehicle landing device and computer readable storage medium
CN107966112A (en) * 2017-12-03 2018-04-27 中国直升机设计研究所 A kind of large scale rotor movement parameter measurement method
CN108335329B (en) * 2017-12-06 2021-09-10 腾讯科技(深圳)有限公司 Position detection method and device applied to aircraft and aircraft
CN108255187A (en) * 2018-01-04 2018-07-06 北京科技大学 A kind of micro flapping wing air vehicle vision feedback control method
CN108759826B (en) * 2018-04-12 2020-10-27 浙江工业大学 Unmanned aerial vehicle motion tracking method based on multi-sensing parameter fusion of mobile phone and unmanned aerial vehicle
CN108711166B (en) * 2018-04-12 2022-05-03 浙江工业大学 Monocular camera scale estimation method based on quad-rotor unmanned aerial vehicle
CN108681324A (en) * 2018-05-14 2018-10-19 西北工业大学 Mobile robot trace tracking and controlling method based on overall Vision
CN110799921A (en) * 2018-07-18 2020-02-14 深圳市大疆创新科技有限公司 Shooting method and device and unmanned aerial vehicle
CN109242779B (en) * 2018-07-25 2023-07-18 北京中科慧眼科技有限公司 Method and device for constructing camera imaging model and automobile automatic driving system
CN109344846B (en) * 2018-09-26 2022-03-25 联想(北京)有限公司 Image feature extraction method and device
CN109754420B (en) * 2018-12-24 2021-11-12 深圳市道通智能航空技术股份有限公司 Target distance estimation method and device and unmanned aerial vehicle
CN109895099B (en) * 2019-03-28 2020-10-02 哈尔滨工业大学(深圳) Flying mechanical arm visual servo grabbing method based on natural features
CN110032983B (en) * 2019-04-22 2023-02-17 扬州哈工科创机器人研究院有限公司 Track identification method based on ORB feature extraction and FLANN rapid matching
CN110297498B (en) * 2019-06-13 2022-04-26 暨南大学 Track inspection method and system based on wireless charging unmanned aerial vehicle
CN110254258B (en) * 2019-06-13 2021-04-02 暨南大学 Unmanned aerial vehicle wireless charging system and method
CN110516531B (en) * 2019-07-11 2023-04-11 广东工业大学 Identification method of dangerous goods mark based on template matching
CN111126450B (en) * 2019-11-29 2024-03-19 上海宇航***工程研究所 Modeling method and device for cuboid space vehicle based on nine-line configuration
CN110942473A (en) * 2019-12-02 2020-03-31 哈尔滨工程大学 Moving target tracking detection method based on characteristic point gridding matching
CN111583093B (en) * 2020-04-27 2023-12-22 西安交通大学 Hardware implementation method for ORB feature point extraction with good real-time performance
CN111524182B (en) * 2020-04-29 2023-11-10 杭州电子科技大学 Mathematical modeling method based on visual information analysis
CN111784731A (en) * 2020-06-19 2020-10-16 哈尔滨工业大学 Target attitude estimation method based on deep learning
CN111754603B (en) * 2020-06-23 2024-02-13 自然资源部四川测绘产品质量监督检验站(四川省测绘产品质量监督检验站) Unmanned aerial vehicle image connection diagram construction method and system
CN112116651B (en) * 2020-08-12 2023-04-07 天津(滨海)人工智能军民融合创新中心 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
CN112197766B (en) * 2020-09-29 2023-04-28 西安应用光学研究所 Visual gesture measuring device for tethered rotor platform
CN112797912B (en) * 2020-12-24 2023-04-07 中国航天空气动力技术研究院 Binocular vision-based wing tip deformation measurement method for large flexible unmanned aerial vehicle
CN112907662B (en) * 2021-01-28 2022-11-04 北京三快在线科技有限公司 Feature extraction method and device, electronic equipment and storage medium
CN113403942B (en) * 2021-07-07 2022-11-15 西北工业大学 Label-assisted bridge detection unmanned aerial vehicle visual navigation method
CN114281096A (en) * 2021-11-09 2022-04-05 中时讯通信建设有限公司 Unmanned aerial vehicle tracking control method, device and medium based on target detection algorithm
CN117032276B (en) * 2023-07-04 2024-06-25 长沙理工大学 Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2849150A1 (en) * 2013-09-17 2015-03-18 Thomson Licensing Method for capturing the 3D motion of an object, unmanned aerial vehicle and motion capture system
CN105809687A (en) * 2016-03-08 2016-07-27 清华大学 Monocular vision ranging method based on edge point information in image
CN105928493A (en) * 2016-04-05 2016-09-07 王建立 Binocular vision three-dimensional mapping system and method based on UAV
CN105953796A (en) * 2016-05-23 2016-09-21 北京暴风魔镜科技有限公司 Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone

Also Published As

Publication number Publication date
CN106570820A (en) 2017-04-19

Similar Documents

Publication Publication Date Title
CN106570820B (en) A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone
CN108711166A (en) A kind of monocular camera Scale Estimation Method based on quadrotor drone
US11748898B2 (en) Methods and system for infrared tracking
Xu et al. Power line-guided automatic electric transmission line inspection system
Patruno et al. A vision-based approach for unmanned aerial vehicle landing
CN109949361A (en) A kind of rotor wing unmanned aerial vehicle Attitude estimation method based on monocular vision positioning
CN106529538A (en) Method and device for positioning aircraft
CN110058602A (en) Multi-rotor unmanned aerial vehicle autonomic positioning method based on deep vision
CN108759826A (en) A kind of unmanned plane motion tracking method based on mobile phone and the more parameter sensing fusions of unmanned plane
CN108428255A (en) A kind of real-time three-dimensional method for reconstructing based on unmanned plane
CN110852182B (en) Depth video human body behavior recognition method based on three-dimensional space time sequence modeling
CN111527463A (en) Method and system for multi-target tracking
CN110443898A (en) A kind of AR intelligent terminal target identification system and method based on deep learning
WO2021223124A1 (en) Position information obtaining method and device, and storage medium
CN104021538B (en) Object positioning method and device
CN109857144A (en) Unmanned plane, unmanned aerial vehicle control system and control method
WO2019127518A1 (en) Obstacle avoidance method and device and movable platform
Wang et al. An overview of 3d object detection
CN110264530A (en) A kind of camera calibration method, apparatus and unmanned plane
WO2023239955A1 (en) Localization processing service and observed scene reconstruction service
CN105930766A (en) Unmanned plane
Montanari et al. Ground vehicle detection and classification by an unmanned aerial vehicle
Zarei et al. Indoor UAV object detection algorithms on three processors: implementation test and comparison
Zhai et al. Target Detection of Low‐Altitude UAV Based on Improved YOLOv3 Network
Xiao-Hong et al. UAV's automatic landing in all weather based on the cooperative object and computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant