CN100567898C - Unmanned aircraft landing navigation method and device thereof - Google Patents

Unmanned aircraft landing navigation method and device thereof

Info

Publication number
CN100567898C
CN100567898C · CNB2007100627431A · CN200710062743A
Authority
CN
China
Prior art keywords
mentioned
point
landing
navigation mark
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2007100627431A
Other languages
Chinese (zh)
Other versions
CN101000243A (en)
Inventor
张晓林
聂振钢
谭征
陆国雷
李宏伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CNB2007100627431A priority Critical patent/CN100567898C/en
Publication of CN101000243A publication Critical patent/CN101000243A/en
Application granted granted Critical
Publication of CN100567898C publication Critical patent/CN100567898C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Traffic Control Systems (AREA)

Abstract

An unmanned aircraft landing navigation method, and a device for implementing it, that uses a vision sensor and visual image processing to obtain, in real time, both the ground information of the landing point and the instantaneous global motion of the aircraft, thereby guiding an unmanned helicopter to land autonomously. The method first acquires an image signal with the vision sensor and, from that image, searches for the small region containing the landing marker; it then selects feature points inside that region and tracks and matches them across frames. If matching succeeds, the tracking and matching results are used to compute the global motion parameters of the unmanned helicopter and, finally, to issue the landing instruction. The method and device have low computational complexity, do not depend on a specific landing-marker pattern, and are nonetheless highly accurate: they guarantee the success rate and real-time requirements of landing guidance, greatly reduce cost, and preserve the light and agile advantages of the unmanned helicopter.

Description

Unmanned aircraft landing navigation method and device thereof
Technical field
The invention belongs to the field of image recognition and navigation/communication technology, and relates specifically to an unmanned aircraft landing navigation method and to a device implementing the method.
Background technology
When performing an autonomous landing, an unmanned helicopter must know the position of the landing point precisely, and at the same time must obtain accurate height, velocity, heading and other motion information, all of which must be supplied to the autonomous landing system.
Traditionally, the common way to guide an unmanned helicopter through an autonomous landing has been to obtain the landing-point position and the helicopter's motion information from an inertial measurement unit, GPS, a radio altimeter, or similar sensors.
These traditional methods, however, suffer from poor real-time performance. During the final landing phase the attitude of the aircraft changes quickly, so attitude information must be obtained in real time to satisfy the requirements of the autonomous landing system.
More recently, methods have therefore appeared that extract the autonomous landing information with a vision sensor. Most of them use a CCD optical camera as the main vision sensor and compute the landing information from the captured images. This usually requires complex pre-processing of each captured image, chiefly image enhancement, edge extraction and image segmentation, to eliminate most of the irrelevant background.
In addition, obtaining the helicopter's motion information requires high-quality filtering, for example a Kalman filter, before the data are reliable enough to support a safe landing.
These processing steps inevitably increase the complexity of the algorithm. Although computers and electronics are now so advanced that implementing any given algorithm is rarely the bottleneck of technical progress, a complex algorithm is still paid for in computation time or in the volume and area of the electronics; this means either poor real-time behaviour or large equipment, and for a small unmanned helicopter it sacrifices precisely its light, agile character. A method that balances complexity against guidance success rate is therefore still needed.
Furthermore, the landing marker used to guide the autonomous landing has traditionally been the conventional "H" of a helipad or some other specially designed sign, which greatly limits the ability of an unmanned helicopter to land lightly and flexibly on any flat area.
Summary of the invention
The object of the present invention is to provide a method, and a device, that can use a landing marker of arbitrary pattern as the landing guidance reference and capture it in real time with a vision sensor and visual image processing, thereby guiding an unmanned helicopter to land autonomously while satisfying the real-time requirements of flight control and improving guidance accuracy.
The main scheme of the invention is as follows. Image information is acquired by a vision sensor mounted on the unmanned helicopter. Based on this image information, a landing-marker search module searches the helicopter's intended landing area for a pre-agreed landing marker that differs from its surroundings. Once the marker is found, a feature-point selection module fixes one frame as the reference image, demarcates a feature-point selection region around the marker in that frame, and selects the feature points. A tracking-and-matching module then tracks and matches these feature points in the subsequent image sequence relative to the reference image. When matching succeeds, the matched points are determined and passed to a global-motion-parameter calculation module, which computes the global motion parameters of the helicopter. When those parameters satisfy the landing requirements, a landing-instruction module issues the landing instruction.
Preferably, in this scheme, the vision sensor captures a visual image of the target, and attitude, height, velocity and other information about the target are extracted from that image according to the principles of computer vision.
Preferably, in this scheme, the vision sensor is an optical camera.
Preferably, in this scheme, the image information output by the vision sensor is converted to a grayscale image.
Preferably, in this scheme, when demarcating the feature-point selection region, the marker is extracted from the grayscale image by threshold segmentation, the resulting binary image is then dilated using morphological operations, and the black (equivalently, white) region occupied by the marker is taken as the feature-point selection region.
Preferably, in this scheme, the feature points are the corner points of the landing marker.
Preferably, in this scheme, when computing the global motion parameters, the aircraft is assumed for simplicity to undergo only translation and vertical motion between two consecutive frames, and the global motion parameters are obtained from a motion model.
Preferably, in this scheme, an iterative least-squares method solves for the scale parameters and translation parameters from which the helicopter's current relative velocity and relative height are computed, this relative velocity and relative height constituting the helicopter's global motion parameters.
Because the landing navigation scheme of the invention searches for candidate marker regions in a grayscale image and performs point tracking only inside the marker region, rather than tracking feature points over the whole image, its computational complexity is greatly reduced and real-time processing is assured. Because point tracking is used, the design of the landing marker is unconstrained: the method applies to landing markers of any pattern, i.e. the choice of marker pattern is not limited. At the same time, point tracking guarantees the accuracy of the method.
Moreover, because a vision sensor is used to obtain the information in real time, the cost of the whole scheme is greatly reduced and continues to fall; and because the algorithm's complexity is reduced, the scale of the hardware implementing the scheme can be kept within the required bounds, genuinely improving the practicality of the unmanned helicopter and fully exploiting its cheapness, lightness and agility.
Description of drawings
Fig. 1 is a functional block diagram of the landing navigation device of the invention;
Fig. 2 is a flow chart of the landing navigation method of the invention;
Fig. 3 is an example grayscale image containing a landing marker;
Fig. 4 is the binary image obtained by threshold segmentation of the grayscale image of Fig. 3;
Fig. 5 is the image obtained by dilating the binary image of Fig. 4;
Fig. 6 shows the feature points selected on the landing marker of Fig. 3;
Fig. 7 shows the result of tracking the feature points of Fig. 6.
Embodiment
Fig. 1 shows an example functional block diagram of the landing navigation device of the invention.
As shown in Fig. 1, the basic structure of the device comprises: a vision sensor 1, a landing-marker search module 2, a feature-point selection module 3, a tracking-and-matching module 4, a global-motion-parameter calculation module 5 and a landing-instruction module 6.
The vision sensor 1 is mounted on the unmanned helicopter and acquires ground image information in real time. The landing-marker search module 2 processes the images from the vision sensor 1 to locate the landing-marker region. If a marker is found, the feature-point selection module 3 selects feature points, after which the tracking-and-matching module 4 tracks and matches them. The global-motion-parameter calculation module 5 computes motion parameters from the matched points supplied by module 4 and passes them to the landing-instruction module 6, which issues the landing instruction when the landing conditions are met.
The vision sensor 1, mounted on the helicopter, acquires ground image information relative to the helicopter in real time and outputs it to the subsequent modules; the same image information can also serve other functions of the helicopter.
The concrete functions of the remaining modules are explained in the paragraphs below.
The landing navigation device and each of its functional modules can be realized by any means suitable for unmanned-helicopter service conditions, such as a single-chip microcomputer, a digital signal processor (DSP), a field-programmable gate array (FPGA) or a custom chip; the invention is not limited to the embodiments above, provided its spirit is preserved.
Next, the processing scheme of pilotless plane landing navigation device is described in conjunction with the process flow diagram of Fig. 2.
First, in step 1 (S1 in Fig. 2), the landing marker is searched for: the landing-marker search module 2 receives the image information from the vision sensor 1 and processes it to search for the landing marker.
As is well known, a marker normally has colours that distinguish it from the background, precisely so that a pilot can identify the landing site easily under varied conditions. Because this embodiment extracts the landing information from feature points, it places no particular requirement on the pattern or geometry of the landing marker; in a general sense the marker need only be "distinct from the background" and "easily identifiable". Without loss of generality, the letter "A" is used as the example marker in this patent.
Concretely, in step 1 the landing-marker search module 2 converts the image information from the vision sensor to a grayscale image with the following formula:
Y=0.299×R+0.587×G+0.114×B (1)
" R " in its Chinese style (1), " G ", " B " are coloured light three primary colors on each picture element of coloured image, the value of three kinds of colors of red, green, blue just, after formula (1) is calculated, obtain the gray-scale value of this point, be about to the color image information conversion for gray level image, finished the binaryzation of image.Certainly, also can adopt and deceive, make landing navigation mark point search module 2 receive gray level images from vision sensors such as video cameras.
Fig. 3 shows the grayscale image used as the example.
Since the landing marker satisfies the general distinguishability requirement, its gray values in the grayscale image are normally clearly separated from those of the background; in the grayscale image of Fig. 3, the dark icon recognizable to the naked eye is the landing marker pattern "A" of this embodiment. For this reason, a simple threshold segmentation suffices to extract the icon. Fig. 4 shows the image obtained by threshold segmentation of Fig. 3.
The flow then enters step 2 (S2 in Fig. 2), which asks whether a landing marker has been found. If the landing-marker search module 2 answers "yes", the flow proceeds to the next step, step 3 (S3); otherwise it returns to step 1 (S1) and searches again until a marker is found.
Next, the processing of steps 3 and 4 is performed by the feature-point selection module 3.
First, when the landing-marker search module 2 finds the landing marker, it outputs a signal to the feature-point selection module 3, notifying it to proceed, and at the same time passes it both the threshold-segmented image and the original grayscale image, for use in steps 3 (S3) and 4 (S4).
On receiving the threshold-segmented image and the original grayscale image, the module performs step 3, determining the reference image: it caches the original grayscale image as the reference image.
It also performs step 4 (S4), selecting the feature points.
Specifically, in step 4 (S4) the thresholded binary image is dilated morphologically; this operation fills the holes in the icon area, and the resulting black region is the image region on which the subsequent steps operate. For example, dilating the thresholded binary image of Fig. 4 yields the dilated "A" shown in Fig. 5.
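The threshold segmentation and the morphological dilation of this step can be sketched with plain NumPy as follows (the function names and the square structuring element are illustrative choices, not taken from the patent):

```python
import numpy as np

def threshold_binary(gray, thresh):
    """Threshold segmentation: pixels darker than `thresh` (the marker)
    become 1, background pixels become 0."""
    return (np.asarray(gray) < thresh).astype(np.uint8)

def dilate(binary, radius=1):
    """Morphological dilation with a (2*radius+1)-square structuring element:
    a pixel becomes 1 if any pixel within `radius` of it is 1. This fills
    small holes in the marker region, as required in step S4."""
    h, w = binary.shape
    out = np.zeros_like(binary)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.zeros_like(binary)
            dst_r = slice(max(dy, 0), h + min(dy, 0))
            dst_c = slice(max(dx, 0), w + min(dx, 0))
            src_r = slice(max(-dy, 0), h + min(-dy, 0))
            src_c = slice(max(-dx, 0), w + min(-dx, 0))
            shifted[dst_r, dst_c] = binary[src_r, src_c]
            out |= shifted
    return out
```

In practice a library routine (e.g. a morphology function of an image-processing package) would replace the explicit shift loop; the sketch only shows the operation itself.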
Because the two images cached in the feature-point selection module 3 are pixel-aligned, the black region obtained after dilation can be applied directly to the original grayscale image; it can be regarded as the region containing the image of the landing marker, and is demarcated as the feature-point selection region. Everything outside this black region is treated as background; since it has no influence on the subsequent selection of landing-marker feature points, this part of the information can be discarded, which also brings the welcome side effect of reducing the data volume.
Feature points are then selected inside the demarcated feature-point selection region. The concrete method is as follows:
The Harris corner detection operator is used. This operator is based on the local autocorrelation of the image: the local autocorrelation function measures how an image patch changes under a small translation.
For a displacement vector (Δu, Δv) in the horizontal and vertical directions, the autocorrelation function of pixel (u, v) is defined as:
E_{x,y} = Σ_W [ I(u,v) − I(u+Δu, v+Δv) ]²    (2)
Here I(u, v) denotes the brightness of pixel (u, v), and W denotes a window centred at (u, v).
Because the displacement is assumed to be very small, the expression is expanded in a Taylor series and terms of second order and higher are dropped:
I(u+Δu, v+Δv) ≈ I(u,v) + [I_x(u,v)  I_y(u,v)] [Δu  Δv]^T    (3)
Substituting (3) into (2) gives:
E_{x,y} = Σ_W ( [I_x(u,v)  I_y(u,v)] [Δu  Δv]^T )²
        = [Δu  Δv] [ Σ_W I_x(u,v)²         Σ_W I_x(u,v)I_y(u,v) ]
                   [ Σ_W I_x(u,v)I_y(u,v)  Σ_W I_y(u,v)²        ] [Δu  Δv]^T    (4)
        = [Δu  Δv] C(u,v) [Δu  Δv]^T
where I_x(u,v) is the brightness gradient in the horizontal direction and I_y(u,v) the brightness gradient in the vertical direction.
The matrix C(u,v) in fact captures the brightness structure of the neighbourhood of the current point. Let λ₁ and λ₂ denote the eigenvalues of C(u,v). According to their size, the current point falls into one of three classes, as follows:
If λ₁ and λ₂ are both very small, the whole window is a flat region;
If one is large and the other small, the point is an edge point;
If both are large, the point is a corner point.
Since the marker position was detected in the steps above, and a marker usually has abundant corner information, corner detection is performed only in the marker region, which yields the required feature points, marked "x" on the "A" in Fig. 6.
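The corner criterion above is commonly evaluated through the response R = det(C) − k·(trace C)², since det(C) = λ₁λ₂ and trace(C) = λ₁+λ₂. A minimal NumPy sketch (the 3×3 summation window and k = 0.04 are illustrative choices, not specified in the patent):

```python
import numpy as np

def harris_response(gray, k=0.04):
    """Harris corner response per pixel, from the matrix C(u,v) of
    formula (4). Large positive -> corner, negative -> edge, ~0 -> flat."""
    gray = np.asarray(gray, dtype=np.float64)
    Iy, Ix = np.gradient(gray)          # vertical and horizontal gradients

    def window_sum(a):
        # sum over the 3x3 window W centred on each pixel (zero padding)
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx = window_sum(Ix * Ix)           # entries of C(u,v) in formula (4)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2          # = lambda1 * lambda2
    trace = Sxx + Syy                   # = lambda1 + lambda2
    return det - k * trace ** 2
```

Restricting the evaluation to the marker region, as the embodiment does, simply means computing this response only over the demarcated pixels.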
Then the feature-point selection module 3 sends a signal that starts the tracking-and-matching module 4, which tracks the above feature points with the KLT tracking algorithm; this is step 5 (S5 in Fig. 2). Fig. 7 shows the feature-point tracking obtained by tracking the points of Fig. 6.
Specifically, the basic premise of the KLT tracking algorithm is that a point in the previous frame and its corresponding point in the current frame have the same brightness value:
I(x, y, t+τ) = I(x−Δx, y−Δy, t)    (5)
Now consider a window: every point in the window in the previous frame corresponds, after a displacement (Δx, Δy), to a point in a window of the current frame. The displacement (Δx, Δy) can therefore be found by minimizing the objective function
E_{x,y} = Σ_W [ I(x, y, t+τ) − I(x−Δx, y−Δy, t) ]²    (6)
Performing this computation for every feature point completes the tracking of all the feature points.
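A minimal sketch of one translational KLT step, minimizing objective (6) by Gauss–Newton iteration (integer-pixel window resampling; the function name, fixed window size and iteration count are illustrative, and a practical tracker would add subpixel interpolation and pyramids):

```python
import numpy as np

def lk_translation(prev, curr, x, y, half=4, iters=10):
    """Estimate the displacement (dx, dy) of the window centred at (x, y)
    between frames `prev` and `curr` by iteratively linearizing (6)."""
    prev = np.asarray(prev, dtype=float)
    curr = np.asarray(curr, dtype=float)
    Iy, Ix = np.gradient(prev)
    win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    A = np.column_stack([Ix[win].ravel(), Iy[win].ravel()])  # template gradients
    T = prev[win].ravel()                                    # template window
    dx = dy = 0.0
    for _ in range(iters):
        sx, sy = int(round(x + dx)), int(round(y + dy))
        cur = curr[sy - half:sy + half + 1, sx - half:sx + half + 1].ravel()
        # solve A @ step = T - cur in the least-squares sense (Gauss-Newton)
        step, *_ = np.linalg.lstsq(A, T - cur, rcond=None)
        dx += step[0]
        dy += step[1]
        if abs(step[0]) < 1e-3 and abs(step[1]) < 1e-3:
            break
    return dx, dy
```

The remaining value of objective (6) after convergence is exactly the quantity compared against a threshold in step S6 below to flag unreliable tracks.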
Step 6 (S6) is then executed: it asks whether the tracking produced matches for the feature points. A threshold can be set according to environmental conditions and similar factors; points whose minimized objective value still exceeds the threshold are flagged as tracking failures or unreliable tracks, i.e. unmatched. In that case the flow returns to step 1 (S1) and steps S1–S6 are repeated; otherwise matching has succeeded and the flow enters the next step, step 7 (S7 in Fig. 2), computing the global motion parameters.
Step 7 (S7) is carried out by the global-motion-parameter calculation module 5, which receives the match-success signal and the subsequent image information from module 4 and starts its computation when the match-success signal arrives. From the points obtained by tracking and matching, the global motion parameters of the subsequent image sequence relative to the reference image can be computed. Because the sampling rate is fairly high, the aircraft can be assumed to undergo only translation and vertical motion between two consecutive frames, so the motion parameters are computed with the model:
[Δx(k)  Δy(k)]^T = [a₁·x(k)  a₂·y(k)]^T + [a₃  a₄]^T    (7)
Here x(k), y(k) are the coordinates of the k-th feature point in the previous frame, and Δx(k), Δy(k) are the displacements of that feature point in the current frame.
The scale parameters (a₁, a₂) and translation parameters (a₃, a₄) are solved by an iterative least-squares method.
Finally, the current relative velocity of the aircraft is obtained from the computed global motion parameters of the image together with the other sensor parameters, and the scale parameters yield the relative height of the aircraft; these global motion parameters are output to the landing-instruction module 6.
Finally, in step 8 (S8 in Fig. 2), the landing-instruction module 6 issues the landing instruction: the global motion parameters used for landing control and the landing instruction are output together from the landing navigation device of the invention, and the flow ends. On this basis the unmanned helicopter can land safely.

Claims (5)

1. An unmanned aircraft landing navigation method, comprising the steps of:
acquiring image information with a vision sensor, the vision sensor being mounted on the unmanned helicopter;
searching for a landing marker according to the image information;
characterized in that the method further comprises the steps of:
determining, from the search information of the landing marker, a reference image within the image information;
demarcating a feature-point selection region in the reference image;
in the step of demarcating the feature-point selection region, extracting the marker by threshold segmentation, dilating the resulting binary image, thereby obtaining the black region occupied by the marker, and demarcating this black region as the feature-point selection region;
selecting feature points within the feature-point selection region, only points within the marker region being selected; the feature points are the corner points of the landing marker;
tracking and matching the feature points in the subsequent image sequence of the image information, relative to the reference image;
when matching succeeds, determining the matched points and using them to compute the global motion parameters of the unmanned helicopter, an iterative least-squares method in the computing step rejecting the mismatches that arise during matching;
in the step of computing the global motion parameters, assuming for simplicity that between two consecutive frames the helicopter undergoes only horizontal motion and vertical motion, and obtaining the global motion parameters from a motion model; the iterative least-squares method solves for the scale parameters and translation parameters from which the helicopter's current relative velocity and relative height are computed, this relative velocity and relative height constituting the global motion parameters of the helicopter, the scale parameters describing the vertical motion and the translation parameters the horizontal motion;
when the global motion parameters meet the landing requirements, issuing the landing instruction, the landing instruction being the signal that the unmanned helicopter may land;
in the step of searching for the landing marker, any pre-designed pattern may be searched for, the only requirement being that the marker pattern differ clearly from the background.
2. The unmanned aircraft landing navigation method according to claim 1, characterized in that the vision sensor captures a visual image of the target, and that attitude, height and velocity information about the target are extracted from the visual image of the target according to the principles of computer vision, the target being the landing marker.
3. An unmanned aircraft landing navigation device, comprising:
a vision sensor, mounted on the unmanned helicopter, for acquiring ground image information relative to the helicopter;
a landing-marker search module, which receives the image information from the vision sensor and searches it for the landing marker;
a feature-point selection module, which determines a reference image from the landing-marker search information, demarcates a feature-point selection region in the reference image, and selects feature points;
a tracking-and-matching module, which tracks and matches the feature points in the subsequent image sequence relative to the reference image;
a global-motion-parameter calculation module, which computes the global motion parameters of the helicopter when the feature points are matched successfully;
a landing-instruction module, which issues the landing instruction when the global motion parameters meet the landing requirements;
characterized in that: the landing-marker search module can work with a marker of any pre-designed pattern, the only requirement being that the marker pattern differ clearly from the background;
the feature-point selection module selects only feature points within the marker region, not over the whole image;
in demarcating the feature-point selection region, the marker is extracted by threshold segmentation, the resulting binary image is dilated, the black region occupied by the marker is thereby obtained, and this black region is demarcated as the feature-point selection region; the feature points are the corner points of the landing marker;
the global-motion-parameter calculation module uses an iterative least-squares method to eliminate the mismatches arising during matching;
in the global-motion-parameter calculation module, the helicopter is assumed for simplicity to undergo only horizontal motion and vertical motion between two consecutive frames, and the global motion parameters are obtained from a motion model; the iterative least-squares method yields the scale parameters and translation parameters from which the helicopter's current relative velocity and relative height are computed, this relative velocity and relative height constituting the global motion parameters of the helicopter.
4. The unmanned aircraft landing navigation device according to claim 3, characterized in that the vision sensor captures a visual image of the target, and that attitude, height and velocity information about the target are extracted from the visual image of the target according to the principles of computer vision, the target being the landing marker.
5. The unmanned aircraft landing navigation device according to claim 3, characterized in that the vision sensor is an optical camera.
CNB2007100627431A 2007-01-16 2007-01-16 Unmanned aircraft landing navigation method and device thereof Expired - Fee Related CN100567898C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007100627431A CN100567898C (en) 2007-01-16 2007-01-16 Unmanned aircraft landing navigation method and device thereof

Publications (2)

Publication Number Publication Date
CN101000243A CN101000243A (en) 2007-07-18
CN100567898C true CN100567898C (en) 2009-12-09

Family

ID=38692302

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007100627431A Expired - Fee Related CN100567898C (en) 2007-01-16 2007-01-16 Pilotless plane landing navigation method and device thereof

Country Status (1)

Country Link
CN (1) CN100567898C (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667032B (en) * 2008-09-03 2012-09-05 中国科学院自动化研究所 Vision-based target tracking system using unmanned helicopter
EP2413096B1 (en) 2009-03-27 2019-02-06 Yu, Qifeng Ground-based videometrics guiding method for aircraft landing or unmanned aerial vehicles recovery
US8098176B2 (en) * 2010-05-07 2012-01-17 Honeywell International Inc. Systems and methods for adjusting landing gear alert envelope for offshore platforms and building-top landings
KR101157484B1 (en) * 2010-12-14 2012-06-20 주식회사 대한항공 UAV automatic recovery method
CN102156480A (en) * 2010-12-30 2011-08-17 清华大学 Autonomous landing method for an unmanned helicopter based on natural landmarks and visual navigation
CN102538782B (en) * 2012-01-04 2014-08-27 浙江大学 Helicopter landing guide device and method based on computer vision
US9043136B2 (en) * 2012-07-26 2015-05-26 Ge Aviation Systems, Llc Method for displaying suitability of future waypoint locations
CN103744430B (en) * 2013-02-07 2016-08-24 山东英特力光通信开发有限公司 Flight control method for a small unmanned helicopter
CN103822635B (en) * 2014-03-05 2016-12-07 北京航空航天大学 Real-time computation method for the in-flight spatial position of an unmanned aerial vehicle based on visual information
FR3024127B1 (en) * 2014-07-25 2016-08-26 Airbus Operations Sas AUTONOMOUS AUTOMATIC LANDING METHOD AND SYSTEM
CN104656664B (en) * 2015-02-16 2017-07-11 南京航空航天大学 Vehicle-mounted multi-rotor unmanned helicopter approach and landing guidance control system and method
CN104670517B (en) * 2015-03-09 2019-11-15 许木子 Helicopter landing fixation and takeoff ejection system
CN105955289A (en) * 2015-03-10 2016-09-21 李军 Unmanned plane automatic landing system
CN105676875A (en) * 2015-03-10 2016-06-15 张超 Automatic landing system of unmanned aerial vehicle
CN105021184B (en) * 2015-07-08 2017-10-24 西安电子科技大学 Vision-based pose estimation system and method for shipboard landing navigation on a mobile platform
EP3835912A1 (en) 2015-12-09 2021-06-16 SZ DJI Technology Co., Ltd. Systems and methods for auto-return
WO2018053861A1 (en) 2016-09-26 2018-03-29 SZ DJI Technology Co., Ltd. Methods and system for vision-based landing
CN110231835A (en) * 2019-07-04 2019-09-13 深圳市科卫泰实业发展有限公司 Machine-vision-based precision landing method for unmanned aerial vehicles

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Design and implementation of a small unmanned helicopter with target recognition and localization capability. Lu Guolei, Zhang Xiaolin, et al. Proceedings of the 2005 China Intelligent Automation Conference. 2005
Design and implementation of a small unmanned helicopter with target recognition and localization capability. Lu Guolei, Zhang Xiaolin, et al. Proceedings of the 2005 China Intelligent Automation Conference. 2005 *
A small multifunctional unmanned helicopter with visual navigation and target recognition. Nie Zhengang, Zhang Xiaolin, et al. Journal of Central South University (Natural Science Edition), Vol. 36. 2005
A small multifunctional unmanned helicopter with visual navigation and target recognition. Nie Zhengang, Zhang Xiaolin, et al. Journal of Central South University (Natural Science Edition), Vol. 36. 2005 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11022984B2 (en) * 2016-08-06 2021-06-01 SZ DJI Technology Co., Ltd. Automatic terrain evaluation of landing surfaces, and associated systems and methods
US11727679B2 (en) 2016-08-06 2023-08-15 SZ DJI Technology Co., Ltd. Automatic terrain evaluation of landing surfaces, and associated systems and methods

Also Published As

Publication number Publication date
CN101000243A (en) 2007-07-18

Similar Documents

Publication Publication Date Title
CN100567898C (en) Pilotless plane landing navigation method and device thereof
Levinson et al. Traffic light mapping, localization, and state detection for autonomous vehicles
CN109341694A An autonomous localization and navigation method for a mobile detection robot
CN103149939B A vision-based dynamic target tracking and localization method for unmanned aerial vehicles
CN102313547B (en) Vision navigation method of mobile robot based on hand-drawn outline semantic map
Shneier Road sign detection and recognition
CN103901895B (en) Target positioning method based on unscented FastSLAM algorithm and matching optimization and robot
CN113485441A (en) Distribution network inspection method combining unmanned aerial vehicle high-precision positioning and visual tracking technology
CN106407315A (en) Vehicle self-positioning method based on street view image database
CN109949361A An attitude estimation method for rotary-wing unmanned aerial vehicles based on monocular visual positioning
CN103625649B An autonomous landing area assessment method for aircraft
CN104864889A System and method for correcting robot odometry based on vision
CN101109640A (en) Unmanned aircraft landing navigation system based on vision
CN103680291A (en) Method for realizing simultaneous locating and mapping based on ceiling vision
CN104808685A (en) Vision auxiliary device and method for automatic landing of unmanned aerial vehicle
EP3460756B1 (en) Tracking system and method thereof
CN105550692A (en) Unmanned aerial vehicle automatic homing landing method based on landmark color and outline detection
CN109902610A (en) Traffic sign recognition method and device
CN112904395A (en) Mining vehicle positioning system and method
KR20100105126A (en) Apparatus for detecting lane and its method
CN117197676A (en) Target detection and identification method based on feature fusion
CN110568437A (en) Precise environment modeling method based on radar assistance
Hong et al. Fusing ladar and color image information for mobile robot feature detection and tracking
CN116758311A (en) Visual SLAM construction method suitable for low-light environment
Guo et al. Gesture recognition for Chinese traffic police

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091209

Termination date: 20120116