CN107543539A - Location information acquisition method for an unmanned aerial vehicle, and unmanned aerial vehicle - Google Patents


Publication number
CN107543539A
CN107543539A (Application CN201610496595.3A)
Authority
CN
China
Prior art keywords
unmanned aerial vehicle
image
feature
module
extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610496595.3A
Other languages
Chinese (zh)
Other versions
CN107543539B (en)
Inventor
左大宁
殷羲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Leadcore Technology Co Ltd
Datang Semiconductor Design Co Ltd
Original Assignee
Leadcore Technology Co Ltd
Datang Semiconductor Design Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leadcore Technology Co Ltd, Datang Semiconductor Design Co Ltd filed Critical Leadcore Technology Co Ltd
Priority to CN201610496595.3A priority Critical patent/CN107543539B/en
Publication of CN107543539A publication Critical patent/CN107543539A/en
Application granted granted Critical
Publication of CN107543539B publication Critical patent/CN107543539B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to the communications field and discloses a location information acquisition method for an unmanned aerial vehicle (UAV), and a UAV. In the embodiments of the present invention, static target features and motion features are extracted from the images captured by the UAV's camera and used to compute real-time data for the UAV's current position; this computed real-time data is then combined with the positioning information obtained by a positioning device carried by the UAV to obtain the UAV's current position information. In this way, image processing techniques are brought into the acquisition of UAV position information, improving its real-time performance and accuracy.

Description

Location information acquisition method for an unmanned aerial vehicle, and unmanned aerial vehicle
Technical field
The present invention relates to the communications field, and in particular to a location information acquisition method for an unmanned aerial vehicle (UAV), and a UAV.
Background art
Research on UAVs has advanced considerably in recent years, and their fields of application continue to broaden. In military use, UAVs offer short warning times, good concealment, strong reconnaissance capability, long cruise times, low cost and small operational losses; they are widely used for reconnaissance, attack, electronic countermeasures and other military missions, and also serve as target drones for testing. In civilian use, they are employed in many fields, including communication relay, meteorological detection, disaster monitoring, pesticide spraying, geological exploration, ground mapping, traffic control and border control.
The inventors have recognized that conventional UAVs rely primarily on inertial navigation and the Global Positioning System (GPS). However, inertial devices accumulate error during navigation and are overly sensitive to initial values, while a GPS fix is not always available; even when it is, its precision often cannot meet the demands of UAV navigation.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a location information acquisition method for a UAV, and a UAV, so that image processing techniques are brought into the acquisition of the UAV's position information, greatly improving the real-time performance and accuracy of the acquired position information.
To solve the above technical problem, embodiments of the present invention provide a location information acquisition method for a UAV, comprising:
obtaining an image captured by a camera mounted on the UAV;
extracting static target features from the image;
performing dynamic analysis on the image to extract motion features;
computing real-time data for the UAV's current position based on the extracted static target features and motion features;
obtaining the UAV's current position information according to the computed real-time data and the positioning information obtained by a positioning device carried by the UAV.
Embodiments of the present invention also provide a UAV, comprising:
an image acquisition module, for obtaining images captured by a camera mounted on the UAV;
a static target feature extraction module, for extracting static target features from the images obtained by the acquisition module;
a motion feature extraction module, for performing dynamic analysis on the images obtained by the acquisition module and extracting motion features;
a computing module, for computing real-time data for the UAV's current position based on the static target features and the motion features extracted by the two extraction modules;
a position information acquisition module, for obtaining the UAV's current position information according to the real-time data computed by the computing module and the positioning information obtained by the positioning device carried by the UAV.
Compared with the prior art, embodiments of the present invention extract static target features and motion features from the images captured by the UAV's camera, compute real-time data for the UAV's current position, and then combine this computed real-time data with the positioning information obtained by the positioning device carried by the UAV to obtain the UAV's current position information. In this way, image processing techniques are brought into the acquisition of UAV position information, improving its real-time performance and accuracy.
In addition, after the image is obtained and before the static target features are extracted, the method further comprises: preprocessing the image to remove noise interference; the static target features are then extracted from the preprocessed image.
Images captured by the camera mounted on the UAV are easily affected by noise, so after an image is obtained it must first be preprocessed to remove noise interference, which effectively prevents noise from degrading data accuracy.
In addition, extracting the static target features of the image specifically comprises: extracting geometric features of the image; extracting point features of the image; and fusing the extracted geometric features and point features to obtain the static target features. Fusing the geometric features and point features after extraction yields more accurate static target features of the image.
In addition, the geometric features of the image are extracted by Hough-transform processing, and the point features of the image are extracted by the Harris algorithm.
The Hough transform is used to extract the geometric features because geometric shapes such as straight lines, once converted into an appropriate coordinate system, can be represented by points, and Hough-transform extraction better suppresses noise interference. The Harris algorithm is used to extract the point features because it offers both high accuracy and high real-time performance.
Brief description of the drawings
Fig. 1 is a flow chart of a location information acquisition method for a UAV according to a first embodiment of the invention;
Fig. 2 is a schematic diagram of a Kalman filter estimator according to the first embodiment of the invention;
Fig. 3 is a flow chart of the Kalman algorithm according to the first embodiment of the invention;
Fig. 4 is a flow chart of a location information acquisition method for a UAV according to a second embodiment of the invention;
Fig. 5 is a schematic diagram of the Hough transform process according to the second embodiment of the invention;
Fig. 6 is a flow chart of the FCM clustering algorithm according to the second embodiment of the invention;
Fig. 7 is a flow chart of a location information acquisition method for a UAV according to a third embodiment of the invention;
Fig. 8 is a schematic structural diagram of a UAV according to a fourth embodiment of the invention;
Fig. 9 is a schematic structural diagram of a UAV according to a fifth embodiment of the invention;
Fig. 10 is a schematic structural diagram of a UAV according to a sixth embodiment of the invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are explained in detail below with reference to the accompanying drawings. Those skilled in the art will understand, however, that many technical details are given in each embodiment merely to help the reader better understand the application; the technical solution claimed in this application can still be realized through many variations and modifications of the following embodiments, even without these details.
The first embodiment of the present invention relates to a location information acquisition method for a UAV. The specific flow is shown in Fig. 1.
In step 101, an image captured by the camera mounted on the UAV is obtained.
Specifically, capturing images with the camera on the UAV makes it easy to capture motion information. Moreover, a camera is a passive sensor that relies on natural visible or infrared light, which is an important advantage for covert military reconnaissance.
In step 102, static target features are extracted from the image.
Specifically, when guiding small-range flight or landing of the UAV, static markers are often used. These may be specially designed in advance, such as markers of a particular shape or color placed at the landing point for precision landing, or they may be existing features such as roads, buildings, doors and windows, power lines, or even the horizon. Moreover, the raw information obtained by the camera exists in the form of images with a large amount of redundancy, so image processing techniques are needed to extract the useful information. Advances in image processing and camera hardware have made it possible to incorporate computer vision into technologies for obtaining UAV position information (i.e. UAV navigation). Computer vision is used to obtain navigation-relevant information from the image by extracting static or moving targets; here it is used to extract the static target features of the image.
In step 103, dynamic analysis is performed on the image to extract motion features.
Specifically, during long-duration, wide-range flight of the UAV, the landmark features used are mostly moving, for example moving vehicles on the ground, or other UAVs in a formation. Computer vision is again needed to obtain navigation-relevant information from the image; here it is used to extract the motion features of the image.
In step 104, real-time data for the UAV's current position is computed based on the extracted static target features and motion features.
Optical-flow features and static features are used for velocity estimation: optical flow detects the movement of bright and dark points in the image to judge the speed of pixels relative to the UAV and, combined with the static features of the image, this naturally yields the UAV's speed relative to the ground, and hence its relative position. Therefore, by performing state estimation and data fusion on the extracted static target features and motion features, real-time data parameters for the UAV's current position can be obtained.
In step 105, the UAV's current position information is obtained according to the computed real-time data and the positioning information obtained by the positioning device carried by the UAV.
Specifically, given the real-time data parameters for the UAV's current position obtained from the image target features, the Kalman algorithm can be used, combined with some prior knowledge, to estimate the UAV's displacement state and obtain its current position information. In this embodiment the prior knowledge is the positioning information obtained by the positioning device carried by the UAV, which can be an inertial navigation system and GPS. That is, the Kalman algorithm performs data fusion on the UAV's real-time data parameters, the inertial navigation system parameters and the GPS parameters to obtain the UAV's current position information. The algorithm for fusing the three kinds of data with the Kalman algorithm is as follows:
Let x̂i(k|k) denote the Kalman filter estimate of the state based on the observation information of camera sensor i, with corresponding estimation error covariance matrix Pi(k|k), for i = 1, 2, …, N. Assuming the estimation errors of the N estimators are uncorrelated, the optimal Kalman filter data fusion criterion is given by equation (1):
x̂(k|k) = P(k|k) · [ P1(k|k)⁻¹ x̂1(k|k) + … + PN(k|k)⁻¹ x̂N(k|k) ]   equation (1)
where the corresponding estimation error covariance matrix is
P(k|k) = [ P1(k|k)⁻¹ + … + PN(k|k)⁻¹ ]⁻¹
It can be proved that P(k|k) ≤ Pi(k|k) for i = 1, 2, …, N, where P(k|k) is the estimation error covariance of the fused estimate x̂(k|k).
The Kalman filter estimator based on the i-th camera sensor is shown in Fig. 2, and the flow of the Kalman algorithm that fuses the UAV's real-time data parameters with the inertial navigation system parameters and the GPS parameters is shown in Fig. 3.
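The fusion criterion of equation (1) can be illustrated with a small sketch. The code below is not the patent's implementation; it is a minimal scalar (1-D) illustration of covariance-weighted fusion, and the sensor names and example variance values are assumptions chosen purely for illustration.

```python
def fuse_estimates(estimates, variances):
    """Scalar analogue of equation (1): weight each sensor's estimate by
    the inverse of its error variance. The fused variance 1/sum(1/Pi)
    never exceeds any individual sensor's variance."""
    inv_sum = sum(1.0 / p for p in variances)
    fused_var = 1.0 / inv_sum
    fused = fused_var * sum(x / p for x, p in zip(estimates, variances))
    return fused, fused_var

# Hypothetical 1-D position estimates (metres) and error variances for the
# camera pipeline, the inertial navigation system and GPS.
camera, ins, gps = 10.2, 10.8, 9.5
position, variance = fuse_estimates([camera, ins, gps], [0.25, 1.0, 4.0])
```

The fused estimate leans toward the camera reading (the smallest variance), and the fused variance is smaller than that of any single sensor, mirroring the property P(k|k) ≤ Pi(k|k) stated above.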
It can be seen that in this embodiment, computer vision techniques extract the static target features and motion features of the images captured by the UAV's camera; real-time data for the UAV's current position is computed from the extracted features; and this computed real-time data is then fused, through state estimation and data fusion, with the positioning information obtained from the inertial navigation system and GPS carried by the UAV to obtain the UAV's current position information. This compensates for the navigation defects caused by the accumulated error of inertial devices and by GPS interruptions during UAV navigation, and effectively improves the real-time performance and accuracy of UAV navigation.
The second embodiment of the present invention relates to a location information acquisition method for a UAV. The second embodiment is a further improvement on the first, the main improvement being that it specifies the algorithms and technical solutions used in the process of extracting the static target features of the image, and in the process of performing dynamic analysis on the image and extracting the motion features. The specific flow is shown in Fig. 4.
Step 401 in this embodiment is identical to step 101 of the first embodiment and, to avoid repetition, is not described again here.
In step 402, static target features are extracted from the image; this comprises the following sub-steps:
In sub-step 4021, geometric features are extracted from the image.
Specifically, the geometric features of the image are extracted by Hough-transform processing. The Hough transform of a straight line is given by equation (2):
x·cos(θ) + y·sin(θ) = r   equation (2)
In equation (2), the angle θ is the angle between r and the X axis, and r is the perpendicular distance from the origin to the line. Any point (x, y) on the line satisfies this expression, with r and θ constant.
After the Hough transform is complete, the Hough-space result is previewed, the maximum Hough value is found, a threshold is set, the image is transformed back into the RGB three-primary-color space, out-of-bounds values are handled, and the image after Hough-transform processing is displayed.
Further, the Hough transform process of equation (2) is shown in Fig. 5.
The angle θ is swept from minus 90 degrees to 90 degrees, divided into many intervals; for every pixel (x, y) at every angle θ, ρ is computed and the number of occurrences of each ρ value is accumulated, as in equation (3):
ρ = x·cos(θ) + y·sin(θ)   equation (3)
Performing the computation of equation (3) for each pixel (x, y) yields the Hough transform matrix H, transforming the image into Hough space. The maximum is then found in Hough space, the threshold is set to 0.5*max(H(:)), i.e. half the maximum, and finally the result is mapped back from Hough space to the original RGB space.
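The ρ-accumulation just described can be sketched in a few lines. The following is a minimal voting-accumulator illustration of equation (3), not the patent's implementation; on real images an image-processing library's Hough routine would normally be used, and the synthetic collinear points are an assumption for demonstration.

```python
import math

def hough_accumulate(points, theta_steps=180):
    """Vote in (theta, r) space per equation (3): for each point and each
    theta in [-90, 90) degrees, r = x*cos(theta) + y*sin(theta) is rounded
    and the corresponding accumulator cell is incremented."""
    acc = {}
    for x, y in points:
        for k in range(theta_steps):
            theta = math.radians(-90.0 + 180.0 * k / theta_steps)
            r = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(k, r)] = acc.get((k, r), 0) + 1
    return acc

# 20 collinear points on the line y = x; its normal form is
# x*cos(-45 deg) + y*sin(-45 deg) = 0, i.e. the cell theta = -45 deg, r = 0.
points = [(i, i) for i in range(20)]
acc = hough_accumulate(points)
peak_votes = max(acc.values())
```

All 20 points vote into the same accumulator cell, which is exactly the peak that thresholding at half the maximum would retain.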
In sub-step 4022, point features are extracted from the image.
Specifically, the point features of the image are extracted by the Harris algorithm. The idea of Harris is to find the maxima of E(x, y), where (x, y) denotes the direction of greatest gray-value change; the gray-value change in the direction perpendicular to it is then obtained, and a conclusion is drawn from the comparison. To speed up the computation, Harris applies a Taylor expansion to the gray-value change term in E(x, y). Let D = I(x+u, y+v) − I(u, v); expanding D in a Taylor series about the origin (0, 0) finally yields equation (4):
E(u, v) ≈ [u, v] · H · [u, v]ᵀ,  where H = Σ [ Ix², IxIy; IxIy, Iy² ]   equation (4)
Here Ix and Iy are the first-order gray-value differences; the difference in the x direction is approximated as 1/2·[f(x+1, y) − f(x−1, y)]. The H matrix of each pixel can thus easily be obtained. The purpose of obtaining H is that, as Harris found by computation, the two orthogonal directions of gray-value change of E(x, y) are the directions of the eigenvectors of H, and the corresponding eigenvalues are the amounts of gray-value change: the solution is the eigenvector (x1, y1) of the largest eigenvalue λ1 of H, with E(x1, y1) = λ1, and by matrix theory the direction perpendicular to (x1, y1) is the eigenvector of the other eigenvalue λ2 of H, with E(x2, y2) = λ2. Therefore it suffices to obtain the two eigenvalues of H to determine whether the point is a corner.
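A small numeric sketch of the Harris measure may help. The code below is illustrative only: it uses the central-difference gradient 1/2·[f(x+1, y) − f(x−1, y)] mentioned above, and the common response R = det(H) − k·trace(H)², which is large only when both eigenvalues of H are large; the constant k = 0.04 and the synthetic test image are assumptions, not part of the patent.

```python
def harris_response(img, x, y, k=0.04):
    """Harris corner measure at pixel (x, y): central differences give the
    gradients, the 3x3 neighbourhood accumulates the structure matrix
    H = [[Sxx, Sxy], [Sxy, Syy]], and R = det(H) - k*trace(H)^2."""
    sxx = sxy = syy = 0.0
    for j in range(y - 1, y + 2):
        for i in range(x - 1, x + 2):
            ix = 0.5 * (img[j][i + 1] - img[j][i - 1])
            iy = 0.5 * (img[j + 1][i] - img[j - 1][i])
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    det = sxx * syy - sxy * sxy
    return det - k * (sxx + syy) ** 2

# Synthetic 10x10 image: a bright square occupying rows/columns >= 4 gives
# a corner at (4, 4), a vertical edge at (4, 6) and a flat region at (1, 1).
img = [[1.0 if (r >= 4 and c >= 4) else 0.0 for c in range(10)] for r in range(10)]
r_corner = harris_response(img, 4, 4)
r_edge = harris_response(img, 4, 6)
r_flat = harris_response(img, 1, 1)
```

R is positive at the corner (both eigenvalues large), negative on the edge (one eigenvalue dominates), and zero in the flat region, which is how thresholding on R isolates corner points.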
In sub-step 4023, the extracted geometric features and point features are fused to obtain the static target features.
Specifically, feature fusion of the extracted geometric features and point features optimizes the feature extraction result and enriches the description of the image features.
In step 403, dynamic analysis is performed on the image to extract motion features; this comprises the following sub-steps:
In sub-step 4031, a velocity vector is assigned to each pixel in the image, forming an image motion field, and the characteristic optical flow is computed.
Specifically, a velocity vector is assigned to each pixel in the image, which forms an image motion field. At a particular instant of the motion, points on the image correspond one-to-one with points on the three-dimensional object, a correspondence given by the projection relation. In the 2D+t case (and likewise in 3D and higher dimensions), suppose the brightness of the pixel at (x, y, t) is I(x, y, t), and the voxel moves by Δx, Δy, Δt between two image frames. A brightness-constancy conclusion can then be drawn, as in equation (5):
I(x, y, t) = I(x+Δx, y+Δy, t+Δt)   equation (5)
Assuming the movement is very small, a Taylor series expansion gives equation (6):
I(x+Δx, y+Δy, t+Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt + higher-order terms   equation (6)
from which equation (7) can be deduced:
(∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt = 0   equation (7)
It can finally be concluded, dividing by Δt, that equation (8) holds:
(∂I/∂x)Vx + (∂I/∂y)Vy + ∂I/∂t = 0   equation (8)
where Vx and Vy are the x and y components of the optical-flow velocity of I(x, y, t).
Dynamic analysis of the image is then performed according to the velocity vector of each pixel.
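Equation (8) is one equation in the two unknowns (Vx, Vy) per pixel, so in practice it is solved in least squares over a small window, as in the LK algorithm mentioned in sub-step 4033. The sketch below is a minimal illustration under that assumption, not the patent's implementation; the synthetic quadratic intensity pattern and one-pixel shift are chosen purely for demonstration, and the recovery is only approximate because the pattern is not linear.

```python
def lucas_kanade(f0, f1, x, y, win=2):
    """Least-squares solution of Ix*u + Iy*v + It = 0 (equation (8)) over
    a (2*win+1)^2 window: accumulate the normal equations and solve the
    2x2 system for the flow (u, v) at pixel (x, y)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = 0.5 * (f0[j][i + 1] - f0[j][i - 1])  # spatial gradients
            iy = 0.5 * (f0[j + 1][i] - f0[j - 1][i])
            it = f1[j][i] - f0[j][i]                  # temporal difference
            a11 += ix * ix; a12 += ix * iy; a22 += iy * iy
            b1 += ix * it; b2 += iy * it
    det = a11 * a22 - a12 * a12
    u = (-b1 * a22 + b2 * a12) / det
    v = (-b2 * a11 + b1 * a12) / det
    return u, v

# Synthetic frames: a smooth quadratic pattern shifted right by one pixel.
n = 11
f0 = [[float(i * i + j * j) for i in range(n)] for j in range(n)]
f1 = [[f0[j][max(i - 1, 0)] for i in range(n)] for j in range(n)]
u, v = lucas_kanade(f0, f1, 5, 5)
```

The recovered flow at the window centre is close to (1, 0), matching the one-pixel rightward shift of the pattern.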
In sub-step 4032, optical-flow clustering is performed based on the fuzzy C-means (FCM) clustering algorithm to obtain the optical-flow computation result.
Specifically, the flow of the FCM clustering algorithm is shown in Fig. 6. In the optical-flow clustering based on the FCM algorithm:
First, the parameters used in the clustering process are initialized: the number of cluster classes is specified as C, with 2 ≤ C ≤ n, where n is the number of data points; the iteration stopping threshold is specified as ε; the initial value of the cluster centers is specified as V0; and the iteration counter b is initialized as b = 0.
Second, the partition matrix U is computed or updated according to equation (9):
uij = 1 / Σk=1..C ( dij / dkj )^(2/(m−1))   equation (9)
where dij is the distance from data point j to cluster center i and m is the fuzziness exponent.
Then, the cluster centers V(b+1) are updated according to equation (10):
vi = Σj uijᵐ · xj / Σj uijᵐ   equation (10)
Finally, if ||V(b) − V(b+1)|| < ε, the algorithm stops and outputs the partition matrix U and the cluster centers V; otherwise b = b+1 and the above steps are repeated.
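The FCM iteration can be sketched directly from equations (9) and (10). The code below is a minimal 1-D illustration, not the patent's implementation; the fuzziness exponent m = 2, the linear initialisation of the centres, and the example optical-flow magnitudes are all assumptions chosen for demonstration.

```python
def fcm(data, c=2, m=2.0, eps=1e-6, max_iter=100):
    """Fuzzy C-means on 1-D data: alternate the membership update
    (equation (9)) with the centre update (equation (10)) until the
    centres move less than eps."""
    lo, hi = min(data), max(data)
    centers = [lo + k * (hi - lo) / (c - 1) for k in range(c)]
    for _ in range(max_iter):
        # Membership update, equation (9): u_ij = 1 / sum_k (d_ij/d_kj)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]  # avoid division by zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # Centre update, equation (10): v_i = sum_j u_ij^m x_j / sum_j u_ij^m
        new_centers = [
            sum(u[k][i] ** m * data[k] for k in range(len(data)))
            / sum(u[k][i] ** m for k in range(len(data)))
            for i in range(c)
        ]
        if max(abs(a - b) for a, b in zip(centers, new_centers)) < eps:
            centers = new_centers
            break
        centers = new_centers
    return sorted(centers)

# Hypothetical optical-flow magnitudes: a slow group (~1) and a fast group (~10).
flows = [0.9, 1.0, 1.1, 1.2, 9.8, 10.0, 10.1, 10.3]
c_slow, c_fast = fcm(flows)
```

On this toy data the two centres converge near the two flow-magnitude groups, separating slow background flow from fast foreground flow.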
In sub-step 4033, moving targets are detected according to the optical-flow computation result to obtain the motion features.
Specifically, the image sequence is computed with the Lucas-Kanade (LK) algorithm, and moving targets are detected according to the optical-flow computation result.
Steps 404 to 405 are identical to steps 104 to 105 of the first embodiment and, to avoid repetition, are not described again here.
This embodiment not only achieves the technical effect of the first embodiment; in addition, fusing the extracted geometric features and point features after extraction yields more accurate static target features of the image. Moreover, the Hough transform is used to extract the geometric features because geometric shapes such as straight lines, once converted into a preferred coordinate system, can be represented by points, and Hough-transform extraction better suppresses noise interference; the Harris algorithm is used to extract the point features because it offers both high accuracy and high real-time performance.
The third embodiment of the present invention relates to a location information acquisition method for a UAV. The third embodiment is a further improvement on the first, the main improvement being that after the image is obtained and before the static target features are extracted, the obtained image is first preprocessed to remove noise interference, so that in the subsequent extraction the static target features are extracted from the preprocessed image. The specific flow is shown in Fig. 7.
Step 701 in this embodiment is identical to step 101 of the first embodiment and, to avoid repetition, is not described again here.
In step 702, the obtained image is preprocessed to remove noise interference.
Specifically, images captured by the camera mounted on the UAV are easily affected by noise, so after an image is obtained it must first be preprocessed to remove noise interference, effectively preventing noise from degrading data accuracy.
In this embodiment, noise interference is removed by median filtering; median filtering is introduced below using the pixels of a 3 × 3 window as an example.
The maximum, median and minimum are computed for each column of the 3 × 3 window (the pixels being labeled P0 to P8 row by row), giving three groups of data: the maximum group, the median group and the minimum group. The calculation is represented as follows, where max denotes the maximum operation, med the median operation, and min the minimum operation:
Maximum group: Max0 = max[P0, P3, P6], Max1 = max[P1, P4, P7], Max2 = max[P2, P5, P8]
Median group: Med0 = med[P0, P3, P6], Med1 = med[P1, P4, P7], Med2 = med[P2, P5, P8]
Minimum group: Min0 = min[P0, P3, P6], Min1 = min[P1, P4, P7], Min2 = min[P2, P5, P8]
In this embodiment of the present invention, noise interference is removed by the above median filtering scheme.
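The column decomposition above can be turned into a runnable sketch using the standard recombination step of this fast-median scheme, under which the window median equals med(max of the minima, med of the medians, min of the maxima); the recombination and the example pixel values are assumptions for illustration, not text from the patent.

```python
def median3x3(p):
    """3x3 window median via the column decomposition: compute the max,
    median and min of each column (P0,P3,P6), (P1,P4,P7), (P2,P5,P8),
    then return med(max of minima, med of medians, min of maxima)."""
    def med3(a, b, c):
        return sorted((a, b, c))[1]
    cols = [(p[k], p[3 + k], p[6 + k]) for k in range(3)]
    maxs = [max(col) for col in cols]
    meds = [med3(*col) for col in cols]
    mins = [min(col) for col in cols]
    return med3(max(mins), med3(*meds), min(maxs))

# A window whose centre is one impulse-noise pixel (255); the filter
# replaces the centre with the window median, suppressing the impulse.
window = [10, 12, 11, 13, 255, 12, 11, 10, 13]
filtered = median3x3(window)
```

The result agrees with the brute-force median (the middle element of the sorted window) while needing far fewer comparisons, which is the point of the max/med/min grouping.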
Steps 703 to 706 are identical to steps 102 to 105 of the first embodiment and, to avoid repetition, are not described again here.
By first performing noise-reduction preprocessing on the images captured by the camera before feature extraction, this embodiment can further improve the accuracy of the image processing.
The fourth embodiment of the present invention relates to a UAV comprising: an image acquisition module 10, a static target feature extraction module 11, a motion feature extraction module 12, a computing module 13 and a position information acquisition module 14, as shown in Fig. 8.
The image acquisition module 10 obtains images captured by the camera mounted on the UAV;
the static target feature extraction module 11 extracts static target features from the images obtained by the acquisition module;
the motion feature extraction module 12 performs dynamic analysis on the images obtained by the acquisition module and extracts motion features;
the computing module 13 computes real-time data for the UAV's current position based on the static target features extracted by the extraction module 11 and the motion features extracted by the module 12;
the position information acquisition module 14 obtains the UAV's current position information according to the real-time data computed by the computing module and the positioning information obtained by the positioning device carried by the UAV.
It can be seen that this embodiment is the system embodiment corresponding to the first embodiment and can be implemented in cooperation with it. The relevant technical details mentioned in the first embodiment remain valid in this embodiment and, to avoid repetition, are not repeated here; correspondingly, the relevant technical details mentioned in this embodiment also apply to the first embodiment.
It should be noted that each module involved in this embodiment is a logic module. In practical applications a logic unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. Moreover, to highlight the innovative parts of the present invention, units less closely related to solving the technical problem addressed by the invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
The fifth embodiment of the present invention relates to a UAV. The fifth embodiment is a further improvement on the fourth, the main improvement being that the static target feature extraction module 11 further comprises a geometric feature extraction sub-module 110, a point feature extraction sub-module 111 and a fusion sub-module 112, and the motion feature extraction module 12 further comprises a characteristic optical-flow computation sub-module 120, an optical-flow clustering sub-module 121 and a moving target detection sub-module 122, as shown in Fig. 9.
The geometric feature extraction sub-module 110 extracts the geometric features of the image;
the point feature extraction sub-module 111 extracts the point features of the image;
the fusion sub-module 112 fuses the geometric features extracted by sub-module 110 with the point features extracted by sub-module 111 to obtain the static target features;
the characteristic optical-flow computation sub-module 120 assigns a velocity vector to each pixel in the image, forming an image motion field, and computes the characteristic optical flow;
the optical-flow clustering sub-module 121 performs optical-flow clustering based on the fuzzy C-means (FCM) clustering algorithm to obtain the optical-flow computation result;
the moving target detection sub-module 122 detects moving targets according to the optical-flow computation result of sub-module 121 to obtain the motion features.
Since the second embodiment corresponds to this embodiment, the two can be implemented in cooperation. The relevant technical details mentioned in the second embodiment remain valid in this embodiment, and the technical effects achievable in the second embodiment can likewise be realized here; to avoid repetition, they are not described again. Correspondingly, the relevant technical details mentioned in this embodiment also apply to the second embodiment.
The sixth embodiment of the present invention relates to a UAV. The sixth embodiment is a further improvement on the fourth, the main improvement being the addition of a denoising module 15, as shown in Fig. 10.
The denoising module 15 removes noise interference from the image.
Specifically, images captured by the camera mounted on the UAV are easily affected by noise, so after an image is obtained it must first be preprocessed to remove noise interference, effectively preventing noise from degrading data accuracy.
Since the third embodiment corresponds to this embodiment, the two can be implemented in cooperation. The relevant technical details mentioned in the third embodiment remain valid in this embodiment, and the technical effects achievable in the third embodiment can likewise be realized here; to avoid repetition, they are not described again. Correspondingly, the relevant technical details mentioned in this embodiment also apply to the third embodiment.
Those skilled in the art will appreciate that all or part of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art will understand that the above embodiments are specific implementations of the present invention, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present invention.
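The embodiments above extract geometric features by Hough transform processing and point features by the Harris algorithm. As an illustration of the latter, here is a compact NumPy sketch of the Harris corner response, not the patent's implementation: it uses finite-difference gradients and a plain 3×3 box window in place of the usual Gaussian weighting.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor built from finite-difference image gradients."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)           # gradients along rows and columns
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # 3x3 box sum with zero padding (stand-in for a Gaussian window).
        p = np.pad(a, 1)
        return sum(p[1 + dy : a.shape[0] + 1 + dy, 1 + dx : a.shape[1] + 1 + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr

# A white square on a black background: its corners score highest.
img = np.zeros((12, 12))
img[3:9, 3:9] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

Along straight edges the structure tensor is nearly rank-one, so det(M) vanishes and R goes negative; only at corners, where both gradient directions are present, does R peak — which is why Harris points complement the line-like geometric features from the Hough transform.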

Claims (12)

  1. A position information acquisition method for an unmanned aerial vehicle (UAV), characterized by comprising:
    acquiring an image captured by a camera mounted on the UAV;
    extracting a static target feature of the image;
    performing dynamic analysis on the image to extract a motion feature;
    calculating real-time data of the current position of the UAV based on the extracted static target feature and motion feature;
    obtaining current position information of the UAV according to the calculated real-time data and positioning information obtained by a positioning device carried by the UAV.
  2. The position information acquisition method for a UAV according to claim 1, characterized in that, after the image is acquired and before the static target feature is extracted, the method further comprises:
    preprocessing the image to remove noise interference;
    wherein, in extracting the static target feature, the static target feature is extracted from the preprocessed image.
  3. The position information acquisition method for a UAV according to claim 1, characterized in that extracting the static target feature of the image specifically comprises:
    extracting geometric features of the image;
    extracting point features of the image;
    fusing the extracted geometric features and point features to obtain the static target feature.
  4. The position information acquisition method for a UAV according to claim 3, characterized in that, in extracting the geometric features of the image, the geometric features are extracted by Hough transform processing; and
    in extracting the point features of the image, the point features are extracted by the Harris algorithm.
  5. The position information acquisition method for a UAV according to claim 1, characterized in that performing dynamic analysis on the image and extracting the motion feature specifically comprises:
    assigning a velocity vector to each pixel in the image to form an image motion field, and performing characteristic optical flow computation;
    performing optical flow clustering based on the fuzzy C-means (FCM) clustering algorithm to obtain an optical flow computation result;
    detecting a moving target according to the optical flow computation result to obtain the motion feature.
  6. The position information acquisition method for a UAV according to any one of claims 1 to 5, characterized in that the positioning device carried by the UAV comprises an inertial navigation system and a Global Positioning System (GPS).
  7. The position information acquisition method for a UAV according to claim 6, characterized in that obtaining the current position information of the UAV specifically comprises:
    using the Kalman algorithm to perform data fusion on the calculated real-time data, the positioning information of the inertial navigation system, and the positioning information of the GPS, to obtain the current position information of the UAV.
  8. An unmanned aerial vehicle (UAV), characterized by comprising:
    an image acquisition module, configured to acquire an image captured by a camera mounted on the UAV;
    a static target feature extraction module, configured to extract a static target feature of the image in the acquisition module;
    a motion feature extraction module, configured to perform dynamic analysis on the image in the acquisition module and extract a motion feature;
    a calculation module, configured to calculate real-time data of the current position of the UAV based on the static target feature extracted by the extraction module and the motion feature extracted by the analysis module;
    a position information acquisition module, configured to obtain the current position information of the UAV according to the real-time data calculated by the calculation module and positioning information obtained by a positioning device carried by the UAV.
  9. The UAV according to claim 8, characterized by further comprising:
    a denoising module, configured to preprocess the image in the image acquisition module to remove noise interference;
    wherein the static target feature extraction module and the motion feature extraction module perform feature extraction on the image preprocessed by the denoising module.
  10. The UAV according to claim 8, characterized in that the static target feature extraction module specifically comprises:
    a geometric feature extraction submodule, configured to extract geometric features of the image;
    a point feature extraction submodule, configured to extract point features of the image;
    a fusion submodule, configured to fuse the geometric features extracted by the geometric feature extraction submodule and the point features extracted by the point feature extraction submodule to obtain the static target feature.
  11. The UAV according to claim 10, characterized in that the geometric feature extraction submodule extracts the geometric features of the image by Hough transform processing;
    and the point feature extraction submodule extracts the point features of the image by the Harris algorithm.
  12. The UAV according to claim 8, characterized in that the motion feature extraction module specifically comprises:
    a characteristic optical flow calculation submodule, configured to assign a velocity vector to each pixel in the image to form an image motion field and perform characteristic optical flow calculation;
    an optical flow clustering submodule, configured to perform optical flow clustering based on the fuzzy C-means (FCM) clustering algorithm to obtain an optical flow computation result;
    a moving target detection submodule, configured to detect a moving target according to the optical flow computation result of the optical flow clustering submodule to obtain the motion feature.
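Claim 7 fuses the visual real-time data with inertial-navigation and GPS fixes using the Kalman algorithm. Below is a minimal sketch of the underlying predict/update cycle only — a 1-D constant-velocity filter fed position-only fixes. A real fusion filter for this patent would stack visual, inertial, and GPS observations into the measurement model; the model matrices and noise values here are illustrative assumptions.

```python
import numpy as np

def kalman_fuse(z, dt=1.0, q=0.1, r=1.0):
    """Minimal 1-D constant-velocity Kalman filter.
    State x = [position, velocity]^T; z is a sequence of position fixes
    (standing in for GPS measurements). Returns the filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[z[0]], [0.0]])           # start at first fix, zero velocity
    P = np.eye(2)
    est = []
    for zk in z:
        # Predict step: propagate state and covariance through the model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the new position fix.
        y = np.array([[zk]]) - H @ x        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        est.append(float(x[0, 0]))
    return est

# A target moving at 1 m/s observed with exact fixes: although the filter
# starts with zero velocity, it locks on to the motion within a few steps.
t = np.arange(20.0)
est = kalman_fuse(t)
```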
CN201610496595.3A 2016-06-29 2016-06-29 Unmanned aerial vehicle position information acquisition method and unmanned aerial vehicle Active CN107543539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610496595.3A CN107543539B (en) 2016-06-29 2016-06-29 Unmanned aerial vehicle position information acquisition method and unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN107543539A true CN107543539A (en) 2018-01-05
CN107543539B CN107543539B (en) 2021-06-01

Family

ID=60966067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610496595.3A Active CN107543539B (en) 2016-06-29 2016-06-29 Unmanned aerial vehicle position information acquisition method and unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN107543539B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020130792A1 (en) * 2000-11-09 2002-09-19 Christoph Schaefer Wire detection procedure for low-flying aircraft
CN101532841A * 2008-12-30 2009-09-16 华中科技大学 Method for navigating and positioning aircraft based on landmark capture and tracking
CN102426019A (en) * 2011-08-25 2012-04-25 航天恒星科技有限公司 Unmanned aerial vehicle scene matching auxiliary navigation method and system
CN102788579A (en) * 2012-06-20 2012-11-21 天津工业大学 Unmanned aerial vehicle visual navigation method based on SIFT algorithm
CN103822631A * 2014-02-28 2014-05-28 哈尔滨伟方智能科技开发有限责任公司 Rotor-oriented positioning method and apparatus combining satellite navigation and optical flow field vision
CN104359482A (en) * 2014-11-26 2015-02-18 天津工业大学 Visual navigation method based on LK optical flow algorithm
CN104729506A (en) * 2015-03-27 2015-06-24 北京航空航天大学 Unmanned aerial vehicle autonomous navigation positioning method with assistance of visual information
CN105021184A (en) * 2015-07-08 2015-11-04 西安电子科技大学 Pose estimation system and method for visual carrier landing navigation on mobile platform

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110686664A (en) * 2018-07-04 2020-01-14 上海峰飞航空科技有限公司 Visual positioning system, unmanned aerial vehicle and method for self-detecting position of unmanned aerial vehicle
CN109459777A * 2018-11-21 2019-03-12 北京木业邦科技有限公司 Robot, robot positioning method and storage medium thereof
CN109975844A * 2019-03-25 2019-07-05 浙江大学 GPS signal anti-drift method based on optical flow method
CN109975844B (en) * 2019-03-25 2020-11-24 浙江大学 GPS signal anti-drift method based on optical flow method
WO2021038485A1 (en) * 2019-08-27 2021-03-04 Indian Institute Of Science System and method for autonomous navigation of unmanned aerial vehicle (uav) in gps denied environment
CN113570546A (en) * 2021-06-16 2021-10-29 北京农业信息技术研究中心 Fan running state detection method and device
CN113570546B (en) * 2021-06-16 2023-12-05 北京农业信息技术研究中心 Fan running state detection method and device

Also Published As

Publication number Publication date
CN107543539B (en) 2021-06-01

Similar Documents

Publication Publication Date Title
Fu et al. PL-VINS: Real-time monocular visual-inertial SLAM with point and line features
Dai et al. Rgb-d slam in dynamic environments using point correlations
Zou et al. StructVIO: Visual-inertial odometry with structural regularity of man-made environments
Zhou et al. StructSLAM: Visual SLAM with building structure lines
Sidla et al. Pedestrian detection and tracking for counting applications in crowded situations
CN107543539A Unmanned aerial vehicle position information acquisition method and unmanned aerial vehicle
Yin et al. Dynam-SLAM: An accurate, robust stereo visual-inertial SLAM method in dynamic environments
Hwangbo et al. Visual-inertial UAV attitude estimation using urban scene regularities
CN111666871B (en) Unmanned aerial vehicle-oriented improved YOLO and SIFT combined multi-small target detection tracking method
CN110825101A (en) Unmanned aerial vehicle autonomous landing method based on deep convolutional neural network
Dusha et al. Fixed-wing attitude estimation using computer vision based horizon detection
Fan et al. Vision algorithms for fixed-wing unmanned aerial vehicle landing system
CN108844538A (en) Unmanned aerial vehicle obstacle avoidance waypoint generation method based on vision/inertial navigation
Zhai et al. Target Detection of Low‐Altitude UAV Based on Improved YOLOv3 Network
Kuang et al. A real-time and robust monocular visual inertial slam system based on point and line features for mobile robots of smart cities toward 6g
Liu et al. A joint optical flow and principal component analysis approach for motion detection
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
CN115683109B (en) Visual dynamic obstacle detection method based on CUDA and three-dimensional grid map
WO2023030062A1 (en) Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program
Irmisch et al. Robust visual-inertial odometry in dynamic environments using semantic segmentation for feature selection
CN113158816B (en) Construction method of visual odometer quadric road sign for outdoor scene object
Zhang et al. Vision-based uav positioning method assisted by relative attitude classification
Feng et al. Improved monocular visual-inertial odometry with point and line features using adaptive line feature extraction
Moore et al. A method for the visual estimation and control of 3-DOF attitude for UAVs
Yang et al. Locator slope calculation via deep representations based on monocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant