CN106886988A - Linear target detection method and system based on unmanned aerial vehicle (UAV) remote sensing - Google Patents
Linear target detection method and system based on unmanned aerial vehicle remote sensing
- Publication number
- CN106886988A CN106886988A CN201510919643.0A CN201510919643A CN106886988A CN 106886988 A CN106886988 A CN 106886988A CN 201510919643 A CN201510919643 A CN 201510919643A CN 106886988 A CN106886988 A CN 106886988A
- Authority
- CN
- China
- Prior art keywords
- image
- target
- point
- measured
- key point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to a linear target detection method and system based on UAV remote sensing. The detection method comprises the following steps. Step a: acquire UAV sequential images and extract feature points in the overlapping regions of the sequential images. Step b: match the extracted feature points, generate stereo pairs, and compute elevation information from the stereo pairs. Step c: identify the edges of the target to be measured from the elevation information and the color information of the images, and extract the target along those edges. According to the target characteristics, the invention formulates a data acquisition plan that yields sequential images with high overlap, performs feature point matching with the robust SIFT algorithm, and combines the acquired elevation data with image features to identify target edges, thereby extracting the linear target. The invention can greatly reduce cost and helps improve the timeliness and accuracy of linear target detection in imagery.
Description
Technical field
The invention belongs to the technical field of linear target detection, and more particularly relates to a linear target detection method and system based on UAV remote sensing.
Background technology
The extraction of linear targets such as irrigation canals, rivers, roads and power lines from remote sensing imagery is of great importance and has been widely studied. In practical applications, high-resolution satellite or aerial images are usually selected according to the size of the linear target for its detection and extraction. Many linear target detection methods for remote sensing imagery exist; most segment geometrically corrected images using two-dimensional color appearance features, and each has its applicability and limitations. For example, algorithms that detect the dominant straight lines of an image with the Hough transform rely on an exhaustive search, which demands large amounts of computation and memory and is therefore unsuitable for real-time processing. Linear target detection based on mathematical morphology, on region growing, or on neural networks is computationally simple and can run in real time, but is restricted to some extent by the underlying edge detection algorithm and cannot handle interrupted straight lines. Moreover, tracking and detection with high-resolution satellite and aerial imagery generally requires the purchase of large-area remote sensing images for identification and extraction, which incurs unnecessary image acquisition costs and a heavy data processing burden.
Summary of the invention
The invention provides a linear target detection method and system based on UAV remote sensing, intended to solve, at least to some extent, the above problems of the prior art.
An implementation of the invention is a linear target detection method based on UAV remote sensing, comprising the following steps:
Step a: acquire UAV sequential images and extract feature points in the overlapping regions of the sequential images;
Step b: match the extracted feature points, generate stereo pairs, and compute elevation information from the stereo pairs;
Step c: identify the edges of the target to be measured from the elevation information and the color information of the images, and extract the target along those edges.
The technical scheme adopted by the embodiment of the invention further includes, before step a: analyzing the characteristics of the target to be measured; setting the required image resolution and the forward and side overlap according to those characteristics; and formulating a flight plan and shooting rules according to the required resolution and overlap. The target characteristics include position, size and elevation; the flight plan includes the flight path, the flight height above ground and the flight speed; the shooting rules include the shooting mode.
The technical scheme adopted by the embodiment of the invention further includes, in step a: preprocessing the acquired sequential images. The preprocessing includes image positioning, analysis of the actual overlap, and image color enhancement.
The technical scheme adopted by the embodiment of the invention further includes: in step a, the feature point extraction performed in the overlapping regions of the sequential images comprises the following steps:
Step a1: search for local 3-D extrema of the DoG operator in the image scale space to preliminarily determine the positions and characteristic scales of the key points;
Step a2: expand the DoG operator at each key point in a second-order Taylor series and fit this quadratic expansion in the image scale space to accurately determine the position and characteristic scale of the key point;
Step a3: form the Hessian matrix from the first- and second-order derivatives of the DoG operator, compare the ratio of its largest to smallest eigenvalue against a set threshold, and reject low-contrast key points and unstable edge response points, improving the noise resistance of the key points;
Step a4: form a gradient orientation histogram from the gradient directions of the pixels in each key point's neighborhood, with gradient magnitudes weighted by a Gaussian, and fit a parabola to the values near the histogram maximum to accurately determine the principal orientation of the key point, forming the SIFT features.
The technical scheme adopted by the embodiment of the invention further includes, between step b and step c: obtaining the three-dimensional structure of the region around the target to be measured from the stereo pairs, and rectifying and mosaicking the images according to that three-dimensional structure.
The technical scheme adopted by the embodiment of the invention further includes: in step c, identifying the edges of the target to be measured from the elevation information and the color information of the images and extracting the target along those edges specifically comprises the following steps:
Step c1: segment the color information of the images with the mean-shift segmentation algorithm to obtain an image edge probability map;
Step c2: segment the elevation information with the mean-shift segmentation algorithm to obtain an elevation-discontinuity boundary probability map;
Step c3: identify the edges of the target to be measured from the image edge probability map and the elevation-discontinuity boundary probability map;
Step c4: extract the target to be measured along its edges.
Another technical scheme adopted by the embodiment of the invention is a linear target detection system based on UAV remote sensing, comprising an image acquisition module, a feature point extraction module, a feature point matching module, an elevation calculation module and a target extraction module. The image acquisition module acquires UAV sequential images; the feature point extraction module extracts feature points in the overlapping regions of the sequential images; the feature point matching module matches the extracted feature points and generates stereo pairs; the elevation calculation module computes elevation information from the stereo pairs; the target extraction module identifies the edges of the target to be measured from the elevation information and the color information of the images and extracts the target along those edges.
The technical scheme adopted by the embodiment of the invention further includes an image setting module, a planning module and an image correction module. The image setting module analyzes the characteristics of the target to be measured and sets the required image resolution and forward and side overlap accordingly; the planning module formulates the flight plan and shooting rules from the required image resolution and the forward and side overlap; the image correction module obtains the three-dimensional structure of the region around the target to be measured from the stereo pairs and rectifies and mosaics the images according to that structure.
The technical scheme adopted by the embodiment of the invention further includes: the feature point extraction module includes a key point search unit, a key point determination unit, a key point culling unit and a feature point generation unit.
The key point search unit searches for local 3-D extrema of the DoG operator in the image scale space to preliminarily determine the positions and characteristic scales of the key points.
The key point determination unit expands the DoG operator at each key point in a second-order Taylor series and fits this quadratic expansion in the image scale space to accurately determine the position and characteristic scale of the key point.
The key point culling unit forms the Hessian matrix from the first- and second-order derivatives of the DoG operator, compares the ratio of its largest to smallest eigenvalue against a set threshold, and rejects low-contrast key points and unstable edge response points, improving the noise resistance of the key points.
The feature point generation unit forms a gradient orientation histogram from the gradient directions of the pixels in each key point's neighborhood, with gradient magnitudes weighted by a Gaussian, and fits a parabola to the values near the histogram maximum to accurately determine the principal orientation of the key point, forming the SIFT features.
The technical scheme adopted by the embodiment of the invention further includes: the target extraction module includes a color information processing unit, an elevation information processing unit, an edge recognition unit and a target extraction unit.
The color information processing unit segments the color information of the images with the mean-shift segmentation algorithm to obtain an image edge probability map.
The elevation information processing unit segments the elevation information with the mean-shift segmentation algorithm to obtain an elevation-discontinuity boundary probability map.
The edge recognition unit identifies the edges of the target to be measured from the image edge probability map and the elevation-discontinuity boundary probability map.
The target extraction unit extracts the target to be measured along its edges.
The linear target detection method and system based on UAV remote sensing of the embodiments of the invention formulate a data acquisition plan according to the target characteristics to obtain sequential images with high overlap, perform feature point matching with the robust SIFT algorithm, and combine the acquired elevation data with image features to identify target edges, thereby extracting the linear target. The invention can greatly reduce cost and helps improve the timeliness and accuracy of linear target detection in imagery.
Brief description of the drawings
Fig. 1 is a flowchart of the linear target detection method based on UAV remote sensing of the embodiment of the invention;
Fig. 2 is a flowchart of the method for feature point extraction in image overlap regions of the embodiment of the invention;
Fig. 3 is a flowchart of the target extraction method of the embodiment of the invention;
Fig. 4 is a structural schematic diagram of the linear target detection system based on UAV remote sensing of the embodiment of the invention.
Specific embodiment
To make the purpose, technical scheme and advantages of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein only explain the invention and are not intended to limit it.
Referring to Fig. 1, a flowchart of the linear target detection method based on UAV remote sensing of the embodiment of the invention, the method comprises the following steps.
Step 100: analyze the characteristics of the target to be measured and set the required image resolution and forward and side overlap according to those characteristics.
In step 100, the target characteristics include position, size and elevation. The survey area coverage is determined from the position of the detection zone, and the optimal image resolution is determined from the size of the target to be measured, from which the required camera lens and the flight height above ground are derived. To guarantee sufficient overlap for later processing, and allowing for the platform instability caused by wind and airflow during flight, the embodiment of the invention designs the camera exposure point positions of the data acquisition phase for a forward overlap of 80% and a side overlap of 60%; in practical applications these values may be set or changed as the situation requires.
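The exposure-point design above (80% forward overlap, 60% side overlap) can be sketched numerically. This is a minimal illustration only; the camera size and ground sampling distance below are hypothetical example values, not parameters from the patent:

```python
def exposure_spacing(footprint_along_track_m: float, forward_overlap: float) -> float:
    """Distance between camera exposure points that yields the given forward overlap."""
    return footprint_along_track_m * (1.0 - forward_overlap)

def strip_spacing(footprint_across_track_m: float, side_overlap: float) -> float:
    """Distance between adjacent flight strips that yields the given side overlap."""
    return footprint_across_track_m * (1.0 - side_overlap)

# Example: a 4000 x 3000 px camera at a ground sampling distance of 0.05 m/px
# gives a footprint of 150 m along track by 200 m across track.
gsd = 0.05                      # m per pixel (assumed)
footprint_along = 3000 * gsd    # 150 m along track
footprint_across = 4000 * gsd   # 200 m across track

base = exposure_spacing(footprint_along, 0.80)   # 80% forward overlap -> 30 m
strip = strip_spacing(footprint_across, 0.60)    # 60% side overlap -> 80 m
print(base, strip)
```

Dividing the exposure spacing by the planned ground speed then gives the camera trigger interval used in the flight plan.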
Step 200: formulate the flight plan and shooting rules according to the required image resolution and the forward and side overlap.
In step 200, the flight plan includes the flight path, the flight height above ground and the flight speed; the shooting rules include the shooting mode.
Step 300: acquire UAV sequential images according to the flight plan, and preprocess the acquired images.
In step 300, the preprocessing of the sequential images includes image positioning, analysis of the actual overlap, and image color enhancement.
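The patent does not specify the color-enhancement algorithm; as one hedged illustration, a percentile-based linear contrast stretch per band is a common choice for this preprocessing step:

```python
import numpy as np

def contrast_stretch(band: np.ndarray, low_pct: float = 2.0, high_pct: float = 98.0) -> np.ndarray:
    """Linearly stretch one image band so the given percentiles map to 0..255."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

# Apply per band to an RGB image (a random low-contrast stand-in image here).
rng = np.random.default_rng(0)
img = rng.integers(60, 180, size=(64, 64, 3)).astype(np.uint8)
enhanced = np.stack([contrast_stretch(img[..., b]) for b in range(3)], axis=-1)
print(enhanced.min(), enhanced.max())  # stretched toward the full 0..255 range
```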
Step 400: extract SIFT features in the overlapping regions of the sequential images.
In step 400, the invention extracts feature points in the image overlap regions with the robust SIFT algorithm, which not only adapts well to complex geometric deformations of the images but also shows advantages in computation speed and positioning accuracy. Referring also to Fig. 2, a flowchart of the method for feature point extraction in image overlap regions of the embodiment of the invention, the method comprises the following steps:
Step 401: search for local 3-D extrema in the image scale space using the DoG (Difference of Gaussians) operator to preliminarily determine the positions and characteristic scales of the key points;
Step 402: expand the DoG operator at each key point in a second-order Taylor series and fit this quadratic expansion in the image scale space to accurately determine the position and characteristic scale of the key point;
Step 403: form the Hessian matrix from the first- and second-order derivatives of the DoG operator, compare the ratio of its largest to smallest eigenvalue against a set threshold, and reject low-contrast key points and unstable edge response points, improving the noise resistance of the key points;
Step 404: form a gradient orientation histogram from the gradient directions of the pixels in each key point's neighborhood, with gradient magnitudes weighted by a Gaussian, and fit a parabola to the values near the histogram maximum to accurately determine the principal orientation of the key point, ultimately forming the SIFT features.
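Steps 401 to 404 follow the standard SIFT keypoint pipeline. A minimal sketch of step 401 (detecting local 3-D extrema of the DoG operator in scale space) with NumPy/SciPy is given below; the scale ladder and the contrast threshold are illustrative assumptions, not the patent's parameters, and the later refinement steps (402 to 404) are omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema(image: np.ndarray, sigmas=(1.0, 1.4, 2.0, 2.8), thresh: float = 0.01):
    """Return (row, col, scale_index) of local 3-D extrema in a DoG stack (step 401)."""
    img = image.astype(np.float64)
    blurred = [gaussian_filter(img, s) for s in sigmas]
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(len(sigmas) - 1)])
    # A candidate key point is the max or min of its 3 x 3 x 3 neighborhood
    # across space and scale, with a contrast threshold to suppress noise.
    is_max = dog == maximum_filter(dog, size=3)
    is_min = dog == minimum_filter(dog, size=3)
    candidates = (is_max | is_min) & (np.abs(dog) > thresh)
    scale_idx, rows, cols = np.nonzero(candidates)
    return list(zip(rows, cols, scale_idx))

# A Gaussian blob on a dark background yields an extremum near its center.
y, x = np.mgrid[0:32, 0:32]
blob = np.exp(-((y - 16) ** 2 + (x - 16) ** 2) / (2 * 2.0 ** 2))
print(len(dog_extrema(blob)))
```

The scale index of each extremum gives the characteristic scale that step 402 then refines by fitting the quadratic Taylor expansion.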
Step 500: match the SIFT feature points and delete mismatched pairs to form stereo pairs.
In step 500, the SIFT feature points are matched as follows: the feature points are first coarsely matched with the BBF (Best Bin First) approximate nearest-neighbor algorithm to extract matched pairs of higher reliability; the matches are then refined with a Hough transform algorithm, rejecting the erroneous matched pairs caused by image self-similarity or symmetry.
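The coarse matching stage can be illustrated with a nearest-neighbor ratio test on descriptors, a common way of keeping only the higher-reliability pairs. Note this is a stand-in sketch: BBF accelerates the nearest-neighbor search with a k-d tree, whereas this illustration searches by brute force, and the toy 2-D descriptors replace real 128-D SIFT descriptors:

```python
import numpy as np

def ratio_test_matches(desc_a: np.ndarray, desc_b: np.ndarray, ratio: float = 0.8):
    """Match each descriptor in desc_a to desc_b, keeping a match only when the
    nearest neighbor is clearly closer than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = int(order[0]), int(order[1])
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Toy descriptors: rows 0 and 1 of desc_a have clear counterparts in desc_b,
# row 2 is ambiguous (two near-equal neighbors) and is rejected.
desc_a = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
desc_b = np.array([[1.0, 0.1], [0.1, 1.0], [0.5, 0.4], [0.5, 0.6]])
print(ratio_test_matches(desc_a, desc_b))  # [(0, 0), (1, 1)]
```

Rejecting ambiguous nearest neighbors is what suppresses the self-similarity and symmetry mismatches the patent mentions; the Hough-based refinement then enforces a consistent geometric transform across the surviving pairs.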
Step 600: compute elevation information from the stereo pairs.
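The patent does not give the elevation formula. In classical photogrammetry the elevation of a ground point can be recovered from its x-parallax in a normalized stereo pair via Z = B * f / p; a hedged sketch under standard vertical-photography assumptions, with hypothetical flight numbers:

```python
def elevation_from_parallax(flight_height_m: float, base_m: float,
                            focal_mm: float, parallax_mm: float) -> float:
    """Elevation of a ground point above the datum from its stereo x-parallax,
    using the normal-case relation Z = B * f / p (camera-to-point distance)."""
    z = base_m * focal_mm / parallax_mm   # metres, since f and p share units
    return flight_height_m - z

# Hypothetical numbers: 500 m flight height above the datum, 30 m stereo base
# (80% forward overlap of a 150 m along-track footprint), 35 mm focal length.
flight_height = 500.0
base = 30.0
focal = 35.0
p_datum = base * focal / flight_height           # parallax of a point on the datum
h = elevation_from_parallax(flight_height, base, focal, p_datum * 1.05)
print(round(h, 1))  # a point with 5% more parallax lies about 23.8 m above the datum
```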
Step 700: obtain the three-dimensional structure of the region around the target to be measured from the stereo pairs, and rectify and mosaic the images according to that three-dimensional structure.
Step 800: identify the edges of the target to be measured from the elevation information and the color information of the images, and extract the target along those edges.
In step 800, having obtained the elevation information, the invention combines it with the rectified two-dimensional color images and analyzes the feature space with the mean-shift algorithm: through mean-shift filtering, each data point in the feature space is replaced by its corresponding mode point. Referring also to Fig. 3, a flowchart of the target extraction method of the embodiment of the invention, the method comprises the following steps:
Step 801: segment the color information of the images with the mean-shift segmentation algorithm to obtain an image edge probability map;
Step 802: segment the elevation information with the mean-shift segmentation algorithm to obtain an elevation-discontinuity boundary probability map;
Step 803: identify the edges of the target to be measured from the image edge probability map and the elevation-discontinuity boundary probability map;
Step 804: extract the target to be measured along its edges.
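Steps 801 to 804 fuse two boundary cues. The sketch below illustrates step 803's fusion only, using normalized gradient magnitude of a color channel and of the elevation grid as stand-ins for the two probability maps; the weighted-sum fusion rule and threshold are assumptions, since the patent states only that both maps are used:

```python
import numpy as np

def edge_probability(field: np.ndarray) -> np.ndarray:
    """Normalized gradient magnitude as a simple boundary-probability proxy."""
    gy, gx = np.gradient(field.astype(np.float64))
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

def fuse_edges(color_prob, elev_prob, w_color=0.5, w_elev=0.5, thresh=0.5):
    """Weighted fusion of the image edge map and the elevation-discontinuity
    map, thresholded to a binary edge mask for the target to be measured."""
    fused = w_color * color_prob + w_elev * elev_prob
    return fused >= thresh

# Toy scene: a raised linear strip that is both brighter and higher than its
# surround, as a road or canal bank would be in the imagery and the elevations.
scene = np.zeros((20, 20)); scene[:, 8:12] = 1.0   # color/intensity channel
dem = np.zeros((20, 20));   dem[:, 8:12] = 5.0     # elevation channel
mask = fuse_edges(edge_probability(scene), edge_probability(dem))
print(mask[:, 7].all(), mask[:, :5].any())  # edges at the strip borders only
```

Because both cues must agree (or one must be very strong) to exceed the threshold, elevation discontinuities suppress purely photometric false edges, which is the stated motivation for combining the two maps.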
Referring to Fig. 4, a structural schematic diagram of the linear target detection system based on UAV remote sensing of the embodiment of the invention, the system includes an image setting module, a planning module, an image acquisition module, a feature point extraction module, a feature point matching module, an elevation calculation module, an image correction module and a target extraction module.
The image setting module analyzes the characteristics of the target to be measured and sets the required image resolution and forward and side overlap accordingly; the target characteristics include position, size and elevation. The embodiment of the invention designs the camera exposure point positions of the data acquisition phase for a forward overlap of 80% and a side overlap of 60%; in practical applications these values may be set or changed as the situation requires.
The planning module formulates the flight plan and shooting rules from the required image resolution and the forward and side overlap; the flight plan includes the flight path, the flight height above ground and the flight speed, and the shooting rules include the shooting mode.
The image acquisition module acquires UAV sequential images according to the flight plan and preprocesses them; the preprocessing includes image positioning, analysis of the actual overlap, and image color enhancement.
The feature point extraction module extracts SIFT features in the overlapping regions of the sequential images. Specifically, it includes a key point search unit, a key point determination unit, a key point culling unit and a feature point generation unit.
The key point search unit searches for local 3-D extrema of the DoG operator in the image scale space to preliminarily determine the positions and characteristic scales of the key points.
The key point determination unit expands the DoG operator at each key point in a second-order Taylor series and fits this quadratic expansion in the image scale space to accurately determine the position and characteristic scale of the key point.
The key point culling unit forms the Hessian matrix from the first- and second-order derivatives of the DoG operator, compares the ratio of its largest to smallest eigenvalue against a set threshold, and rejects low-contrast key points and unstable edge response points, improving the noise resistance of the key points.
The feature point generation unit forms a gradient orientation histogram from the gradient directions of the pixels in each key point's neighborhood, with gradient magnitudes weighted by a Gaussian, and fits a parabola to the values near the histogram maximum to accurately determine the principal orientation of the key point, ultimately forming the SIFT features.
The feature point matching module matches the SIFT feature points and deletes mismatched pairs to form stereo pairs. It matches the SIFT features as follows: the feature points are first coarsely matched with the BBF algorithm to extract matched pairs of higher reliability; the matches are then refined with a Hough transform algorithm, rejecting the erroneous matched pairs caused by image self-similarity or symmetry.
The elevation calculation module computes elevation information from the stereo pairs.
The image correction module obtains the three-dimensional structure of the region around the target to be measured from the stereo pairs and rectifies and mosaics the images according to that structure.
The target extraction module identifies the edges of the target to be measured from the elevation information and the color information of the images and extracts the target along those edges. Specifically, it includes a color information processing unit, an elevation information processing unit, an edge recognition unit and a target extraction unit.
The color information processing unit segments the color information of the images with the mean-shift segmentation algorithm to obtain an image edge probability map.
The elevation information processing unit segments the elevation information with the mean-shift segmentation algorithm to obtain an elevation-discontinuity boundary probability map.
The edge recognition unit identifies the edges of the target to be measured from the image edge probability map and the elevation-discontinuity boundary probability map.
The target extraction unit extracts the target to be measured along its edges.
The linear target detection method and system based on UAV remote sensing of the embodiments of the invention formulate a data acquisition plan according to the target characteristics to obtain sequential images with high overlap, perform feature point matching with the robust SIFT algorithm, and combine the acquired elevation data with image features to identify target edges, thereby extracting the linear target. The invention can greatly reduce cost and helps improve the timeliness and accuracy of linear target detection in imagery.
The foregoing is only the preferred embodiment of the invention and is not intended to limit it; any modification, equivalent replacement and improvement made within the spirit and principle of the invention shall fall within its protection scope.
Claims (10)
1. A linear target detection method based on UAV remote sensing, comprising the following steps:
Step a: acquiring UAV sequential images and extracting feature points in the overlapping regions of the sequential images;
Step b: matching the extracted feature points, generating stereo pairs, and computing elevation information from the stereo pairs;
Step c: identifying the edges of the target to be measured from the elevation information and the color information of the images, and extracting the target along those edges.
2. The method according to claim 1, characterized in that before step a the method further comprises: analyzing the characteristics of the target to be measured; setting the required image resolution and forward and side overlap according to those characteristics; and formulating a flight plan and shooting rules according to the required resolution and overlap; the target characteristics include position, size and elevation, the flight plan includes the flight path, the flight height above ground and the flight speed, and the shooting rules include the shooting mode.
3. The method according to claim 1, characterized in that step a further comprises: preprocessing the acquired sequential images; the preprocessing includes image positioning, analysis of the actual overlap, and image color enhancement.
4. The method according to claim 3, characterized in that in step a the feature point extraction performed in the overlapping regions of the sequential images comprises the following steps:
Step a1: searching for local 3-D extrema of the DoG operator in the image scale space to preliminarily determine the positions and characteristic scales of the key points;
Step a2: expanding the DoG operator at each key point in a second-order Taylor series and fitting this quadratic expansion in the image scale space to accurately determine the position and characteristic scale of the key point;
Step a3: forming the Hessian matrix from the first- and second-order derivatives of the DoG operator, comparing the ratio of its largest to smallest eigenvalue against a set threshold, and rejecting low-contrast key points and unstable edge response points, improving the noise resistance of the key points;
Step a4: forming a gradient orientation histogram from the gradient directions of the pixels in each key point's neighborhood, with gradient magnitudes weighted by a Gaussian, and fitting a parabola to the values near the histogram maximum to accurately determine the principal orientation of the key point, forming the SIFT features.
5. The method according to claim 4, characterized in that between step b and step c the method further comprises: obtaining the three-dimensional structure of the region around the target to be measured from the stereo pairs, and rectifying and mosaicking the images according to that three-dimensional structure.
6. The method according to claim 5, characterized in that in step c, identifying the edges of the target to be measured from the elevation information and the color information of the images and extracting the target along those edges specifically comprises the following steps:
Step c1: segmenting the color information of the images with the mean-shift segmentation algorithm to obtain an image edge probability map;
Step c2: segmenting the elevation information with the mean-shift segmentation algorithm to obtain an elevation-discontinuity boundary probability map;
Step c3: identifying the edges of the target to be measured from the image edge probability map and the elevation-discontinuity boundary probability map;
Step c4: extracting the target to be measured along its edges.
7. A linear target detection system based on UAV remote sensing, characterized by comprising an image acquisition module, a feature point extraction module, a feature point matching module, an elevation calculation module and a target extraction module; the image acquisition module acquires UAV sequential images; the feature point extraction module extracts feature points in the overlapping regions of the sequential images; the feature point matching module matches the extracted feature points and generates stereo pairs; the elevation calculation module computes elevation information from the stereo pairs; the target extraction module identifies the edges of the target to be measured from the elevation information and the color information of the images and extracts the target along those edges.
8. The system according to claim 7, characterised by further comprising an image setting module, a planning module and an image correction module; the image setting module is used to analyse the features of the target to be measured and to set the required image resolution and the forward (course) and side overlap according to those features; the planning module formulates the flight plan and shooting rules according to the required image resolution and overlap; the image correction module is used to obtain the three-dimensional structure of the region containing the target to be measured from the stereo image pairs, and to rectify and mosaic the images according to that three-dimensional structure.
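The planning module of claim 8 turns a required resolution and overlap into a flight plan. The photogrammetric relations it would rely on are standard; the sensor, lens, and altitude figures below are illustrative assumptions, not values disclosed in the patent.

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """Ground footprint of one pixel (metres per pixel) for a nadir view."""
    return pixel_pitch_m * altitude_m / focal_length_m

def shot_spacing(footprint_along_track_m, forward_overlap):
    """Distance between exposures that yields the requested forward
    (course) overlap fraction between consecutive images."""
    return footprint_along_track_m * (1.0 - forward_overlap)

# Illustrative (assumed) parameters: 4.4 um pixel pitch, 35 mm lens,
# 400 m flying height, 4000-pixel image side, 80 % forward overlap.
gsd = ground_sample_distance(4.4e-6, 0.035, 400.0)   # ~5 cm per pixel
footprint = gsd * 4000                               # ~200 m along track
spacing = shot_spacing(footprint, 0.80)              # ~40 m between shots
```

The same spacing formula applied across track with the side-overlap fraction fixes the distance between adjacent flight lines, which together with the shot spacing defines the shooting rule.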
9. The system according to claim 8, characterised in that the feature point extraction module comprises a key point search unit, a key point determination unit, a key point culling unit and a feature point generation unit;
the key point search unit is used to search the image scale space with the DoG operator for local three-dimensional extreme points of the scale images, preliminarily determining the positions and characteristic scales of key points;
the key point determination unit is used to apply a second-order Taylor expansion of the DoG operator at each key point, fitting the image scale space to determine the position and characteristic scale of the key point accurately;
the key point culling unit is used to form a Hessian matrix from the first- and second-order differentials of the DoG operator and to compare the ratio of the largest to the smallest eigenvalue of the Hessian matrix against a set threshold, rejecting low-contrast key points and unstable edge response points and thereby improving the noise resistance of the key points;
the feature point generation unit is used to form a gradient orientation histogram from the Gaussian-weighted gradients of the pixels in the neighbourhood of each key point, and to fit a parabola near the histogram maximum to determine the principal orientation of the key point accurately, forming the SIFT features.
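The culling unit's edge test matches the standard SIFT criterion: a point is rejected when tr(H)²/det(H) for the 2×2 spatial Hessian of the DoG layer exceeds (r+1)²/r, since that ratio grows with the ratio of principal curvatures. A minimal numpy sketch, with the Hessian from finite differences; the threshold r = 10 and the contrast cutoff 0.03 are the conventional choices from the SIFT literature, not values stated in the claim.

```python
import numpy as np

def is_stable_keypoint(dog, y, x, r=10.0, contrast_thresh=0.03):
    """SIFT-style key point culling: reject low-contrast points, then
    points lying on edges, where the ratio of principal curvatures
    (eigenvalues of the 2x2 spatial Hessian of the DoG layer) is large."""
    if abs(dog[y, x]) < contrast_thresh:
        return False                       # low-contrast point
    # Hessian entries from finite differences of the DoG layer
    dxx = dog[y, x + 1] - 2 * dog[y, x] + dog[y, x - 1]
    dyy = dog[y + 1, x] - 2 * dog[y, x] + dog[y - 1, x]
    dxy = (dog[y + 1, x + 1] - dog[y + 1, x - 1]
           - dog[y - 1, x + 1] + dog[y - 1, x - 1]) / 4.0
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:
        return False                       # curvatures of opposite sign
    return tr * tr / det < (r + 1.0) ** 2 / r   # edge-response test

# A blob-like extremum passes; a straight ridge (pure edge response) fails.
yy, xx = np.mgrid[0:9, 0:9]
blob = np.exp(-((yy - 4) ** 2 + (xx - 4) ** 2) / 4.0)
ridge = np.exp(-((xx - 4) ** 2) / 4.0)     # varies in x only
```

For a linear target this test matters: points along the target's straight edges have one large and one near-zero curvature and are culled, leaving key points anchored at corners and junctions that match stably across the sequence.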
10. The system according to claim 9, characterised in that the target extraction module comprises a colour information processing unit, an elevation information processing unit, an edge recognition unit and a target extraction unit;
the colour information processing unit is used to segment the colour information of the images with the mean-shift segmentation algorithm, obtaining an image edge probability map;
the elevation information processing unit is used to segment the elevation information with the mean-shift segmentation algorithm, obtaining an elevation-discontinuity boundary probability map;
the edge recognition unit is used to identify the edge information of the target to be measured from the image edge probability map and the elevation-discontinuity boundary probability map;
the target extraction unit is used to extract the target to be measured according to its edge information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510919643.0A CN106886988B (en) | 2015-12-11 | 2015-12-11 | Linear target detection method and system based on unmanned aerial vehicle remote sensing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106886988A true CN106886988A (en) | 2017-06-23 |
CN106886988B CN106886988B (en) | 2020-07-24 |
Family
ID=59173743
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510919643.0A Active CN106886988B (en) | 2015-12-11 | 2015-12-11 | Linear target detection method and system based on unmanned aerial vehicle remote sensing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106886988B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101127078A (en) * | 2007-09-13 | 2008-02-20 | Beihang University | Unmanned aerial vehicle vision image matching method based on ant colony intelligence |
CN101916452A (en) * | 2010-07-26 | 2010-12-15 | Institute of Remote Sensing Applications, Chinese Academy of Sciences | Method for automatically stitching unmanned aerial vehicle remote sensing images based on flight control information |
CN102254319A (en) * | 2011-04-19 | 2011-11-23 | Zhongke Jiudu (Beijing) Spatial Information Technology Co., Ltd. | Method for carrying out change detection on multi-level segmented remote sensing images |
CN102999892A (en) * | 2012-12-03 | 2013-03-27 | Donghua University | Intelligent fusion method for depth images based on area shades and red green blue (RGB) images |
CN103714549A (en) * | 2013-12-30 | 2014-04-09 | Nanjing University | Stereo image object segmentation method based on rapid local matching |
US20140266773A1 (en) * | 2012-10-23 | 2014-09-18 | Bounce Imaging, Inc. | Remote surveillance system |
CN104867126A (en) * | 2014-02-25 | 2015-08-26 | Xidian University | Method for registering synthetic aperture radar images with change areas based on point pair constraints and Delaunay triangulation |
CN105139355A (en) * | 2015-08-18 | 2015-12-09 | Shandong Zhongjin Rongshi Culture Technology Co., Ltd. | Method for enhancing depth images |
Non-Patent Citations (2)
Title |
---|
Li Leida: "Feature Extraction Methods and Applications in Image Quality Assessment", 30 June 2015 *
Guo Yunyan: "Research on Multi-Feature Fusion Tracking of Targets in Video Sequences", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109104574A (en) * | 2018-10-09 | 2018-12-28 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Method and system for masking privacy regions with a pan-tilt camera |
CN110232389A (en) * | 2019-06-13 | 2019-09-13 | Inner Mongolia University | Stereoscopic vision navigation method based on invariance of green crop feature extraction |
CN110232389B (en) * | 2019-06-13 | 2022-11-11 | Inner Mongolia University | Stereoscopic vision navigation method based on invariance of green crop feature extraction |
CN111310624A (en) * | 2020-02-05 | 2020-06-19 | Tencent Technology (Shenzhen) Co., Ltd. | Occlusion recognition method and device, computer equipment and storage medium |
CN111310624B (en) * | 2020-02-05 | 2023-11-21 | Tencent Technology (Shenzhen) Co., Ltd. | Occlusion recognition method and device, computer equipment and storage medium |
CN112258419A (en) * | 2020-11-02 | 2021-01-22 | Wuxi Ailide Intelligent Technology Co., Ltd. | Method for enhancing image edge information by weighting |
CN112258419B (en) * | 2020-11-02 | 2023-08-11 | Wuxi Ailide Intelligent Technology Co., Ltd. | Method for enhancing image edge information by weighting |
CN113155293A (en) * | 2021-04-06 | 2021-07-23 | Inner Mongolia University of Technology | UAV-based human body remote sensing temperature measurement, monitoring and recognition system |
CN113155293B (en) * | 2021-04-06 | 2022-08-12 | Inner Mongolia University of Technology | UAV-based human body remote sensing temperature measurement, monitoring and recognition system |
Also Published As
Publication number | Publication date |
---|---|
CN106886988B (en) | 2020-07-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104867126B (en) | Synthetic aperture radar image registration method for changed regions based on point-pair constraints and Delaunay triangulation | |
CN107067415B (en) | Target localization method based on image matching | |
CN106886988A (en) | Linear target detection method and system based on unmanned aerial vehicle remote sensing | |
CN104778721B (en) | Distance measurement method for salient targets in binocular images | |
Zhou et al. | Seamless fusion of LiDAR and aerial imagery for building extraction | |
CN110415342A (en) | Three-dimensional point cloud reconstruction device and method based on multi-sensor fusion | |
CN103268358B (en) | Construction and update method for a multi-source control point image database | |
CN106485690A (en) | Automatic registration and fusion method for point cloud data and optical images based on point features | |
CN103218787B (en) | Automatic acquisition method for reference marks in multi-source heterogeneous remote sensing images | |
CN107481315A (en) | Monocular vision three-dimensional environment reconstruction method based on Harris-SIFT-BRIEF algorithms | |
CN107480727A (en) | Fast unmanned aerial vehicle image matching method combining SIFT and ORB | |
CN107560592B (en) | Precise distance measurement method for photoelectric tracker linkage targets | |
Li et al. | RIFT: Multi-modal image matching based on radiation-invariant feature transform | |
CN103295232B (en) | SAR image registration method based on lines and regions | |
CN109146948A (en) | Vision-based quantification of crop growth phenotypic parameters and analysis of their correlation with yield | |
CN103822616A (en) | Remote sensing image matching method combining feature segmentation with terrain relief constraints | |
CN104376334B (en) | Pedestrian comparison method based on multi-scale feature fusion | |
CN108195736B (en) | Method for extracting vegetation canopy gap fraction from three-dimensional laser point clouds | |
CN104574401A (en) | Image registration method based on parallel-line matching | |
CN110298227A (en) | Deep-learning-based vehicle detection method for unmanned aerial vehicle images | |
CN104778668B (en) | Thin cloud removal method for remote sensing images based on visible-band spectral statistics | |
CN109829423A (en) | Infrared imaging detection method for frozen lakes | |
CN103646389A (en) | Automatic matching and extraction method for SAR slant-range images based on a geometric model | |
CN105184786A (en) | Floating-point-based triangle feature description method | |
CN105956544A (en) | Remote sensing image road intersection extraction method based on structural index features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||