CN106485737A - Automatic registration and fusion method for point cloud data and optical imagery based on line features - Google Patents

Info

Publication number
CN106485737A
CN106485737A (application CN201510526379.4A)
Authority
CN
China
Prior art keywords
line
cloud data
point
feature
optical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510526379.4A
Other languages
Chinese (zh)
Inventor
吕芳
任侃
韶阿俊
潘佳惠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201510526379.4A
Publication of CN106485737A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an automatic registration and fusion method for point cloud data and optical imagery based on line features, comprising: filtering the point cloud data with a mathematical morphology method; computing the depth maps of the optical image and the point cloud using an adaptive support-weight dense stereo matching algorithm and a Delaunay triangulation algorithm, respectively; extracting line features from the depth maps with a Hough transform and performing coarse matching with the angle between two lines and the line-length ratio as similarity measures; rejecting mismatched point pairs with a two-step RANSAC algorithm to obtain the camera position parameter estimate; and mapping the color texture between the point cloud data and the optical image to obtain the fused three-dimensional image. The method requires no prior knowledge such as GPS/INS data, is robust, and achieves a high degree of automatic registration.

Description

Automatic registration and fusion method for point cloud data and optical imagery based on line features
Technical field
The invention belongs to the field of improved multi-source (heterologous) image registration algorithms, and in particular provides an automatic registration and fusion method for point cloud data and optical imagery based on line features.
Background technology
According to the registration primitives used, registration methods for LiDAR point cloud data and optical imagery can be divided into intensity-based methods and feature-based methods, and feature-based registration is further divided into point-feature, line-feature, and region-feature registration. Point features are the most widely used primitives in feature-based image registration, but point-feature registration has limitations and cannot adapt to all common applications. Line features are ground-object characteristics one level higher than point features: meaningful line features are easier to extract than meaningful point features, and both natural and man-made environments are rich in linear ground objects such as roads, river edges, and target contour lines.
Traditional line-feature registration algorithms, however, have obvious defects. Extracting straight-line segments from contour points and determining the transformation parameters from the slopes of the segments and the distribution of the edge contours was a major breakthrough in line-feature-based registration and laid the foundation for later line-based registration methods; but for three-dimensional reconstruction of cities, where buildings are densely distributed and similar in shape, matching errors arise as soon as several segments with similar position and slope appear. Another line-feature method forms groups of three straight lines from the edges of buildings and houses and uses the angles between them to compute the similarity measure, and finally corrects the registration result with a two-step RANSAC method; this method uses line features quite comprehensively, but its complexity is high, its running time is long, and for aerial remote sensing images of buildings with similar, regular edges the number of matchable features drops and the mismatch rate rises. The algorithms above generally rely on initial-position priors such as GPS/INS; for registration between three-dimensional data sets their complexity is very high, and they depend on the accuracy of the three-dimensional reconstruction.
Content of the invention
It is an object of the invention to provide an improved multi-source image registration algorithm, in particular an automatic registration and fusion method for point cloud data and optical imagery based on line features.
The technical solution that realizes the object of the invention is as follows:
An automatic registration and fusion method for point cloud data and optical imagery based on line features comprises the following steps:
Step 1: filter the point cloud data;
Step 2: compute the depth maps of the optical image and the point cloud data using an adaptive support-weight dense stereo matching algorithm and a Delaunay triangulation algorithm, respectively;
Step 3: extract the line features of the depth maps with a Hough transform, and perform coarse matching with the angle between two lines and the line-length ratio as the similarity measures;
Step 4: reject mismatched point pairs with a two-step RANSAC algorithm and obtain the camera position parameter estimate;
Step 5: map the color texture between the point cloud data and the optical image to obtain the fused three-dimensional image.
Compared with the prior art, the present invention has notable advantages:
(1) the automatic registration and fusion method for point cloud data and optical imagery based on line features requires no prior knowledge such as GPS/INS data, is highly automated, and is robust;
(2) the registration is carried out on the basis of the visible-light depth map, which improves the registration accuracy;
(3) the improved line-feature registration reduces the amount of computation while guaranteeing registration precision, and achieves higher registration accuracy.
Brief description
Fig. 1 is the flow chart of the automatic registration and fusion method for point cloud data and optical imagery based on line features of the present invention.
Fig. 2(a) is the point cloud data of the embodiment of the present invention, Fig. 2(b) is the visible-light image, and Fig. 2(c) is the fused three-dimensional image.
Specific embodiment
With reference to Fig. 1, the automatic registration and fusion method for point cloud data and optical imagery based on line features of the present invention comprises the following steps:
Step 1: filter the point cloud data with a mathematical morphology method.
Morphological filtering is a bottom-up filter built on the idea of local height-difference discontinuities; its purpose is to remove noise points, redundancy, and gross elevation errors. Erosion and dilation are the basic operations of morphological image processing, used to shrink (erosion) or grow (dilation) the shapes in an image. For gray-level images, erosion and dilation select the minimum or the maximum pixel value within the neighborhood defined by the structuring element.
Let the LiDAR point cloud observation sequence be p(x, y, z). The dilation of the elevation z at (x, y) is defined as

$$d_p(x, y) = \max_{(x_p,\, y_p) \in w} (z_p)$$

where $(x_p, y_p, z_p)$ denotes a point in the neighborhood window w of p, and w is also called the structuring-element size. The neighborhood window can be a one-dimensional line or a two-dimensional rectangle or other shape; the result of the dilation is the highest elevation value within the neighborhood window.
Erosion, the dual operator of dilation, is defined as

$$e_p(x, y) = \min_{(x_p,\, y_p) \in w} (z_p)$$
Combining dilation and erosion yields the opening and closing operations that can be applied directly to LiDAR filtering: an opening operation first erodes and then dilates the data, and a closing operation does the opposite.
After a vertical profile of the LiDAR point cloud is filtered with an opening operation using a linear structuring element, tree targets whose size is smaller than the structuring element are removed by the erosion step, while buildings are reconstructed by the dilation step.
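Purely as an illustration, a minimal sketch of such a morphological ground filter on a rasterized height grid, using SciPy's gray-scale morphology; the window size and height threshold below are assumptions, not values from the patent:

```python
import numpy as np
from scipy.ndimage import grey_opening

def morphological_filter(height_grid, window=15, height_threshold=2.0):
    """Remove small above-ground objects (e.g. trees) from a rasterized
    LiDAR height grid by gray-scale opening (erosion, then dilation)."""
    # Opening with a flat structuring element: objects narrower than
    # `window` cells are eroded away; larger structures such as buildings
    # are restored by the subsequent dilation step.
    opened = grey_opening(height_grid, size=(window, window))
    # Cells rising more than `height_threshold` above the opened surface
    # are treated as non-ground (noise points, vegetation, gross errors).
    non_ground = (height_grid - opened) > height_threshold
    filtered = height_grid.copy()
    filtered[non_ground] = opened[non_ground]
    return filtered
```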
Step 2: compute the depth maps of the optical image and the point cloud data using an adaptive support-weight dense stereo matching algorithm and a Delaunay triangulation algorithm, respectively. Specifically:
For the point cloud, the discrete three-dimensional point set P′ obtained after morphological filtering is first gridded by constructing a Delaunay triangulation of the LiDAR points; a gray-level color mapping according to the height value z then converts the gridded LiDAR data into a depth map.
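A minimal sketch of this point-cloud-to-depth-map step, assuming SciPy's Delaunay-based linear interpolation stands in for the gridding described above; the grid resolution is an assumed parameter:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def point_cloud_depth_map(points_xyz, grid_res=1.0):
    """Rasterize filtered LiDAR points into a gray-level depth map D_P.
    LinearNDInterpolator builds a Delaunay triangulation internally and
    interpolates the height z linearly inside each triangle."""
    xy, z = points_xyz[:, :2], points_xyz[:, 2]
    interp = LinearNDInterpolator(xy, z)
    xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), grid_res)
    ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), grid_res)
    gx, gy = np.meshgrid(xs, ys)
    zg = interp(gx, gy)
    zg = np.nan_to_num(zg, nan=np.nanmin(zg))   # fill cells outside the hull
    # Map heights to 8-bit gray levels (the gray-level color mapping).
    depth = 255 * (zg - zg.min()) / (zg.max() - zg.min() + 1e-9)
    return depth.astype(np.uint8)
```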
For the optical image I, its depth map is obtained with a dense stereo matching method based on inter-channel color correlation and adaptive support weights, which finds the optimal stereo match between I and its neighboring image I′ (the two share an overlapping region). The method combines color similarity, Euclidean-distance similarity, and inter-channel color correlation to determine the weight of each pixel in the match window. To eliminate the influence of illumination differences on the matching result, the candidate points are first rank-transformed before the matching cost is computed. Finally, a three-step optimization is applied to the initial disparity map to remove disparity errors caused by occlusion, repetitive texture, and the like, yielding the final disparity result. Specifically:
Step 2-1: For image I, the RGB three-channel color vectors are $I_R(x, y)$, $I_G(x, y)$ and $I_B(x, y)$, where x, y are the row and column coordinates of the pixel. The inter-channel correlation vector is

$$\mathrm{rin} = [\mathrm{rce}_1, \mathrm{rce}_2, \mathrm{rce}_3]$$

where

$$\mathrm{rce}_1 = I_R(x, y) - I_G(x, y), \quad \mathrm{rce}_2 = I_G(x, y) - I_B(x, y), \quad \mathrm{rce}_3 = I_B(x, y) - I_R(x, y)$$
The support weight between the pixel p to be matched and a pixel q is

$$w(p, q) = f(\Delta c_{pq}, \Delta d_{pq}, \Delta r_{pq}) = f(\Delta c_{pq}) \cdot f(\Delta d_{pq}) \cdot f(\Delta r_{pq})$$

where $\Delta c_{pq}$, $\Delta d_{pq}$ and $\Delta r_{pq}$ are respectively the color difference, the Euclidean-distance difference, and the inter-channel correlation difference between p and q, and $f(\cdot)$ expresses the strength of each difference:

$$f(\Delta c_{pq}) = \exp\!\left(-\frac{\Delta c_{pq}}{\tau_c}\right), \quad f(\Delta d_{pq}) = \exp\!\left(-\frac{\Delta d_{pq}}{\tau_d}\right), \quad f(\Delta r_{pq}) = \exp\!\left(-\frac{\Delta r_{pq}}{\tau_r}\right)$$

where $\tau_c$, $\tau_d$, $\tau_r$ are fixed constants; in this embodiment $\tau_c = 5$, $\tau_d = 17.5$, $\tau_r = 5$.
The support weight between p and q is therefore

$$w(p, q) = \exp\!\left(-\left(\frac{\Delta c_{pq}}{\tau_c} + \frac{\Delta d_{pq}}{\tau_d} + \frac{\Delta r_{pq}}{\tau_r}\right)\right)$$
To eliminate the matching errors caused by different illumination in the two images, the candidate points are rank-transformed before the matching cost is computed. The rank transform is

$$R(q) = \begin{cases} -2, & I_p - I_q < -\tau_1 \\ -1, & -\tau_1 \le I_p - I_q \le -\tau_2 \\ 0, & -\tau_2 < I_p - I_q \le \tau_2 \\ 1, & \tau_2 < I_p - I_q \le \tau_1 \\ 2, & I_p - I_q > \tau_1 \end{cases}$$

where $I_p$ is the gray value of the current pixel, $I_q$ the gray value of a pixel in the window, and $\tau_1$, $\tau_2$ are the rank classification thresholds, fixed constants in practice; in this embodiment $\tau_1 = 9$, $\tau_2 = 2$.
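A minimal sketch of this five-level rank transform, assuming 8-bit gray images and the thresholds given above:

```python
import numpy as np

def rank_transform(gray, p_row, p_col, window=5, tau1=9, tau2=2):
    """Five-level rank transform of the window centered on pixel p, as
    defined above; only the class of I_p - I_q is kept, which makes the
    subsequent cost computation robust to illumination differences."""
    half = window // 2
    patch = gray[p_row - half:p_row + half + 1,
                 p_col - half:p_col + half + 1].astype(np.int32)
    diff = int(gray[p_row, p_col]) - patch      # I_p - I_q for each q
    ranks = np.zeros_like(diff)                 # class 0: -tau2 < diff <= tau2
    ranks[diff < -tau1] = -2
    ranks[(diff >= -tau1) & (diff <= -tau2)] = -1
    ranks[(diff > tau2) & (diff <= tau1)] = 1
    ranks[diff > tau1] = 2
    return ranks
```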
Let p and $\bar{p}_d$ be the matched points of the optical left view I and right view I′, and q and $\bar{q}_d$ the corresponding points in the match window. The matching cost $E(p, \bar{p}_d)$ is

$$E(p, \bar{p}_d) = \frac{\sum_{q, \bar{q}_d \in N} w(p, q)\, w(\bar{p}_d, \bar{q}_d)\, e(q, \bar{q}_d)}{\sum_{q, \bar{q}_d \in N} w(p, q)\, w(\bar{p}_d, \bar{q}_d)}$$

where N is the match window, $d_{\max}$ is the maximum disparity of a matched pixel, and $e(q, \bar{q}_d)$ is the matching value of q and $\bar{q}_d$ in the left and right images.
Step 2-2: According to the winner-takes-all (WTA) rule, when the left and right images are searched for matches under the epipolar constraint, the final disparity of point p is

$$d_p = \arg\max_d E(p, \bar{p}_d)$$

Step 2-3: A left-right consistency check and median filtering are applied to the initial disparity map, and the depth map $D_I$ of the optical image is obtained from the disparity map by epipolar-geometry estimation.
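A simplified sketch of the support-weight aggregation and the winner-takes-all selection; the cost volume construction, the rank-transform step, and the inter-channel correlation term are omitted or simplified, and all names are illustrative:

```python
import numpy as np

def support_weights(window_rgb, center, tau_c=5.0, tau_d=17.5):
    """w(p,q) = exp(-(dc/tau_c + dd/tau_d)) for every q in the window
    around p = `center`; the f(dr) factor is left out for brevity."""
    window_rgb = window_rgb.astype(np.float64)
    h, w, _ = window_rgb.shape
    ys, xs = np.mgrid[:h, :w]
    dc = np.linalg.norm(window_rgb - window_rgb[center], axis=2)  # color diff
    dd = np.hypot(ys - center[0], xs - center[1])                 # distance diff
    return np.exp(-(dc / tau_c + dd / tau_d))

def wta_disparity(cost_volume):
    """cost_volume[d, y, x] holds E(p, p_bar_d); pick the best disparity
    per pixel along the d axis (arg max E, matching the formula above)."""
    return np.argmax(cost_volume, axis=0)
```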
Step 3: extract the line features of the depth maps with a Hough transform, and perform coarse matching with the angle between two lines and the line-length ratio as the similarity measures. The concrete steps are as follows (a code sketch follows the list below):
Step 3-1: Edge detection is performed with the Canny operator, and straight lines are extracted from the resulting edge image by the Hough transform:
(1) adjust the Hough transform parameters and obtain the line features;
(2) adjacent segments AB and CD whose head-to-tail endpoint distance is smaller than a first threshold $T_{h1}$ are connected into a single line AD, and segments $l_1$, $l_2$ whose extensions intersect within a second threshold $T_{h2}$ pixels are connected at the intersection;
(3) the lengths of the extracted segments are compared, and a segment whose length d is smaller than a third threshold $T_{h3}$ is discarded.
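Purely as an illustration, a minimal sketch of the edge-detection and line-extraction stage using OpenCV's Canny operator and probabilistic Hough transform; all numeric thresholds below are assumed values, not those of the patent:

```python
import cv2
import numpy as np

def extract_line_features(depth_map, min_len=30):
    """Canny edges + probabilistic Hough transform on an 8-bit depth map;
    short segments (length < min_len, the 'third threshold') are dropped."""
    edges = cv2.Canny(depth_map, 50, 150)
    segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                           minLineLength=min_len, maxLineGap=5)
    lines = []
    if segs is not None:
        for x1, y1, x2, y2 in segs[:, 0]:
            length = np.hypot(x2 - x1, y2 - y1)
            angle = np.arctan2(y2 - y1, x2 - x1)   # segment orientation
            lines.append(((x1, y1), (x2, y2), length, angle))
    return lines
```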
Step 3-2: In the line features extracted from the point-cloud depth map and the visible-image depth map, the angle between two lines and the line-length ratio are used as the similarity measure. Specifically:
The line-length ratios r are compared first: if the difference $d_r$ between the length ratio $r_1$ of a candidate line feature of the left view and the length ratio $r_2$ of a candidate line feature of the right view lies within a fourth threshold $T_r$, the pair is stored and passed to the angle comparison of the next stage; if the difference $d_\alpha$ between the candidate line-feature angle $\alpha_1$ of the left view and the candidate line-feature angle $\alpha_2$ of the right view also lies within a fifth threshold $T_\alpha$, the two form a matched pair of line features.
If several features match a line feature $F_1$, the two-line angle and length-ratio comparison is repeated between the line features in the neighborhood of $F_1$ and the line features near each candidate, and the best-matching line feature is chosen. Finally more than three best-matching line pairs are obtained; the endpoints of each line pair are used to solve the similarity transformation parameters, achieving the coarse registration of the images (a sketch of the pairwise test follows).
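A simplified sketch of the coarse pairwise test; for brevity it compares single segments directly rather than two-line pairs within each image, and the thresholds $T_r$ and $T_\alpha$ are assumed values. Line tuples follow the format of the extraction sketch above:

```python
import numpy as np

def coarse_match(lines_left, lines_right, t_r=0.1, t_alpha=np.deg2rad(5)):
    """Keep a candidate pair when both the length-ratio difference d_r
    and the angle difference d_alpha fall within their thresholds."""
    matches = []
    for i, (_, _, len_l, ang_l) in enumerate(lines_left):
        for j, (_, _, len_r, ang_r) in enumerate(lines_right):
            d_r = abs(len_l / max(len_r, 1e-9) - 1.0)        # length-ratio test
            d_alpha = abs(np.arctan2(np.sin(ang_l - ang_r),
                                     np.cos(ang_l - ang_r)))  # wrapped angle
            if d_r < t_r and d_alpha < t_alpha:
                matches.append((i, j))
    return matches
```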
Step 4: reject mismatched point pairs with a two-step RANSAC algorithm and obtain the camera position parameter estimate. The concrete steps are as follows:
Step 4-1: Under the spatial-adjacency assumption, the matched features are divided into different groups; that is, the image is partitioned into windows of 4s × 4s pixels, where s is a positive integer. The window slides over the image from the top-left corner, left to right and top to bottom, with step s, producing the next partition, so that the number of matched features in each group stays within a suitable threshold range; neighboring groups may share matched point pairs.
Step 4-2: A first-step RANSAC computation is run on the matched features of each group to reject outliers: in each iteration two matched feature pairs are sampled and the corresponding homography matrix is computed; the other matched pairs that fit this matrix are regarded as inliers, the rest as outliers. The matched features selected as inliers become the basic matched features.
Step 4-3: A second, global RANSAC computation is then run on the basic matched features across the different groups. The basic matched features of every two groups are sampled to compute the corresponding homography matrix; the other basic matched features that fit this matrix are regarded as inliers, the rest as outliers. The score of each iteration is computed as follows: suppose j groups contain inliers and the k-th group contains $n_k$ inliers; the score of the iteration is

$$G = \sum_{k=1}^{j} \frac{n_k}{\sum_{i=1}^{j} n_i} \cdot n_k$$

where $n_k / \sum_{i=1}^{j} n_i$ is the weight of the k-th group and $i = 1 \ldots j$ indexes the groups: the weight is large when the inliers concentrate in a few groups, and declines when the inliers are spread over all groups.
Step 4-4: After the computation of step 4-3, all matched feature pairs classified as inliers are retained, while the pairs classified as outliers are computed once more against the highest-scoring homography matrix, so that more feature pairs are retained. After the traversal ends, the set of all inliers is the required set of matched feature pairs.
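A minimal sketch of the group scoring and of the second RANSAC step, using OpenCV's homography estimator as a stand-in for the per-iteration model fit; the grouping itself and the reprojection threshold are assumptions:

```python
import cv2
import numpy as np

def group_score(inlier_counts):
    """G = sum_k (n_k / sum_i n_i) * n_k: large when inliers concentrate
    in a few groups, small when they are spread over all groups."""
    n = np.asarray(inlier_counts, dtype=float)
    total = n.sum()
    return float((n / total * n).sum()) if total > 0 else 0.0

def second_step_ransac(src_pts, dst_pts, reproj_thresh=3.0):
    """Global RANSAC over the basic matched features of all groups;
    needs at least 4 correspondences, returns H and the inlier mask."""
    H, mask = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts),
                                 cv2.RANSAC, reproj_thresh)
    return H, mask.ravel().astype(bool)
```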
In feature-point matching, the mapping model is the projective mapping of the feature points in one plane onto the feature points in another plane, represented by a projection matrix; after the mismatched points, i.e. the outliers, have been eliminated by the two-step RANSAC, the inliers are used to compute the mapping matrix.
Step 5: Based on the two-step RANSAC above, the mismatched point pairs are rejected and a high-accuracy camera position parameter estimate is obtained; the re-projection between the point cloud data and the optical image is then carried out, completing the color texture mapping between point cloud and optical image and yielding the fused three-dimensional image.
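A minimal sketch of the final re-projection and color-texture mapping, assuming a 3×4 projection matrix P recovered from the estimated camera position parameters:

```python
import numpy as np

def colorize_point_cloud(points_xyz, image_rgb, P):
    """Re-project 3-D points into the optical image with the 3x4
    projection matrix P and sample the color texture for each point."""
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])      # (n, 4) homogeneous
    uvw = (P @ homog.T).T                                  # (n, 3)
    uv = uvw[:, :2] / uvw[:, 2:3]                          # perspective divide
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image_rgb.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image_rgb.shape[0] - 1)
    colors = image_rgb[v, u]                               # sampled RGB texture
    return np.hstack([points_xyz, colors.astype(float)])  # colored 3-D points
```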
The invention is described further below with reference to a specific embodiment.
Embodiment
The embodiment uses airborne LiDAR point cloud data and optical imagery that mainly cover part of a city-center area containing main buildings, low woods, and so on. A portion of the airborne LiDAR point cloud sharing an overlapping region with the optical imagery was selected and processed with the improved line-feature registration algorithm.
Fig. 2(a) is the point cloud data, Fig. 2(b) is the visible-light image, and Fig. 2(c) is the fused three-dimensional image. The fused image shows that the three-dimensional reconstruction of the city is achieved and that the point cloud image carries the color information of the visible-light image.
The automatic registration and fusion method for point cloud data and optical imagery based on line features of the present invention requires no prior knowledge such as GPS/INS data; the algorithm is highly automated and robust, its amount of computation is small, and at the same time it achieves a high registration accuracy.

Claims (6)

1. An automatic registration and fusion method for point cloud data and optical imagery based on line features, characterized by comprising the following steps:
Step 1: filter the point cloud data;
Step 2: compute the depth maps of the optical image and the point cloud data using an adaptive support-weight dense stereo matching algorithm and a Delaunay triangulation algorithm, respectively;
Step 3: extract the line features of the depth maps with a Hough transform, and perform coarse matching with the angle between two lines and the line-length ratio as the similarity measures;
Step 4: reject mismatched point pairs with a two-step RANSAC algorithm and obtain the camera position parameter estimate;
Step 5: map the color texture between the point cloud data and the optical image to obtain the fused three-dimensional image.
2. The automatic registration and fusion method for point cloud data and optical imagery based on line features according to claim 1, characterized in that in step 1 the point cloud data are filtered with a mathematical morphology method.
3. The automatic registration and fusion method for point cloud data and optical imagery based on line features according to claim 1, characterized in that the depth map of the point cloud data is determined in step 2 with a Delaunay triangulation algorithm, the concrete process being:
the gridding of the point cloud data is first constructed with a Delaunay triangulation algorithm, and a gray-level color mapping according to the height value z then converts the gridded point cloud data into the depth map $D_P$.
4. The automatic registration and fusion method for point cloud data and optical imagery based on line features according to claim 1, characterized in that the depth map of the optical image is determined in step 2 with a dense stereo matching algorithm based on inter-channel color correlation and adaptive support weights, which finds the optimal stereo match between the optical image I and the adjacent image I′ having an overlapping region with it; the concrete process is:
Step 2-1: for the optical image I, the RGB three-channel color vectors are $I_R(x, y)$, $I_G(x, y)$ and $I_B(x, y)$, where x, y are the row and column coordinates of the pixel; the inter-channel correlation vector is

$$\mathrm{rin} = [\mathrm{rce}_1, \mathrm{rce}_2, \mathrm{rce}_3]$$

where

$$\mathrm{rce}_1 = I_R(x, y) - I_G(x, y), \quad \mathrm{rce}_2 = I_G(x, y) - I_B(x, y), \quad \mathrm{rce}_3 = I_B(x, y) - I_R(x, y)$$
the support weight between the pixel p to be matched and a pixel q is

$$w(p, q) = f(\Delta c_{pq}, \Delta d_{pq}, \Delta r_{pq}) = f(\Delta c_{pq}) \cdot f(\Delta d_{pq}) \cdot f(\Delta r_{pq})$$

where $\Delta c_{pq}$, $\Delta d_{pq}$ and $\Delta r_{pq}$ are respectively the color difference, the Euclidean-distance difference, and the inter-channel correlation difference between p and q, and $f(\cdot)$ expresses the strength of each difference:

$$f(\Delta c_{pq}) = \exp\!\left(-\frac{\Delta c_{pq}}{\tau_c}\right), \quad f(\Delta d_{pq}) = \exp\!\left(-\frac{\Delta d_{pq}}{\tau_d}\right), \quad f(\Delta r_{pq}) = \exp\!\left(-\frac{\Delta r_{pq}}{\tau_r}\right)$$

where $\tau_c$, $\tau_d$, $\tau_r$ are fixed constants; the support weight between p and q is therefore

$$w(p, q) = f(\Delta c_{pq}) \cdot f(\Delta d_{pq}) \cdot f(\Delta r_{pq}) = \exp\!\left(-\left(\frac{\Delta c_{pq}}{\tau_c} + \frac{\Delta d_{pq}}{\tau_d} + \frac{\Delta r_{pq}}{\tau_r}\right)\right)$$
the matched points are rank-transformed, the rank transform being

$$R(q) = \begin{cases} -2, & I_p - I_q < -\tau_1 \\ -1, & -\tau_1 \le I_p - I_q \le -\tau_2 \\ 0, & -\tau_2 < I_p - I_q \le \tau_2 \\ 1, & \tau_2 < I_p - I_q \le \tau_1 \\ 2, & I_p - I_q > \tau_1 \end{cases}$$

where $I_p$ is the gray value of p, $I_q$ the gray value of q, and $\tau_1$, $\tau_2$ are the rank classification thresholds;
let p and $\bar{p}_d$ be the matched points of the optical left view I and right view I′, and q and $\bar{q}_d$ the corresponding points in the match window; the matching cost $E(p, \bar{p}_d)$ is

$$E(p, \bar{p}_d) = \frac{\sum_{q, \bar{q}_d \in N} w(p, q)\, w(\bar{p}_d, \bar{q}_d)\, e(q, \bar{q}_d)}{\sum_{q, \bar{q}_d \in N} w(p, q)\, w(\bar{p}_d, \bar{q}_d)}$$

where N is the match window, $d_{\max}$ is the maximum disparity of a matched pixel, and $e(q, \bar{q}_d)$ is the matching value of q and $\bar{q}_d$ in the left and right images;
Step 2-2: according to the winner-takes-all rule, when the left and right images are searched for matches under the epipolar constraint, the final disparity of p is

$$d_p = \arg\max_d E(p, \bar{p}_d)$$

Step 2-3: a left-right consistency check and median filtering are applied to the initial disparity map, and the depth map $D_I$ of the optical image is obtained from the disparity map by epipolar-geometry estimation.
5. The automatic registration and fusion method for point cloud data and optical imagery based on line features according to claim 1, characterized in that step 3 is specifically:
Step 3-1: edge detection is performed with the Canny operator, and straight lines are extracted from the resulting edge image by the Hough transform:
(1) adjust the Hough transform parameters and obtain the line features;
(2) adjacent segments AB and CD whose head-to-tail endpoint distance is smaller than a first threshold $T_{h1}$ are connected into a single line AD, and segments $l_1$, $l_2$ whose extensions intersect within a second threshold $T_{h2}$ pixels are connected at the intersection;
(3) the lengths of the extracted segments are compared, and a segment whose length d is smaller than a third threshold $T_{h3}$ is discarded;
Step 3-2: in the line features extracted from the point-cloud depth map and the visible-image depth map, the angle between two lines and the line-length ratio are used as the similarity measure, specifically:
the line-length ratios r are compared first: if the difference $d_r$ between the length ratio $r_1$ of a candidate line feature of the left view and the length ratio $r_2$ of a candidate line feature of the right view lies within a fourth threshold $T_r$, the pair is stored and passed to the angle comparison of the next stage; if the difference $d_\alpha$ between the candidate line-feature angle $\alpha_1$ of the left view and the candidate line-feature angle $\alpha_2$ of the right view also lies within a fifth threshold $T_\alpha$, the two form a matched pair of line features;
if several features match a line feature $F_1$, the two-line angle and length-ratio comparison is repeated between the line features in the neighborhood of $F_1$ and the line features near each candidate, and the best-matching line feature is chosen; finally more than three best-matching line pairs are obtained, the endpoints of each line pair are used to solve the similarity transformation parameters, and the coarse registration of the images is achieved.
6. The automatic registration and fusion method for point cloud data and optical imagery based on line features according to claim 1, characterized in that step 4 is specifically:
Step 4-1: under the spatial-adjacency assumption, the matched features are divided into different groups; that is, the image is partitioned into windows of 4s × 4s pixels, where s is a positive integer; the window slides over the image from the top-left corner, left to right and top to bottom, with step s, producing the next partition;
Step 4-2: a first-step RANSAC computation is run on the matched features of each group to reject outliers: in each iteration two matched feature pairs are sampled and the corresponding homography matrix is computed; the other matched pairs that fit this matrix are regarded as inliers, the rest as outliers; the matched features selected as inliers become the basic matched features;
Step 4-3: a second-step RANSAC computation is then run on the basic matched features of the different groups: the basic matched features of every two groups are sampled to compute the corresponding homography matrix; the other basic matched features that fit this matrix are regarded as inliers, the rest as outliers; the score of each iteration is obtained as follows:
suppose j groups contain inliers and the k-th group contains $n_k$ inliers; the score of each iteration is

$$G = \sum_{k=1}^{j} \frac{n_k}{\sum_{i=1}^{j} n_i} \cdot n_k$$

where $n_k / \sum_{i=1}^{j} n_i$ is the weight of the k-th group and $i = 1 \ldots j$ indexes the groups;
Step 4-4: the matched feature pairs regarded as inliers in step 4-3 are retained, while the pairs regarded as outliers are computed once more against the highest-scoring homography matrix; after the traversal ends, the set of all inliers is the required set of matched feature pairs;
in feature-point matching, the mapping model is the projective mapping of the feature points in one plane onto the feature points in another plane; the mapping matrix is determined by the inliers, and the camera position parameter estimate is obtained.
CN201510526379.4A 2015-08-25 2015-08-25 Automatic registration and fusion method for point cloud data and optical imagery based on line features Pending CN106485737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510526379.4A CN106485737A (en) 2015-08-25 2015-08-25 Automatic registration and fusion method for point cloud data and optical imagery based on line features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510526379.4A CN106485737A (en) 2015-08-25 2015-08-25 Automatic registration and fusion method for point cloud data and optical imagery based on line features

Publications (1)

Publication Number Publication Date
CN106485737A true CN106485737A (en) 2017-03-08

Family

ID=58233291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510526379.4A Pending CN106485737A (en) 2015-08-25 2015-08-25 Automatic registration and fusion method for point cloud data and optical imagery based on line features

Country Status (1)

Country Link
CN (1) CN106485737A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424166A (en) * 2017-07-18 2017-12-01 深圳市速腾聚创科技有限公司 Point cloud segmentation method and device
CN108304870A (en) * 2018-01-30 2018-07-20 河南理工大学 The erroneous matching elimination method of dotted line Fusion Features
CN109003516A (en) * 2018-07-27 2018-12-14 国家电网有限公司 A kind of extra-high-voltage alternating current transformer processing quality control simulation training system
CN109472816A (en) * 2018-09-17 2019-03-15 西北大学 A kind of point cloud registration method
CN110136159A (en) * 2019-04-29 2019-08-16 辽宁工程技术大学 Line segments extraction method towards high-resolution remote sensing image
CN111178138A (en) * 2019-12-04 2020-05-19 国电南瑞科技股份有限公司 Distribution network wire operating point detection method and device based on laser point cloud and binocular vision
CN111709988A (en) * 2020-04-28 2020-09-25 上海高仙自动化科技发展有限公司 Method and device for determining characteristic information of object, electronic equipment and storage medium
CN112581595A (en) * 2020-12-02 2021-03-30 中国人民解放军战略支援部队航天工程大学 Multi-view satellite image consistency analysis method
CN112785615A (en) * 2020-12-04 2021-05-11 浙江工业大学 Engineering surface multi-scale filtering method based on extended two-dimensional empirical wavelet transform
CN113160389A (en) * 2021-04-25 2021-07-23 上海方联技术服务有限公司 Image reconstruction method and device based on characteristic line matching and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140056508A1 (en) * 2012-08-27 2014-02-27 Samsung Electronics Co., Ltd. Apparatus and method for image matching between multiview cameras
CN104123730A (en) * 2014-07-31 2014-10-29 武汉大学 Method and system for remote-sensing image and laser point cloud registration based on road features
CN104463826A (en) * 2013-09-18 2015-03-25 镇江福人网络科技有限公司 Novel point cloud parallel Softassign registering algorithm
CN104599272A (en) * 2015-01-22 2015-05-06 中国测绘科学研究院 Movable target sphere oriented onboard LiDAR point cloud and image united rectification method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140056508A1 (en) * 2012-08-27 2014-02-27 Samsung Electronics Co., Ltd. Apparatus and method for image matching between multiview cameras
CN104463826A (en) * 2013-09-18 2015-03-25 镇江福人网络科技有限公司 Novel point cloud parallel Softassign registering algorithm
CN104123730A (en) * 2014-07-31 2014-10-29 武汉大学 Method and system for remote-sensing image and laser point cloud registration based on road features
CN104599272A (en) * 2015-01-22 2015-05-06 中国测绘科学研究院 Movable target sphere oriented onboard LiDAR point cloud and image united rectification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FANG LV, KAN REN: "Automatic registration of airborne LiDAR point cloud data and optical imagery depth map based on line and points features", 《INFRARED PHYSICS & TECHNOLOGY》 *
LU WANG, ULRICH NEUMANN: "A Robust Approach for Automatic Registration of Aerial Images with Untextured Aerial LiDAR Data", 《2009 CVPR》 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424166A (en) * 2017-07-18 2017-12-01 深圳市速腾聚创科技有限公司 Point cloud segmentation method and device
CN108304870B (en) * 2018-01-30 2021-10-08 河南理工大学 Error matching elimination method for point-line feature fusion
CN108304870A (en) * 2018-01-30 2018-07-20 河南理工大学 The erroneous matching elimination method of dotted line Fusion Features
CN109003516A (en) * 2018-07-27 2018-12-14 国家电网有限公司 A kind of extra-high-voltage alternating current transformer processing quality control simulation training system
CN109472816A (en) * 2018-09-17 2019-03-15 西北大学 A kind of point cloud registration method
CN109472816B (en) * 2018-09-17 2021-12-28 西北大学 Point cloud registration method
CN110136159A (en) * 2019-04-29 2019-08-16 辽宁工程技术大学 Line segments extraction method towards high-resolution remote sensing image
CN110136159B (en) * 2019-04-29 2023-03-31 辽宁工程技术大学 Line segment extraction method for high-resolution remote sensing image
CN111178138A (en) * 2019-12-04 2020-05-19 国电南瑞科技股份有限公司 Distribution network wire operating point detection method and device based on laser point cloud and binocular vision
CN111709988A (en) * 2020-04-28 2020-09-25 上海高仙自动化科技发展有限公司 Method and device for determining characteristic information of object, electronic equipment and storage medium
CN111709988B (en) * 2020-04-28 2024-01-23 上海高仙自动化科技发展有限公司 Method and device for determining characteristic information of object, electronic equipment and storage medium
CN112581595A (en) * 2020-12-02 2021-03-30 中国人民解放军战略支援部队航天工程大学 Multi-view satellite image consistency analysis method
CN112581595B (en) * 2020-12-02 2023-12-19 中国人民解放军战略支援部队航天工程大学 Multi-view satellite image consistency analysis method
CN112785615A (en) * 2020-12-04 2021-05-11 浙江工业大学 Engineering surface multi-scale filtering method based on extended two-dimensional empirical wavelet transform
CN113160389A (en) * 2021-04-25 2021-07-23 上海方联技术服务有限公司 Image reconstruction method and device based on characteristic line matching and storage medium

Similar Documents

Publication Publication Date Title
CN106485737A (en) Automatic registration and fusion method for point cloud data and optical imagery based on line features
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
Zhou et al. Seamless fusion of LiDAR and aerial imagery for building extraction
Yu et al. Semantic alignment of LiDAR data at city scale
CN101216895B (en) An automatic extracting method for ellipse image features in complex background images
CN102750537B (en) Automatic registering method of high accuracy images
CN108921895B (en) Sensor relative pose estimation method
CN104346608A (en) Sparse depth map densing method and device
CN104156957B (en) Stable and high-efficiency high-resolution stereo matching method
CN115564926B (en) Three-dimensional patch model construction method based on image building structure learning
CN104463108A (en) Monocular real-time target recognition and pose measurement method
CN105139379B (en) Based on the progressive extracting method of classified and layered airborne Lidar points cloud building top surface
CN105528785A (en) Binocular visual image stereo matching method
CN102938142A (en) Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN104156968A (en) Large-area complex-terrain-region unmanned plane sequence image rapid seamless splicing method
CN107862735B (en) RGBD three-dimensional scene reconstruction method based on structural information
Pang et al. Automatic 3d industrial point cloud modeling and recognition
CN105389774A (en) Method and device for aligning images
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation
CN104517317A (en) Three-dimensional reconstruction method of vehicle-borne infrared images
CN112669379B (en) Image feature rapid extraction method based on auxiliary mark points
Battrawy et al. Lidar-flow: Dense scene flow estimation from sparse lidar and stereo images
CN105139355A (en) Method for enhancing depth images
Maltezos et al. Automatic detection of building points from LiDAR and dense image matching point clouds
CN110619368A (en) Planet surface navigation feature imaging matching detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170308