CN105654462A - Building elevation extraction method based on image registration - Google Patents

Building elevation extraction method based on image registration

Info

Publication number
CN105654462A
CN105654462A (application CN201510657481.8A)
Authority
CN
China
Prior art keywords
image
buildings
point
refractive
reflective
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510657481.8A
Other languages
Chinese (zh)
Inventor
黄俊仁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Youxiang Technology Co Ltd
Original Assignee
Hunan Youxiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Youxiang Technology Co Ltd filed Critical Hunan Youxiang Technology Co Ltd
Priority to CN201510657481.8A priority Critical patent/CN105654462A/en
Publication of CN105654462A publication Critical patent/CN105654462A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a building height extraction method based on image registration. A catadioptric omnidirectional image of a building is captured with a catadioptric omnidirectional imaging system, and the corresponding remote-sensing image is downloaded from Google Earth. The catadioptric omnidirectional image is converted into a planar perspective image, which is registered against the remote-sensing image to obtain the viewpoint position. This information is used to extract the upper and lower boundaries of the building, and the building height is finally computed from the boundary information. The hardware required by the method is simple and inexpensive, the extraction process needs no manual intervention, the building height can be acquired automatically, and the whole system is practical and easy to popularize.

Description

A building height extraction method based on image registration
Technical field
The invention belongs to the technical field of image-based virtual reality and relates to methods for acquiring building heights, in particular to a building height extraction method based on image registration.
Background technology:
In recent years, with the rapid development of remote sensing technology, space shuttle and satellite systems have provided a large amount of high-resolution earth-observation data. These data are the information source from which the digital earth and digital cities are built, and are also the main carrier of current spatial information. In past image analyses targeting urban areas, usually only the planar information of buildings was acquired, not their height information. With the fast development of digital cities, however, the demand for three-dimensional spatial data is growing, and the demand for building height information is particularly pressing. Building height extraction methods based on remote-sensing images have therefore received wide attention.
Current methods for obtaining building heights mainly fall into the following categories: first, manually measuring buildings in remote-sensing images at a measurement workstation and recording their three-dimensional information; second, deriving building heights from two-dimensional features in the remote-sensing image such as texture and shadow; third, combining remote-sensing images with an airborne LiDAR system. The first method consumes a great deal of labour and is inefficient; the second is limited by remote-sensing image quality and is not very accurate; the third requires expensive acquisition equipment and relatively complex processing.
Summary of the invention
Addressing the deficiencies of existing building height extraction methods, the present invention proposes a building height extraction method based on image registration, which uses the complementary characteristics of multiple sensors to comprehensively extract useful information. Its basic idea is to obtain a catadioptric omnidirectional image of the building with a catadioptric omnidirectional imaging system, register it with a remote-sensing image, and use the registration information to calculate the building height. The concrete steps are as follows:
Step 1: obtain the catadioptric omnidirectional image and the remote-sensing image of the building.
The catadioptric omnidirectional image of the building is captured with a catadioptric omnidirectional imaging system, and the corresponding remote-sensing image is downloaded from Google Earth.
Step 2: register the catadioptric omnidirectional image with the remote-sensing image, in two sub-steps:
(1) According to the catadioptric omnidirectional imaging principle, taking the focus O of the catadioptric surface as a virtual viewpoint, apply a projective transform to the catadioptric omnidirectional image to convert it into a planar perspective image, and at the same time obtain the coordinates of the corresponding viewpoint on the planar perspective image.
Fig. 3 is a schematic diagram of the conversion: a rectangular coordinate system is set up with the circle centre O' at the bottom of the reflecting surface as origin and the camera optical axis as the Z axis. P denotes the height of the paraboloid, s the distance from the virtual viewpoint to the parabola vertex, t the distance from the projection plane (i.e. the horizontal projection imaging plane) to the parabola vertex, Q the distance from the camera optical centre to the parabola vertex, and f the focal length of the camera lens.
The reflecting surface is a paraboloid of revolution whose cross-section is the parabola y = x²/a − P, where a describes the size of the parabola's opening and is obtained directly from the catadioptric omnidirectional imaging system.
Take an arbitrary point B2 on the catadioptric omnidirectional image; connect B2 with the camera optical centre F and extend the line to intersect the reflecting surface at a point B3. Connect the virtual viewpoint O with B3 and extend the line to meet the horizontal projection imaging plane at a point B1; B1 is the pixel corresponding to B2 after projection.
Let B1 have coordinates (x1, y1) and B2 have coordinates (x2, y2); their relation is:
x2 = A·C/(A² + E²);  y2 = E·C/(A² + E²)   (Formula 1)
where A, C and E are intermediate quantities determined by the imaging geometry.
Formula 1 realizes the mutual conversion between the planar perspective image and the catadioptric omnidirectional image.
The centre of the catadioptric image is its intersection with the Z axis; projected onto the planar perspective image it becomes the intersection of the Z axis with that image, which is the viewpoint of the planar perspective image, denoted O2.
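As a concrete illustration of this projective transform, the sketch below traces a single point through a 2-D cross-section of the geometry just described. It is a minimal reading of the setup, not the patent's exact derivation: the placement of the focus, the camera, and the projection plane, and all numeric parameter values for a, P, Q, f and t are assumptions chosen only to make the example run.

```python
import math

def unwarp_point(u, a=4.0, P=10.0, Q=20.0, f=8.0, t=30.0):
    """2-D cross-section sketch of step S2.1 (hypothetical parameters).
    Mirror profile y = x**2/a - P (vertex at y = -P, rim at y = 0),
    virtual viewpoint O = parabola focus at y = a/4 - P,
    camera centre F = (0, -P - Q), projection plane at y = t - P."""
    if u == 0:
        return 0.0                              # optical axis maps to itself
    yF = -P - Q
    # Ray from F through image offset u: x = s*u, y = yF + s*f (s > 0).
    # Mirror intersection: (u*s)**2/a - P = yF + s*f
    #   -> (u**2/a)*s**2 - f*s - (P + yF) = 0, and P + yF = -Q.
    A = u * u / a
    disc = f * f - 4.0 * A * Q
    s = (f - math.sqrt(disc)) / (2.0 * A)       # nearer root = first mirror hit
    x3, y3 = s * u, yF + s * f                  # mirror point B3
    yO = a / 4.0 - P                            # virtual viewpoint (focus) height
    return x3 * ((t - P) - yO) / (y3 - yO)      # project O->B3 onto the plane

print(unwarp_point(4.0 / 3.0))
```

With these assumed parameters the image offset 4/3 passes through the mirror point (4, −6) and lands at 116/3 on the projection plane; applied to every pixel, this construction is what Formula 1 expresses in closed form.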
(2) Register the converted planar perspective image with the remote-sensing image using the classical SIFT-feature registration method; the point in the remote-sensing image corresponding to the viewpoint O2 of the planar perspective image is the viewpoint of the remote-sensing image.
Because SIFT features are highly robust to both illumination and scale changes, image registration usually relies on them. The classical SIFT-based registration proceeds as follows: extract the SIFT features of the two images, obtaining two feature point sets F1 and F2; match the two sets to obtain matched feature pairs {(F1i, F2i)}, where i is the match index; assume the correspondence between the two images is an affine transform and compute its parameters from the matched pairs; finally register the two images according to the affine transform parameters.
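The affine-parameter step of this registration can be sketched as follows. SIFT extraction and matching themselves are assumed to come from a standard library; the sketch only shows how the six affine parameters are recovered from matched point pairs, here solved exactly from three hypothetical pairs (a real pipeline would use least squares with outlier rejection over many matches).

```python
def affine_from_pairs(src, dst):
    """Solve the affine model  x' = a*x + b*y + c,  y' = d*x + e*y + f
    exactly from three non-collinear matched pairs, via Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    def solve(v1, v2, v3):                      # one row of the affine matrix
        A = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        B = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        C = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return A, B, C
    a, b, c = solve(*[p[0] for p in dst])
    d, e, f = solve(*[p[1] for p in dst])
    return (a, b, c, d, e, f)

def apply_affine(params, pt):
    a, b, c, d, e, f = params
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)

# Synthetic matched pairs standing in for SIFT matches (hypothetical values):
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dst = [(5.0, -2.0), (16.0, -5.0), (7.0, 7.0)]   # src under a known affine map
params = affine_from_pairs(src, dst)
```

Once `params` is known, `apply_affine` maps any point of one image into the other; in particular it maps the viewpoint O2 of the planar perspective image to the viewpoint of the remote-sensing image.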
Step 3: use the registration information to obtain the lower boundary of the building, in two sub-steps:
(1) Transform the remote-sensing image into a catadioptric omnidirectional image using the viewpoint information obtained by registration.
The remote-sensing image is treated as a planar object photographed by a catadioptric omnidirectional imaging system placed at the viewpoint position, yielding a new catadioptric omnidirectional image. This catadioptric imaging of the remote-sensing image is the inverse of the process in S2.1, with the remote-sensing image now playing the role of the projection plane. Take an arbitrary point B1' on the remote-sensing image; connect B1' with the virtual viewpoint O to intersect the reflecting surface at a point B3'; connect B3' with the camera optical centre F and extend the line to meet the catadioptric image at a point B2'; B2' is the pixel corresponding to B1' after the transform. The conversion of the remote-sensing image into a catadioptric image can still be expressed with Formula 1.
(2) Compute the pixelwise difference between the catadioptric image obtained from the remote-sensing image and the catadioptric omnidirectional image from S1 to obtain a difference image, and determine the lower boundary of the building from the values of the difference image.
A fixed threshold is set for the difference image; the continuous region whose pixel values are below the threshold is taken as the ground region, and the outer edge of the ground region is the lower boundary line of the building.
Step 4: obtain the upper boundary of the building, in three sub-steps:
(1) Train in advance a two-class classifier that distinguishes sky from non-sky: select 100 catadioptric omnidirectional images of arbitrary locations, segment them, label all segmented regions as sky or non-sky, and train a linear classifier using colour, position, percentage of the image occupied, and texture as classification features.
(2) Segment the catadioptric omnidirectional image obtained in step 1, classify each segmented region with the trained linear classifier, and record the upper borders of the regions classified as sky as candidate sky boundaries.
(3) Combining the building lower boundary obtained in step 3, and considering that only the sky boundary directly outside the building lower boundary is where the sky meets the building, select among the candidate sky boundaries to obtain the final upper boundary of the building.
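A minimal stand-in for such a linear classifier is sketched below. The perceptron update rule, the four-dimensional feature encoding, and all training values are assumptions for illustration; the patent only specifies that the classifier is linear, not which training algorithm is used.

```python
def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Train a linear sky / non-sky classifier on region features
    (e.g. mean colour, vertical position, area fraction, texture energy)."""
    w = [0.0] * (len(samples[0]) + 1)          # last weight is the bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):      # y is +1 (sky) or -1 (non-sky)
            s = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            if y * s <= 0:                     # misclassified: update weights
                for i, xi in enumerate(x):
                    w[i] += lr * y * xi
                w[-1] += lr * y
    return w

def classify(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + w[-1] > 0 else -1

# Synthetic region features: [blueness, height-in-image, area, smoothness]
sky     = [[0.9, 0.9, 0.3, 0.9], [0.8, 0.8, 0.2, 0.8]]
non_sky = [[0.2, 0.3, 0.1, 0.2], [0.3, 0.2, 0.2, 0.3]]
w = train_perceptron(sky + non_sky, [1, 1, -1, -1])
print(classify(w, [0.85, 0.85, 0.25, 0.85]))
```

Any segmented region scored as sky contributes its upper border to the candidate sky boundaries of sub-step (2).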
As shown in Figs. 4, 5 and 6, after the upper and lower boundaries of the building have been extracted, O' is the image point corresponding to the camera optical centre; by the characteristics of the catadioptric omnidirectional imaging system, O' is also the image point corresponding to the mirror-surface vertex shown in Fig. 5. A ray drawn through O' intersects the building lower boundary line and the sky boundary line at N2 and N1 respectively, recorded as the lower and upper boundary points in the catadioptric image; the actual building lower and upper boundary points corresponding to them are M2 and M1.
Step 5: calculate the height of the building from the upper and lower boundary information, in two sub-steps:
(1) Calculate the horizontal distance of the building from the boundary information.
In the rectangular coordinate system established in Fig. 5, the horizontal axis is parallel to the ground and the vertical axis perpendicular to it. The vertical cross-section of the reflecting surface is the parabola y = ax². F denotes the camera optical centre and f the focal length; the distance LT from F to the parabola vertex and the height HT of the reflecting surface above the ground are known. The line through F and the lower boundary point N2 found in step 4 is the reflected ray SL2, which intersects the reflecting surface at a point G2; the corresponding incident ray SL1 is the line through G2 and the building's lower boundary point M2 in real space.
The height of the building equals the vertical coordinate of the upper boundary point M1 plus the height HT of the reflecting surface, so only the vertical coordinate of M1 needs to be found.
On the reflected ray SL2, the coordinates of F are (0, −LT) and the coordinates (i, j) of the lower boundary point N2 are known, so SL2 has equation y = (f/i)x − LT and slope tan α = f/i.
Combining the equation of SL2, y = (f/i)x − LT, with the parabola equation y = ax² gives the intersection point G2 with coordinates (x_G2, y_G2), where x_G2 = ((f/i) − √((f/i)² − 4a·LT))/(2a) is the nearer of the two roots of the resulting quadratic and y_G2 = a·x_G2².
The tangent slope of the parabola at G2 is tan β = 2a·x_G2.
By the law of reflection, the angle of incidence equals the angle of reflection, so the slope of the incident ray SL1 is k = tan(2β − α). From this slope and the point G2, the equation of SL1 is y = kx + (y_G2 − k·x_G2). Substituting y = −HT into this equation yields the horizontal coordinate of the lower boundary point M2, denoted WT: WT = (−HT − y_G2 + k·x_G2)/k.
(2) Calculate the height of the building from the boundary information and the horizontal distance.
As shown in Fig. 6, the vertical cross-section of the reflecting surface is the parabola y = ax². F denotes the camera optical centre and f the focal length; the distance LT from the camera optical centre to the parabola vertex and the height HT of the reflecting surface above the ground are known. The line through F and the upper boundary point N1 is the reflected ray SL2', which intersects the reflecting surface at a point G1; the corresponding incident ray SL1' is the line through G1 and the building's upper boundary point M1 in real space. WT, the horizontal distance of the building, has been obtained in the previous sub-step.
The coordinates (i', j') of the upper boundary point N1 are known and F = (0, −LT), so the reflected ray SL2' has equation y = (f/i')x − LT and slope tan α' = f/i'.
Combining the equation of SL2' with the parabola equation y = ax² gives the intersection point G1 with coordinates (x_G1, y_G1), where x_G1 = ((f/i') − √((f/i')² − 4a·LT))/(2a) and y_G1 = a·x_G1².
The tangent slope of the parabola at G1 is tan β' = 2a·x_G1.
By the law of reflection, the angle of incidence equals the angle of reflection, so the slope of the incident ray SL1' is k' = tan(2β' − α'). From this slope and the point G1, the equation of SL1' is y = k'x + (y_G1 − k'·x_G1); substituting x = WT yields the vertical coordinate of the upper boundary point M1: y = k'·WT + (y_G1 − k'·x_G1). Since the building height equals the vertical coordinate of M1 plus the height HT of the reflecting surface, the final building height is k'·WT + (y_G1 − k'·x_G1) + HT.
The building height extraction method based on image matching proposed by the present invention requires only simple, low-cost hardware; the extraction process needs no manual intervention, so building heights can be acquired automatically; and the whole system is practical and easy to popularize.
Brief description of the drawings:
Fig. 1 shows a catadioptric omnidirectional image and the corresponding remote-sensing image;
Fig. 2 is a schematic diagram of the distribution of scene information in the catadioptric omnidirectional image;
Fig. 3 is a schematic diagram of transforming the ground region of the catadioptric omnidirectional image into a planar perspective image;
Fig. 4 shows the extraction result of the building's upper and lower boundaries in the catadioptric omnidirectional image;
Fig. 5 is a schematic diagram of the building horizontal-distance calculation;
Fig. 6 is a schematic diagram of the building height calculation.
Detailed description of the embodiments:
The present invention is described in further detail below in conjunction with the accompanying drawings.
First, the catadioptric omnidirectional image of the building is captured with a catadioptric omnidirectional imaging system, and the corresponding remote-sensing image is downloaded from Google Earth.
As shown in Fig. 1, the left side is the catadioptric omnidirectional image and the right side the remote-sensing image. It can be seen that the imaging mode of the catadioptric omnidirectional image differs from that of the remote-sensing image; the scene information is distributed as shown in Fig. 2, with the ground region lying in the inner ring of the catadioptric omnidirectional image.
Objects in the catadioptric omnidirectional image are severely distorted, so registering it directly against the remote-sensing image is very difficult. To better register these two images of different imaging modes, the present invention first derives the conversion formula between the catadioptric omnidirectional image and the remote-sensing image.
According to the catadioptric omnidirectional imaging principle, taking the focus O of the catadioptric surface as a virtual viewpoint, a projective transform is applied to the catadioptric omnidirectional image to convert it into a planar perspective image, and the coordinates of the corresponding viewpoint on the planar perspective image are obtained at the same time.
Fig. 3 is a schematic diagram of the conversion: a rectangular coordinate system is set up with the circle centre O' at the bottom of the reflecting surface as origin and the camera optical axis as the Z axis. P denotes the height of the paraboloid, s the distance from the virtual viewpoint to the parabola vertex, t the distance from the projection plane (i.e. the horizontal projection imaging plane) to the parabola vertex, Q the distance from the camera optical centre to the parabola vertex, and f the focal length of the camera lens.
The reflecting surface is a paraboloid of revolution whose cross-section is the parabola y = x²/a − P, where a describes the size of the parabola's opening and is obtained directly from the catadioptric omnidirectional imaging system.
Take an arbitrary point B2 on the catadioptric omnidirectional image; connect B2 with the camera optical centre F and extend the line to intersect the reflecting surface at a point B3. Connect the virtual viewpoint O with B3 and extend the line to meet the horizontal projection imaging plane at a point B1; B1 is the pixel corresponding to B2 after projection.
Let B1 have coordinates (x1, y1) and B2 have coordinates (x2, y2); their relation is:
x2 = A·C/(A² + E²);  y2 = E·C/(A² + E²)   (Formula 1)
where A, C and E are intermediate quantities determined by the imaging geometry.
Formula 1 realizes the mutual conversion between the planar perspective image and the catadioptric omnidirectional image.
The centre of the catadioptric image is its intersection with the Z axis; projected onto the planar perspective image it becomes the intersection of the Z axis with that image, which is the viewpoint of the planar perspective image, denoted O2.
The converted planar perspective image is registered with the remote-sensing image using the classical SIFT-feature registration method; the point in the remote-sensing image corresponding to the viewpoint O2 of the planar perspective image is the viewpoint of the remote-sensing image.
The SIFT features of the two images are extracted, giving two feature point sets F1 and F2; the two sets are matched to obtain matched feature pairs {(F1i, F2i)}, where i is the match index; the correspondence between the two images is assumed to be an affine transform whose parameters are computed from the matched pairs; finally the two images are registered according to the affine transform parameters.
When registering the catadioptric omnidirectional image with the remote-sensing image, it is found that the ground-information region of the two images matches well while vertical scenery such as the surrounding buildings cannot be matched; the dividing line between the matchable and unmatchable areas precisely characterizes the bottom boundary of the buildings, from which the lower boundary of the building can be obtained. The concrete steps are as follows:
(1) The remote-sensing image is inverse-transformed into a catadioptric omnidirectional image using the viewpoint information obtained by registration.
The remote-sensing image is treated as a planar object photographed by a catadioptric omnidirectional imaging system placed at the viewpoint position, yielding a new catadioptric omnidirectional image. This catadioptric imaging of the remote-sensing image is the inverse of the process in S2.1, with the remote-sensing image now playing the role of the projection plane. Take an arbitrary point B1' on the remote-sensing image; connect B1' with the virtual viewpoint O to intersect the reflecting surface at a point B3'; connect B3' with the camera optical centre F and extend the line to meet the catadioptric image at a point B2'; B2' is the pixel corresponding to B1' after the transform. The conversion of the remote-sensing image into a catadioptric image can still be expressed with Formula 1.
(2) The pixelwise difference between the catadioptric image obtained from the remote-sensing image and the initial catadioptric omnidirectional image is computed to obtain a difference image; a fixed threshold is set, and the continuous region whose pixel values are below the threshold is taken as the ground region. The outer edge of the ground region is the lower boundary line of the building.
According to the characteristic distribution of the regions in the catadioptric omnidirectional image, the upper boundary of the building is exactly the dividing line between the building region and the sky region; machine learning can be used to judge whether an image block belongs to the sky region. The concrete steps are as follows:
(a) Train in advance a two-class classifier that distinguishes sky from non-sky: select 100 catadioptric omnidirectional images of arbitrary locations, segment them, label all segmented regions as sky or non-sky, and train a linear classifier using colour, position, percentage of the image occupied, and texture as classification features.
(b) Segment the catadioptric omnidirectional image obtained in step 1, classify each segmented region with the trained linear classifier, and record the upper borders of the regions classified as sky as candidate sky boundaries.
(c) Combining the building lower boundary obtained in step 3, and considering that only the sky boundary directly outside the building lower boundary is where the sky meets the building, select among the candidate sky boundaries to obtain the final upper boundary of the building.
The present invention then calculates the height information from the building's upper and lower boundaries. As shown in Figs. 4, 5 and 6, after the upper and lower boundaries of the building have been extracted, O' is the image point corresponding to the camera optical centre; by the characteristics of the catadioptric omnidirectional imaging system, O' is also the image point corresponding to the mirror-surface vertex shown in Fig. 5. A ray drawn through O' intersects the building lower boundary line and the sky boundary line at N2 and N1 respectively, recorded as the lower and upper boundary points in the catadioptric image; the actual building lower and upper boundary points corresponding to them are M2 and M1.
In the rectangular coordinate system established in Fig. 5, the horizontal axis is parallel to the ground and the vertical axis perpendicular to it. The vertical cross-section of the reflecting surface is the parabola y = ax². F denotes the camera optical centre and f the focal length; the distance LT from F to the parabola vertex and the height HT of the reflecting surface above the ground are known. The line through F and the lower boundary point N2 is the reflected ray SL2, which intersects the reflecting surface at a point G2; the corresponding incident ray SL1 is the line through G2 and the building's lower boundary point M2 in real space. The height of the building equals the vertical coordinate of the upper boundary point M1 plus the height HT of the reflecting surface, so only the vertical coordinate of M1 needs to be found. The detailed solution is as follows:
(1) On the reflected ray SL2, the coordinates of F are (0, −LT) and the coordinates (i, j) of the lower boundary point N2 are known, so SL2 has equation y = (f/i)x − LT and slope tan α = f/i;
(2) Combining the equation of SL2, y = (f/i)x − LT, with the parabola equation y = ax² gives the intersection point G2 with coordinates (x_G2, y_G2), where x_G2 = ((f/i) − √((f/i)² − 4a·LT))/(2a) is the nearer of the two roots of the resulting quadratic and y_G2 = a·x_G2²;
(3) The tangent slope of the parabola at G2 is tan β = 2a·x_G2;
(4) By the law of reflection, the angle of incidence equals the angle of reflection, so the slope of the incident ray SL1 is k = tan(2β − α). From this slope and the point G2, the equation of SL1 is y = kx + (y_G2 − k·x_G2); substituting y = −HT into this equation yields the horizontal coordinate of the lower boundary point M2, denoted WT: WT = (−HT − y_G2 + k·x_G2)/k;
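The chain of computations in (1)-(4) can be sketched as follows, under assumed numeric values for f, LT, HT and a; the intersection and the final division are solved directly from the line and parabola equations, so signs depend on the chosen axis conventions, which are one possible reading of the figures.

```python
import math

def building_horizontal_distance(i, f=8.0, LT=30.0, HT=50.0, a=0.05):
    """Step S5.1 sketch: from the lower-boundary pixel offset i, recover
    the horizontal ground distance WT of the building foot.  Mirror
    profile y = a*x**2, camera centre F = (0, -LT), ground at y = -HT.
    All numeric defaults are hypothetical."""
    m = f / i                                   # slope of reflected ray SL2
    # Intersection G2 of the ray y = m*x - LT with the mirror y = a*x**2:
    disc = m * m - 4.0 * a * LT
    xg = (m - math.sqrt(disc)) / (2.0 * a)      # nearer root = first mirror hit
    yg = a * xg * xg
    alpha = math.atan(m)                        # reflected-ray angle
    beta = math.atan(2.0 * a * xg)              # mirror tangent angle at G2
    k = math.tan(2.0 * beta - alpha)            # incident slope (law of reflection)
    c = yg - k * xg                             # incident line y = k*x + c
    return (-HT - c) / k                        # x where the ray meets y = -HT

print(round(building_horizontal_distance(1.0), 2))
```

For the hypothetical defaults this places the building foot roughly 62 ground units from the mirror axis; with calibrated values of f, LT, HT and a the same code computes the WT of step (4).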
(5) As shown in Fig. 6, the vertical cross-section of the reflecting surface is the parabola y = ax². F denotes the camera optical centre and f the focal length; the distance LT from the optical centre to the parabola vertex and the height HT of the reflecting surface above the ground are known. The line through F and the upper boundary point N1 is the reflected ray SL2', which intersects the reflecting surface at a point G1; the corresponding incident ray SL1' is the line through G1 and the building's upper boundary point M1 in real space. WT, the horizontal distance of the building, has already been obtained.
The coordinates (i', j') of the upper boundary point N1 are known and F = (0, −LT), so the reflected ray SL2' has equation y = (f/i')x − LT and slope tan α' = f/i'.
Combining the equation of SL2' with the parabola equation y = ax² gives the intersection point G1 with coordinates (x_G1, y_G1), where x_G1 = ((f/i') − √((f/i')² − 4a·LT))/(2a) and y_G1 = a·x_G1².
The tangent slope of the parabola at G1 is tan β' = 2a·x_G1.
By the law of reflection, the slope of the incident ray SL1' is k' = tan(2β' − α'). From this slope and the point G1, the equation of SL1' is y = k'x + (y_G1 − k'·x_G1); substituting x = WT yields the vertical coordinate of the upper boundary point M1: y = k'·WT + (y_G1 − k'·x_G1). Since the building height equals the vertical coordinate of M1 plus the height HT of the reflecting surface, the final building height is k'·WT + (y_G1 − k'·x_G1) + HT.
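The complete height computation of step 5 can likewise be sketched, reusing the same assumed geometry (mirror y = a·x², camera centre F = (0, −LT), ground at y = −HT); `i_low` and `i_up` are hypothetical pixel offsets of the boundary points N2 and N1.

```python
import math

def building_height(i_low, i_up, f=8.0, LT=30.0, HT=50.0, a=0.05):
    """Step S5 sketch: building height from the lower- and upper-boundary
    pixel offsets.  All numeric defaults are hypothetical."""
    def incident_line(i):
        # Reflected ray through F = (0, -LT) with slope f/i, traced to the
        # mirror and reflected about the local tangent (law of reflection).
        m = f / i
        xg = (m - math.sqrt(m * m - 4.0 * a * LT)) / (2.0 * a)
        yg = a * xg * xg
        k = math.tan(2.0 * math.atan(2.0 * a * xg) - math.atan(m))
        return k, yg - k * xg                  # incident line y = k*x + c
    k2, c2 = incident_line(i_low)              # ray to the building foot M2
    WT = (-HT - c2) / k2                       # horizontal distance (S5.1)
    k1, c1 = incident_line(i_up)               # ray to the building top M1
    y_top = k1 * WT + c1                       # M1 height above the mirror base
    return y_top + HT                          # add mirror height above ground

h = building_height(i_low=1.0, i_up=2.0)
```

This mirrors the closed-form result k'·WT + (y_G1 − k'·x_G1) + HT; a taller reading of the sky boundary (larger y_top) directly increases the recovered height.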

Claims (1)

1. A building height extraction method based on image matching, characterized in that it comprises the following steps:
S1. Obtain the catadioptric omnidirectional image of the building and the corresponding remote-sensing image.
The catadioptric omnidirectional image of the building is captured with a catadioptric omnidirectional imaging system, and the corresponding remote-sensing image is downloaded from Google Earth;
S2. Register the catadioptric omnidirectional image from S1 with the remote-sensing image.
S2.1 According to the catadioptric omnidirectional imaging principle, taking the focus O of the catadioptric surface as a virtual viewpoint, apply a projective transform to the catadioptric omnidirectional image to convert it into a planar perspective image, and obtain the coordinates of the corresponding viewpoint on the planar perspective image at the same time;
A rectangular coordinate system is set up with the circle centre O' at the bottom of the reflecting surface as origin and the camera optical axis as the Z axis; P denotes the height of the paraboloid, s the distance from the virtual viewpoint to the parabola vertex, t the distance from the projection plane to the parabola vertex, Q the distance from the camera optical centre to the parabola vertex, and f the focal length of the camera lens;
The reflecting surface is a paraboloid of revolution whose cross-section is the parabola y = x²/a − P, where a describes the size of the parabola's opening and is obtained directly from the catadioptric omnidirectional imaging system;
Take an arbitrary point B2 on the catadioptric omnidirectional image; connect B2 with the camera optical centre F and extend the line to intersect the reflecting surface at a point B3; connect the virtual viewpoint O with B3 and extend the line to meet the horizontal projection imaging plane at a point B1; B1 is the pixel corresponding to B2 after projection;
Let B1 have coordinates (x1, y1) and B2 have coordinates (x2, y2); their relation is:
x2 = A·C/(A² + E²);  y2 = E·C/(A² + E²)   (Formula 1)
where A, C and E are intermediate quantities determined by the imaging geometry;
Formula 1 realizes the mutual conversion between the planar perspective image and the catadioptric omnidirectional image;
The centre of the catadioptric image is its intersection with the Z axis; projected onto the planar perspective image it becomes the intersection of the Z axis with that image, which is the viewpoint of the planar perspective image, denoted O2;
S2.2 Register the converted planar perspective image with the remote-sensing image.
Using the classical SIFT-feature registration method, the point in the remote-sensing image corresponding to the viewpoint of the planar perspective image is obtained; it is the viewpoint of the remote-sensing image;
S3. Use the registration information to obtain the lower boundary of the building.
S3.1 Transform the remote-sensing image into a catadioptric omnidirectional image using the viewpoint information obtained by registration;
The remote-sensing image is treated as a planar object photographed by a catadioptric omnidirectional imaging system placed at the viewpoint position, yielding a new catadioptric omnidirectional image; this catadioptric imaging of the remote-sensing image is the inverse of the process in S2.1, with the remote-sensing image now playing the role of the projection plane. Take an arbitrary point B1' on the remote-sensing image; connect B1' with the virtual viewpoint O to intersect the reflecting surface at a point B3'; connect B3' with the camera optical centre F and extend the line to meet the catadioptric image at a point B2'; B2' is the pixel corresponding to B1' after the transform; the conversion of the remote-sensing image into a catadioptric image can still be expressed with Formula 1;
S3.2 Compute the pixelwise difference between the catadioptric image obtained from the remote-sensing image and the catadioptric omnidirectional image from S1 to obtain a difference image, and determine the lower boundary of the building from the values of the difference image;
A fixed threshold is set for the difference image; the continuous region whose pixel values are below the threshold is taken as the ground region, and the outer edge of the ground region is the lower boundary line of the building;
S4. The upper boundary of the building is obtained
S4.1 A two-class classifier distinguishing sky from non-sky is trained in advance: 100 catadioptric images taken at arbitrary locations are selected and segmented, and all segmented regions are labelled as either sky or non-sky; with colour, position, percentage of the image occupied, and texture as classification features, a linear classifier is trained;
S4.2 The catadioptric image obtained in S1 is segmented, each segmented region is classified with the trained linear classifier, and the upper borders of the regions classified as sky are recorded as candidate sky boundaries;
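One way to realise the region features and linear classifier of S4.1/S4.2. The exact feature layout (a 7-dimensional vector) and the use of scikit-learn's `LinearSVC` in place of an unspecified linear classifier are assumptions of this sketch:

```python
import numpy as np
from sklearn.svm import LinearSVC

def region_features(img, region_mask):
    """Features of one segmented region: mean colour (3), normalised
    centroid (2), fraction of image area (1), and a texture proxy given
    by the intensity standard deviation (1)."""
    ys, xs = np.nonzero(region_mask)
    h, w = region_mask.shape
    colour = img[region_mask].mean(axis=0) / 255.0        # 3 values
    centroid = np.array([ys.mean() / h, xs.mean() / w])   # 2 values
    area = np.array([region_mask.mean()])                 # 1 value
    texture = np.array([img[region_mask].std() / 255.0])  # 1 value
    return np.concatenate([colour, centroid, area, texture])

# Training: X is the (n_regions, 7) feature matrix gathered from the 100
# labelled reference images, y holds 1 for sky and 0 for non-sky regions.
sky_classifier = LinearSVC()
# sky_classifier.fit(X, y) at training time;
# sky_classifier.predict(region_features(img, mask)[None, :]) at run time.
```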
S4.3 Combining the building lower boundary obtained in S3, only the sky boundary directly outside the building lower boundary is considered to be where the sky meets the building; a selection is made among the candidate sky boundaries to obtain the final upper boundary of the building;
S5. The height of the building is calculated from the upper and lower boundary information
S5.1 The horizontal information of the building is calculated from its upper and lower boundary information
After the upper and lower boundaries of the building have been extracted, O' is the image point corresponding to the camera photocentre; a ray through O' intersects the building lower boundary line and the sky boundary line at N2 and N1 respectively, recorded as the lower and upper boundary points in the catadioptric image, and the actual building lower and upper boundary points corresponding to them are M2 and M1 respectively;
A rectangular coordinate system is set up with the abscissa parallel to the ground and the ordinate perpendicular to the ground. The vertical section of the reflecting surface is a parabola with equation y = ax². F denotes the camera photocentre and f the focal length; the distance LT from the camera photocentre to the parabola vertex and the height HT of the reflecting surface above the ground are both known. The line through point F and the lower boundary point N2 is the reflected ray SL2, which intersects the reflecting surface at a point G2; the corresponding incident ray SL1 is the line through the building lower boundary point M2 in real space and G2. The height of the building equals the ordinate of the upper boundary point M1 plus the height HT of the reflecting surface, so only the ordinate of point M1 needs to be found;
On the reflected ray SL2, the coordinates of point F are (0, −LT) and the coordinates (i, j) of the lower boundary point N2 are known, so the equation of the reflected ray SL2 can be calculated as y = m·x − LT, with slope m = (j + LT)/i;
Combining the equation of the line SL2, y = m·x − LT, with the parabola equation y = ax² gives a·x² − m·x + LT = 0, from which the coordinates (x_G2, y_G2) of the intersection point G2 are found: x_G2 = (m − √(m² − 4a·LT))/(2a), taking the root that lies on the mirror surface, and y_G2 = a·x_G2²;
The tangent slope of the parabola at the point G2 is obtained: tan θ = 2a·x_G2;
According to the law of reflection, the angle of incidence equals the angle of reflection, so the slope of the incident ray SL1 can be derived as k = tan(2θ − α), where tan α = m is the slope of the reflected ray SL2. From this slope and the point G2 the equation of the incident ray SL1 is obtained as y = kx + (y_G2 − k·x_G2); substituting y = −HT into this line equation gives the abscissa of the lower boundary point M2, denoted here by WT: WT = (k·x_G2 − y_G2 − HT)/k;
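Gathered into one place, the S5.1 computation reads as below. The original equations were printed as images and are reconstructed here from the surrounding definitions; the final line solves the y = −HT substitution explicitly, under the assumption that the building lies on the positive-x side of the mirror axis:

```latex
\begin{aligned}
SL_2 &:\; y = m x - LT, \qquad m = \tfrac{j + LT}{i},\\
G_2  &:\; a x^2 - m x + LT = 0 \;\Rightarrow\;
        x_{G2} = \tfrac{m - \sqrt{m^2 - 4a\,LT}}{2a}, \qquad y_{G2} = a x_{G2}^2,\\
     &\;\tan\theta = 2a\,x_{G2}, \qquad k = \tan(2\theta - \alpha)
        \ \text{with}\ \tan\alpha = m,\\
SL_1 &:\; y = k x + (y_{G2} - k x_{G2}) \;\Rightarrow\;
        WT = \tfrac{k\,x_{G2} - y_{G2} - HT}{k} \quad\text{at } y = -HT.
\end{aligned}
```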
S5.2 The elevation information of the building is calculated from the upper and lower boundary information and the horizontal information of the building;
The vertical section of the reflecting surface is a parabola with equation y = ax²; the distance LT from the camera photocentre F to the parabola vertex and the height HT of the reflecting surface above the ground are known. The line through point F and the upper boundary point N1 is the reflected ray SL2', which intersects the reflecting surface at a point G1; the corresponding incident ray SL1' is the line through the building upper boundary point M1 in real space and G1. WT, the horizontal coordinate distance of the building, has been obtained in S5.1;
The coordinates of the upper boundary point N1 are known and denoted (i', j'); the coordinates of point F are (0, −LT), so the equation of the reflected ray SL2' can be calculated as y = m'·x − LT, with slope m' = (j' + LT)/i';
Combining the equation of the line SL2', y = m'·x − LT, with the parabola equation y = ax² gives a·x² − m'·x + LT = 0, from which the coordinates (x_G1, y_G1) of the intersection point G1 are found: x_G1 = (m' − √(m'² − 4a·LT))/(2a), y_G1 = a·x_G1²;
The tangent slope of the parabola at the point G1 is obtained: tan θ' = 2a·x_G1;
According to the law of reflection, the angle of incidence equals the angle of reflection, so the slope of the incident ray SL1' can be derived as k' = tan(2θ' − α'), where tan α' = m' is the slope of the reflected ray SL2'. From this slope and the point G1 the equation of the incident ray SL1' is obtained as y = k'x + (y_G1 − k'·x_G1); substituting x = WT into this line equation gives the ordinate of the upper boundary point M1: y = k'·WT + (y_G1 − k'·x_G1). Since the height of the building equals the ordinate of the upper boundary point M1 plus the height HT of the reflecting surface, the final building height is k'·WT + (y_G1 − k'·x_G1) + HT.
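The S5 geometry can be checked numerically with the short sketch below. The function names are illustrative; `m_lower` and `m_upper` stand for the slopes of the reflected rays through N2 and N1, the building is assumed to lie on the positive-x side of the mirror axis, and the ground plane is y = −HT, as in the derivation above:

```python
import math

def mirror_intersection(m, a, LT):
    """First intersection of the reflected ray y = m*x - LT (through the
    photocentre F = (0, -LT)) with the parabolic mirror y = a*x**2."""
    disc = m * m - 4.0 * a * LT             # discriminant of a*x^2 - m*x + LT = 0
    x = (m - math.sqrt(disc)) / (2.0 * a)   # root nearer the mirror axis
    return x, a * x * x

def incident_slope(m, a, xg):
    """Law of reflection: the incident line is the reflected line mirrored
    about the mirror tangent, whose slope at x = xg is 2*a*xg."""
    theta = math.atan(2.0 * a * xg)   # tangent angle
    alpha = math.atan(m)              # reflected-ray angle
    return math.tan(2.0 * theta - alpha)

def building_height(m_lower, m_upper, a, LT, HT):
    """Building height from the reflected-ray slopes of the lower (N2) and
    upper (N1) boundary points."""
    # lower boundary: incident ray meets the ground plane y = -HT at the wall base
    xg2, yg2 = mirror_intersection(m_lower, a, LT)
    k = incident_slope(m_lower, a, xg2)
    wt = xg2 + (-HT - yg2) / k            # abscissa WT of the base point M2
    # upper boundary: evaluate its incident ray on the same vertical x = WT
    xg1, yg1 = mirror_intersection(m_upper, a, LT)
    k2 = incident_slope(m_upper, a, xg1)
    y_top = k2 * (wt - xg1) + yg1         # ordinate of the top point M1
    return y_top + HT                     # add mirror height above the ground
```

The expression `k2 * (wt - xg1) + yg1` is algebraically identical to the text's k'·WT + (y_G1 − k'·x_G1).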
CN201510657481.8A 2015-10-13 2015-10-13 Building elevation extraction method based on image registration Pending CN105654462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510657481.8A CN105654462A (en) 2015-10-13 2015-10-13 Building elevation extraction method based on image registration

Publications (1)

Publication Number Publication Date
CN105654462A true CN105654462A (en) 2016-06-08

Family

ID=56482086

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080158226A1 (en) * 2006-12-19 2008-07-03 California Institute Of Technology Imaging model and apparatus
CN104599281A (en) * 2015-02-03 2015-05-06 中国人民解放军国防科学技术大学 Panoramic image and remote sensing image registration method based on horizontal line orientation consistency

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CHANG C C et al.: "Library for Support Vector", http://www.csie.ntu.edu.tw/cjlin/libsvm *
HOU Yuanbin et al.: Neural Networks, 31 August 2007 *
ZHANG Tingting: Introduction to Remote Sensing Technology, 31 July 2011 *
XU Wei et al.: "Automatic building elevation extraction by fusing catadioptric panoramic and remote sensing images", Journal of National University of Defense Technology *
YANG Mingfan et al.: "Semi-automatic fast registration of omnidirectional and satellite remote sensing images", Computer Engineering and Applications *
WANG Yuanyuan et al.: "Building height extraction algorithm based on registration of catadioptric omnidirectional and remote sensing images", Journal of Computer Applications *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053409A (en) * 2017-12-11 2018-05-18 中南大学 Automatic construction method and system for remote sensing image segmentation reference library
CN108062793A (en) * 2017-12-28 2018-05-22 百度在线网络技术(北京)有限公司 Processing method, device, equipment and storage medium at the top of object based on elevation
CN108062793B (en) * 2017-12-28 2021-06-01 百度在线网络技术(北京)有限公司 Object top processing method, device, equipment and storage medium based on elevation
CN111666910A (en) * 2020-06-12 2020-09-15 北京博能科技股份有限公司 Airport clearance area obstacle detection method and device and electronic product
CN111666910B (en) * 2020-06-12 2024-05-17 北京博能科技股份有限公司 Airport clearance area obstacle detection method and device and electronic product
CN113487634A (en) * 2021-06-11 2021-10-08 中国联合网络通信集团有限公司 Method and device for correlating height and area of building
CN113487634B (en) * 2021-06-11 2023-06-30 中国联合网络通信集团有限公司 Method and device for associating building height and area

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160608