CN108182722A - True orthophoto generation method with three-dimensional object edge optimization - Google Patents
True orthophoto generation method with three-dimensional object edge optimization
- Publication number
- CN108182722A (application number CN201710623219.0A)
- Authority
- CN
- China
- Prior art keywords
- point
- line segment
- dimensional
- line
- dimension object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a true orthophoto generation method with three-dimensional object edge optimization. The steps are: 1) perform point-feature processing on the images containing a three-dimensional object to generate a dense point cloud; 2) extract the edge lines of the object from multiple images containing it, match the homonymous lines, and then reconstruct the matched lines in 3D under the coplanarity condition, obtaining the 3D line of the object's edge; 3) choose two endpoints on the 3D line to obtain a 3D line segment, and discretize the segment into 3D points; 4) build a triangulation from the points of the cloud obtained in step 1) together with the points obtained in step 3), generate a digital surface model of the imagery containing the object, and perform orthorectification based on that model, obtaining a true orthophoto with optimized object-edge quality. The invention fills in the point cloud that is missing at object edges, thereby eliminating jagged and distorted edges.
Description
Technical field
The present invention relates to the fields of photogrammetry and three-dimensional imaging, and in particular to a true orthophoto generation method with three-dimensional object edge optimization.
Background art
With the rapid development of the economy and society, digital orthoimagery has become an important part of photogrammetric spatial data. In photogrammetric 3D digital imaging, three-dimensional objects have clearly become the main research subject. True orthorectification based on the digital surface model (Digital Surface Model, DSM) has therefore become a focus of recent research. Because it takes the 3D shape and elevation information of objects into account, it can correct both tilt and relief displacement, yielding a true orthophoto.
The production of a true orthophoto is, however, considerably more complex than that of a conventional orthophoto. True orthophoto products commonly suffer from the following quality defects: smearing ("garlands"), distortion and dislocation, ghosting, jagged edges, and so on. Many factors influence quality, including the quality of the preliminary dense point cloud and DSM and subsequent processing steps such as seam-line extraction, dodging (radiometric balancing), occlusion filling, shadow removal, and edge refinement. When truly orthorectifying imagery, obtaining an accurate DSM is the critical first step. The DSM is generally derived from a dense point cloud, and some methods improve dense point cloud quality from both the laser-scanning and the photogrammetric-matching sides. After the 3D point cloud is obtained, reconstructing a surface mesh from scattered points has also been widely studied, with considerable progress. Once the above problems are solved for tall objects, occlusions and shadows cast on shorter nearby objects remain an important factor in orthophoto quality; automatic detection of occluded and shadowed regions has therefore become a key issue in true orthorectification. In contrast, the jagged distortion that appears at the edges of regular objects in true orthophotos has received little study. Many production units simply retouch the jagged areas in Photoshop after processing, purely for cosmetic effect.
The generation of edge jagging is closely related to unevenness, local loss, and vertical dislocation of the edges of the reconstructed 3D point cloud. Uneven or locally missing points cause the triangulation to be incomplete there, and dislocation leaves the edge points off a common line. In conventional true orthorectification, these abnormal points cause the object-edge model represented by the triangulated irregular network (Triangulated Irregular Network, TIN) to take on a "sawtooth" shape, so the image projects incorrectly during rectification and the edges appear jagged. When the irregularity of this jagging increases, it shows as visible "distortion". If a stretch of dense, uniform points can be filled in where the cloud is missing at the edge and these points participate in the TIN surface reconstruction, the edges will be projected correctly during orthorectification, weakening the "sawtooth" and "distortion" effects.
Summary of the invention
The object of the present invention is to provide a method that addresses the jagged edge distortion occurring in conventional true orthophoto production by extracting and refining three-dimensional object edges, thereby improving true orthophoto quality.
To achieve the above object, the present invention adopts the following technical scheme:
A true orthophoto generation method with three-dimensional object edge optimization comprises the following steps:
1) based on conventional photogrammetric methods, perform point-feature processing on the captured images of the three-dimensional object to generate a dense point cloud, then filter and thin the dense cloud to obtain a clean point cloud;
2) extract the object edge lines from multiple images, match the homonymous lines, and reconstruct the 3D line;
3) choose two endpoints on the 3D line to obtain a 3D line segment, and discretize the segment into 3D points;
4) build a triangulation from the points of the cloud obtained in step 1) together with the points obtained by discretizing the line segment, generate an improved DSM, and perform orthorectification, obtaining a true orthophoto free of jagged, distorted edges.
In step 1), the dense point cloud is generated as follows:
a) extract feature points from the multiple images and match the homonymous points, obtaining a sparse object-space point cloud;
b) run a bundle adjustment on the observations of a), obtaining accurate interior and exterior camera orientations;
c) using the high-precision camera poses (i.e. the interior and exterior orientations) provided by b), densely match the images, obtaining a dense 3D point cloud;
d) filter and thin the dense cloud, obtaining a clean dense point cloud.
In step 2), the 3D line is reconstructed as follows:
a) using a line-extraction algorithm, extract the edges of the three-dimensional object from two or more images with overlap (the overlapping area must contain the object edge);
b) match the homonymous lines across the extracted images, obtaining matched lines;
c) reconstruct the matched lines in 3D under the coplanarity condition, obtaining the 3D line representation of the object edge.
In step 3), the 3D segment is discretized into points as follows:
a) first determine the two endpoints of the segment, obtaining its length;
b) set the desired number of discrete points and sample the segment, obtaining the 3D point cloud at the object edge.
In step 3), the 3D line segment is obtained as follows: for two images that both contain the object edge segment, select in the first image the segment $l_1$ with first and last image points $x_1^{(1)}$ and $x_2^{(1)}$, and reconstruct from it a 3D segment with endpoints $X_1^{(1)}$ and $X_2^{(1)}$, i.e. $L_1 = \overline{X_1^{(1)}X_2^{(1)}}$; select in the second image the segment $l_2$ with first and last image points $x_1^{(2)}$ and $x_2^{(2)}$, and reconstruct a 3D segment with endpoints $X_1^{(2)}$ and $X_2^{(2)}$, i.e. $L_2 = \overline{X_1^{(2)}X_2^{(2)}}$. Here $l_1$ and $l_2$ are homonymous segments, and $L_1$, $L_2$ are collinear segments on the 3D line $L$. From the four endpoints $X_1^{(1)}$, $X_2^{(1)}$, $X_1^{(2)}$ and $X_2^{(2)}$, choose the two farthest apart as the endpoints of the 3D segment, obtaining the 3D line segment.
In step 4), the improved DSM is generated and orthorectification performed as follows:
a) combine the 3D points obtained in step 1) with the points obtained by discretizing the line segment and, following the Delaunay triangulation construction principle, build the triangulation, generating a DSM that includes the object edge features;
b) based on the DSM of a), apply a true orthorectification algorithm, generating a true orthophoto free of jagged distortion.
By adopting the above technical scheme, the present invention has the following advantages:
1. Because conventional true orthorectification uses only point features, the point cloud at object edges is often missing or inaccurate, so jagged and, in severe cases, distorted edges appear after rectification. Line-segment matching fills in the point cloud missing at the object edges, eliminating these jagging and distortion effects.
2. Because the object edges are reconstructed in 3D from line segments, the 3D edges of the object are obtained; when building a digital 3D model, the outline of the object can likewise be refined and completed, making the model more visually pleasing.
Description of the drawings
Fig. 1 is a flow chart of the conventional true orthophoto production process;
Fig. 2 is a schematic diagram of obtaining height values of regular grid points by TIN interpolation;
Fig. 3 is a schematic diagram of the triangulation construction for matched line segments;
Fig. 4 is the orthorectification flow chart with added three-dimensional line segments;
Fig. 5 is a schematic diagram of three-dimensional line reconstruction.
Specific embodiment
The present invention is described in detail below with reference to the drawings and embodiments. The conventional true orthophoto generation method is included in the embodiment so that its results can be compared with those of the newly proposed method.
A true orthophoto generation method with three-dimensional object edge optimization according to the present invention comprises the following steps:
1. Perform true orthorectification using conventional photogrammetric methods to generate a true orthophoto, comprising the following steps, as shown in Fig. 1:
1) extract and match feature points using multi-view image matching;
2) run aerial triangulation on the homonymous point data obtained in 1), optimizing the interior and exterior camera orientations and the object-space point coordinates;
3) based on the optimized data from 2), perform dense matching, obtaining a dense point cloud;
4) filter and thin the dense point cloud, and build the triangulation.
A dense point cloud obtained by multi-view matching generally includes many noise points that do not belong to real ground objects; these must be removed in advance, or the subsequent triangulation will be affected. In addition, photogrammetrically reconstructed point clouds are often very large, easily reaching millions or tens of millions of points; feeding them directly into triangulation would make the computation prohibitively expensive, so the dense cloud must be thinned, allowing dense regions with little geometric variation to be approximated by a small number of points. Here, obvious noise points are first removed with an outlier condition, the cloud is then simplified with a TIN-slope-based thinning algorithm, and finally the network is formed by region growing.
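The outlier-removal step above can be sketched with a simple statistical criterion. This is only an illustrative stand-in: the patent names an outlier condition and a TIN-slope thinning algorithm without spelling them out, so the k-nearest-neighbour rule and all names below are assumptions.

```python
def remove_outliers(points, k=2, factor=2.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    factor times the global mean of that statistic (illustrative criterion)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    means = []
    for i, p in enumerate(points):
        d = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        means.append(sum(d[:k]) / k)
    global_mean = sum(means) / len(means)
    return [p for p, m in zip(points, means) if m <= factor * global_mean]

cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (50, 50, 50)]
clean = remove_outliers(cloud)
print(len(clean))  # the isolated noise point is removed
```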
5) From the triangulation generated in step 4), derive the DSM model and perform true orthorectification according to the collinearity equations, obtaining the true orthophoto.
True orthorectification uses the differential rectification technique of photogrammetry: based on a DSM built from the photogrammetric point cloud, the geometric deformation of the raw image is removed and the survey area is resampled pixel by pixel, generating an image free of relief displacement. The basic principle is to establish the projection relation between 2D image coordinates and 3D object coordinates, which can be expressed with the homogeneous imaging equation of computer vision:

$[x_1, x_2, x_3]^T = K R \, [\,I \mid -T\,]\, [X_1, X_2, X_3, X_4]^T \qquad (1)$

where $[x_1, x_2, x_3]^T$ and $[X_1, X_2, X_3, X_4]^T$ are the homogeneous vectors of the 2D image point and the 3D object point respectively, $K$ is the camera intrinsic matrix, $R$ and $T$ are the rotation matrix and the translation (projection-centre) vector, and $I$ is the identity matrix.
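The imaging equation can be sketched in code. The matrices K, R and centre T below are illustrative values, not taken from the patent:

```python
# Minimal sketch of the homogeneous imaging equation x = K R [I | -T] X
# (formula (1)): project an object-space point to pixel coordinates.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

K = [[1000.0, 0.0, 500.0],   # fx, skew, cx (illustrative)
     [0.0, 1000.0, 400.0],   # fy, cy
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0],        # identity rotation for simplicity
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
T = [0.0, 0.0, 0.0]          # projection centre at the world origin

def project(X):
    """Project object-space point X (3-vector) to pixel coordinates (u, v)."""
    Xc = matvec(R, [X[i] - T[i] for i in range(3)])  # camera frame: R(X - T)
    x = matvec(K, Xc)                                # homogeneous image point
    return x[0] / x[2], x[1] / x[2]

u, v = project([1.0, 2.0, 10.0])
print(u, v)  # 600.0 600.0
```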
Differential rectification usually projects the DSM, expressed as a triangulated network, onto a regular 2D grid of the chosen resolution in object coordinates, as shown in Figure 2. Each grid point is back-projected into the image according to formula (1) to find the corresponding image point, and that point's colour is assigned to the grid point, generating the rectified image. Interpolation is required to resample from the DSM to the regular grid.
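The TIN-to-grid interpolation of Fig. 2 can be sketched as barycentric interpolation of a grid point's height inside its containing triangle; the function name and sample triangle are illustrative assumptions:

```python
def tin_height(p, tri):
    """Barycentric interpolation of z at 2D point p inside triangle tri
    (three (x, y, z) vertices)."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * z1 + w2 * z2 + w3 * z3

# Grid point at (0.25, 0.25) inside a triangle with heights 10, 20, 30
tri = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
h = tin_height((0.25, 0.25), tri)
print(h)  # 17.5
```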
6) Post-processing of the orthophoto.
During rectification of an individual image, occluded areas must be detected and the occluded information compensated; mosaicking and colour-consistency processing then yield the complete orthophoto of the survey area.
2. Extract and match the object edge lines, and reconstruct the 3D line.
Three-dimensional line reconstruction is based on line matching, i.e. it is built from homonymous lines matched across multiple images. To guarantee reconstruction accuracy, line detection on each individual image must reach a certain accuracy and mismatches must be reduced. Once matched lines are obtained (i.e., for each homonymous line the pixel coordinates of its first and last endpoints are known on each image), the 3D line can be solved by the following principle (illustrated with two images).
As shown in Fig. 3, $L$ is the 3D line to be reconstructed. $t_{m1}$ and $t_{m2}$ are the projection centres of camera 1 and camera 2, with world coordinates $T_1$ and $T_2$. $x_1^{(1)}$ and $x_2^{(1)}$ are the two endpoints of the matched segment in image 1, and $X_1^{(1)}$, $X_2^{(1)}$ are the corresponding 3D points; $x_1^{(2)}$ and $x_2^{(2)}$ are the endpoints of the matched segment in image 2, with $X_1^{(2)}$, $X_2^{(2)}$ their corresponding 3D points. $t_{m1}$ and $L$ span a plane $\Omega_1$ with normal vector $n_1$; $t_{m2}$ and $L$ span another plane $\Omega_2$ with normal vector $n_2$.
As is well known, a direction vector together with a point on the line determines a 3D line. The equation of the 3D line is derived below.
Taking camera 1 as an example, suppose a 2D line $l$ passes through two image points $(x_1, x_2)$ and the 3D line $L$ passes through the two corresponding 3D points $(X_1, X_2)$. Here $x_i$ and $X_i$ are written in homogeneous coordinates, $(tu_i, tv_i, t)$ and $(X_i, Y_i, Z_i, 1)$ respectively, where $t$ is a scale factor and $u_i$, $v_i$ are the pixel coordinates of the 2D image point. With the projection centre of camera 1 taken as the coordinate origin, the correspondence between 2D and 3D points gives

$x_1 = KRX_1, \qquad x_2 = KRX_2 \qquad (2)$

where $K$ and $R$ are the calibration matrix and rotation matrix of the camera. Taking the cross product of the respective sides of the two equations in (2) yields
$x_1 \times x_2 = (KRX_1) \times (KRX_2) = (KR)^{*}\,(X_1 \times X_2) = \det(KR)\,(KR)^{-T}(X_1 \times X_2) \qquad (3)$
With camera $t_{m1}$ as the coordinate origin, $X_1 \times X_2$ in formula (3) is the normal vector of the plane passing through $X_1$, $X_2$ and the projection centre, i.e. $X_1 \times X_2 = n_1$; at the same time, a 2D line can be expressed as the cross product of two of its points, i.e. $l_1 = x_1 \times x_2$. Substituting into (3) gives the expression of the 2D line:

$l_1 = \det(KR)\,K^{-T}R\,n_1 \qquad (4)$

Since $\det(KR)$ is a constant with no effect on the line's expression, it can be eliminated, giving

$l_1 = K^{-T}R\,n_1 \qquad (5)$
Hence the normal vectors of planes $\Omega_1$ and $\Omega_2$ are $n_1 = R_1^{T}K^{T}l_1$ and $n_2 = R_2^{T}K^{T}l_2$. These must be normalized; after normalization they are denoted $N_1$ and $N_2$.
Because $L$ is the intersection of $\Omega_1$ and $\Omega_2$, its direction vector is $S_L = N_1 \times N_2$.
It remains to determine a point $P_0(X_0, Y_0, Z_0)$ on the line.
The vectors from $P_0$ to the two projection centres $t_{m1}$ and $t_{m2}$ are perpendicular to $N_1$ and $N_2$ respectively, so

$(P_0 - T_1)\cdot N_1 = 0 \qquad (6)$
$(P_0 - T_2)\cdot N_2 = 0 \qquad (7)$

Setting $X_0 = 0$ and writing $N_1 = (a_1, b_1, c_1)$, $N_2 = (a_2, b_2, c_2)$ leaves the 2×2 linear system

$b_1 Y_0 + c_1 Z_0 = N_1\cdot T_1, \qquad b_2 Y_0 + c_2 Z_0 = N_2\cdot T_2 \qquad (8)$

from which $(Y_0, Z_0)$ is solved, so the point $P_0$ has coordinates $(0, Y_0, Z_0)$.
The equation of the 3D line $L$ can then be expressed as

$P = P_0 + \lambda\,S_L \qquad (9)$

where $\lambda$ is a scale factor; by taking different values of $\lambda$, any point on the line can be represented.
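The derivation above (direction from the two plane normals, then a point with $X_0 = 0$) can be sketched numerically. The normals and projection centres below are made-up values chosen so the geometry is obvious; the patent defines no such API:

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

# Unit normals of the two viewing planes (assumed already computed via n = R^T K^T l)
N1 = normalize([0.0, 0.0, 1.0])   # plane z = 0 through T1
N2 = normalize([0.0, 1.0, 0.0])   # plane y = 0 through T2
T1 = [0.0, 0.0, 0.0]
T2 = [5.0, 0.0, 0.0]

S_L = cross(N1, N2)               # direction of L: intersection of the planes

# Fix X0 = 0 and solve the 2x2 system b_i*Y0 + c_i*Z0 = N_i . T_i for Y0, Z0
a = [[N1[1], N1[2]], [N2[1], N2[2]]]
d = [dot(N1, T1), dot(N2, T2)]
det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
Y0 = (d[0] * a[1][1] - d[1] * a[0][1]) / det
Z0 = (a[0][0] * d[1] - a[1][0] * d[0]) / det
P0 = [0.0, Y0, Z0]
print(S_L, P0)  # the x-axis: direction (-1, 0, 0) through the origin
```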
3. Determine the two endpoints of the 3D line, obtaining the 3D line segment.
Since a flat object edge generally has a finite length, the 3D line reconstructed by the formulas above is an unbounded concept, whereas the edge of the 3D object projects onto each image as a 2D segment of finite length. The length of the 3D segment must therefore be determined from the first and last image points of the corresponding 2D segments on the respective images of cameras 1 and 2. As shown in Fig. 3, in the first image the matched segment has end points $x_1^{(1)}$ and $x_2^{(1)}$, and the reconstructed 3D segment with endpoints $X_1^{(1)}$ and $X_2^{(1)}$ can be written $L_1 = \overline{X_1^{(1)}X_2^{(1)}}$; likewise, the segment reconstructed from the second image is written $L_2 = \overline{X_1^{(2)}X_2^{(2)}}$. In image matching, the homonymous segments $l_1$ and $l_2$ are not necessarily of equal length and their endpoints are not necessarily corresponding image points, so the reconstructed segments $L_1$ and $L_2$ may intersect, be disjoint, or contain one another. The two endpoints that give the segment its maximum length must therefore be found (see Fig. 3).
According to formula (9), each 3D point corresponds to a different value of $\lambda$. The two endpoints of the required maximal segment are therefore determined by finding the maximum and minimum $\lambda$ over the endpoints $X_i^{(j)}$.
The computation is illustrated by solving for the $\lambda$ value corresponding to $X_1^{(1)}$. Subtracting $T_1$ from both sides of formula (9):

$P - T_1 = P_0 + \lambda\,S_L - T_1 \qquad (10)$

which, expressing the left side through the image point, becomes

$t\,(KR_1)^{-1}x_1^{(1)} = t\,(KR_1)^{-1}(u_1^{(1)}, v_1^{(1)}, 1)^T = P_0 + \lambda\,S_L - T_1 \qquad (11)$

Rearranged:

$S_L\,\lambda - (KR_1)^{-1}(u_1^{(1)}, v_1^{(1)}, 1)^T\,t = P_0 - T_1 \qquad (12)$

where the left side and the right side are both 3×1 matrices, and $\lambda$ and $t$ are the unknowns.
Writing $A = [\,S_L \;\; -(KR_1)^{-1}(u_1^{(1)}, v_1^{(1)}, 1)^T\,]$ and $b = P_0 - T_1 \qquad (13)$, the principle of least squares gives

$[\lambda \;\; t]^T = (A^TA)^{-1}A^Tb \qquad (14)$

from which $\lambda$ is solved. All $\lambda$ values are obtained in the same way; the minimum and maximum $\lambda$ give the coordinates of the first and last endpoints of the 3D segment.
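The least-squares solve of equations (12)-(14) can be sketched directly via the 2×2 normal equations; all numeric values are illustrative:

```python
def solve_lambda_t(S_L, d, b):
    """Solve S_L*lam - d*t = b in the least-squares sense, where
    A = [S_L | -d] is 3x2 and [lam t]^T = (A^T A)^{-1} A^T b (formula (14))."""
    A = [[S_L[i], -d[i]] for i in range(3)]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    lam = (Atb[0] * AtA[1][1] - Atb[1] * AtA[0][1]) / det
    t = (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det
    return lam, t

# Line direction and viewing-ray direction chosen so lam = 2, t = 3 exactly
S_L = [1.0, 0.0, 0.0]
d = [0.0, 1.0, 0.0]   # stand-in for (K R1)^{-1} (u, v, 1)^T
b = [2.0, -3.0, 0.0]  # = S_L*2 - d*3, i.e. P0 - T1
lam, t = solve_lambda_t(S_L, d, b)
print(lam, t)  # 2.0 3.0
```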
4. Discretize the 3D segment into 3D points.
A 3D segment is a line feature and cannot participate in triangulation directly, so the fixed-length segment must be discretized into discrete 3D points; these can then be triangulated together with the other 3D points to generate a new DSM, as in Fig. 4.
Determining the two endpoints of the 3D line and discretizing the segment into 3D points comprises the following steps:
1) first determine the two endpoints of the 3D segment, obtaining its length;
2) set the desired number of discrete points and sample the segment, obtaining the 3D point cloud at the object edge.
In step 1), the two endpoints and the length of the 3D segment are determined as follows (cf. Fig. 3):
a) For two images that both contain the object edge segment: in the first image, select the segment $l_1$ with first and last image points $x_1^{(1)}$ and $x_2^{(1)}$; the reconstructed 3D segment with endpoints $X_1^{(1)}$ and $X_2^{(1)}$ can be written $L_1 = \overline{X_1^{(1)}X_2^{(1)}}$. Likewise, from the second image select the segment $l_2$ with first and last image points $x_1^{(2)}$ and $x_2^{(2)}$; the reconstructed segment with endpoints $X_1^{(2)}$ and $X_2^{(2)}$ is $L_2 = \overline{X_1^{(2)}X_2^{(2)}}$. Here $l_1$ and $l_2$ are homonymous segments, and $L_1$, $L_2$ are segments on the 3D line $L$.
b) The two segments $L_1$ and $L_2$ obtained in step a) are collinear and both lie on $L$; from the four endpoints $X_1^{(1)}$, $X_2^{(1)}$, $X_1^{(2)}$ and $X_2^{(2)}$, choose the two farthest apart as the endpoints of the reconstructed 3D segment, and compute the segment's length.
In step 2), different sampling intervals can be set on the 3D segment obtained in step 3 to produce different numbers of discrete points; the density of the generated points may be chosen according to the actual situation. For $m$ sample points, the sampling interval is $\Delta t = (\lambda_{\max} - \lambda_{\min})/m$, and the coordinates of each discrete 3D point on the segment are obtained from formula (15):

$P_i = P_0 + (\lambda_{\min} + i\,\Delta t)\,S_L, \qquad i = 1, \dots, m \qquad (15)$
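Formula (15) translates to a short sampling routine; the values below are illustrative:

```python
def discretize_segment(P0, S_L, lam_min, lam_max, m):
    """Sample m points along the segment per formula (15):
    P_i = P0 + (lam_min + i*dt) * S_L, dt = (lam_max - lam_min)/m, i = 1..m."""
    dt = (lam_max - lam_min) / m
    return [[P0[k] + (lam_min + i * dt) * S_L[k] for k in range(3)]
            for i in range(1, m + 1)]

pts = discretize_segment([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 0.0, 10.0, 5)
print(pts)  # x-coordinates 2, 4, 6, 8, 10
```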
5. Build the triangulation from the existing points together with the discretized segment points, generate the improved DSM, and orthorectify again.
The discretized 3D points are specially marked and added to the existing 3D point cloud as an auxiliary point cloud, and the network is rebuilt, as in Fig. 4, under the following network-forming rules:
1) the basic triangulation criterion remains unchanged;
2) when the growth reaches a specially marked point, that point is never rejected, wherever it lies, and is triangulated together with the 3D points in its neighbourhood.
The newly built triangulation is interpolated onto a regular grid as the new DSM model, orthorectification is performed following the flow of Fig. 5, and a new true orthophoto is obtained.
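The network-forming rule (marked edge points are never rejected) can be sketched as a merge step feeding the triangulation; the thinning rule and function names are illustrative assumptions, not the patent's method:

```python
def merge_for_tin(dense_cloud, marked_edge_points, keep_every=2):
    """Ordinary points may be thinned (here: keep every k-th point as a toy
    stand-in for TIN-slope thinning); specially marked edge points from the
    line-segment discretisation always survive and join the triangulation."""
    thinned = [p for i, p in enumerate(dense_cloud) if i % keep_every == 0]
    return thinned + marked_edge_points

cloud = [(float(i), 0.0, 0.0) for i in range(6)]
edges = [(0.5, 1.0, 0.0), (1.5, 1.0, 0.0)]
tin_input = merge_for_tin(cloud, edges)
print(len(tin_input))  # 3 thinned points + 2 preserved edge points
```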
Claims (7)
1. A true orthophoto generation method with three-dimensional object edge optimization, comprising the steps of:
1) performing point-feature processing on images containing a three-dimensional object to generate a dense point cloud;
2) extracting the edge lines of the three-dimensional object from multiple images containing it and matching the homonymous lines, then reconstructing the matched lines in 3D under the coplanarity condition, obtaining the 3D line of the edge of the three-dimensional object;
3) choosing two endpoints on the 3D line to obtain a 3D line segment, and discretizing the segment into 3D points;
4) building a triangulation from the points of the cloud obtained in step 1) together with the points obtained in step 3), generating a digital surface model of the imagery containing the three-dimensional object, and performing orthorectification based on that digital surface model, obtaining a true orthophoto with optimized object-edge quality.
2. The method of claim 1, wherein the step of obtaining the 3D line segment comprises:
a) for two images that both contain the object edge segment: selecting in the first image the segment $l_1$ with first and last image points $x_1^{(1)}$ and $x_2^{(1)}$, and reconstructing from $l_1$ a 3D segment with endpoints $X_1^{(1)}$ and $X_2^{(1)}$, i.e. $L_1 = \overline{X_1^{(1)}X_2^{(1)}}$; selecting in the second image the segment $l_2$ with first and last image points $x_1^{(2)}$ and $x_2^{(2)}$, and reconstructing from $l_2$ a 3D segment with endpoints $X_1^{(2)}$ and $X_2^{(2)}$, i.e. $L_2 = \overline{X_1^{(2)}X_2^{(2)}}$; wherein $l_1$ and $l_2$ are homonymous segments, and $L_1$, $L_2$ are collinear segments on the 3D line $L$;
b) choosing from the four endpoints $X_1^{(1)}$, $X_2^{(1)}$, $X_1^{(2)}$ and $X_2^{(2)}$ the two farthest apart as the endpoints of the 3D segment, obtaining the 3D line segment.
3. The method of claim 1 or 2, wherein the step of discretizing the 3D line segment into 3D points comprises: a) first obtaining the length of the segment from its two endpoints; b) setting the desired number of discrete points and sampling the segment, obtaining the 3D point cloud of the edge of the three-dimensional object.
4. The method of claim 1, wherein in step 4) the point cloud obtained in step 1) is filtered and thinned, and the triangulation is then built from the processed points together with the points obtained in step 3), generating a digital surface model of the imagery containing the object edge features; orthorectification is then performed based on that digital surface model, obtaining the true orthophoto of the imagery containing the three-dimensional object.
5. The method of claim 1, wherein the triangulation is built according to the Delaunay triangulation construction principle.
6. The method of claim 1, wherein the dense point cloud is generated by: a) extracting feature points from the multiple images containing the three-dimensional object and matching the homonymous points, obtaining a sparse object-space point cloud; b) running a bundle adjustment on the observations of a), obtaining accurate interior and exterior camera orientations; c) densely matching the images using the high-precision camera poses provided by b), obtaining the dense point cloud.
7. The method of claim 1, wherein in step 2) a line-extraction algorithm is used to extract the edges of the three-dimensional object from each of two or more overlapping images containing it; the homonymous lines on the extracted images are then matched, obtaining the matched lines.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710623219.0A CN108182722B (en) | 2017-07-27 | 2017-07-27 | Real projective image generation method for three-dimensional object edge optimization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710623219.0A CN108182722B (en) | 2017-07-27 | 2017-07-27 | Real projective image generation method for three-dimensional object edge optimization |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108182722A true CN108182722A (en) | 2018-06-19 |
CN108182722B CN108182722B (en) | 2021-08-06 |
Family
ID=62545127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710623219.0A Active CN108182722B (en) | 2017-07-27 | 2017-07-27 | Real projective image generation method for three-dimensional object edge optimization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108182722B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109242862A (en) * | 2018-09-08 | 2019-01-18 | 西北工业大学 | A kind of real-time digital surface model generation method |
CN109978800A (en) * | 2019-04-23 | 2019-07-05 | 武汉惟景三维科技有限公司 | A kind of point cloud shadow data minimizing technology based on threshold value |
CN111159498A (en) * | 2019-12-31 | 2020-05-15 | 北京蛙鸣华清环保科技有限公司 | Data point thinning method and device and electronic equipment |
CN112419443A (en) * | 2020-12-09 | 2021-02-26 | 中煤航测遥感集团有限公司 | True ortho image generation method and device |
CN113593010A (en) * | 2021-07-12 | 2021-11-02 | 杭州思锐迪科技有限公司 | Correction method, electronic device, and storage medium |
CN115200556A (en) * | 2022-07-18 | 2022-10-18 | 华能澜沧江水电股份有限公司 | High-altitude mining area surveying and mapping method and device, and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009143986A1 (en) * | 2008-05-27 | 2009-12-03 | The Provost, Fellows And Scholars Of The College Of The Holy And Undivided Trinity Of Queen Elizabeth Near Dublin | Automated building outline detection |
CN103017739A (en) * | 2012-11-20 | 2013-04-03 | 武汉大学 | Manufacturing method of true digital ortho map (TDOM) based on light detection and ranging (LiDAR) point cloud and aerial image |
CN104778720A (en) * | 2015-05-07 | 2015-07-15 | 东南大学 | Rapid volume measurement method based on spatial invariant feature |
CN105466399A (en) * | 2016-01-11 | 2016-04-06 | 中测新图(北京)遥感技术有限责任公司 | Quick semi-global dense matching method and device |
- 2017-07-27: application CN201710623219.0A filed in China; granted as patent CN108182722B (status: Active)
Non-Patent Citations (1)
Title |
---|
AMHAR, F., et al.: "The generation of true orthophotos using a 3D building model in conjunction with a conventional DTM", International Archives of Photogrammetry and Remote Sensing * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109242862A (en) * | 2018-09-08 | 2019-01-18 | Northwestern Polytechnical University (西北工业大学) | Real-time digital surface model generation method |
CN109242862B (en) * | 2018-09-08 | 2021-06-11 | Northwestern Polytechnical University (西北工业大学) | Real-time digital surface model generation method |
CN109978800A (en) * | 2019-04-23 | 2019-07-05 | 武汉惟景三维科技有限公司 | Threshold-based point cloud shadow data removal method |
CN111159498A (en) * | 2019-12-31 | 2020-05-15 | 北京蛙鸣华清环保科技有限公司 | Data point thinning method and device and electronic equipment |
CN112419443A (en) * | 2020-12-09 | 2021-02-26 | 中煤航测遥感集团有限公司 | True orthophoto generation method and device |
CN113593010A (en) * | 2021-07-12 | 2021-11-02 | 杭州思锐迪科技有限公司 | Correction method, electronic device, and storage medium |
CN115200556A (en) * | 2022-07-18 | 2022-10-18 | 华能澜沧江水电股份有限公司 | High-altitude mining area surveying and mapping method and device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108182722B (en) | 2021-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108182722A (en) | True orthophoto generation method for three-dimensional object edge optimization | |
CN110363858B (en) | Three-dimensional face reconstruction method and system | |
CN110264567B (en) | Real-time three-dimensional modeling method based on mark points | |
CN104361628B (en) | Three-dimensional real-scene modeling based on aerial oblique photogrammetry | |
CN109242954B (en) | Multi-view three-dimensional human body reconstruction method based on template deformation | |
CN103021017B (en) | Three-dimensional scene rebuilding method based on GPU acceleration | |
CN103544711B (en) | Automatic registration method for remote sensing images | |
CN104021588B (en) | System and method for recovering three-dimensional true vehicle model in real time | |
CN105976426B (en) | Fast three-dimensional ground-feature model construction method | |
CN110728671A (en) | Dense reconstruction method of texture-free scene based on vision | |
TW201724026A (en) | Generating a merged, fused three-dimensional point cloud based on captured images of a scene | |
WO2017156905A1 (en) | Display method and system for converting two-dimensional image into multi-viewpoint image | |
CN107123156A (en) | Active light-source projection three-dimensional reconstruction method combined with binocular stereo vision | |
CN111629193A (en) | Live-action three-dimensional reconstruction method and system | |
CN102592124A (en) | Geometric correction method and device for text images, and binocular stereo vision system | |
CN106023230B (en) | Dense matching method suitable for deformed images | |
CN106875437A (en) | Key-frame extraction method for RGBD three-dimensional reconstruction | |
CN110176053B (en) | Large-scale live-action three-dimensional integral color homogenizing method | |
JP2016218694A (en) | Three-dimensional model generation device, three-dimensional model generation method, and program | |
CN107610215A (en) | High-precision multi-angle oral-cavity 3D digital imaging model construction method | |
US20090106000A1 (en) | Geospatial modeling system using void filling and related methods | |
JP4354708B2 (en) | Multi-view camera system | |
CN107958489B (en) | Curved surface reconstruction method and device | |
CN106251349B (en) | Dense matching method for SAR stereo images | |
CN108830921A (en) | Incidence-angle-based laser point cloud reflection intensity correction method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||