CN117994463A - Construction land mapping method and system - Google Patents

Construction land mapping method and system

Info

Publication number
CN117994463A
Authority
CN
China
Prior art keywords
data
point
coordinates
camera
mapping
Prior art date
Legal status
Granted
Application number
CN202410404271.7A
Other languages
Chinese (zh)
Other versions
CN117994463B (en)
Inventor
高兴康
杨志林
刘志坚
杨福光
Current Assignee
Yunnan Sanqian Technology Information Co ltd
Original Assignee
Yunnan Sanqian Technology Information Co ltd
Priority date
Filing date
Publication date
Application filed by Yunnan Sanqian Technology Information Co ltd filed Critical Yunnan Sanqian Technology Information Co ltd
Priority to CN202410404271.7A priority Critical patent/CN117994463B/en
Publication of CN117994463A publication Critical patent/CN117994463A/en
Application granted granted Critical
Publication of CN117994463B publication Critical patent/CN117994463B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a construction land mapping method and system comprising the following steps: S1: obtaining geographic coordinate data within the survey area, and obtaining datum points and the basic geographic environment; S2: performing aerial three-dimensional scanning with an unmanned aerial vehicle, and collecting accurate distance information on the ground and ground features within the survey area together with multi-view images; S3: generating a digital elevation model with a TIN algorithm from the data acquired in S1 and S2, and generating a digital orthophoto map DOM from the digital elevation model with an orthorectification algorithm; S4: identifying and mapping the position and shape of ground features within the survey area using the digital elevation model and the digital orthophoto map DOM; S5: calculating the land areas of the different plots within the construction land. By combining the parallax relations among images captured by multiple cameras with distortion correction, stereo matching and coordinate transformation, the invention accurately maps pixels of a two-dimensional image to their actual positions in three-dimensional space.

Description

Construction land mapping method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a construction land mapping method and system.
Background
With the rapid pace of urbanization and growing socioeconomic demands, the reasonable planning and efficient use of land resources have become particularly important. Construction land mapping refers to the detailed measurement and recording of the geographical environment of a planned area in order to provide the basic geographic information required for land use, urban planning and construction projects. Such mapping activities typically cover topographic, cadastral and engineering surveys so as to obtain accurate data on land shape, location, area, elevation and other features. Traditional mapping methods, such as total station measurement and leveling, rely on physical equipment whose accuracy is limited by the performance of the equipment and the skill of the operator; moreover, traditional methods generally provide only planar positional information and lack height and volume data, making it difficult to construct fine three-dimensional models.
Disclosure of Invention
The invention mainly aims to provide a construction land mapping method and system to solve the problem of imprecise three-dimensional model construction in the related art.
In order to achieve the above object, according to one aspect of the present invention, there is provided a construction land mapping method including the steps of:
S1: obtaining geographic coordinate data within the survey area, and obtaining datum points and the basic geographic environment;
S2: performing aerial three-dimensional scanning with an unmanned aerial vehicle, and collecting accurate distance information on the ground and ground features within the survey area together with multi-view images;
S3: generating a digital elevation model with a TIN algorithm from the data acquired in S1 and S2, and generating a digital orthophoto map DOM from the digital elevation model with an orthorectification algorithm;
S4: identifying and mapping the position and shape of ground features within the survey area using the digital elevation model and the digital orthophoto map DOM.
Further, in S1, the acquired data come from two sources: data collected in real time by a satellite positioning system and a pre-stored basic geographic information database.
Further, in S2, the unmanned aerial vehicle comprises an unmanned aerial vehicle body together with a high-precision ranging sensor and a multi-angle camera carried on the body.
Further, the specific steps of S2 are as follows:
S21: the unmanned aerial vehicle performs aerial photography along a preset flight path at a preset altitude;
S22: during flight, the high-precision ranging sensor continuously transmits signals and receives the reflected echoes to obtain ultrasonic mapping data, and the distances to the ground and to ground features are obtained by calculating the signal round-trip time;
S23: the multi-angle camera continuously captures ground-surface images at a preset time or distance interval, and the attitude parameters of the unmanned aerial vehicle at the moment each image is captured are recorded in real time.
Further, the specific steps of S3 are as follows:
S31: correcting and screening the acquired ultrasonic mapping data, and performing radiometric and geometric correction on the images captured from multiple angles;
S32: calculating the ground coordinates corresponding to each pixel from the parallax relation between the multi-view images, and generating dense point cloud data;
S33: removing noise points from the dense point cloud data with a filtering algorithm, and constructing a digital elevation model with a TIN algorithm;
S34: according to the digital elevation model, correcting the original image with an orthorectification algorithm into a digital orthophoto map DOM free of terrain-induced distortion, eliminating the perspective distortion caused by terrain relief;
S35: superimposing the digital elevation model and the digital orthophoto map DOM, and smoothing the model.
Further, in S32, the ground coordinates corresponding to each pixel are calculated from the parallax relation between the multi-view images as follows:
[x_n, y_n, 1]^T = K^(-1) [u, v, 1]^T
wherein K is the intrinsic matrix containing the camera focal length and principal point coordinates; (u, v) are the pixel coordinates; (x_n, y_n) are the normalized coordinates in the camera coordinate system;
Z = f·d / b
wherein Z is the distance along the Z axis; b is the disparity value; d is the camera baseline; f is the camera focal length;
X = Z·x_n + T_x,  Y = Z·y_n + T_y
wherein (X, Y) are the actual coordinates of the pixel; (x_n, y_n) are the normalized coordinates in the left camera coordinate system; T_x and T_y are respectively the translational components of the world coordinate system projected onto the X and Y axes in the left camera coordinate system;
P_w = R_i^(-1) (Z·K_i^(-1)·p~ - t_i)
wherein P_w is the three-dimensional world coordinate point corresponding to the pixel; R_i is the rotation matrix of the i-th camera; K_i is the intrinsic matrix of the i-th camera; K_i^(-1) is the inverse of the intrinsic matrix K_i; t_i is the translation vector of the i-th camera; (u, v) are the pixel coordinates on the image; p~ = [u, v, 1]^T is the three-dimensional vector formed by appending the homogeneous coordinate 1 to the two-dimensional pixel coordinates.
Further, since radial distortion and tangential distortion exist in actual camera imaging, distortion correction is applied to the pixel coordinates (u, v) as follows:
Radial distortion is introduced first:
x_r = (u - c_x)(1 + k_1·r^2 + k_2·r^4 + k_3·r^6),  y_r = (v - c_y)(1 + k_1·r^2 + k_2·r^4 + k_3·r^6)
wherein k_1, k_2 and k_3 are the radial distortion correction parameters; (u, v) are the pixel coordinates on the original, uncorrected image; (c_x, c_y) are the coordinates of the image center; (x_r, y_r) are the new coordinates after radial distortion correction; r is the distance from the pixel to the image center;
Tangential distortion is then introduced:
x_t = x_r + 2·p_1·(u - c_x)(v - c_y) + p_2·(r^2 + 2(u - c_x)^2),  y_t = y_r + p_1·(r^2 + 2(v - c_y)^2) + 2·p_2·(u - c_x)(v - c_y)
wherein p_1 and p_2 are the tangential distortion correction parameters; (x_t, y_t) are the pixel coordinates after radial and then tangential distortion correction.
Further, the specific steps of constructing the digital elevation model with the TIN algorithm in S33 are as follows:
S331: constructing a Delaunay triangulation from the denoised, high-quality point cloud data, ensuring that the circumcircle of any triangle contains no other points;
S332: for each triangle, calculating the elevation of each point inside the triangle by interpolation;
S333: generating the digital elevation model from the calculated elevations and the triangle information;
The elevation of each point in S332 is calculated as:
h = λ_A·h_A + λ_B·h_B + λ_C·h_C
wherein P(x, y) is the point to be solved inside triangle ABC; h is the elevation of the point P; h_A, h_B and h_C are the elevations of the triangle vertices A, B and C; λ_A, λ_B and λ_C are the barycentric weights of P, determined from the planimetric coordinates (x, y) of P and the vertex coordinates (x_A, y_A), (x_B, y_B) and (x_C, y_C).
Further, the specific steps of S34 are as follows:
S341: using the elevation information in the digital elevation model together with the camera intrinsic and extrinsic parameters, calculating the projection position on the image plane of each point inside the triangles of S332;
S342: the orthorectification algorithm corrects the original image into an orthophoto free of terrain distortion according to the projection position of each point.
The invention also provides a construction land mapping system that performs mapping with the above construction land mapping method and comprises:
a data acquisition unit for collecting unmanned aerial vehicle data and the terrain and ground-feature data within the survey area;
a data processing unit for preprocessing the acquired data to ensure data accuracy;
a data storage unit for storing the processed data synchronously and in real time;
a model generation unit for generating and superimposing the digital elevation model and the digital orthophoto map DOM from the stored data.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention combines the parallax relations among images captured by multiple cameras and, through distortion correction, stereo matching and coordinate transformation, accurately maps pixels of a two-dimensional image to their actual positions in three-dimensional space, thereby achieving high-precision measurement for construction land mapping. Compared with traditional approaches that ignore distortion, the method fully accounts for the actual physical characteristics of camera imaging and improves the accuracy and reliability of the measurements.
2. The invention integrates modern remote sensing, unmanned aerial vehicle photogrammetry and geographic information system technologies, optimizing the workflow from data acquisition through processing to application; this not only raises the degree of automation and the efficiency of the mapping work but also markedly improves the spatial resolution and accuracy of the mapping results.
Drawings
FIG. 1 is a flow chart of the overall method of the present invention;
FIG. 2 is a system block diagram of the invention as a whole.
Reference numerals:
1. data acquisition unit; 2. data processing unit; 3. data storage unit; 4. model generation unit.
Detailed Description
In order to further explain the technical means adopted by the present invention to achieve its intended purpose and the resulting effects, the specific implementation, structure, features and effects of the invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
Referring to FIG. 1, the present embodiment provides a construction land mapping method comprising the following steps:
S1: obtaining geographic coordinate data within the survey area, and obtaining datum points and the basic geographic environment;
S2: performing aerial three-dimensional scanning with an unmanned aerial vehicle, and collecting accurate distance information on the ground and ground features within the survey area together with multi-view images;
S3: generating a digital elevation model with a TIN algorithm from the data acquired in S1 and S2, and generating a digital orthophoto map DOM from the digital elevation model with an orthorectification algorithm;
S4: identifying and mapping the position and shape of ground features within the survey area using the digital elevation model and the digital orthophoto map DOM.
In S1, the acquired data come from two sources: data collected in real time by a satellite positioning system and a pre-stored basic geographic information database.
In S2, the unmanned aerial vehicle comprises an unmanned aerial vehicle body together with a high-precision ranging sensor and a multi-angle camera carried on the body.
The specific steps of S2 are as follows:
S21: the unmanned aerial vehicle performs aerial photography along a preset flight path at a preset altitude;
S22: during flight, the high-precision ranging sensor continuously transmits signals and receives the reflected echoes to obtain ultrasonic mapping data, and the distances to the ground and to ground features are obtained by calculating the signal round-trip time;
S23: the multi-angle camera continuously captures ground-surface images at a preset time or distance interval, and the attitude parameters of the unmanned aerial vehicle at the moment each image is captured are recorded in real time.
The specific steps of S3 are as follows:
S31: correcting and screening the acquired ultrasonic mapping data, and performing radiometric and geometric correction on the images captured from multiple angles;
S32: calculating the ground coordinates corresponding to each pixel from the parallax relation between the multi-view images, and generating dense point cloud data;
S33: removing noise points from the dense point cloud data with a filtering algorithm, and constructing a digital elevation model with a TIN algorithm;
S34: according to the digital elevation model, correcting the original image with an orthorectification algorithm into a digital orthophoto map DOM free of terrain-induced distortion, eliminating the perspective distortion caused by terrain relief;
S35: superimposing the digital elevation model and the digital orthophoto map DOM, and smoothing the model.
In S32, the ground coordinates corresponding to each pixel are calculated from the parallax relation between the multi-view images as follows:
[x_n, y_n, 1]^T = K^(-1) [u, v, 1]^T
wherein K is the intrinsic matrix containing the camera focal length and principal point coordinates; (u, v) are the pixel coordinates; (x_n, y_n) are the normalized coordinates in the camera coordinate system;
Z = f·d / b
wherein Z is the distance along the Z axis; b is the disparity value; d is the camera baseline; f is the camera focal length;
X = Z·x_n + T_x,  Y = Z·y_n + T_y
wherein (X, Y) are the actual coordinates of the pixel; (x_n, y_n) are the normalized coordinates in the left camera coordinate system; T_x and T_y are respectively the translational components of the world coordinate system projected onto the X and Y axes in the left camera coordinate system;
P_w = R_i^(-1) (Z·K_i^(-1)·p~ - t_i)
wherein P_w is the three-dimensional world coordinate point corresponding to the pixel; R_i is the rotation matrix of the i-th camera; K_i is the intrinsic matrix of the i-th camera; K_i^(-1) is the inverse of the intrinsic matrix K_i; t_i is the translation vector of the i-th camera; (u, v) are the pixel coordinates on the image; p~ = [u, v, 1]^T is the three-dimensional vector formed by appending the homogeneous coordinate 1 to the two-dimensional pixel coordinates.
Since radial distortion and tangential distortion exist in actual camera imaging, distortion correction is applied to the pixel coordinates (u, v) as follows:
Radial distortion is introduced first:
x_r = (u - c_x)(1 + k_1·r^2 + k_2·r^4 + k_3·r^6),  y_r = (v - c_y)(1 + k_1·r^2 + k_2·r^4 + k_3·r^6)
wherein k_1, k_2 and k_3 are the radial distortion correction parameters; (u, v) are the pixel coordinates on the original, uncorrected image; (c_x, c_y) are the coordinates of the image center; (x_r, y_r) are the new coordinates after radial distortion correction; r is the distance from the pixel to the image center;
Tangential distortion is then introduced:
x_t = x_r + 2·p_1·(u - c_x)(v - c_y) + p_2·(r^2 + 2(u - c_x)^2),  y_t = y_r + p_1·(r^2 + 2(v - c_y)^2) + 2·p_2·(u - c_x)(v - c_y)
wherein p_1 and p_2 are the tangential distortion correction parameters; (x_t, y_t) are the pixel coordinates after radial and then tangential distortion correction.
The specific steps of constructing the digital elevation model with the TIN algorithm in S33 are as follows:
S331: constructing a Delaunay triangulation from the denoised, high-quality point cloud data, ensuring that the circumcircle of any triangle contains no other points;
S332: for each triangle, calculating the elevation of each point inside the triangle by interpolation;
S333: generating the digital elevation model from the calculated elevations and the triangle information;
The elevation of each point in S332 is calculated as:
h = λ_A·h_A + λ_B·h_B + λ_C·h_C
wherein P(x, y) is the point to be solved inside triangle ABC; h is the elevation of the point P; h_A, h_B and h_C are the elevations of the triangle vertices A, B and C; λ_A, λ_B and λ_C are the barycentric weights of P, determined from the planimetric coordinates (x, y) of P and the vertex coordinates (x_A, y_A), (x_B, y_B) and (x_C, y_C).
In another preferred embodiment, to maintain processing efficiency when handling a large number of triangles and points, a quadtree is used to accelerate the search for the nearest triangles and thereby avoid unnecessary computation, as follows (see the sketch after this description):
A 2D spatial region is defined over the vertex coordinates of the set of triangles, and a quadtree is constructed from the bounding boxes of these triangles. Each quadtree node stores:
its center point (c_x, c_y);
its width and height (w, h);
four child-node pointers;
a triangle list (held only in leaf nodes).
Splitting condition: a node is subdivided until it contains fewer triangles than the threshold T or its depth reaches the maximum D.
Querying the quadtree: for each query point P(x, y):
the traversal starts from the root node;
it is determined which sub-region of the current node (upper left, upper right, lower left or lower right) the point P falls in, and the traversal continues into the corresponding child node;
when a leaf node is reached, the list of triangles associated with that leaf is obtained and an elevation calculation is performed for each candidate triangle.
Elevation calculation formula (using barycentric coordinate interpolation):
Given the triangle vertices A(x_A, y_A, h_A), B(x_B, y_B, h_B) and C(x_C, y_C, h_C) and a query point P(x, y) inside the triangle with barycentric coordinates λ_A, λ_B and λ_C, the elevation h of point P is calculated by the following formula:
h = λ_A·h_A + λ_B·h_B + λ_C·h_C
The barycentric coordinates are obtained from the coordinates of the triangle vertices and of the query point P through the matrix relation
[x_A  x_B  x_C] [λ_A]   [x]
[y_A  y_B  y_C] [λ_B] = [y]
[ 1    1    1 ] [λ_C]   [1]
so that λ_A, λ_B and λ_C are found by solving (inverting) this 3x3 matrix.
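The sketch referred to above is given below; it is an illustrative Python outline only, in which the class name, the insertion-by-bounding-box strategy and the default values of T and D are assumptions made for demonstration, not part of the disclosed method:

```python
class QuadTree:
    """Minimal quadtree over triangle bounding boxes (illustrative sketch)."""

    def __init__(self, cx, cy, w, h, depth=0, T=8, D=10):
        self.cx, self.cy, self.w, self.h = cx, cy, w, h   # center and half-extents
        self.depth, self.T, self.D = depth, T, D          # split threshold, max depth
        self.children = None                              # four children once split
        self.items = []                                   # (bbox, triangle_id), leaves only

    def insert(self, bbox, tri_id):
        """Insert a triangle by its bounding box (xmin, ymin, xmax, ymax)."""
        if self.children is not None:
            for child in self.children:
                if child._overlaps(bbox):
                    child.insert(bbox, tri_id)
            return
        self.items.append((bbox, tri_id))
        if len(self.items) > self.T and self.depth < self.D:
            self._split()

    def _split(self):
        hw, hh = self.w / 2, self.h / 2
        self.children = [QuadTree(self.cx + dx, self.cy + dy, hw, hh,
                                  self.depth + 1, self.T, self.D)
                         for dx in (-hw, hw) for dy in (-hh, hh)]
        items, self.items = self.items, []
        for bbox, tri_id in items:            # push stored triangles down
            self.insert(bbox, tri_id)

    def _overlaps(self, bbox):
        xmin, ymin, xmax, ymax = bbox
        return not (xmax < self.cx - self.w or xmin > self.cx + self.w or
                    ymax < self.cy - self.h or ymin > self.cy + self.h)

    def query(self, x, y):
        """Return candidate triangle ids whose bounding box may contain (x, y)."""
        if self.children is not None:
            for child in self.children:
                if child._overlaps((x, y, x, y)):
                    return child.query(x, y)
            return []
        return [tid for bbox, tid in self.items
                if bbox[0] <= x <= bbox[2] and bbox[1] <= y <= bbox[3]]
```

The candidates returned by query would still be tested exactly (point-in-triangle) before the barycentric interpolation above is applied.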
The specific steps of S34 are as follows:
S341: using the elevation information in the digital elevation model together with the camera intrinsic and extrinsic parameters, calculating the projection position on the image plane of each point inside the triangles of S332;
S342: the orthorectification algorithm corrects the original image into an orthophoto free of terrain distortion according to the projection position of each point.
Referring to FIG. 2, a construction land mapping system is provided, comprising:
a data acquisition unit 1 for collecting unmanned aerial vehicle data and the terrain and ground-feature data within the survey area;
a data processing unit 2 for preprocessing the acquired data to ensure data accuracy;
a data storage unit 3 for storing the processed data synchronously and in real time;
a model generation unit 4 for generating and superimposing the digital elevation model and the digital orthophoto map DOM from the stored data.
In this embodiment, the data acquisition unit 1 first obtains geographic coordinate data within the survey area and obtains the datum points and the basic geographic environment. The acquired data come from a satellite positioning system in real time and from a pre-stored basic geographic information database; they provide the datum points and the basic geographic environment for the survey, allow the geodetic position of the survey area to be determined, and establish the basic framework for the mapping operation.
Secondly, the data acquisition unit 1 uses an unmanned aerial vehicle to perform aerial three-dimensional scanning, collecting accurate distance information on the ground and ground features within the survey area together with multi-view images. The unmanned aerial vehicle carries a high-precision ranging sensor and a multi-angle camera, which enables omnidirectional, high-resolution data acquisition for the survey and lays the foundation for the subsequent generation of the digital elevation model and the DOM. The specific steps are as follows: according to the extent of the survey area and the required resolution, the flight path and altitude of the unmanned aerial vehicle are designed in advance so that the whole area is covered with sufficient overlap to support subsequent data processing; during flight the unmanned aerial vehicle achieves precise positioning with a satellite navigation system and, combined with an inertial navigation unit, provides real-time attitude information including heading, pitch and roll. The high-precision ranging sensor continuously transmits signals and receives the reflected echoes during flight, and the distances to the ground and to ground features are obtained by calculating the signal round-trip time; these data accurately reflect the relief of the terrain and the heights of ground features. Meanwhile, the multi-angle camera mounted on the unmanned aerial vehicle continuously captures ground-surface images at a preset time or distance interval. Cameras at different angles capture different sides of the same ground feature, increasing the visual coverage and providing sufficient information for the later construction of a three-dimensional model.
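As a brief illustration of the round-trip-time principle used by the ranging sensor, the sketch below is an assumption-laden example: an ultrasonic sensor and the speed of sound in air at roughly 20 degC are assumed, and the function name is illustrative only.

```python
def range_from_echo(round_trip_time_s, wave_speed_m_s=343.0):
    """Slant distance from one ranging echo.

    The signal travels to the target and back, so the one-way distance is
    speed * time / 2.  343 m/s is the approximate speed of sound in air at
    about 20 degC (illustrative default for an ultrasonic sensor).
    """
    return wave_speed_m_s * round_trip_time_s / 2.0


# Example: a 60 ms round trip corresponds to roughly 10.3 m of slant range.
print(range_from_echo(0.060))
```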
The data processing unit 2 preprocesses the acquired data to ensure data accuracy; the data storage unit 3 records and stores all generated data synchronously in real time; and the model generation unit 4 generates a digital elevation model with a TIN algorithm from the acquired data and generates a digital orthophoto map DOM from the digital elevation model with an orthorectification algorithm. The specific steps are as follows: first, the acquired ultrasonic mapping data are corrected and screened, and the images captured from multiple angles undergo radiometric and geometric correction. The correction involves time-delay correction and compensation for temperature effects so that the measured distances remain accurate, while the screening removes invalid or noisy data and retains only reliable data points for subsequent analysis, so that images from different viewing angles can be aligned accurately. Next, the ground coordinates corresponding to each pixel are calculated from the parallax relation between the multi-view images, and dense point cloud data are generated. Specifically, once pixel-level disparity has been obtained from stereo matching, the coordinates of each pixel in the image coordinate system can be converted into coordinates in real three-dimensional space by combining the intrinsic and extrinsic matrices of the camera; the expressions are as follows:
[x_n, y_n, 1]^T = K^(-1) [u, v, 1]^T
where K is the intrinsic matrix containing the camera focal length and principal point coordinates; (u, v) are the pixel coordinates; (x_n, y_n) are the normalized coordinates in the camera coordinate system;
Z = f·d / b
where Z is the distance along the Z axis; b is the disparity value; d is the camera baseline; f is the camera focal length. Parallax refers to the horizontal offset between the corresponding pixels of the same point seen from two different viewing angles. The disparity value is the horizontal distance between the two matched pixels, usually expressed in pixel units; its magnitude is related to the distance between the object and the camera, so the depth of the object can be inferred from the parallax. The camera baseline is the distance between the two stereo cameras, i.e. the difference in their positions; it determines how well the stereo system resolves depth, and the larger the baseline, the stronger the depth resolution. The camera focal length is the focal length of the lens in the optical system and determines the magnification of the image. Here the focal length is used to compute the Z-axis distance Z: the depth of the object from the camera is inferred from the relation between the disparity value b, the camera baseline d and the focal length f.
X = Z·x_n + T_x,  Y = Z·y_n + T_y
where (X, Y) are the actual coordinates of the pixel; (x_n, y_n) are the normalized coordinates in the left camera coordinate system; T_x and T_y are respectively the translational components of the world coordinate system projected onto the X and Y axes in the left camera coordinate system;
P_w = R_i^(-1) (Z·K_i^(-1)·p~ - t_i)
where P_w is the three-dimensional world coordinate point corresponding to the pixel; R_i is the rotation matrix of the i-th camera, describing the rotation of the coordinate axes from the world coordinate system to the camera coordinate system; K_i is the intrinsic matrix of the i-th camera, containing factors such as the focal length and principal point offset; K_i^(-1) is the inverse of the intrinsic matrix K_i; t_i is the translation vector of the i-th camera, describing the position of the camera coordinate system origin relative to the world coordinate system; (u, v) are the pixel coordinates on the image; p~ = [u, v, 1]^T is the three-dimensional vector formed by appending the homogeneous coordinate 1 to the two-dimensional pixel coordinates.
The above formulas combine the camera's intrinsic parameters, the parallax information, the camera baseline and the camera pose to convert image pixel coordinates into three-dimensional world coordinates and to infer depth information.
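By way of non-limiting illustration only, the numpy sketch below combines these relations for one pixel of a rectified stereo pair; the function name pixel_to_world and the exact argument layout are assumptions made for the example, not the claimed method itself:

```python
import numpy as np

def pixel_to_world(u, v, disparity, K, baseline, R, t):
    """Map a pixel of a rectified stereo pair to a 3-D world point.

    u, v      : pixel coordinates in the left image
    disparity : horizontal offset b between the matched pixels (pixels)
    K         : 3x3 intrinsic matrix (focal length and principal point)
    baseline  : distance d between the two camera centers
    R, t      : rotation matrix and translation vector of the left camera
                (world-to-camera convention assumed here)
    """
    f = K[0, 0]                       # focal length in pixel units
    Z = f * baseline / disparity      # depth from Z = f*d/b

    # Normalized image coordinates: [x_n, y_n, 1]^T = K^-1 [u, v, 1]^T
    x_n, y_n, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])

    # Point in the camera frame, then back-projected into the world frame,
    # matching P_w = R^-1 (Z * K^-1 * p~ - t)
    P_cam = np.array([x_n * Z, y_n * Z, Z])
    return np.linalg.inv(R) @ (P_cam - t)
```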
However, since radial distortion and tangential distortion exist in actual camera imaging, distortion correction is introduced at the pixel coordinates (u, v) so that the camera imaging is modelled more accurately. The expressions are as follows.
Radial distortion is introduced first. Radial distortion correction mainly compensates for pixel position deviations caused by non-ideal lens properties such as an aspheric or imperfect lens shape; during actual imaging, pixels undergo increasing radial stretching or shrinkage as the distance from the image center grows:
x_r = (u - c_x)(1 + k_1·r^2 + k_2·r^4 + k_3·r^6),  y_r = (v - c_y)(1 + k_1·r^2 + k_2·r^4 + k_3·r^6),  with r^2 = (u - c_x)^2 + (v - c_y)^2
where k_1, k_2 and k_3 are the radial distortion correction parameters, corresponding respectively to the first-, second- and third-order terms of the radial distortion; these three coefficients determine how strongly a pixel is stretched or shrunk as its distance from the image center increases; (u, v) are the pixel coordinates on the original, uncorrected image; (c_x, c_y) are the coordinates of the image center; (x_r, y_r) are the new coordinates after radial distortion correction; r is the distance from the pixel to the image center.
Tangential distortion is then introduced. Tangential distortion correction further addresses the offset caused by the lens optical axis not being perfectly parallel to the sensor plane, which affects the pixel position in both the horizontal and vertical directions:
x_t = x_r + 2·p_1·(u - c_x)(v - c_y) + p_2·(r^2 + 2(u - c_x)^2),  y_t = y_r + p_1·(r^2 + 2(v - c_y)^2) + 2·p_2·(u - c_x)(v - c_y)
where p_1 and p_2 are the tangential distortion correction parameters, describing the degree of tangential distortion in the horizontal and vertical directions; (x_t, y_t) are the pixel coordinates after radial and then tangential distortion correction. Through these distortion correction steps, more accurate pixel coordinates are obtained, which improves the accuracy of the ground coordinates computed in multi-view image processing.
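For illustration only, the per-pixel correction described above can be sketched as follows (a Brown-Conrady-style model in image-centered pixel coordinates is assumed, and the coefficient names simply mirror the description):

```python
def undistort_pixel(u, v, cx, cy, k1, k2, k3, p1, p2):
    """Apply the radial and then tangential correction to one pixel.

    (u, v)   : pixel on the original, uncorrected image
    (cx, cy) : image center (principal point)
    k1..k3   : radial distortion correction parameters
    p1, p2   : tangential distortion correction parameters
    """
    x, y = u - cx, v - cy            # coordinates relative to the image center
    r2 = x * x + y * y               # squared distance to the center

    # Radial term: stretching/shrinking grows with distance from the center
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_r, y_r = x * radial, y * radial

    # Tangential term: lens axis not perfectly parallel to the sensor plane
    x_t = x_r + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_t = y_r + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y

    return x_t + cx, y_t + cy
```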
A filtering algorithm is then applied to the dense point cloud to remove noise points, and the digital elevation model is constructed with the TIN algorithm. The specific steps are as follows: first, the denoised, high-quality point cloud is sorted or tiled by spatial coordinates so that subsequent algorithms can search and access it quickly, and a starting point is chosen as the first vertex of the first triangle. From this initial vertex, its nearest neighbor is selected to form the first edge. Among the remaining points, candidates lying in the sector defined by this edge and the selected vertex are examined, and a point satisfying the Delaunay criterion or the Gabriel criterion is chosen to construct the next triangle.
The Delaunay criterion requires that no other data point lies inside the circumcircle of any triangle in the network, which keeps the triangulation as uniform as possible and avoids triangles with excessively large angles; the Gabriel criterion requires that the quadrilateral formed by any triangle and its neighboring triangle is convex, which avoids the creation of overly small acute triangles.
For every remaining point not yet added to the triangle network, it is checked in turn whether it can be added to the existing network; if so, an insertion is performed, otherwise the search for a suitable triangle location continues.
These steps are repeated until all valid points are contained in the triangular network, ensuring continuous coverage of the terrain surface while avoiding overlaps and holes between triangles. If a definite boundary exists, such as a non-surveyed area of a lake or a building, the boundary points are treated specially and usually participate in the TIN generation as virtual boundary constraint points, so that the boundary is not erroneously included inside a triangle.
Once the TIN is established, the height of any point within the network can be calculated: the elevation of each point inside a triangle is obtained by interpolation. The elevation of each point is calculated as:
h = λ_A·h_A + λ_B·h_B + λ_C·h_C
where P(x, y) is the point to be solved inside triangle ABC; h is the elevation of the point P; h_A, h_B and h_C are the elevations of the triangle vertices A, B and C; λ_A, λ_B and λ_C are the barycentric weights of P, determined from the planimetric coordinates (x, y) of P and the vertex coordinates (x_A, y_A), (x_B, y_B) and (x_C, y_C).
From the calculated elevation values and the triangle information, the digital elevation model stores this information as a two-dimensional matrix in which each element corresponds to a grid point on the terrain and its value is the elevation of that point.
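For illustration, the interpolation described above can be sketched with scipy's Delaunay triangulation standing in for the TIN construction step; the function and variable names are assumptions made for the example:

```python
import numpy as np
from scipy.spatial import Delaunay

def tin_elevation(points_xyz, query_xy):
    """Interpolate an elevation at query_xy from a TIN built on points_xyz.

    points_xyz : (N, 3) array of ground points (x, y, z)
    query_xy   : (x, y) planimetric position to interpolate
    """
    points_xyz = np.asarray(points_xyz, dtype=float)
    query_xy = np.asarray(query_xy, dtype=float)
    xy, z = points_xyz[:, :2], points_xyz[:, 2]

    tin = Delaunay(xy)                                  # triangulation of the x-y plane
    s = int(tin.find_simplex(query_xy[None, :])[0])     # triangle containing the query point
    if s == -1:
        return None                                     # outside the triangulated area

    a, b, c = xy[tin.simplices[s]]                      # triangle vertices (2-D)
    h_a, h_b, h_c = z[tin.simplices[s]]                 # their elevations

    # Barycentric weights of the query point inside triangle (a, b, c)
    T = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]])
    lam_a, lam_b = np.linalg.solve(T, query_xy - c)
    lam_c = 1.0 - lam_a - lam_b

    # Weighted vertex elevations: h = λ_A·h_A + λ_B·h_B + λ_C·h_C
    return lam_a * h_a + lam_b * h_b + lam_c * h_c
```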
Further, according to the digital elevation model, an orthorectification algorithm is used to correct the original image into a digital orthophoto map DOM free of terrain-induced distortion, eliminating the perspective distortion caused by terrain relief. Orthorectification is the process of geometrically correcting aerial or unmanned aerial vehicle images on the basis of the digital elevation model; its purpose is to remove the projection distortion caused by terrain relief and to generate an orthophoto unaffected by terrain distortion. The specific steps are as follows:
Using the elevation information in the digital elevation model together with the camera intrinsic and extrinsic parameters, the projection position on the image plane of each point inside the triangles is calculated. Using the known relation between ground points and their projections on the image plane, the whole image area is resampled or interpolated so that the true geographic position of every pixel of the original image is mapped onto the new orthophoto. The orthorectification algorithm thus corrects the original image into an orthophoto free of terrain distortion according to the projection position of each point. Finally, the digital elevation model and the DOM are superimposed and the model is smoothed.
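The projection-and-resampling idea can be illustrated with the simplified sketch below (a pinhole model with world-to-camera extrinsics and nearest-neighbor sampling are assumed; it is an illustrative outline rather than the claimed algorithm):

```python
import numpy as np

def orthorectify(image, dem_x, dem_y, dem_z, K, R, t):
    """Resample an aerial frame onto a DEM grid (nearest-neighbor sketch).

    image               : (H, W, 3) original image
    dem_x, dem_y, dem_z : 2-D arrays with ground X, Y and elevation per DEM cell
    K, R, t             : camera intrinsics, rotation (world-to-camera), translation
    """
    ortho = np.zeros(dem_z.shape + (3,), dtype=image.dtype)
    H, W = image.shape[:2]

    for i in range(dem_z.shape[0]):
        for j in range(dem_z.shape[1]):
            P_w = np.array([dem_x[i, j], dem_y[i, j], dem_z[i, j]])
            P_c = R @ P_w + t                     # ground point in the camera frame
            if P_c[2] <= 0:
                continue                          # behind the camera, leave the cell empty
            u, v, w = K @ P_c
            u, v = int(round(u / w)), int(round(v / w))
            if 0 <= u < W and 0 <= v < H:
                ortho[i, j] = image[v, u]         # nearest-neighbor resampling
    return ortho
```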
Still further, the position and shape of ground features within the survey area are identified and mapped with the digital elevation model and the DOM. This is decisive for delimiting land-use right boundaries, planning road alignments and confirming building layouts, and it ensures the accuracy of land boundaries and facility positions. Finally, the land areas of the different plots within the construction land are calculated and are classified and counted according to the national or regional land classification standards. This helps the administrative departments to allocate and manage land resources reasonably and effectively, and it is also a key basis for legal procedures such as issuing land-use certificates and granting land planning permits.
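As a simple illustration of the area step, once a plot boundary has been mapped in projected map coordinates its planimetric area can be computed with the shoelace formula; the function below is an illustrative sketch, not part of the claimed method:

```python
import numpy as np

def parcel_area(boundary_xy):
    """Planimetric area of a plot from its mapped boundary (shoelace formula).

    boundary_xy : (N, 2) array of boundary vertices in projected coordinates
                  (e.g. meters), ordered along the polygon.
    """
    p = np.asarray(boundary_xy, dtype=float)
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```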
The present invention is not limited to the above embodiments; any modifications, equivalents and alternatives that can be made by those skilled in the art without departing from the spirit and scope of the invention fall within the scope of the present invention.

Claims (9)

1. A construction land mapping method, characterized by comprising the following steps:
S1: obtaining geographic coordinate data within the survey area, and obtaining datum points and the basic geographic environment;
S2: performing aerial three-dimensional scanning with an unmanned aerial vehicle, and collecting accurate distance information on the ground and ground features within the survey area together with multi-view images;
S3: generating a digital elevation model with a TIN algorithm from the data acquired in S1 and S2, and generating a digital orthophoto map DOM from the digital elevation model with an orthorectification algorithm;
the specific steps of S3 being as follows:
S31: correcting and screening the acquired ultrasonic mapping data, and performing radiometric and geometric correction on the images captured from multiple angles;
S32: calculating the ground coordinates corresponding to each pixel from the parallax relation between the multi-view images, and generating dense point cloud data;
S33: removing noise points from the dense point cloud data with a filtering algorithm, and constructing a digital elevation model with a TIN algorithm;
S34: according to the digital elevation model, correcting the original image with an orthorectification algorithm into a digital orthophoto map DOM free of terrain-induced distortion, eliminating the perspective distortion caused by terrain relief;
S35: superimposing the digital elevation model and the digital orthophoto map DOM, and smoothing the model;
S4: identifying and mapping the position and shape of ground features within the survey area using the digital elevation model and the digital orthophoto map DOM.
2. The construction land mapping method according to claim 1, wherein in S1, the acquired data come from two sources: data collected in real time by a satellite positioning system and a pre-stored basic geographic information database.
3. The construction land mapping method according to claim 1, wherein in S2, the unmanned aerial vehicle comprises an unmanned aerial vehicle body together with a high-precision ranging sensor and a multi-angle camera carried on the body.
4. The construction land mapping method according to claim 3, wherein the specific steps of S2 are as follows:
S21: the unmanned aerial vehicle performs aerial photography along a preset flight path at a preset altitude;
S22: during flight, the high-precision ranging sensor continuously transmits signals and receives the reflected echoes to obtain ultrasonic mapping data, and the distances to the ground and to ground features are obtained by calculating the signal round-trip time;
S23: the multi-angle camera continuously captures ground-surface images at a preset time or distance interval, and the attitude parameters of the unmanned aerial vehicle at the moment each image is captured are recorded in real time.
5. The construction land mapping method according to claim 4, wherein in S32, the ground coordinates corresponding to each pixel are calculated from the parallax relation between the multi-view images as follows:
[x_n, y_n, 1]^T = K^(-1) [u, v, 1]^T
wherein K is the intrinsic matrix containing the camera focal length and principal point coordinates; (u, v) are the pixel coordinates; (x_n, y_n) are the normalized coordinates in the camera coordinate system;
Z = f·d / b
wherein Z is the distance along the Z axis; b is the disparity value; d is the camera baseline; f is the camera focal length;
X = Z·x_n + T_x,  Y = Z·y_n + T_y
wherein (X, Y) are the actual coordinates of the pixel; (x_n, y_n) are the normalized coordinates in the left camera coordinate system; T_x and T_y are respectively the translational components of the world coordinate system projected onto the X and Y axes in the left camera coordinate system;
P_w = R_i^(-1) (Z·K_i^(-1)·p~ - t_i)
wherein P_w is the three-dimensional world coordinate point corresponding to the pixel; R_i is the rotation matrix of the i-th camera; K_i is the intrinsic matrix of the i-th camera; K_i^(-1) is the inverse of the intrinsic matrix K_i; t_i is the translation vector of the i-th camera; (u, v) are the pixel coordinates on the image; p~ = [u, v, 1]^T is the three-dimensional vector formed by appending the homogeneous coordinate 1 to the two-dimensional pixel coordinates.
6. The construction land mapping method according to claim 5, wherein distortion correction is introduced at the pixel coordinates (u, v) as follows:
Radial distortion is introduced first:
x_r = (u - c_x)(1 + k_1·r^2 + k_2·r^4 + k_3·r^6),  y_r = (v - c_y)(1 + k_1·r^2 + k_2·r^4 + k_3·r^6)
wherein k_1, k_2 and k_3 are the radial distortion correction parameters; (u, v) are the pixel coordinates on the original, uncorrected image; (c_x, c_y) are the coordinates of the image center; (x_r, y_r) are the new coordinates after radial distortion correction; r is the distance from the pixel to the image center;
Tangential distortion is then introduced:
x_t = x_r + 2·p_1·(u - c_x)(v - c_y) + p_2·(r^2 + 2(u - c_x)^2),  y_t = y_r + p_1·(r^2 + 2(v - c_y)^2) + 2·p_2·(u - c_x)(v - c_y)
wherein p_1 and p_2 are the tangential distortion correction parameters; (x_t, y_t) are the pixel coordinates after radial and then tangential distortion correction.
7. The construction land mapping method according to claim 6, wherein the specific steps of constructing the digital elevation model with the TIN algorithm in S33 are as follows:
S331: constructing a Delaunay triangulation from the denoised, high-quality point cloud data, ensuring that the circumcircle of any triangle contains no other points;
S332: for each triangle, calculating the elevation of each point inside the triangle by interpolation;
S333: generating the digital elevation model from the calculated elevations and the triangle information;
The elevation of each point in S332 is calculated as:
h = λ_A·h_A + λ_B·h_B + λ_C·h_C
wherein P(x, y) is the point to be solved inside triangle ABC; h is the elevation of the point P; h_A, h_B and h_C are the elevations of the triangle vertices A, B and C; λ_A, λ_B and λ_C are the barycentric weights of P, determined from the planimetric coordinates (x, y) of P and the vertex coordinates (x_A, y_A), (x_B, y_B) and (x_C, y_C).
8. The construction land mapping method according to claim 7, wherein the specific steps of S34 are as follows:
S341: using the elevation information in the digital elevation model together with the camera intrinsic and extrinsic parameters, calculating the projection position on the image plane of each point inside the triangles of S332;
S342: the orthorectification algorithm corrects the original image into an orthophoto free of terrain distortion according to the projection position of each point.
9. A construction land mapping system for mapping using the construction land mapping method of any one of claims 1 to 8, comprising:
a data acquisition unit (1) for collecting unmanned aerial vehicle data and the terrain and ground-feature data within the survey area;
a data processing unit (2) for preprocessing the acquired data to ensure data accuracy;
a data storage unit (3) for storing the processed data synchronously and in real time;
a model generation unit (4) for generating and superimposing the digital elevation model and the digital orthophoto map DOM from the stored data.
CN202410404271.7A 2024-04-07 2024-04-07 Construction land mapping method and system Active CN117994463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410404271.7A CN117994463B (en) 2024-04-07 2024-04-07 Construction land mapping method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410404271.7A CN117994463B (en) 2024-04-07 2024-04-07 Construction land mapping method and system

Publications (2)

Publication Number Publication Date
CN117994463A true CN117994463A (en) 2024-05-07
CN117994463B CN117994463B (en) 2024-06-18

Family

ID=90901422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410404271.7A Active CN117994463B (en) 2024-04-07 2024-04-07 Construction land mapping method and system

Country Status (1)

Country Link
CN (1) CN117994463B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010068185A1 (en) * 2008-12-09 2010-06-17 Tele Atlas B.V. Method of generating a geodetic reference database product
CN110763205A (en) * 2019-11-05 2020-02-07 新疆维吾尔自治区测绘科学研究院 Method for generating orthophoto map of border narrow and long area by digital photogrammetric system
CN112113542A (en) * 2020-09-14 2020-12-22 浙江省自然资源征收中心 Method for checking and accepting land special data for aerial photography construction of unmanned aerial vehicle
CN114255286A (en) * 2022-02-28 2022-03-29 常州罗博斯特机器人有限公司 Target size measuring method based on multi-view binocular vision perception
CN116486056A (en) * 2023-03-20 2023-07-25 江西良测信息技术有限公司 Integrated mapping method and mapping instrument for remote sensing image data
CN116429069A (en) * 2023-03-22 2023-07-14 自然资源部第二地形测量队(陕西省第三测绘工程院) Underwater and near-shore integrated topographic mapping data production method
CN116804537A (en) * 2023-06-26 2023-09-26 上海应用技术大学 Binocular range finding system and method
CN116878524A (en) * 2023-07-11 2023-10-13 武汉科技大学 Dynamic SLAM dense map construction method based on pyramid L-K optical flow and multi-view geometric constraint

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
M. IRANI, P. ANANDAN, AND D. WEINSHALL: "From Reference Frames to Reference Planes: Multi-View Parallax Geometry and Applications", WEIZMANN INSTITUTE OF SCIENCE, 28 August 2016 (2016-08-28) *
刘宇; 郑新奇; 艾刚: "High-precision true orthophoto mapping from UAV remote sensing", 测绘通报 (Bulletin of Surveying and Mapping), no. 02, 25 February 2018 (2018-02-25), pages 83-88 *
方宏涛: "Application of remote sensing aerial survey technology in large-scale topographic mapping", 世界有色金属 (World Nonferrous Metals), 31 December 2022 (2022-12-31), pages 175-177 *

Also Published As

Publication number Publication date
CN117994463B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
CN111275750B (en) Indoor space panoramic image generation method based on multi-sensor fusion
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
CN112927360A (en) Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data
CN102645209B (en) Joint positioning method for spatial points by means of onboard LiDAR point cloud and high resolution images
CN113607135B (en) Unmanned aerial vehicle inclination photogrammetry method for road and bridge construction field
CN109813335B (en) Calibration method, device and system of data acquisition system and storage medium
CN113192193B (en) High-voltage transmission line corridor three-dimensional reconstruction method based on Cesium three-dimensional earth frame
CN107767440A (en) Historical relic sequential images subtle three-dimensional method for reconstructing based on triangulation network interpolation and constraint
CN112465732A (en) Registration method of vehicle-mounted laser point cloud and sequence panoramic image
CN112862966B (en) Method, device, equipment and storage medium for constructing surface three-dimensional model
CN113177974A (en) Point cloud registration method and device, electronic equipment and storage medium
CN112767461A (en) Automatic registration method for laser point cloud and sequence panoramic image
Dold Extended Gaussian images for the registration of terrestrial scan data
CN115965790A (en) Oblique photography point cloud filtering method based on cloth simulation algorithm
CN110780313A (en) Unmanned aerial vehicle visible light stereo measurement acquisition modeling method
Wu Photogrammetry: 3-D from imagery
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
CN117994463B (en) Construction land mapping method and system
WO2024098428A1 (en) Registration method and system
US20220276046A1 (en) System and method for providing improved geocoded reference data to a 3d map representation
Moussa et al. Complementing TLS point clouds by dense image matching
CN113240755B (en) City scene composition method and system based on street view image and vehicle-mounted laser fusion
CN114913297A (en) Scene orthoscopic image generation method based on MVS dense point cloud
CN114387488A (en) Road extraction system and method based on Potree point cloud image fusion
Previtali et al. An automatic multi-image procedure for accurate 3D object reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant