CN117665841B - Geographic space information acquisition mapping method and device - Google Patents

Geographic space information acquisition mapping method and device

Publication number
CN117665841B
CN117665841B CN202410139366.0A
Authority
CN
China
Prior art keywords
image
mapping
coordinates
preset mapping
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410139366.0A
Other languages
Chinese (zh)
Other versions
CN117665841A (en)
Inventor
付仁俊
南智勇
朱海山
朱君稻
田微玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aihua Survey Engineering Co ltd
Original Assignee
Shenzhen Aihua Survey Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Aihua Survey Engineering Co ltd filed Critical Shenzhen Aihua Survey Engineering Co ltd
Priority to CN202410139366.0A priority Critical patent/CN117665841B/en
Publication of CN117665841A publication Critical patent/CN117665841A/en
Application granted granted Critical
Publication of CN117665841B publication Critical patent/CN117665841B/en

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a geographic space information acquisition mapping method, which comprises the following steps: acquiring a preset mapping area image and scanning data, and further carrying out preset processing on the preset mapping area image and the scanning data; performing feature matching, and further mapping the features of the image of the preset mapping area into a geographic coordinate system; calculating three-dimensional coordinates of a geographic space by utilizing matching features of images of a preset mapping area and preset parameters contained in photographic equipment data; registering the preset mapping region scanning data with the three-dimensional coordinates, and fusing the registered preset mapping region scanning data with the three-dimensional coordinates to form a three-dimensional geographic model; and performing precision calibration on the three-dimensional geographic model. The geographic space information acquisition mapping method can improve environmental adaptability and can provide comprehensive acquisition of surface texture information and high-density space coordinates.

Description

Geographic space information acquisition mapping method and device
Technical Field
The invention relates to the technical field of geographic mapping, in particular to a geographic space information acquisition mapping method, a geographic space information acquisition mapping device, a computer medium and a computer.
Background
With the development of geographic information technology, geographic information acquisition and mapping technology is mature, accurate and efficient geographic space data is provided for each industry, and at present, laser scanning and stereoscopic photogrammetry technologies become high-precision and efficient choices in geographic space information acquisition and mapping technology.
Laser scanning technology scans the surface of ground objects with a laser beam and generates point cloud data from the reflected returns; however, for areas with weak surface texture or low reflectivity it may fail to provide enough feature points, leaving the data incomplete. Stereoscopic photogrammetry, on the other hand, photographs the same object with two or more cameras and calculates the three-dimensional coordinates of the object from the parallax between images; it is, however, constrained by natural conditions such as weather and illumination, and particularly at long range or in flat areas, insufficient parallax may make it difficult to calculate the three-dimensional coordinates of the object accurately.
Therefore, there is a need for a mapping method for geospatial information acquisition that can improve environmental adaptability and provide comprehensive acquisition of surface texture information and high-density spatial coordinates.
Disclosure of Invention
The invention aims to: in order to overcome the above defects, the invention provides a flexibly applicable geographic space information acquisition mapping method that combines laser scanning with stereo photogrammetry, making full use of the advantages of both technologies to provide more comprehensive and accurate geographic information; it improves adaptability and can cope with different geographic conditions without being limited by weather, so as to realize all-weather geographic information acquisition.
In order to solve the technical problems, the invention provides a geospatial information acquisition mapping method, which comprises the following steps:
Step S1: acquiring a preset mapping area image and scanning data, and further carrying out preset processing on the preset mapping area image and the scanning data;
step S2: according to the preset mapping area image subjected to the preset processing, performing feature matching, and further mapping the features of the preset mapping area image into a geographic coordinate system;
Step S3: calculating three-dimensional coordinates of a geographic space by utilizing matching features of images of a preset mapping area and preset parameters contained in photographic equipment data;
Step S4: registering the preset mapping region scanning data with the three-dimensional coordinates, and fusing the registered preset mapping region scanning data with the three-dimensional coordinates to form a three-dimensional geographic model;
step S5: and carrying out precision calibration on the three-dimensional geographic model.
As a preferred mode of the present invention, in step S1, the method further comprises the steps of:
Step S10: shooting a preset mapping area at multiple positions to obtain mapping images containing different positions;
step S11: and scanning the earth surface of the preset mapping area by using the laser beam to acquire scanning data of the preset mapping area.
As a preferred mode of the present invention, in step S10, the method further comprises the steps of:
Step S100: graying treatment is carried out on the mapping image:
Wherein, Is the gray value of the gray scale,Is a red channel which is used for the control of the liquid,Is a green channel which is arranged on the side of the light source,Is a blue channel;
step S101: carrying out histogram equalization on the mapping image;
step S102: performing size adjustment of the mapping image:
I'(x, y) = Σ_i Σ_j I(x_i, y_j) · K(x − x_i, y − y_j)
wherein I'(x, y) is the resized mapping image, I(x_i, y_j) is the pixel value of the mapping image subjected to histogram equalization, and K is the interpolation kernel function.
As a preferred mode of the present invention, in step S101, the method further includes the steps of:
Step S1010: calculating a gray histogram of the gray mapping image;
step S1011: normalizing the gray level histogram:
p(k) = h(k) / N
wherein h(k) is the gray histogram and N is the total number of pixels;
step S1012: calculating a cumulative distribution function of the normalized histogram:
c(k) = Σ_{j=0..k} p(j)
wherein p(j) is the probability of gray level j in the normalized histogram;
Step S1013: equalizing and mapping the cumulative distribution function:
s_k = round((L − 1) / (M·N) · Σ_{j=0..k} h(j))
wherein L is the total number of gray levels, and M and N are the number of rows and columns of the mapping image;
Step S1014: applying the equalized mapped gray level to each pixel of the original mapping image to generate an equalized mapping image:
I_eq(x, y) = s_{I(x, y)}
wherein I(x, y) is the pixel value of the original mapping image.
As a preferred mode of the present invention, in step S11, the method further includes the steps of:
step S110: removing noise points in the scanned data by adopting Gaussian filtering:
P̂ = G_σ * P, with G_σ(x) = (1 / (σ·√(2π))) · exp(−x² / (2σ²))
wherein P̂ is the scan data after denoising, P is the original coordinates of the scan data, and σ is the standard deviation of the Gaussian filter, which controls the window size of the Gaussian filter;
step S111: separating the ground point cloud data in the scanned data from the whole point cloud data by adopting the RANSAC method, fitting the ground with the model:
z = k·x + b
wherein k is the slope of the fitted ground and b is the intercept of the fitted ground;
step S112: position and attitude correction of the scan data is performed via GPS/IMU data:
P_geo = R·P_local + T
wherein P_geo is the geographical coordinates of the corrected scan data, P_local is the local coordinates of the original scan data, R is a rotation matrix, and T is a translation matrix;
Step S113: registering the scan data such that the scan data are located in the same coordinate system:
E = Σ_i ||p_i − q_i||²
wherein E is the error function, p_i is the source point cloud of the scan data, and q_i is the target point cloud of the scan data.
As a preferred mode of the present invention, in step S2, the method further comprises the steps of:
Step S20: dividing a preset mapping area image into a plurality of image blocks with fixed-size windows;
Step S21: calculating the mean square error of each image block:
MSE = (1/n) · Σ_i (A_i − B_i)²
and further calculating the correlation coefficient of each image block:
r = Σ_i (A_i − Ā)(B_i − B̄) / √(Σ_i (A_i − Ā)² · Σ_i (B_i − B̄)²)
wherein MSE is the mean square error, A_i and B_i are the pixel values of the image blocks, r is the correlation coefficient, and Ā and B̄ are the averages of the pixel values within the image blocks;
step S22: performing feature matching according to the mean square error and the correlation coefficient to generate matching point pairs;
Step S23: calculating a homography matrix according to the matching point pairs:
H = [h11 h12 h13; h21 h22 h23; h31 h32 h33]
The mapping formula is:
(x2, y2, 1)^T ~ H · (x1, y1, 1)^T
wherein image 1 and image 2 are the two mapping images, h_ij are the matrix elements, (x1, y1) are the feature point coordinates of image 1, and (x2, y2) are the feature point coordinates of image 2.
As a preferred mode of the present invention, in step S3, the method further comprises the steps of:
step S30: performing image correction of a preset mapping area to ensure that characteristic points of a plurality of mapping images contained in the preset mapping area image are matched with each other;
step S31: searching matching point pairs in a plurality of mapping images, and further calculating the parallax of the preset mapping region images:
d = x_left − x_right
wherein x_left is the position of a point in the left mapping image and x_right is its position in the right mapping image;
step S32: calculating depth information using the parallax information:
Z = (f · B) / d
wherein B is the baseline length between the image capturing apparatuses and f is the focal length of the image pickup apparatus;
step S33: converting the image coordinates of the preset mapping area into normalized plane coordinates:
x_n = (u − c_x) / f,  y_n = (v − c_y) / f
and further converting the two-dimensional coordinates on the image plane of the preset mapping area into three-dimensional coordinates according to the depth information:
X = x_n · Z,  Y = y_n · Z
wherein (u, v) are the coordinates on the image plane of the preset mapping area and (c_x, c_y) are the coordinates of the center point of the image of the preset mapping area.
As a preferred mode of the present invention, the method for registering the scan data of the preset mapping region with the three-dimensional coordinates includes:
step S40: minimizing a matching error between points of scan data of a preset mapping region:
min_{R,T} Σ_i ||R·p_i + T − q_i||²
wherein min denotes the minimized matching error, p_i is a point in the scan data of the preset mapping region, R is the rotation matrix, and q_i is the position of the corresponding point in the three-dimensional coordinates;
step S41: selecting an initial rotation matrix R0 and an initial translation vector T0, and further registering each point in the scan data of the preset mapping area with the three-dimensional coordinates;
Step S42: adjusting the transformation matrix through the matching error;
Step S43: continuously adjusting the rotation matrix R and translation vector T until the scanning data of the preset mapping area are registered with the three-dimensional coordinates.
As a preferred mode of the present invention, the method of forming a three-dimensional geographic model includes:
step S44: carrying out weighted average on the registered preset mapping region scanning data and the three-dimensional coordinates, and fusing the preset mapping region scanning data and the three-dimensional coordinates:
F = w·S + (1 − w)·C
wherein F is the fused data, S is the scan data of the preset mapping region, w is a weight representing the contribution of the scan data of the preset mapping region in the fusion, and C is the three-dimensional coordinates;
step S45: and generating a matched three-dimensional geographic model according to the fused preset mapping region scanning data and the three-dimensional coordinates.
As a preferred mode of the present invention, in step S5, the method further comprises the steps of:
Step S50: obtaining geographic coordinates provided by known real geographic information, and further extracting points matched with the geographic coordinates provided by the known real geographic information from the three-dimensional geographic model;
Step S51: calculating an error between points of the three-dimensional geographic model and known real geographic information:
E = √(E_x² + E_y² + E_z²), with E_x = x_m − x_r, E_y = y_m − y_r, E_z = z_m − z_r
wherein E is the error, (x_r, y_r, z_r) are the geographic coordinates provided by the known real geographic information, and (x_m, y_m, z_m) are the coordinates of the three-dimensional geographic information contained in the three-dimensional geographic model;
Step S52: and according to the error, performing three-dimensional geographic model calibration:
P_cal = P_orig − (E_x, E_y, E_z)
wherein P_cal is the coordinates of the three-dimensional geographic model after calibration, P_orig is the original coordinates of the three-dimensional geographic model, and (E_x, E_y, E_z) is the error.
The invention also provides a geographic space information acquisition mapping device, which comprises:
The data processing module is used for acquiring a preset mapping area image and scanning data, and further carrying out preset processing on the preset mapping area image and the scanning data;
The coordinate calculation module is used for carrying out feature matching according to the preset mapping area image subjected to the preset processing, and further mapping the features of the preset mapping area image into a geographic coordinate system; calculating three-dimensional coordinates of a geographic space by utilizing matching features of images of a preset mapping area and preset parameters contained in photographic equipment data;
The model fusion module is used for registering the preset mapping region scanning data with the three-dimensional coordinates, and further fusing the registered preset mapping region scanning data with the three-dimensional coordinates to form a three-dimensional geographic model;
and the model calibration module is used for carrying out precision calibration on the three-dimensional geographic model.
Compared with the prior art, the technical scheme of the invention has the following advantages:
1. The laser scanning technology is used for providing high-density and high-precision surface data of ground features, is suitable for complex terrains and building structures, is not limited by natural conditions such as weather, illumination and the like, can be used for collecting data in different environments, and can penetrate through shields such as vegetation, buildings and the like to obtain geographic information of a shielding region;
2. The method has the advantages that the geographic information with rich textures and strong sense of reality is provided by a stereoscopic photogrammetry technology, the method is suitable for high-resolution expression of a large-scale geographic space, the image of a panoramic view angle can be obtained by a plurality of cameras, the method is suitable for high-resolution expression of the large-scale geographic space, and the geographic information is enabled to be more sense of reality;
3. The laser scanning technology and the stereo photogrammetry technology are combined, so that the adaptability of the system can be improved, complex conditions such as different terrains, vegetation coverage, buildings and the like can be met, high-quality geographic information can be obtained in various environments, and a three-dimensional geographic model with more accurate and fine geographic space information is obtained.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only embodiments of the present invention, and other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a geospatial information acquisition mapping method of the present invention.
FIG. 2 is a flow chart of a method for acquiring images and scan data of a predetermined mapping region according to the present invention.
Fig. 3 is a flowchart of a method for feature matching of a preset mapping region image of the present invention.
Fig. 4 is a flow chart of a three-dimensional coordinate calculation method of the geospatial of the present invention.
Fig. 5 is a flow chart of a method of registering scan data with three-dimensional coordinates in accordance with the present invention.
FIG. 6 is a flow chart of a three-dimensional geographic model forming method of the present invention.
FIG. 7 is a flow chart of a three-dimensional geographic model calibration method of the present invention.
Fig. 8 is a flow chart of a method of preprocessing a mapping image of the present invention.
Fig. 9 is a flow chart of a method of histogram equalization of a mapped image of the present invention.
Fig. 10 is a flowchart of a method of preprocessing scan data according to the present invention.
Fig. 11 is a connection diagram of the geospatial information acquisition mapping apparatus of the present invention.
Description of the specification reference numerals:
100. The system comprises a data processing module 101, a coordinate calculation module 102, a model fusion module 103 and a model calibration module.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
Referring to fig. 1, in some embodiments, a method of geospatial information acquisition mapping is contemplated, using a laser scanning device and at least two imaging devices, the method comprising the steps of:
step S1: acquiring a preset mapping area image and scanning data, and further carrying out preset processing on the preset mapping area image and the scanning data.
Specifically, in step S1, referring to fig. 2, the method includes the steps of:
Step S10: and carrying out multi-position shooting on a preset mapping area through at least two camera equipment so as to acquire mapping images at different positions.
In the practical implementation process, at least two cameras are deployed, the cameras are respectively deployed at different positions to simulate the stereoscopic vision effect of human eyes, a certain overlapping area is arranged in the field of view of the cameras to ensure that similar scenes are shot, and then the same target is shot through the deployed cameras to acquire mapping images at different positions.
Step S11: and scanning the ground surface of the preset mapping area through laser scanning equipment to acquire scanning data of the preset mapping area.
In the actual implementation process, a laser radar sensor is deployed and is carried on an aircraft (including but not limited to aircraft, helicopter, unmanned plane and other flight equipment), and coordinates of each point on the surface of a preset mapping area are measured by using laser beams, so that high-precision scanning is realized, and scanning data of the preset mapping area are acquired.
Step S2: and performing feature matching according to the preset mapping region image subjected to the preset processing, and further mapping the features of the preset mapping region image into a geographic coordinate system.
Specifically, in step S2, referring to fig. 3, the method further includes the steps of:
Step S20: dividing the image of the preset mapping area into a plurality of image blocks with fixed-size windows.
Step S21: calculating the mean square error of each image block:
MSE = (1/n) · Σ_i (A_i − B_i)²
and further calculating the correlation coefficient of each image block:
r = Σ_i (A_i − Ā)(B_i − B̄) / √(Σ_i (A_i − Ā)² · Σ_i (B_i − B̄)²)
wherein MSE is the mean square error, A_i and B_i are the pixel values of the image blocks, r is the correlation coefficient, and Ā and B̄ are the averages of the pixel values within the image blocks.
Step S22: performing feature matching according to the mean square error and the correlation coefficient to generate matching point pairs;
wherein, the smaller the mean square error is, the higher the matching degree is, the closer the correlation coefficient is to 1, and the higher the matching degree is.
Step S23: calculating a homography matrix according to the matching point pairs;
For example, assume two mapping images, image 1 and image 2, which overlap and share some common feature points; the feature points in image 1 are mapped into the geographic coordinate system. This process is:
H = [h11 h12 h13; h21 h22 h23; h31 h32 h33]
The mapping formula is:
(x2, y2, 1)^T ~ H · (x1, y1, 1)^T
wherein h_ij are the matrix elements describing the mapping relationship between image 1 and image 2, (x1, y1) are the feature point coordinates of image 1, and (x2, y2) are the feature point coordinates of image 2; the elements of the homography matrix H are calculated by the least square method or similar techniques.
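The homography mapping above can be sketched in a few lines. This is an illustrative example, not the patent's implementation: the matrix H_shift and the sample point are invented values chosen so the result is easy to check.

```python
# Hypothetical sketch: applying a 3x3 homography H to map a feature
# point from image 1 into the coordinate frame of image 2.

def apply_homography(H, x, y):
    """Map (x, y) through homography H using homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # divide out the homogeneous scale

# A pure-translation homography shifts points by (5, -3).
H_shift = [[1, 0, 5],
           [0, 1, -3],
           [0, 0, 1]]
print(apply_homography(H_shift, 10.0, 20.0))  # (15.0, 17.0)
```

In practice H would be estimated from the matching point pairs of step S22 (e.g. by least squares, as the description notes), not written out by hand.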
Step S3: and calculating the three-dimensional coordinates of the geographic space by utilizing the matching characteristics of the image of the preset mapping area and preset parameters contained in the photographing equipment data.
Specifically, in step S3, referring to fig. 4, the method further includes the steps of:
Step S30: performing correction of the preset mapping region image to ensure that the characteristic points of the plurality of mapping images contained in the preset mapping region image are matched with each other;
Specifically, by correcting camera distortion, it is ensured that feature points in the preset mapping region image correspond to each other in the plurality of images.
Step S31: searching matching point pairs in a plurality of mapping images, and further calculating the parallax of the preset mapping region images:
d = x_left − x_right
wherein x_left is the position of a point in the left mapping image and x_right is its position in the right mapping image;
step S32: calculating depth information using the parallax information:
Z = (f · B) / d
wherein B is the baseline length between the image capturing apparatuses and f is the focal length of the image pickup apparatus; the relationship between depth and parallax can be calculated from the camera's internal parameters (e.g., focal length, principal point coordinates), external parameters (camera position, pose, etc.), and the baseline length.
Step S33: converting the image coordinates of the preset mapping area into normalized plane coordinates:
x_n = (u − c_x) / f,  y_n = (v − c_y) / f
and further converting the two-dimensional coordinates on the image plane of the preset mapping area into three-dimensional coordinates according to the depth information:
X = x_n · Z,  Y = y_n · Z
wherein (u, v) are the coordinates on the image plane of the preset mapping area and (c_x, c_y) are the coordinates of the center point of the image of the preset mapping area.
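Steps S31 to S33 chain together into a short disparity-to-3D computation. The sketch below uses illustrative camera parameters (f, B, cx, cy) that are not from the patent:

```python
# Hedged sketch of steps S31-S33: disparity -> depth -> 3D point.
# f (focal length, pixels), B (baseline, metres), and the image
# centre (cx, cy) are assumed example values.

def pixel_to_3d(u, v, x_left, x_right, f, B, cx, cy):
    d = x_left - x_right      # parallax / disparity (step S31)
    Z = f * B / d             # depth from disparity (step S32)
    xn = (u - cx) / f         # normalized plane coordinates (step S33)
    yn = (v - cy) / f
    return xn * Z, yn * Z, Z  # back-project using the depth

X, Y, Z = pixel_to_3d(u=700, v=500, x_left=700, x_right=650,
                      f=1000.0, B=0.5, cx=640, cy=480)
print(round(X, 3), round(Y, 3), round(Z, 3))
```

Note that a small disparity d yields a large, noise-sensitive depth Z, which is exactly the flat-area weakness of photogrammetry the Background section describes.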
Step S4: registering the preset mapping region scanning data with the three-dimensional coordinates, and fusing the registered preset mapping region scanning data with the three-dimensional coordinates to form a three-dimensional geographic model;
specifically, in step S4, referring to fig. 5, the method for registering the scan data of the preset mapping region with the three-dimensional coordinates includes:
step S40: adopting the Iterative Closest Point (ICP) point cloud registration algorithm to minimize the matching error between points in the scan data of the preset mapping region:
min_{R,T} Σ_i ||R·p_i + T − q_i||²
wherein min denotes the minimized matching error, p_i is a point in the scan data of the preset mapping region, R is the rotation matrix, and q_i is the position of the corresponding point in the three-dimensional coordinates;
step S41: selecting an initial rotation matrix R0 and an initial translation vector T0, and further registering each point in the scan data of the preset mapping area with the three-dimensional coordinates;
Step S42: adjusting the transformation matrix through the matching error;
Step S43: continuously adjusting the rotation matrix R and translation vector T until the scanning data of the preset mapping area are accurately aligned with the three-dimensional coordinates.
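The core of each ICP iteration is a best-fit rigid transform between corresponding points. The sketch below shows that single step in 2D with known correspondences (the full ICP of steps S40 to S43 also re-estimates correspondences each iteration, which is omitted here); the point sets are invented test values.

```python
import math

# Minimal 2D sketch of one ICP-style alignment step with KNOWN
# correspondences: find the rotation angle and translation that
# best map src onto dst (least-squares, via centroids).

def align_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Accumulate cross/dot terms of the centered point pairs.
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)      # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)          # translation after rotation
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# Destination = source rotated by 90 degrees, then shifted by (1, 2).
src = [(0, 0), (1, 0), (0, 1)]
dst = [(1, 2), (1, 3), (0, 2)]
theta, (tx, ty) = align_2d(src, dst)
print(round(math.degrees(theta), 1), round(tx, 6), round(ty, 6))
```

In three dimensions the same idea is usually solved with an SVD of the cross-covariance matrix, iterated with nearest-neighbour correspondence search.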
In step S4, referring to fig. 6, a method of forming a three-dimensional geographic model includes:
step S44: carrying out weighted average on the registered preset mapping region scanning data and the three-dimensional coordinates, and fusing the preset mapping region scanning data and the three-dimensional coordinates:
F = w·S + (1 − w)·C
wherein F is the fused data, S is the scan data of the preset mapping region, C is the three-dimensional coordinates, and w is a weight representing the contribution of the scan data of the preset mapping region in the fusion. Adjusting the weight w controls the fusion result; w takes values in the range [0, 1], where w = 0 means the fusion result is determined entirely by the three-dimensional coordinate data and w = 1 means it is determined entirely by the scan data.
Step S45: and generating a matched three-dimensional geographic model according to the fused preset mapping region scanning data and the three-dimensional coordinates.
For example, assume the preset mapping region contains a building whose details are captured by the scan data. Initially, the scan data of the building may deviate in position from the known three-dimensional coordinates; the position and posture of the scan data are adjusted by the ICP algorithm, and the weight w of the weighted average is tuned to favour either the scan data or the three-dimensional coordinates. When the exterior shape of the building is better represented by the scan data, w can be increased to emphasize the details of the laser scan; conversely, if the known three-dimensional coordinates are more reliable, w can be decreased to emphasize the accurate location information. By flexibly adjusting the weight, the scan data are accurately aligned with the known three-dimensional coordinates, generating a three-dimensional geographic model with accurate location and rich detail.
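The weighted fusion of step S44 is a per-coordinate blend. A minimal sketch, with illustrative points and weight:

```python
# Sketch of step S44: F = w*S + (1 - w)*C, applied per coordinate.
# scan_pt, coord_pt, and w below are assumed example values.

def fuse(scan_pt, coord_pt, w):
    """Blend a scan-data point S with a photogrammetric 3D point C."""
    assert 0.0 <= w <= 1.0, "weight must lie in [0, 1]"
    return tuple(w * s + (1 - w) * c for s, c in zip(scan_pt, coord_pt))

# w = 0.75 favours the laser-scan detail over the 3D coordinates.
print(fuse((10.0, 20.0, 30.0), (12.0, 18.0, 30.0), w=0.75))
```

At w = 1 the result is the scan data alone and at w = 0 the three-dimensional coordinates alone, matching the range described in step S44.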
Step S5: and carrying out precision calibration on the three-dimensional geographic model.
Specifically, in step S5, referring to fig. 7, the method further includes the steps of:
Step S50: obtaining geographic coordinates provided by known real geographic information, and further extracting points matched with the geographic coordinates provided by the known real geographic information from the three-dimensional geographic model;
Step S51: calculating an error between points of the three-dimensional geographic model and known real geographic information:
E = √(E_x² + E_y² + E_z²), with E_x = x_m − x_r, E_y = y_m − y_r, E_z = z_m − z_r
wherein E is the error, (x_r, y_r, z_r) are the geographic coordinates provided by the known real geographic information, and (x_m, y_m, z_m) are the coordinates of the three-dimensional geographic information contained in the three-dimensional geographic model;
Step S52: and according to the error, performing three-dimensional geographic model calibration:
P_cal = P_orig − (E_x, E_y, E_z)
wherein P_cal is the coordinates of the three-dimensional geographic model after calibration, P_orig is the original coordinates of the three-dimensional geographic model, and (E_x, E_y, E_z) is the error.
For example, assume several surface points are selected whose true geographic coordinates are provided by the Global Positioning System (GPS) and which are contained in the three-dimensional geographic model. For each such point, the errors E_x, E_y, E_z between the model coordinates and the true coordinates are calculated and considered together across the three directions; the errors of all surface points are then analysed statistically to determine the corrections needed in the x, y, and z directions, and the coordinates of the region are adjusted through the calibration formula of step S52, so that the corrected coordinates of the three-dimensional geographic model are closer to the real geographic information.
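The precision calibration of steps S50 to S52 can be sketched for a single control point; the model and GPS coordinates below are invented example values:

```python
import math

# Sketch of steps S50-S52: per-axis errors against a GPS ground-truth
# point, the overall Euclidean error, and the calibration shift.

def calibrate(model_pt, real_pt):
    ex, ey, ez = (m - r for m, r in zip(model_pt, real_pt))
    e = math.sqrt(ex * ex + ey * ey + ez * ez)  # overall error (step S51)
    calibrated = (model_pt[0] - ex,             # subtract per-axis error
                  model_pt[1] - ey,             # (step S52)
                  model_pt[2] - ez)
    return e, calibrated

e, cal = calibrate((100.0, 200.0, 52.0), (100.3, 199.6, 52.0))
print(round(e, 3), cal)
```

With a single control point the calibration simply snaps the model onto the ground truth; the description's statistical analysis over many surface points would instead derive one averaged correction per direction.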
In some embodiments, in step S10, referring to fig. 8, the method further comprises the steps of:
step S100: after the mapping image is obtained, carrying out graying treatment on the mapping image:
Gray = 0.299·R + 0.587·G + 0.114·B
wherein Gray is the gray value, R is the red channel, G is the green channel, and B is the blue channel.
Specifically, the coefficients 0.299, 0.587, and 0.114 are set with reference to the sensitivity of the human eye to different colors; the operator can adjust them manually according to actual demand.
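As a one-line illustration of the luminance weighting (the 0.299/0.587/0.114 coefficients are the standard perceptual weights the description alludes to):

```python
# Sketch of the graying of step S100 with standard luminance weights.

def to_gray(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(to_gray(255, 255, 255), 1))  # pure white stays at full scale
print(round(to_gray(255, 0, 0), 3))      # pure red keeps only 29.9% of it
```

The weights sum to 1, so the gray value stays inside the original channel range.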
Step S101: carrying out histogram equalization on the mapping image;
Specifically, in performing histogram equalization, referring to fig. 9, the method includes the steps of:
Step S1010: calculating a gray histogram of the gray mapping image;
wherein, for the grey mapping image, counting the number of pixels of each grey level;
step S1011: normalizing the gray level histogram:
p(k) = h(k) / N
wherein h(k) is the gray histogram and N is the total number of pixels; the normalized histogram is obtained by dividing the number of pixels of each gray level in the gray histogram by the total number of pixels, so as to represent the relative proportion of each gray level in the image.
Step S1012: calculating a cumulative distribution function of the normalized histogram:
c(k) = Σ_{j=0..k} p(j)
wherein p(j) is the probability of gray level j in the normalized histogram; the cumulative distribution function accumulates the histogram progressively, starting from the lowest gray level.
Step S1013: mapping the cumulative distribution function to the new gray scale range; the mapping formula is:
s_k = round((L − 1) / (M·N) · Σ_{j=0..k} h(j))
wherein L is the total number of gray levels, and M and N are the number of rows and columns of the mapping image;
Step S1014: applying the equalized mapped gray level to each pixel of the original mapping image to generate an equalized mapping image:
I_eq(x, y) = s_{I(x, y)}
wherein I(x, y) is the pixel value of the original mapping image.
In the equalized mapping image, the gray levels too concentrated in the original histogram will be more uniformly distributed over the entire gray range, thereby improving the contrast of the mapping image.
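Steps S1010 to S1014 can be sketched end to end on a tiny image. This is a minimal illustration with an assumed 8-level gray range, not the patent's implementation:

```python
# Minimal sketch of steps S1010-S1014: histogram -> normalization ->
# CDF -> gray-level mapping -> application to every pixel.

def equalize(pixels, L):
    n = len(pixels)
    hist = [0] * L
    for p in pixels:
        hist[p] += 1                  # step S1010: gray histogram
    cdf, run = [], 0.0
    for h in hist:
        run += h / n                  # steps S1011-S1012: normalize + CDF
        cdf.append(run)
    mapping = [round((L - 1) * c) for c in cdf]  # step S1013
    return [mapping[p] for p in pixels]          # step S1014

# Gray levels crowded at the low end spread across the full range.
print(equalize([0, 0, 1, 1, 2, 3], L=8))
```

The crowded levels 0-3 are stretched toward the top of the 0-7 range, which is the contrast improvement the paragraph above describes.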
Step S102: and performing size adjustment of the mapping image.
In practical implementation, the resizing may use an interpolation method; the present application adopts bilinear interpolation. Taking image reduction as an example, the interpolation formula is:

g_r(x', y') = Σ_{m,n} g_e(m, n) · K(x' − m, y' − n)

wherein g_r is the resized mapping image, g_e(m, n) is the pixel value of the mapping image subjected to histogram equalization, and K is the interpolation kernel function.
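A minimal bilinear resize, written out with explicit corner weights rather than through the kernel sum (the tent kernel K is implicit in the weights):

```python
import numpy as np

# Sketch of step S102: bilinear resize of a 2-D gray image.
# The four-neighbor weighted average below is the standard bilinear
# scheme; boundary handling (clamping) is a simplifying assumption.
def resize_bilinear(img, new_h, new_w):
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    out = np.empty((new_h, new_w), dtype=float)
    for i, y in enumerate(ys):
        y0 = int(np.floor(y)); y1 = min(y0 + 1, h - 1); fy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x)); x1 = min(x0 + 1, w - 1); fx = x - x0
            top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
            bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
            out[i, j] = (1 - fy) * top + fy * bot
    return out

small = resize_bilinear(np.array([[0., 10.], [20., 30.]]), 3, 3)
```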
In some embodiments, in step S11, referring to fig. 10, the method further includes the steps of:
step S110: removing noise points in the scanned data by adopting Gaussian filtering:

P'_i = Σ_j w_j · P_j / Σ_j w_j, with w_j = exp(−‖P_j − P_i‖² / (2σ²))

wherein P' is the scan data after denoising, P is the original coordinates of the scan data, and σ is the standard deviation of the Gaussian filter, which controls the effective window size of the Gaussian filter.
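Step S110 can be sketched as Gaussian-weighted smoothing of the scan points; for brevity the weights here run over all points, whereas a practical implementation would restrict them to the local window that σ effectively controls:

```python
import numpy as np

# Sketch of step S110: Gaussian-weighted smoothing of scan points.
# Each point is replaced by a weighted mean of the points, with weights
# exp(-d^2 / (2 sigma^2)); distant points contribute almost nothing.
def gaussian_denoise(points, sigma=1.0):
    points = np.asarray(points, dtype=float)
    out = np.empty_like(points)
    for i, p in enumerate(points):
        d2 = np.sum((points - p) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        out[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    return out

pts = np.array([[0., 0., 0.], [0.1, 0., 0.], [5., 5., 5.]])
smoothed = gaussian_denoise(pts, sigma=0.5)
```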
Step S111: separating the ground point cloud data in the scanned data from the whole point cloud data by adopting a RANSAC method, fitting the ground model:

z = k·x + b

wherein k is the slope of the fitted ground and b is the intercept of the fitted ground;
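The ground separation of step S111 can be sketched with the simplified line model z = k·x + b named in the text; the distance threshold and iteration count are assumed parameters:

```python
import numpy as np

# Sketch of step S111: RANSAC fit of a ground line z = k*x + b.
# Points within `thresh` of the best fit are labeled ground.
def ransac_ground(points, iters=200, thresh=0.1, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, z1), (x2, z2) = points[i], points[j]
        if x1 == x2:
            continue                          # degenerate sample, skip
        k = (z2 - z1) / (x2 - x1)             # slope of the fitted ground
        b = z1 - k * x1                       # intercept of the fitted ground
        inliers = np.abs(points[:, 1] - (k * points[:, 0] + b)) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# four near-flat ground points (x, z) plus one elevated non-ground point
pts = np.array([[0., 0.], [1., 0.01], [2., -0.01], [3., 0.02], [1.5, 3.0]])
ground = ransac_ground(pts)
```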
step S112: correcting the position and attitude of the scan data via GPS/IMU data:

X_geo = R · X_local + T

wherein X_geo is the geographic coordinates of the corrected scan data, X_local is the local coordinates of the original scan data, R is the rotation matrix, and T is the translation matrix;
Step S113: registering the scan data such that all scans are located in the same coordinate system, by minimizing the error function:

E(R, T) = Σ_i ‖(R · p_i + T) − q_i‖²

wherein E is the error function, p_i is a point of the source point cloud of the scan data, and q_i is the corresponding point of the target point cloud of the scan data.
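A sketch of one least-squares alignment step for step S113, under the assumption that point correspondences are already known (full ICP would re-estimate correspondences each iteration); the R and T minimizing the error are obtained in closed form via SVD (the Kabsch method):

```python
import numpy as np

# Sketch: closed-form R, T minimizing sum ||(R p_i + T) - q_i||^2
# for known correspondences, via the Kabsch/SVD method.
def align(source, target):
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = mu_t - R @ mu_s
    return R, T

src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
tgt = src @ Rz.T + np.array([1., 2., 3.])     # rotated + translated copy
R, T = align(src, tgt)
```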
In practical implementation, the rotation matrix R and the translation matrix T are matrices used to describe rigid-body transformations in three-dimensional space, and are commonly used for transformations between coordinate systems.
Exemplarily, the rotation matrix R is an orthogonal matrix describing the rotation of an object in three dimensions. For a vector v = (x, y, z)ᵀ in three-dimensional space, the new coordinates v' after rotation by R can be calculated by the following formula:

v' = R · v, R = [r_11 r_12 r_13; r_21 r_22 r_23; r_31 r_32 r_33]

wherein r_ij are the elements of the rotation matrix.
Exemplarily, the translation matrix T describes a translation transformation in space. For a vector v = (x, y, z)ᵀ in three-dimensional space, the new coordinates v' after translation by T can be calculated by the following formula:

v' = v + T = (x + t_x, y + t_y, z + t_z)ᵀ

wherein t_x, t_y and t_z are the translation amounts of the translation matrix.
In practical implementations, the correction of position and attitude is typically described using a rigid-body transformation comprising both a rotation and a translation, which can be expressed by the following formula:

v' = R · v + T

The relative distances and angular relationships between points are maintained by this transformation.
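A short numerical check that the rigid-body transform v' = R·v + T preserves distances between points:

```python
import numpy as np

# Apply a rigid-body transform v' = R v + T to a set of row-vector points
# and verify the distance-preserving property stated above.
def rigid_transform(points, R, t):
    return points @ np.asarray(R).T + np.asarray(t)

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0., 0., 1.]])
t = np.array([10., -5., 2.])
pts = np.array([[0., 0., 0.], [3., 4., 0.]])  # 3-4-5 triangle legs: distance 5
moved = rigid_transform(pts, R, t)
dist_before = np.linalg.norm(pts[1] - pts[0])
dist_after = np.linalg.norm(moved[1] - moved[0])
```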
In some embodiments, referring to fig. 11, the present invention further provides a geospatial information acquisition mapping apparatus, including:
The data processing module is used for acquiring a preset mapping area image and scanning data, and further carrying out preset processing on the preset mapping area image and the scanning data;
The coordinate calculation module is used for carrying out feature matching according to the preset mapping area image subjected to the preset processing, and further mapping the features of the preset mapping area image into a geographic coordinate system; calculating three-dimensional coordinates of a geographic space by utilizing matching features of images of a preset mapping area and preset parameters contained in photographic equipment data;
The model fusion module is used for registering the preset mapping region scanning data with the three-dimensional coordinates, and further fusing the registered preset mapping region scanning data with the three-dimensional coordinates to form a three-dimensional geographic model;
and the model calibration module is used for carrying out precision calibration on the three-dimensional geographic model.
In an actual implementation process, the geospatial information acquisition mapping apparatus performs geographic information acquisition and mapping by adopting the geospatial information acquisition mapping method described above.
Thus, in some embodiments, one aspect of the present invention also provides a computer medium having a computer program stored thereon, the computer program being executed by a processor to implement the geospatial information acquisition mapping method as described above.
Thus, in some embodiments, one aspect of the present invention also provides a computer comprising the computer medium as described above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (7)

1. A geospatial information acquisition mapping method comprising the steps of:
Step S1: acquiring a preset mapping area image and scanning data, and further carrying out preset processing on the preset mapping area image and the scanning data;
step S2: according to the preset mapping area image subjected to the preset processing, performing feature matching, and further mapping the features of the preset mapping area image into a geographic coordinate system;
Step S3: calculating three-dimensional coordinates of a geographic space by utilizing matching features of images of a preset mapping area and preset parameters contained in photographic equipment data;
Step S4: registering the preset mapping region scanning data with the three-dimensional coordinates, and fusing the registered preset mapping region scanning data with the three-dimensional coordinates to form a three-dimensional geographic model;
Step S5: performing precision calibration on the three-dimensional geographic model;
in step S2, the method further comprises the steps of:
Step S20: dividing a preset mapping area image into a plurality of image blocks with fixed-size windows;
Step S21: calculating the mean square error of each image block:

MSE = (1/n) Σ_i (I(i) − μ)²

and further calculating a correlation coefficient of each image block:

ρ = Σ_i (I_1(i) − μ_1)(I_2(i) − μ_2) / sqrt(Σ_i (I_1(i) − μ_1)² · Σ_i (I_2(i) − μ_2)²)

wherein MSE is the mean square error, I(i) is the pixel value of the image block, ρ is the correlation coefficient, and μ_1, μ_2 are the averages of the pixel values within the image blocks;
step S22: performing feature matching according to the mean square error and the correlation coefficient to generate matching point pairs;
Step S23: calculating a homography matrix according to the matching point pairs:

H = [h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33]

the mapping formula being:

(x_2, y_2, 1)ᵀ ≃ H · (x_1, y_1, 1)ᵀ

wherein subscript 1 denotes image 1, subscript 2 denotes image 2, h_ij is a matrix element, (x_1, y_1) are the feature point coordinates of image 1, and (x_2, y_2) are the feature point coordinates of image 2;
in step S3, the method further comprises the steps of:
step S30: performing image correction of a preset mapping area to ensure that characteristic points of a plurality of mapping images contained in the preset mapping area image are matched with each other;
step S31: searching matching point pairs in a plurality of mapping images, and further calculating parallax information of a plurality of preset mapping region images:

d = x_l − x_r

wherein x_l is the position in the left mapping image and x_r is the position in the right mapping image;
step S32: calculating depth information using the parallax information:

Z = f · B / d

wherein B is the baseline length between the imaging devices and f is the focal length of the image pickup apparatus;
step S33: converting the image coordinates of the preset mapping area into normalized plane coordinates:

x_n = (u − c_x) / f, y_n = (v − c_y) / f

and further converting the two-dimensional coordinates on the image plane of the preset mapping area into three-dimensional coordinates according to the depth information:

X = x_n · Z, Y = y_n · Z

wherein (u, v) are the coordinates on the image plane of the preset mapping area and (c_x, c_y) are the coordinates of the center point of the image of the preset mapping area;
in step S4, the method further comprises the steps of:
step S40: minimizing a matching error between points of scan data of a preset mapping region:

min_{R,T} Σ_i ‖(R · p_i + T) − q_i‖²

wherein the error is minimized over R and T, p_i is a point in the scan data of the preset mapping area, R is the rotation matrix, and q_i is the position of the corresponding point in the three-dimensional coordinates;
step S41: selecting an initial rotation matrix R and translation vector T, and further registering each point p_i in the scan data of the preset mapping area;
step S42: adjusting a transformation matrix according to the matching error;
Step S43: continuously adjusting the rotation matrix R and the translation vector T until the scan data of the preset mapping area are registered with the three-dimensional coordinates;
step S44: carrying out weighted average on the registered preset mapping region scan data and the three-dimensional coordinates, fusing the preset mapping region scan data and the three-dimensional coordinates:

F = w · S + (1 − w) · C

wherein F is the fusion data, S is the scan data of the preset mapping area, w is a weight representing the contribution degree of the scan data of the preset mapping region in the fusion, and C is the three-dimensional coordinates;
step S45: and generating a matched three-dimensional geographic model according to the fused preset mapping region scanning data and the three-dimensional coordinates.
2. A geospatial information acquisition mapping method according to claim 1 characterized in that in step S1 the method further comprises the steps of:
Step S10: shooting a preset mapping area at multiple positions to obtain mapping images containing different positions;
step S11: and scanning the earth surface of the preset mapping area by using the laser beam to acquire scanning data of the preset mapping area.
3. A geospatial information acquisition mapping method according to claim 2 wherein in step S10 the method further comprises the steps of:
Step S100: graying treatment is carried out on the mapping image:

Gray = w_R · R + w_G · G + w_B · B

wherein Gray is the gray value, R is the red channel, G is the green channel, and B is the blue channel;
step S101: carrying out histogram equalization on the mapping image;
step S102: performing size adjustment of the mapping image:

g_r(x', y') = Σ_{m,n} g_e(m, n) · K(x' − m, y' − n)

wherein g_r is the resized mapping image, g_e(m, n) is the pixel value of the mapping image after histogram equalization, and K is the interpolation kernel function.
4. A geospatial information acquisition mapping method according to claim 3 wherein in step S101 the method further comprises the steps of:
Step S1010: calculating a gray histogram of the gray mapping image;
step S1011: normalizing the gray level histogram:

p(k) = h(k) / (M·N)

wherein h(k) is the gray level histogram and M·N is the total number of pixels;
step S1012: calculating the cumulative distribution function of the normalized histogram:

CDF(k) = Σ_{j=0}^{k} p(j)

wherein p(j) is the probability of gray level j in the normalized histogram;
Step S1013: equalizing and mapping the cumulative distribution function:

s(k) = round((L − 1) · CDF(k))

wherein L is the total number of gray levels, and M and N are the number of rows and columns of the mapping image;
Step S1014: applying the equalized mapped gray level to each pixel of the original mapping image to generate the equalized mapping image:

g'(x, y) = s(g(x, y))

wherein g(x, y) is the pixel value of the original mapping image.
5. A geospatial information acquisition mapping method according to claim 2 characterized in that in step S11 the method further comprises the steps of:
step S110: removing noise points in the scanned data by adopting Gaussian filtering:

P'_i = Σ_j w_j · P_j / Σ_j w_j, with w_j = exp(−‖P_j − P_i‖² / (2σ²))

wherein P' is the denoised scan data, P is the original coordinates of the scanned data, and σ is the standard deviation of the Gaussian filter, which controls the effective window size of the Gaussian filter;
step S111: separating the ground point cloud data in the scanned data from the whole point cloud data by adopting a RANSAC method, fitting the ground model:

z = k·x + b

wherein k is the slope of the fitted ground and b is the intercept of the fitted ground;
step S112: correcting the position and attitude of the scan data via GPS/IMU data:

X_geo = R · X_local + T

wherein X_geo is the geographic coordinates of the corrected scan data, X_local is the local coordinates of the original scan data, R is the rotation matrix, and T is the translation matrix;
Step S113: registering the scan data such that all scans are located in the same coordinate system, by minimizing the error function:

E(R, T) = Σ_i ‖(R · p_i + T) − q_i‖²

wherein E is the error function, p is the source point cloud of the scan data, and q is the target point cloud of the scan data.
6. A geospatial information acquisition mapping method according to claim 1 characterized in that in step S5 the method further comprises the steps of:
Step S50: obtaining geographic coordinates provided by known real geographic information, and further extracting points matched with the geographic coordinates provided by the known real geographic information from the three-dimensional geographic model;
Step S51: calculating an error between points of the three-dimensional geographic model and known real geographic information:

E = G_real − G_model

wherein E is the error, G_real is the geographic coordinates provided by the known real geographic information, and G_model is the coordinates of the three-dimensional geographic information contained in the three-dimensional geographic model;
Step S52: performing three-dimensional geographic model calibration according to the error:

X_cal = X_orig + E

wherein X_cal is the coordinates of the three-dimensional geographic model after calibration, X_orig is the original coordinates of the three-dimensional geographic model, and E is the error.
7. Geospatial information acquisition mapping apparatus employing a geospatial information acquisition mapping method as defined in any of claims 1-6, comprising:
The data processing module is used for acquiring a preset mapping area image and scanning data, and further carrying out preset processing on the preset mapping area image and the scanning data;
The coordinate calculation module is used for carrying out feature matching according to the preset mapping area image subjected to the preset processing, and further mapping the features of the preset mapping area image into a geographic coordinate system; calculating three-dimensional coordinates of a geographic space by utilizing matching features of images of a preset mapping area and preset parameters contained in photographic equipment data;
The model fusion module is used for registering the preset mapping region scanning data with the three-dimensional coordinates, and further fusing the registered preset mapping region scanning data with the three-dimensional coordinates to form a three-dimensional geographic model;
and the model calibration module is used for carrying out precision calibration on the three-dimensional geographic model.
CN202410139366.0A 2024-02-01 2024-02-01 Geographic space information acquisition mapping method and device Active CN117665841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410139366.0A CN117665841B (en) 2024-02-01 2024-02-01 Geographic space information acquisition mapping method and device


Publications (2)

Publication Number Publication Date
CN117665841A (en) 2024-03-08
CN117665841B (en) 2024-04-30

Family

ID=90066443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410139366.0A Active CN117665841B (en) 2024-02-01 2024-02-01 Geographic space information acquisition mapping method and device

Country Status (1)

Country Link
CN (1) CN117665841B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118168514A (en) * 2024-05-14 2024-06-11 南京苏测测绘科技有限公司 Underwater section mapping system and method based on intelligent algorithm imaging

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109239725A (en) * 2018-08-20 2019-01-18 广州极飞科技有限公司 Ground mapping method and terminal based on laser ranging system
CN109945845A (en) * 2019-02-02 2019-06-28 南京林业大学 A kind of mapping of private garden spatial digitalized and three-dimensional visualization method
CN114492070A (en) * 2022-02-14 2022-05-13 广东工贸职业技术学院 High-precision mapping geographic information virtual simulation technology and device
CN114998545A (en) * 2022-07-12 2022-09-02 深圳市水务工程检测有限公司 Three-dimensional modeling shadow recognition system based on deep learning
CN115457222A (en) * 2022-09-14 2022-12-09 北京建筑大学 Method for geographic registration of three-dimensional model in geographic information system
CN117237553A (en) * 2023-09-14 2023-12-15 广东省核工业地质局测绘院 Three-dimensional map mapping system based on point cloud image fusion



Similar Documents

Publication Publication Date Title
CN107316325B (en) Airborne laser point cloud and image registration fusion method based on image registration
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN107563964B (en) Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images
AU2011312140B2 (en) Rapid 3D modeling
US7773799B2 (en) Method for automatic stereo measurement of a point of interest in a scene
CN107155341B (en) Three-dimensional scanning system and frame
CN117665841B (en) Geographic space information acquisition mapping method and device
CN110799921A (en) Shooting method and device and unmanned aerial vehicle
CN110930508B (en) Two-dimensional photoelectric video and three-dimensional scene fusion method
JP3850541B2 (en) Advanced measuring device
CN109118544B (en) Synthetic aperture imaging method based on perspective transformation
WO2007133620A2 (en) System and architecture for automatic image registration
CN111693025B (en) Remote sensing image data generation method, system and equipment
EP2686827A1 (en) 3d streets
Guo et al. Mapping crop status from an unmanned aerial vehicle for precision agriculture applications
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN110782498A (en) Rapid universal calibration method for visual sensing network
CN112348775A (en) Vehicle-mounted all-round-looking-based pavement pool detection system and method
Hirschmüller et al. Stereo vision based reconstruction of huge urban areas from an airborne pushbroom camera (HRSC)
CN113415433A (en) Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle
KR102159134B1 (en) Method and system for generating real-time high resolution orthogonal map for non-survey using unmanned aerial vehicle
CN114544006B (en) Low-altitude remote sensing image correction system and method based on ambient illumination condition
CN108917722B (en) Vegetation coverage degree calculation method and device
CN113963067B (en) Calibration method for calibrating large-view-field visual sensor by using small target
CN113345084B (en) Three-dimensional modeling system and three-dimensional modeling method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant