CN116993627B - Laser scanning image data correction method - Google Patents

Laser scanning image data correction method Download PDF

Info

Publication number
CN116993627B
CN116993627B CN202311245852.2A CN202311245852A CN116993627B CN 116993627 B CN116993627 B CN 116993627B CN 202311245852 A CN202311245852 A CN 202311245852A CN 116993627 B CN116993627 B CN 116993627B
Authority
CN
China
Prior art keywords
dimensional
window
pixel
point cloud
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311245852.2A
Other languages
Chinese (zh)
Other versions
CN116993627A (en
Inventor
胡进芳
程海龙
常媛媛
张媛媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANDONG LAIEN OPTIC-ELECTRONIC TECHNOLOGY CO LTD
Original Assignee
SHANDONG LAIEN OPTIC-ELECTRONIC TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANDONG LAIEN OPTIC-ELECTRONIC TECHNOLOGY CO LTD filed Critical SHANDONG LAIEN OPTIC-ELECTRONIC TECHNOLOGY CO LTD
Priority to CN202311245852.2A priority Critical patent/CN116993627B/en
Publication of CN116993627A publication Critical patent/CN116993627A/en
Application granted granted Critical
Publication of CN116993627B publication Critical patent/CN116993627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a laser scanning image data correction method, which comprises the following steps: acquiring a three-dimensional target shape point cloud data image and a three-dimensional target model point cloud data image; performing three-dimensional sliding window operation on the three-dimensional target shape point cloud data image to obtain a plurality of three-dimensional windows, and obtaining color parameters of each pixel point in each three-dimensional window; obtaining the position density degree of each pixel point in each three-dimensional window, and obtaining the similarity index of each pixel point in each three-dimensional window by combining the color parameters; and classifying, matching and marking all the pixel points in the three-dimensional window according to the similarity index of each pixel point to obtain all the marked three-dimensional windows, and fusing the marked three-dimensional windows to obtain marked three-dimensional target shape point cloud data images, so that point cloud registration is carried out with the three-dimensional target model point cloud data images, and data correction is completed. The invention reduces the probability of error correction of the image data to a certain extent, so that the registered result is more accurate.

Description

Laser scanning image data correction method
Technical Field
The invention relates to the technical field of image processing, in particular to a laser scanning image data correction method.
Background
The correction of laser scanning image data refers to processing of original image data acquired by a laser scanner to eliminate problems such as distortion, distortion or noise possibly generated by the scanner itself or in the scanning process, so that the image data is more accurate, reliable and accords with the expected form. For point cloud data acquired by laser scanning, alignment and fusion can be performed through a point cloud registration algorithm, errors introduced by different scanning positions or angles are eliminated, consistency and accuracy of image data are improved, and common point cloud registration methods comprise an ICP algorithm, a feature matching algorithm and the like.
The dependence of the ICP algorithm on initial alignment is relatively high. The initial alignment error may cause the algorithm to fall into a locally optimal solution, or require more iterations to reach a globally optimal solution, so that when the point cloud data alignment is performed, the approximate position of the registration needs to be found first, otherwise, the problem of inaccurate result after the registration is caused.
Disclosure of Invention
In order to solve the above-described problems, the present invention provides a laser scanning image data correction method, the method comprising:
acquiring a three-dimensional target shape point cloud data image and a three-dimensional target model point cloud data image;
performing three-dimensional sliding window operation on the three-dimensional target shape point cloud data image to obtain a plurality of three-dimensional windows, and obtaining color parameters of each pixel point in each three-dimensional window;
acquiring the position density degree of each pixel point in each three-dimensional window according to the position information of each pixel point in each three-dimensional window; acquiring the color difference characteristic of each pixel point in each three-dimensional window according to the color parameters of each pixel point in each three-dimensional window; obtaining a similarity index of each pixel point in each three-dimensional window according to the position density degree of each pixel point in each three-dimensional window and the color difference characteristic of each pixel point in each three-dimensional window;
classifying, matching and marking all pixel points in each three-dimensional window according to the similarity index of each pixel point in each three-dimensional window to obtain all marked three-dimensional windows; fusing all marked three-dimensional windows to obtain marked three-dimensional target shape point cloud data images; and carrying out point cloud registration on the marked three-dimensional target shape point cloud data image and the three-dimensional target model point cloud data image.
Preferably, the method for obtaining the three-dimensional target shape point cloud data image and the three-dimensional target model point cloud data image includes the following specific steps:
carrying out three-dimensional laser scanning on a target area by using an unmanned aerial vehicle carrying a 3D scanner to obtain a three-dimensional target shape point cloud data image; modeling the target area through CAD and exporting the target area as a target model point cloud data image.
Preferably, the three-dimensional sliding window operation is performed on the three-dimensional target shape point cloud data image to obtain a plurality of three-dimensional windows, and the specific method includes:
using a predetermined sizeThe step length of the three-dimensional sliding window on the three-dimensional target shape point cloud data image is as followsA plurality of three-dimensional windows are obtained through the three-dimensional sliding window operation of the window; the size of the three-dimensional sliding window is set to be a common factor of the size of the three-dimensional target shape point cloud data image.
Preferably, the obtaining the color parameter of each pixel point in each three-dimensional window includes the following specific steps:
first, theIn a three-dimensional windowThe color parameter calculation expression of each pixel point is as follows:
in the method, in the process of the invention,represent the firstIn a three-dimensional windowColor parameters of the individual pixels;represent the firstIn a three-dimensional windowGray values of the individual pixels;represent the firstIn a three-dimensional windowGray values of the individual pixels;is a preset parameter.
Preferably, the specific formula for obtaining the position density degree of each pixel point in each three-dimensional window according to the position information of each pixel point in each three-dimensional window is as follows:
in the method, in the process of the invention,represent the firstIn a three-dimensional windowThe position density of each pixel point;represent the firstIn a three-dimensional windowThe pixel point and the firstA first difference value of each pixel point;represent the firstIn a three-dimensional windowThe pixel point and the firstA second difference value of the pixel points;represent the firstIn a three-dimensional windowThe pixel point and the firstA third difference value of the pixel points;represent the firstIn a three-dimensional windowColor parameters of the individual pixels;represent the firstIn a three-dimensional windowColor parameters of the individual pixels;an exponential function based on a natural constant;the representation takes absolute value;is a preset parameter.
Preferably, the firstFirst difference value of each pixel pointSecond difference and first difference of each pixel pointThe third difference value obtaining method of each pixel point comprises the following steps:
acquisition of the firstThree-dimensional coordinates of each pixel point in the three-dimensional window for the third dimensionExcept for the first three-dimensional windowThe first pixel pointA pixel point of the first pixelThe pixel point and the firstThe difference value of the horizontal coordinates of each pixel point is recorded as the firstThe pixel point and the firstA first difference value of each pixel point; will be the firstThe pixel point and the firstThe difference of the vertical coordinates of each pixel point is recorded as the firstThe pixel point and the firstA second difference value of the pixel points; will be the firstThe pixel point and the firstThe difference value of the vertical coordinates of each pixel point is recorded as the firstThe pixel point and the firstAnd a third difference value of the pixel points.
Preferably, the specific formula for obtaining the color difference characteristic of each pixel point in each three-dimensional window according to the color parameter of each pixel point in each three-dimensional window is as follows:
in the method, in the process of the invention,represent the firstIn a three-dimensional windowColor difference characteristics of the individual pixel points;represent the firstIn a three-dimensional windowColor parameters of the individual pixels;represent the firstIn a three-dimensional windowColor parameters of the individual pixels;is a preset parameter.
Preferably, the method for obtaining the similarity index of each pixel point in each three-dimensional window according to the position density degree of each pixel point in each three-dimensional window and the color difference characteristic of each pixel point in each three-dimensional window includes the following specific steps:
and taking the product of the position density degree of each pixel point in each three-dimensional window and the color difference characteristic of the corresponding pixel point as a similarity index of each pixel point in each three-dimensional window.
Preferably, the classifying, matching and marking are performed on all the pixel points in each three-dimensional window according to the similarity index of each pixel point in each three-dimensional window, so as to obtain all the marked three-dimensional windows, which comprises the following specific methods:
for any three-dimensional window, classifying all pixel points in the three-dimensional window by using an SVM algorithm based on the similarity index of each pixel point in the three-dimensional window to obtain a plurality of categories, marking the area formed by all pixel points of each category as the similar area in the three-dimensional window, marking the similar areas in the three-dimensional window, further obtaining the marked three-dimensional window, and similarly obtaining all marked three-dimensional windows.
Preferably, the method for fusing all marked three-dimensional windows to obtain the marked three-dimensional target shape point cloud data image includes the following specific steps:
and (3) adopting an interpolation method, and fusing the positions originally corresponding to the three-dimensional windows in the three-dimensional target shape point cloud data image into a complete marked three-dimensional target shape point cloud data image.
The technical scheme of the invention has the beneficial effects that: aiming at the problem that the similar point clouds of different areas are easy to match by the existing ICP algorithm, thereby causing errors. According to the invention, the color characteristics and the position density degree between the point cloud data in the laser scanning image are used for quantization, so that the similarity index of the point cloud data is obtained, the laser scanning image is further subjected to similar region division, and then the laser scanning image is matched and corrected with the template point cloud data, so that the probability of error correction of the image data is reduced to a certain extent, and the registered result is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart showing the steps of a method for correcting laser scanned image data according to the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description refers to specific embodiments, structures, features and effects of a laser scanning image data correction method according to the present invention with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of a laser scanning image data correction method provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a method for correcting laser scanned image data according to an embodiment of the present invention is shown, the method includes the steps of:
step S001: and acquiring a three-dimensional target shape point cloud data image and a three-dimensional target model point cloud data image.
It should be noted that, for the point cloud data obtained by laser scanning, alignment and fusion can be performed through a point cloud registration algorithm, so that errors introduced by different scanning positions or angles are eliminated, consistency and precision of image data are improved, a common point cloud registration method is an ICP algorithm, but dependence of the ICP algorithm on initial alignment is high. The initial alignment error may cause the algorithm to fall into a locally optimal solution, or require more iterations to reach a globally optimal solution, so that when the point cloud data alignment is performed, the approximate position of the registration needs to be found first, otherwise, the problem of inaccurate result after the registration is caused.
The laser scanning image technology is generally used in industries of construction and engineering, cultural heritage protection and cultural relic reconstruction, geological and topographic mapping, traffic and urban planning and the like, and targets required to be acquired in the industries are large and have various angles, so that an unmanned aerial vehicle is required to scan the targets to obtain corresponding laser images; the point cloud data are obtained after laser scanning, the laser scanning image of the object is obtained after the point cloud data are densely distributed, and when the scanning image is corrected, the shape point cloud and the model point cloud of the image need to be obtained.
Specifically, in order to implement the laser scanning image data correction method provided in this embodiment, first, a three-dimensional target shape point cloud data image and a three-dimensional target model point cloud data image need to be acquired, and the specific process is as follows:
carrying out three-dimensional laser scanning on a target area by using an unmanned aerial vehicle carrying a 3D scanner to obtain a three-dimensional target shape point cloud data image; modeling a target area through CAD and exporting the target area as a target model point cloud data image, wherein the three-dimensional target shape point cloud data image and the three-dimensional target model point cloud data image are described by the embodiment with the sizes of 540 multiplied by 540; the shape point cloud data refers to moving point cloud data to be registered, namely, a pixel point corresponding to a three-dimensional target shape point cloud data image, which is a set containing discrete points, and each shape point cloud data has three-dimensional coordinates, and the coordinates (x, y, z) are used for representing the shape point cloud data in the embodiment. The model point cloud data is fixed point cloud data used as a reference, is pre-constructed point cloud data and represents the point cloud of a known shape or geometric model, and is processed, registered and modeled for comparison and matching with the shape point cloud data.
So far, the three-dimensional target shape point cloud data image and the three-dimensional target model point cloud data image are obtained through the method.
Step S002: and carrying out three-dimensional sliding window operation on the three-dimensional target shape point cloud data image to obtain a plurality of three-dimensional windows, and obtaining the color parameters of each pixel point in each three-dimensional window.
It should be noted that, the three-dimensional object shape point cloud data image may show different colors, mainly because the materials and characteristics of different surfaces of the object may cause the color change of the three-dimensional object shape point cloud data image. The optical properties, reflectivity and diffuse reflection properties of the material affect scattering and reflection of light, so that different gray values and colors are presented, and therefore, the color distribution of different areas on the three-dimensional target shape point cloud data image can be used as a characteristic of the three-dimensional target shape point cloud data image, and the color of any pixel point in the three-dimensional target shape point cloud data image and the color distribution of surrounding pixels can be used as a characteristic of the pixel point positioning.
Because the color characteristic and the distribution characteristic of each pixel point are calculated through the integral three-dimensional target shape point cloud data image, the point cloud distribution areas with different characteristics cannot be obtained, a plurality of three-dimensional windows are obtained through three-dimensional sliding windows on the three-dimensional target shape point cloud data image, the color characteristic and the distribution characteristic of each pixel point are analyzed in the three-dimensional windows, the color characteristic and the distribution characteristic in each point cloud data can be more accurately analyzed, and the point cloud distribution areas with different characteristics are obtained.
Presetting a parameterWherein the present embodiment usesThe embodiment is not specifically limited, but the embodiment is described inDepending on the particular implementation.
Specifically, a preset size is usedThe step length of the three-dimensional sliding window on the three-dimensional target shape point cloud data image is as followsA plurality of three-dimensional windows are obtained through the three-dimensional sliding window operation of the window; the three-dimensional sliding window is set to be a common factor of the size of the three-dimensional target shape point cloud data image in order to enable the three-dimensional sliding window to uniformly divide the three-dimensional target shape point cloud data image into a plurality of windows; and for any three-dimensional window, if the gray value of the pixel point in the three-dimensional window is 0, the three-dimensional window is not subjected to subsequent operation.
It should be noted that, as can be seen from the three-dimensional target shape point cloud data image, the image obtained by laser scanning has clear color characteristics, so that the image can be quantified by calculating the gray value of each pixel point in each three-dimensional window and combining the differences of surrounding pixel points in the three-dimensional sliding window, because different areas may have very similar pixel point color distribution, the quantified result is very sensitive to color change, and therefore, when the color area changes, the area with the same color distribution can be found from the change degree of the color parameters.
Specifically, the firstIn a three-dimensional windowThe color parameter calculation expression of each pixel point is as follows:
in the method, in the process of the invention,represent the firstIn a three-dimensional windowColor parameters of the individual pixels;represent the firstIn a three-dimensional windowGray values of the individual pixels;represent the firstIn a three-dimensional windowGray values of the individual pixels;for preset parameters, represent the firstThree of themThe size of the dimension window.
So far, the color parameter of each pixel point in each three-dimensional window is obtained through the method.
Step S003: and obtaining the position density degree of each pixel point in each three-dimensional window, and obtaining the similarity index of each pixel point in each three-dimensional window by combining the color parameters.
It should be noted that, through the three-dimensional sliding window operation performed on the three-dimensional object shape point cloud data image, a plurality of windows are obtained, although the color difference characteristics of the point cloud data can be obtained through the color parameters of each pixel point in different windows, similar color areas in one window are too many, and although individual areas are far away, the color difference characteristics are relatively close, the similar areas can be generated by only dividing the three-dimensional object shape point cloud data image by the color characteristics, a large amount of calculation amount can be generated because of too many similar areas, the image can be observed, if the structure of the object changes to some extent, the pixel point density between different areas is different, the point cloud data density reflects the complexity of the object area to some extent, and therefore the similar areas in each window can be found out more accurately through the color difference characteristics and the position density of the point cloud data.
It should be further noted that, the position density of the point cloud data refers to whether the number of pixels with almost the same color parameter in a certain area is enough, and whether the arrangement between the pixels is compact, so that the influence factors of the position density of the point cloud data mainly include two factors, namely, the number of pixels with small color parameter differences and the distance between the pixels with small color parameter differences.
Specifically, obtain the firstThree-dimensional coordinates of each pixel point in the three-dimensional window for the third dimensionExcept for the first three-dimensional windowThe first pixel pointA pixel point of the first pixelThe pixel point and the firstThe difference value of the horizontal coordinates of each pixel point is recorded as the firstThe pixel point and the firstA first difference value of each pixel point; will be the firstThe pixel point and the firstThe difference of the vertical coordinates of each pixel point is recorded as the firstThe pixel point and the firstA second difference value of the pixel points; will be the firstThe pixel point and the firstThe difference value of the vertical coordinates of each pixel point is recorded as the firstThe pixel point and the firstA third difference value of the pixel points; then the firstIn a three-dimensional windowThe calculation expression of the position density degree of each pixel point is as follows:
in the method, in the process of the invention,represent the firstIn a three-dimensional windowThe position density of each pixel point;represent the firstIn a three-dimensional windowThe pixel point and the firstA first difference value of each pixel point;represent the firstIn a three-dimensional windowThe pixel point and the firstA second difference value of the pixel points;represent the firstIn a three-dimensional windowThe pixel point and the firstA third difference value of the pixel points;represent the firstIn a three-dimensional windowColor parameters of the individual pixels;represent the firstIn a three-dimensional windowColor parameters of the individual pixels;an exponential function based on a natural constant;the representation takes absolute value;for preset parameters, represent the firstThe size of the three-dimensional window.
In the point cloud data of the three-dimensional target shape point cloud data image, the similarity index between the pixels corresponding to the point cloud data is calculated, the calculation is required to be performed by the color difference feature and the position density degree of the pixels, and the obtaining process of the position density degree also requires to use the color parameters to find out the pixels with small differences for calculation. Firstly, for a window, if the pixel points in the window are to be measured to be dense, firstly, whether the color parameters of the pixel points in the sliding window are uniform is to be measured, for any pixel in the window, the measurement is carried out by calculating the deviation degree of the color parameters between the pixel points of the rest window and the pixel points, and if the color distribution in the window is originally discrete, the similarity degree of the pixel points and other pixel points of the window is reduced. And then, the similarity index of each pixel point is obtained by calculating the position density degree of each pixel point and combining the average value of the difference values between the pixel point and the color parameters of other pixel points of the window.
Specifically, the firstIn a three-dimensional windowThe calculation expression of the color difference characteristics of the individual pixel points is as follows:
in the method, in the process of the invention,represent the firstIn a three-dimensional windowColor difference characteristics of the individual pixel points;represent the firstIn a three-dimensional windowColor parameters of the individual pixels;represent the firstIn a three-dimensional windowColor parameters of the individual pixels;for preset parameters, represent the firstThe size of the three-dimensional window.
According to the firstIn a three-dimensional windowColor difference feature and the first pixel pointIn a three-dimensional windowObtaining the position density of each pixel pointIn a three-dimensional windowThe calculation expression of the similarity index of each pixel point is as follows:
in the method, in the process of the invention,represent the firstIn a three-dimensional windowSimilarity index of individual pixels;represent the firstIn a three-dimensional windowThe position density of each pixel point;represent the firstIn a three-dimensional windowColor difference characteristics of individual pixels.
So far, the similarity index of each pixel point in each three-dimensional window is obtained through the method.
Step S004: and classifying, matching and marking all the pixel points in the three-dimensional window according to the similarity index of each pixel point to obtain all the marked three-dimensional windows, and fusing the marked three-dimensional windows to obtain marked three-dimensional target shape point cloud data images, so that point cloud registration is carried out with the three-dimensional target model point cloud data images, and data correction is completed.
In addition, in the three-dimensional target shape point cloud data image acquired by means of three-dimensional laser scanning and the like, due to the existence of external factors such as shielding at different angles, uneven illumination and the like in the acquisition process, when equipment is used, a measured object is required to be scanned from different angles for multiple times to acquire more complete point cloud information, so that the point cloud acquired each time is in different coordinate systems, and the measured point cloud is finally in the same coordinate system through the processing of a point cloud registration technology by virtue of proper spatial transformation. However, when correction is performed, a large amount of calculation time is required for matching each point, and errors are easy to occur in the result, so that the similarity index of each point cloud data is obtained according to the color parameter difference and the position density degree of each point cloud data in the three-dimensional target shape point cloud data image, the similarity area in each window is further classified and matched and marked, all marked three-dimensional windows are fused to obtain a marked three-dimensional target shape point cloud data image, and point cloud registration is performed on the marked three-dimensional target shape point cloud data image, so that the three-dimensional target shape point cloud data image is corrected in a targeted manner.
Specifically, for any three-dimensional window, classifying all the pixels in the three-dimensional window by using an SVM algorithm based on the similarity index of each pixel in the three-dimensional window to obtain a plurality of categories, marking the area formed by all the pixels in each category as the similar area in the three-dimensional window, marking the similar areas in the three-dimensional window to obtain marked three-dimensional windows, and similarly obtaining all the marked three-dimensional windows, and fusing the positions originally corresponding to each three-dimensional window in the three-dimensional target shape point cloud data image into a complete marked three-dimensional target shape point cloud data image by adopting an interpolation method.
By the method, a plurality of similar areas of the target model point cloud data image are obtained, marking is carried out, point cloud registration is carried out on the marked three-dimensional target shape point cloud data image and the marked same similar areas of the three-dimensional target model point cloud data image by an ICP algorithm, and therefore targeted correction of the three-dimensional target shape point cloud data image is completed.
The SVM algorithm, interpolation method and ICP algorithm are known techniques, and will not be described in detail herein.
This embodiment is completed.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.

Claims (5)

1. A method for correcting laser scanned image data, the method comprising the steps of:
acquiring a three-dimensional target shape point cloud data image and a three-dimensional target model point cloud data image;
performing three-dimensional sliding window operation on the three-dimensional target shape point cloud data image to obtain a plurality of three-dimensional windows, and obtaining color parameters of each pixel point in each three-dimensional window;
acquiring the position density degree of each pixel point in each three-dimensional window according to the position information of each pixel point in each three-dimensional window; acquiring the color difference characteristic of each pixel point in each three-dimensional window according to the color parameters of each pixel point in each three-dimensional window; obtaining a similarity index of each pixel point in each three-dimensional window according to the position density degree of each pixel point in each three-dimensional window and the color difference characteristic of each pixel point in each three-dimensional window;
classifying, matching and marking all pixel points in each three-dimensional window according to the similarity index of each pixel point in each three-dimensional window to obtain all marked three-dimensional windows; fusing all marked three-dimensional windows to obtain marked three-dimensional target shape point cloud data images; carrying out point cloud registration on the marked three-dimensional target shape point cloud data image and the three-dimensional target model point cloud data image;
the specific method for acquiring the color parameters of each pixel point in each three-dimensional window comprises the following steps:
first, theThird-dimensional window->The color parameter calculation expression of each pixel point is as follows:
in the method, in the process of the invention,indicate->Third-dimensional window->Color parameters of the individual pixels; />Indicate->Third-dimensional window->Gray values of the individual pixels; />Indicate->Third-dimensional window->Gray values of the individual pixels; />Is a preset parameter;
the specific formula for acquiring the position density degree of each pixel point in each three-dimensional window according to the position information of each pixel point in each three-dimensional window is as follows:
in the method, in the process of the invention,indicate->Third-dimensional window->The position density of each pixel point; />Indicate->Third-dimensional window->Pixel dot and->A first difference value of each pixel point; />Indicate->Third-dimensional window->Pixel dot and->A second difference value of the pixel points; />Indicate->Third-dimensional window->Pixel dot and->A third difference value of the pixel points; />Indicate->Third-dimensional window->Color parameters of the individual pixels; />Indicate->Third-dimensional window->Color parameters of the individual pixels; />An exponential function based on a natural constant; />The representation takes absolute value; />Is a preset parameter;
said firstFirst difference, first ∈of each pixel>Second difference and +.>The third difference value obtaining method of each pixel point comprises the following steps:
acquisition of the firstThree-dimensional coordinates of each pixel point in the three-dimensional window for +.>Except for->The +.>A pixel dot for adding->Pixel dot and->The difference of the horizontal coordinates of each pixel point is marked as +.>Pixel dot and->A first difference value of each pixel point; will be->Pixel dot and->The difference of the vertical coordinates of each pixel point is marked as +.>Pixel dot and->A second difference value of the pixel points; will be->Pixel dot and->The difference of the vertical coordinates of the pixel points is marked as +.>Pixel dot and->A third difference value of the pixel points;
the specific formula for acquiring the color difference characteristic of each pixel point in each three-dimensional window according to the color parameters of each pixel point in each three-dimensional window is as follows:
in the method, in the process of the invention,indicate->Third-dimensional window->Color difference characteristics of the individual pixel points; />Indicate->Third-dimensional window->Color of individual pixel pointsColor parameters; />Indicate->Third-dimensional window->Color parameters of the individual pixels; />Is a preset parameter;
the method for obtaining the similarity index of each pixel point in each three-dimensional window according to the position density degree of each pixel point in each three-dimensional window and the color difference characteristic of each pixel point in each three-dimensional window comprises the following specific steps:
and taking the product of the position density degree of each pixel point in each three-dimensional window and the color difference characteristic of the corresponding pixel point as a similarity index of each pixel point in each three-dimensional window.
2. The method for correcting laser scanning image data according to claim 1, wherein the steps of obtaining a three-dimensional target shape point cloud data image and a three-dimensional target model point cloud data image include the following steps:
carrying out three-dimensional laser scanning on a target area by using an unmanned aerial vehicle carrying a 3D scanner to obtain a three-dimensional target shape point cloud data image; modeling the target area through CAD and exporting the target area as a target model point cloud data image.
3. The method for correcting laser scanning image data according to claim 1, wherein the three-dimensional sliding window operation is performed on the three-dimensional target shape point cloud data image to obtain a plurality of three-dimensional windows, comprising the following specific steps:
using a predetermined sizeThe step length of the three-dimensional sliding window of the three-dimensional object shape point cloud data image is +.>A plurality of three-dimensional windows are obtained through the three-dimensional sliding window operation of the window; the size of the three-dimensional sliding window is set to be a common factor of the size of the three-dimensional target shape point cloud data image.
4. The method for correcting laser scanning image data according to claim 1, wherein the method for classifying, matching and marking all the pixels in each three-dimensional window according to the similarity index of each pixel in each three-dimensional window to obtain all marked three-dimensional windows comprises the following specific steps:
for any three-dimensional window, classifying all pixel points in the three-dimensional window by using an SVM algorithm based on the similarity index of each pixel point in the three-dimensional window to obtain a plurality of categories, marking the area formed by all pixel points of each category as the similar area in the three-dimensional window, marking the similar areas in the three-dimensional window, further obtaining the marked three-dimensional window, and similarly obtaining all marked three-dimensional windows.
5. The method for correcting laser scanning image data according to claim 1, wherein the method for fusing all marked three-dimensional windows to obtain marked three-dimensional target shape point cloud data images comprises the following specific steps:
and (3) adopting an interpolation method, and fusing the positions originally corresponding to the three-dimensional windows in the three-dimensional target shape point cloud data image into a complete marked three-dimensional target shape point cloud data image.
CN202311245852.2A 2023-09-26 2023-09-26 Laser scanning image data correction method Active CN116993627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311245852.2A CN116993627B (en) 2023-09-26 2023-09-26 Laser scanning image data correction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311245852.2A CN116993627B (en) 2023-09-26 2023-09-26 Laser scanning image data correction method

Publications (2)

Publication Number Publication Date
CN116993627A CN116993627A (en) 2023-11-03
CN116993627B true CN116993627B (en) 2023-12-15

Family

ID=88534079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311245852.2A Active CN116993627B (en) 2023-09-26 2023-09-26 Laser scanning image data correction method

Country Status (1)

Country Link
CN (1) CN116993627B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931234A (en) * 2016-04-19 2016-09-07 东北林业大学 Ground three-dimensional laser scanning point cloud and image fusion and registration method
CN116071283A (en) * 2023-04-07 2023-05-05 湖南腾琨信息科技有限公司 Three-dimensional point cloud image fusion method based on computer vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201708520D0 (en) * 2017-05-27 2017-07-12 Dawood Andrew A method for reducing artefact in intra oral scans

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931234A (en) * 2016-04-19 2016-09-07 东北林业大学 Ground three-dimensional laser scanning point cloud and image fusion and registration method
CN116071283A (en) * 2023-04-07 2023-05-05 湖南腾琨信息科技有限公司 Three-dimensional point cloud image fusion method based on computer vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于激光点云的三维测量***设计;张婧婧;白晓;;现代电子技术(14);全文 *

Also Published As

Publication number Publication date
CN116993627A (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN109410256B (en) Automatic high-precision point cloud and image registration method based on mutual information
CN108320329B (en) 3D map creation method based on 3D laser
CN106709947B (en) Three-dimensional human body rapid modeling system based on RGBD camera
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
CN109614935B (en) Vehicle damage assessment method and device, storage medium and electronic equipment
Daftry et al. Building with drones: Accurate 3D facade reconstruction using MAVs
CN109523595B (en) Visual measurement method for linear angular spacing of building engineering
CN110443879B (en) Perspective error compensation method based on neural network
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
CN104748683A (en) Device and method for online and automatic measuring numerical control machine tool workpieces
CN113916130B (en) Building position measuring method based on least square method
CN111640158A (en) End-to-end camera based on corresponding mask and laser radar external reference calibration method
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN111476242A (en) Laser point cloud semantic segmentation method and device
CN106570905A (en) Initial attitude verification method of non-cooperative target point cloud
CN112966542A (en) SLAM system and method based on laser radar
CN116452852A (en) Automatic generation method of high-precision vector map
JP2021056017A (en) Synthetic processing apparatus, synthetic processing system and synthetic processing method
CN114140539A (en) Method and device for acquiring position of indoor object
CN116310250A (en) Point cloud splicing method and system based on three-dimensional sensor and storage medium
CN115685160A (en) Target-based laser radar and camera calibration method, system and electronic equipment
CN116958420A (en) High-precision modeling method for three-dimensional face of digital human teacher
CN116805356A (en) Building model construction method, building model construction equipment and computer readable storage medium
CN114137564A (en) Automatic indoor object identification and positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant