CN117523111A - Method and system for generating a three-dimensional real-scene point cloud model - Google Patents

Publication number: CN117523111A
Authority: CN (China)
Application number: CN202410010426.9A
Other languages: Chinese (zh)
Other versions: CN117523111B (granted)
Inventors: 刘建明 (Liu Jianming), 徐花芝 (Xu Huazhi), 李通 (Li Tong), 张奇伟 (Zhang Qiwei)
Assignee (original and current): Shandong Provincial Institute of Land Surveying and Mapping
Legal status: Active (granted as CN117523111B)


Classifications

    • G01B 11/002 — Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10028 — Range image; depth image; 3D point clouds
    • Y02T 10/40 — Engine management systems


Abstract

The invention discloses a method and a system for generating a three-dimensional real-scene point cloud model, relating to the technical field of three-dimensional modeling. The key points of the scheme are as follows: establish a world coordinate system with the initial acquisition point as its origin, scan the target real scene with a laser scanner, and acquire point cloud data; calculate the coordinates of the next acquisition point in the world coordinate system by measuring the distance between the previous acquisition point and the next acquisition point and the angles to the three axes X, Y, Z of the world coordinate system. The method offers higher precision, is free from signal interference, reduces complex computation, and improves modeling efficiency.

Description

Method and system for generating a three-dimensional real-scene point cloud model
Technical Field
The invention relates to the technical field of three-dimensional modeling, and in particular to a method and a system for generating a three-dimensional real-scene point cloud model.
Background
With the continuous development of computer graphics and three-dimensional modeling technologies, three-dimensional real-scene point cloud models have been widely applied in many fields, such as virtual reality, augmented reality, and robot vision. A three-dimensional real-scene point cloud model is obtained by collecting, processing, and reconstructing data, and expresses the three-dimensional information of real-world objects or scenes in point cloud form. Such models have high precision and fidelity and can be widely applied to three-dimensional modeling, visualization, analysis, and other fields.
The Chinese application with publication No. CN110570466A discloses a method, an apparatus, a computer device, and a storage medium for generating a three-dimensional real-scene point cloud model. The method comprises the following steps: acquiring three-dimensional laser radar point cloud data of a target together with synchronized real-scene images; performing point cloud classification on the laser radar point cloud data to obtain ground-surface point cloud data; performing aerial triangulation adjustment on the real-scene images to obtain their exterior orientation elements; matching to construct a real-scene-texture digital elevation model of the target; and matching the real-scene-texture digital elevation model with the real-scene-texture three-dimensional laser radar point cloud data to generate a three-dimensional real-scene point cloud model of the target.
In view of the above application, the prior art has the following disadvantages:
the point cloud data are positioned by other technical means such as GPS, and wireless positioning technologies such as GPS are strongly affected by environmental factors. For example, in urban areas with dense buildings, occlusion of GPS signals by buildings can cause inaccurate positioning or make positioning impossible; in addition, weather conditions (such as rain, fog, and haze) can interfere with GPS signals and degrade positioning accuracy;
after all the point cloud data are acquired, coordinate registration is performed on all of them at once. This processing mode has high computational complexity and can produce large registration errors owing to various factors (such as errors of the acquisition equipment and changes in the acquisition environment).
Disclosure of Invention
(I) Technical problems to be solved
In view of the deficiencies of the prior art, the invention provides a method and a system for generating a three-dimensional real-scene point cloud model: a world coordinate system is established with the initial acquisition point as its origin, a laser scanner scans the target real scene, and point cloud data are acquired; the coordinates of the next acquisition point in the world coordinate system are calculated by measuring the distance between the previous acquisition point and the next acquisition point and the angles to the three axes X, Y, Z of the world coordinate system; the acquired point cloud coordinates undergo a first transformation using the angles between the laser scanner coordinate system and the three axes X, Y, Z of the world coordinate system, and a second transformation using the coordinates of the current acquisition point in the world coordinate system, yielding the coordinates of each point in the world coordinate system; all the point cloud data are combined into a complete point cloud data set; and a three-dimensional real-scene point cloud model is generated from the preprocessed point cloud data using a three-dimensional reconstruction algorithm, thereby overcoming the deficiencies of the background art.
(II) Technical scheme
To achieve the above purpose, the invention is realized by the following technical scheme. The method for generating a three-dimensional real-scene point cloud model comprises the following steps:
determining a target real scene, establishing an initial acquisition point due south of the target real scene, establishing a world coordinate system with the initial acquisition point as its origin, scanning the target real scene with a laser scanner, and acquiring point cloud data;
after the data acquisition at the initial acquisition point is completed, moving to the next acquisition point, and calculating the coordinates of the next acquisition point in the world coordinate system by measuring the distance between the previous acquisition point and the next acquisition point and the angles to the three axes X, Y, Z of the world coordinate system;
scanning the target real scene again with the laser scanner, recording the three-dimensional coordinates of each point in the laser scanner coordinate system, applying a first transformation to the acquired point cloud coordinates using the angles between the laser scanner coordinate system and the three axes X, Y, Z of the world coordinate system, applying a second transformation using the coordinates of the current acquisition point in the world coordinate system, and calculating the coordinates of each point in the world coordinate system;
moving to the next acquisition point and continuing to acquire point cloud data of the target real scene until the point cloud data acquisition at all acquisition points is completed and the data are converted into world-coordinate-system coordinates, then combining all the point cloud data into a complete point cloud data set;
and generating a three-dimensional real-scene point cloud model from the preprocessed point cloud data using a three-dimensional reconstruction algorithm, performing detail enhancement on the generated model, optimizing the enhanced model, and exporting the optimized three-dimensional real-scene point cloud model.
Further, a target real scene for generating the three-dimensional real-scene point cloud model is determined; an acquisition point due south of the target real scene is selected as the initial acquisition point for establishing the world coordinate system; the world coordinate system is established with the initial acquisition point as origin, the east-west direction as the X-axis, the north-south direction as the Y-axis, and the vertical (up-down) direction as the Z-axis; and the target real scene is scanned with a laser scanner, recording the three-dimensional coordinates of each point to form point cloud data.
Further, the coordinates of the next acquisition point in the world coordinate system are calculated from the distance between the previous acquisition point and the next acquisition point and the angles to the three axes X, Y, Z of the world coordinate system, as follows:

    x' = x + S·cos α,  y' = y + S·cos β,  z' = z + S·cos γ

where x', y', z' respectively represent the coordinates of the next acquisition point along the X, Y, Z axes, x, y, z are the coordinates of the previous acquisition point, S represents the distance between the previous acquisition point and the next acquisition point, and α, β, γ respectively represent the angles between the line from the previous acquisition point to the next acquisition point and the three axes X, Y, Z of the world coordinate system.
Further, using the angles between the laser scanner coordinate system and the three axes X, Y, Z of the world coordinate system, with the current acquisition point as origin and the world coordinate axes as directions, the acquired point cloud coordinates undergo a first transformation, and the coordinates of each point are calculated as follows:

    (x₁, y₁, z₁)ᵀ = R_Z(θ_Z) · R_Y(θ_Y) · R_X(θ_X) · (x_s, y_s, z_s)ᵀ

where x₁, y₁, z₁ respectively represent the coordinates of each point along the X, Y, Z axes after the first transformation, x_s, y_s, z_s respectively represent the coordinates of each point along the X, Y, Z axes of the laser scanner coordinate system, θ_X, θ_Y, θ_Z respectively represent the angles between the laser scanner coordinate system and the three axes X, Y, Z of the world coordinate system, and R_X, R_Y, R_Z are the elementary rotation matrices about the corresponding axes.
Further, the acquired point cloud coordinates undergo a second transformation using the coordinates of the current acquisition point in the world coordinate system, and the coordinates of each point in the world coordinate system are calculated as follows:

    x_w = x₁ + x_c,  y_w = y₁ + y_c,  z_w = z₁ + z_c

where x_w, y_w, z_w respectively represent the coordinates of each point along the X, Y, Z axes of the world coordinate system, x₁, y₁, z₁ respectively represent the coordinates of each point along the X, Y, Z axes after the first transformation, and x_c, y_c, z_c respectively represent the coordinates of the current acquisition point along the X, Y, Z axes of the world coordinate system.
Further, the process moves to the next acquisition point and continues to acquire point cloud data of the target real scene until the point cloud data acquisition at all acquisition points is completed and the point cloud coordinates are converted into world-coordinate-system coordinates;
the point cloud data of all acquisition points are combined into a complete point cloud data set that contains the three-dimensional coordinate information of all points in the target real scene;
and the combined point cloud data are preprocessed, including removing noise, filtering redundant data, and removing overlapping data.
Further, a three-dimensional real-scene point cloud model is generated from the preprocessed point cloud data using a three-dimensional reconstruction algorithm;
detail enhancement is performed on the generated three-dimensional real-scene point cloud model, including adding surface detail and texture mapping;
an iterative optimization algorithm is adopted to optimize the enhanced three-dimensional real-scene point cloud model by continuously adjusting the parameters and structure of the model;
and the optimized three-dimensional real-scene point cloud model is exported.
The system for generating a three-dimensional real-scene point cloud model comprises a data acquisition module, an acquisition coordinate conversion module, a point cloud coordinate conversion module, and a model generation module, wherein:
the data acquisition module scans the target real scene with the laser scanner, acquires point cloud data, and moves to the next acquisition point to continue acquiring point cloud data of the target real scene until the point cloud data acquisition at all acquisition points is completed;
the acquisition coordinate conversion module moves to the next acquisition point after the data acquisition at the initial acquisition point is completed, and calculates the coordinates of the next acquisition point in the world coordinate system by measuring the distance between the previous acquisition point and the next acquisition point and the angles to the three axes X, Y, Z of the world coordinate system;
the point cloud coordinate conversion module applies a first transformation to the acquired point cloud coordinates using the angles between the laser scanner coordinate system and the three axes X, Y, Z of the world coordinate system, applies a second transformation using the coordinates of the current acquisition point in the world coordinate system, and calculates the coordinates of each point in the world coordinate system;
and the model generation module generates a three-dimensional real-scene point cloud model from the preprocessed point cloud data using a three-dimensional reconstruction algorithm, performs detail enhancement on the generated model, optimizes the enhanced model, and exports the optimized three-dimensional real-scene point cloud model.
(III) Beneficial effects
The invention provides a method and a system for generating a three-dimensional real-scene point cloud model, with the following beneficial effects:
(1) By establishing the world coordinate system, the positional relationship of each point in space can be determined, providing a unified reference frame for subsequent point cloud data processing and model generation; the world coordinate system also facilitates fusion and registration between different data sources, improving the precision and integrity of the model.
(2) The distance between the previous acquisition point and the next acquisition point and the angles to the three axes X, Y, Z of the world coordinate system can be measured accurately with a range finder and an angle measuring instrument. Compared with GPS and other positioning methods, this offers higher precision, freedom from signal interference, and stable operation in various environments, which benefits the accuracy and stability of subsequent coordinate calculation.
(3) The world coordinate system can describe the positions and movements of various objects. Converting the point cloud data into the world coordinate system makes all kinds of three-dimensional data processing and analysis convenient, facilitates the splicing of point cloud data in subsequent three-dimensional modeling, reduces complex computation, and improves modeling efficiency.
(4) Converting the point cloud coordinates into world-coordinate-system coordinates ensures that data from different acquisition points share the same reference coordinate system, so that point cloud data from different positions can be integrated for unified analysis and processing; the world coordinate system is also convenient for subsequent application and visualization, making the data more intuitive and easier to understand.
Drawings
FIG. 1 is a schematic diagram of the steps of the method for generating a three-dimensional real-scene point cloud model according to the present invention;
FIG. 2 is a schematic flow chart of the method for generating a three-dimensional real-scene point cloud model according to the present invention;
fig. 3 is a schematic structural diagram of the system for generating a three-dimensional real-scene point cloud model.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 and 2, the present invention provides a method for generating a three-dimensional real-scene point cloud model, which comprises the following steps:
Step one: determining a target real scene, establishing an initial acquisition point due south of the target real scene, establishing a world coordinate system with the initial acquisition point as its origin, scanning the target real scene with a laser scanner, and acquiring point cloud data;
the first step comprises the following steps:
step 101: determining a target real scene for generating a three-dimensional real scene point cloud model, wherein the target real scene comprises any object or scene with three-dimensional characteristics such as a building, a terrain, a scene and the like;
step 102: selecting an acquisition point in the direct south of the target live-action as an initial acquisition point for establishing a world coordinate system;
step 103: establishing a world coordinate system by taking an initial acquisition point as an origin, taking an east-west direction as an X-axis direction, taking a north-south direction as a Y-axis direction and taking an up-down direction as a Z-axis direction;
step 104: and scanning the target live-action by using a laser scanner, and recording the three-dimensional coordinates of each point to form point cloud data.
It should be noted that, a collection point is selected in the direction right south of the target live-action as an initial collection point, the selection of the collection point is critical to the subsequent establishment of the world coordinate system and the collection of the point cloud data, and by selecting a suitable initial collection point, the accurate relative position relationship between the established world coordinate system and the target live-action can be ensured, so that a reliable foundation is provided for the subsequent point cloud data processing and model generation.
In use, the contents of steps 101 to 104 are combined:
by establishing the world coordinate system, the position relation of each point in space can be determined, a unified reference frame is provided for subsequent point cloud data processing and model generation, and meanwhile, the establishment of the world coordinate system is also beneficial to realizing fusion and registration between different data sources, and the precision and the integrity of the model are improved.
Step two: after the data acquisition at the initial acquisition point is completed, moving to the next acquisition point, and calculating the coordinates of the next acquisition point in the world coordinate system by measuring the distance between the previous acquisition point and the next acquisition point and the angles to the three axes X, Y, Z of the world coordinate system;
the second step comprises the following steps:
step 201: after completing the data acquisition of the initial acquisition point, moving to the next acquisition point;
step 202: measuring the distance between the last acquisition point and the next acquisition point by using a range finder, and measuring the angles of the last acquisition point to the next acquisition point and the world coordinate system X, Y, Z in the three-axis direction by using an angle meter;
step 203: the coordinates of the next acquisition point in the world coordinate system are calculated through the distance between the last acquisition point and the next acquisition point and the angle between the three axial directions of the world coordinate system X, Y, Z, and the calculation formula is as follows:
wherein,、/>、/>respectively representing the coordinates of the next acquisition point in the X, Y, Z axial direction, S represents the distance between the last acquisition point and the next acquisition point, +.>、/>、/>Respectively representing the angles of the three axes of the previous acquisition point to the next acquisition point and the world coordinate system X, Y, Z.
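The direction-cosine relationship above can be sketched in Python. This is an illustrative sketch only; the function name, the tuple representation, and the use of degrees for the measured angles are assumptions, not part of the patent:

```python
import math

def next_station_coords(prev, S, alpha_deg, beta_deg, gamma_deg):
    """World coordinates of the next acquisition point, given the previous
    point's world coordinates, the measured distance S between the two
    stations, and the angles (in degrees) between the inter-station line
    and the world X, Y, Z axes."""
    ax, ay, az = (math.radians(a) for a in (alpha_deg, beta_deg, gamma_deg))
    return (prev[0] + S * math.cos(ax),
            prev[1] + S * math.cos(ay),
            prev[2] + S * math.cos(az))

# Example: moving 10 m along +X (0 deg to the X axis, 90 deg to Y and Z)
x, y, z = next_station_coords((0.0, 0.0, 0.0), 10.0, 0.0, 90.0, 90.0)
```

Note that the three direction cosines are not independent: cos²α + cos²β + cos²γ = 1, which can serve as a consistency check on the angle measurements.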
It should be noted that a laser range finder or an ultrasonic range finder is generally used to measure the distance between two acquisition points. Such instruments measure distance accurately, remain stable under different environmental conditions (such as illumination and weather), and are not easily disturbed by the environment.
In use, the contents of steps 201 to 203 are combined:
the distance between the last acquisition point and the next acquisition point and the angle in the three-axis direction of the world coordinate system X, Y, Z can be accurately measured respectively by using the distance meter and the angle meter, and compared with a GPS and other positioning methods, the method has the advantages of higher precision, no influence of signal interference, capability of performing stable work in various environments and contribution to improving the accuracy and stability of subsequent coordinate calculation.
Step three: scanning the target real scene again with the laser scanner, recording the three-dimensional coordinates of each point in the laser scanner coordinate system, applying a first transformation to the acquired point cloud coordinates using the angles between the laser scanner coordinate system and the three axes X, Y, Z of the world coordinate system, applying a second transformation using the coordinates of the current acquisition point in the world coordinate system, and calculating the coordinates of each point in the world coordinate system;
the third step comprises the following steps:
step 301: scanning the target live-action by using the laser scanner again, and recording the three-dimensional coordinates of each point in a coordinate system of the laser scanner to form point cloud data;
the world coordinate system uses an initial acquisition point as an origin, uses an east-west direction as an X-axis direction, uses a north-south direction as a Y-axis direction, uses an up-down direction as a Z-axis direction, uses a laser emission center as an origin, uses a scanning plane initial direction as an X-axis, uses a direction vertical to a scanning plane as a Y-axis, and uses a direction vertical to the scanning plane and the Y-axis as a Z-axis;
step 302: measuring the angle between the coordinate system of the laser scanner and the three axes of the world coordinate system X, Y, Z by using an angle measuring instrument, carrying out primary transformation on the collected point cloud data coordinates by taking the current collected point as an original point and taking the coordinate axis of the world coordinate system as a direction, and calculating the coordinates of each point according to the following calculation formula:
wherein,、/>、/>separate tableShowing the coordinates of each point in the X, Y, Z axis direction after one transformation, +.>、/>、/>Respectively representing the coordinates of each point in the direction of the axis of the laser scanner coordinate system X, Y, Z, +.>、/>、/>Respectively representing the angles between the three axes of the laser scanner coordinate system and the world coordinate system X, Y, Z;
step 303: and carrying out secondary transformation on the acquired point cloud data coordinates through the coordinates of the current acquisition point in the world coordinate system, and calculating the coordinates of each point in the world coordinate system, wherein the calculation formula is as follows:
wherein,、/>、/>representing the coordinates of each point in the direction of the axis of the world coordinate system X, Y, Z, respectively, +.>、/>、/>Respectively represents the coordinate of each point in the X, Y, Z axial direction after one transformation, +.>、/>、/>Respectively representing the coordinates of the current acquisition point in the direction of the axis of the world coordinate system X, Y, Z.
The coordinates of each point after the first transformation are calculated by rotating the laser scanner coordinate system so that its X, Y, Z axes point in the same directions as the X, Y, Z axes of the world coordinate system.
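A minimal sketch of the two-step conversion (rotation to align the scanner axes with the world axes, then translation by the station's world coordinates) in pure Python; the composition order of the elementary rotations and all function names are illustrative assumptions, not the patent's implementation:

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def scanner_to_world(p_scanner, angles_rad, station):
    """First transformation: rotate the scanner coordinates so the scanner
    axes point along the world axes; second transformation: translate by
    the world coordinates of the current acquisition point."""
    tx, ty, tz = angles_rad
    R = mat_mul(rot_z(tz), mat_mul(rot_y(ty), rot_x(tx)))
    p1 = mat_vec(R, p_scanner)                     # first (rotation) step
    return [p1[i] + station[i] for i in range(3)]  # second (translation) step

# Example: scanner yawed 90 deg about Z, station at world (100, 50, 0)
p_world = scanner_to_world([1.0, 0.0, 0.0],
                           (0.0, 0.0, math.radians(90)),
                           [100.0, 50.0, 0.0])
```

Because all points are brought into the world frame station by station, no pairwise registration between stations is needed afterwards.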
In use, the contents of steps 301 to 303 are combined:
the world coordinate system is a universal and unified coordinate reference system, can be used for describing the positions and movements of various objects, can conveniently process and analyze various three-dimensional data by converting the point cloud data into the world coordinate system, is beneficial to the splicing of the point cloud data of the subsequent three-dimensional modeling, reduces complex calculation and improves modeling efficiency.
Step four: moving to the next acquisition point and continuing to acquire point cloud data of the target real scene until the point cloud data acquisition at all acquisition points is completed and the data are converted into world-coordinate-system coordinates, then combining all the point cloud data into a complete point cloud data set;
the fourth step comprises the following steps:
step 401: moving to the next acquisition point to continuously acquire point cloud data of the target live-action until the point cloud data acquisition of all the acquisition points is completed, and converting the coordinates of the point cloud data into coordinates of a world coordinate system;
step 402: combining the point cloud data of all the acquired points into a complete point cloud data set, wherein the combined point cloud data set contains three-dimensional coordinate information of all the points in the target live-action;
step 403: preprocessing the combined point cloud data, including removing noise, filtering redundant data, removing overlapped data and the like;
specifically, noise is removed: noise may come from various factors in the scanning process, such as equipment errors, environmental interference and the like, and is realized through some filtering algorithms, including bilateral filtering, gaussian filtering and the like, and the algorithms can effectively smooth point cloud data and reduce the influence of noise;
filtering the redundant data: in the point cloud data, redundant points possibly exist, which have no practical meaning for the construction of a model, but increase the calculation amount and the complexity of the model, and the redundant data is removed by setting a threshold value or compressing and simplifying the point cloud data by using some algorithms (such as VoxelGrid filtering);
removing overlapping data: the point cloud data may contain overlapping points, which can mislead model construction and affect the accuracy of the model; by calculating the distance between adjacent points and setting a threshold value, points that are too close together are removed, thereby removing the overlapped data.
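For illustration, the preprocessing of step 403 (VoxelGrid-style compression followed by distance-threshold overlap removal) can be sketched in NumPy as follows. This is a minimal sketch, not part of the disclosure; the function name `preprocess_point_cloud` and the default thresholds are assumptions.

```python
import numpy as np

def preprocess_point_cloud(points, voxel_size=0.05, min_dist=1e-3):
    """VoxelGrid-style compression, then removal of overlapping points
    whose distance to an already-kept point falls below min_dist."""
    points = np.asarray(points, dtype=float)

    # Voxel-grid compression: keep one representative point (the centroid)
    # per occupied voxel of edge length voxel_size.
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = int(inverse.max()) + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.bincount(inverse, minlength=n_voxels).astype(float)
    np.add.at(sums, inverse, points)
    downsampled = sums / counts[:, None]

    # Overlap removal: greedily keep points whose distance to every
    # previously kept point is at least min_dist.
    kept = []
    for p in downsampled:
        if all(np.linalg.norm(p - q) >= min_dist for q in kept):
            kept.append(p)
    return np.array(kept)
```

In practice a spatial index (e.g. a k-d tree) would replace the quadratic overlap check; the greedy loop is kept here only for clarity.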
It should be noted that point cloud data acquisition of the target live-action continues until acquisition at all acquisition points is completed, and the conversion of the point cloud coordinates into world coordinates can be accomplished by repeating steps two and three. Since all point cloud data has already been converted into the world coordinate system, no further data registration is required, and the merging of the point cloud data can be completed directly.
In use, the contents of steps 401 to 403 are combined:
converting the coordinates of the point cloud data into world coordinates ensures that data from different acquisition points share the same reference coordinate system, so that point cloud data from different positions can be integrated for unified analysis and processing; at the same time, the world coordinate system facilitates subsequent applications and visualization, making the data more intuitive and easier to understand.
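The merge of step four can be sketched as follows: each scan is rotated into the world axis directions and translated by its acquisition point, after which the scans can simply be concatenated with no registration step. This is an illustrative sketch; the helper names and the use of a 3x3 rotation matrix (standing in for the per-axis angles of step three) are assumptions.

```python
import numpy as np

def scan_to_world(scan_points, rotation, acquisition_xyz):
    """Apply the two transforms of step three: rotate scanner-frame
    coordinates into the world axis directions, then translate by the
    acquisition point's world coordinates."""
    pts = np.asarray(scan_points, dtype=float)
    R = np.asarray(rotation, dtype=float)         # 3x3, scanner -> world
    t = np.asarray(acquisition_xyz, dtype=float)  # acquisition point, world frame
    return pts @ R.T + t

def merge_scans(scans, rotations, acquisition_points):
    """Convert every scan into the world frame and concatenate; since all
    scans then share one reference frame, no further registration is
    needed before merging."""
    world = [scan_to_world(s, R, t)
             for s, R, t in zip(scans, rotations, acquisition_points)]
    return np.vstack(world)
```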
Step five: and generating a three-dimensional real-scene point cloud model by using a three-dimensional reconstruction algorithm through the preprocessed point cloud data, carrying out detail enhancement on the generated three-dimensional real-scene point cloud model, optimizing the enhanced three-dimensional real-scene point cloud model, and exporting the optimized three-dimensional real-scene point cloud model.
The fifth step comprises the following steps:
step 501: generating a three-dimensional real-scene point cloud model from the preprocessed point cloud data by using a three-dimensional reconstruction algorithm;
step 502: carrying out detail enhancement on the generated three-dimensional scenic spot cloud model, including operations such as adding surface detail and texture mapping;
specifically, the enhancement of surface detail is realized by increasing the density of the point cloud, optimizing the ordering of the point cloud, and removing redundant data: increasing the number of sampling points improves the surface precision of the model; sorting the point cloud data and removing noise and redundant data improves the stability and efficiency of the model; applying an optimization algorithm to the point cloud data reduces redundant data and improves the efficiency and precision of the model;
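One of the density-increasing operations mentioned above can be sketched as inserting midpoints between selected neighboring points. This is a crude, illustrative scheme only; the function name `densify_by_midpoints` and the externally supplied pair list are assumptions, not part of the patent.

```python
import numpy as np

def densify_by_midpoints(points, neighbor_pairs):
    """Insert the midpoint of each selected neighboring pair of points,
    increasing the sampling density on the model surface."""
    pts = np.asarray(points, dtype=float)
    i = [a for a, _ in neighbor_pairs]
    j = [b for _, b in neighbor_pairs]
    midpoints = (pts[i] + pts[j]) / 2.0
    return np.vstack([pts, midpoints])
```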
specifically, in the texture mapping process, texture information is mapped onto the surface of the three-dimensional model: a texture image is loaded and mapped onto the three-dimensional scenic spot cloud model to improve its visual effect and realism, and a texture mapping algorithm aligns the texture image with the model surface so that the texture is correctly mapped and displayed;
step 503: adopting an iterative optimization algorithm, and optimizing the enhanced three-dimensional scenic spot cloud model by continuously adjusting parameters and structures of the model;
step 504: exporting the optimized three-dimensional scenic spot cloud model; supported export formats include obj, stl, and the like.
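A minimal export for step 504 can write the point cloud as vertex-only Wavefront OBJ lines (`v x y z`). This is a sketch of one of the mentioned formats; the function name is an assumption, and a full exporter would also handle faces, normals, and texture coordinates.

```python
def export_point_cloud_obj(points, path):
    """Write a point cloud as vertex-only Wavefront OBJ ('v x y z' lines),
    a minimal stand-in for the obj export mentioned in step 504."""
    with open(path, "w") as f:
        f.write("# point cloud exported as OBJ vertices\n")
        for x, y, z in points:
            f.write(f"v {x:.6f} {y:.6f} {z:.6f}\n")
```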
It should be noted that, in detail enhancement and optimization, a balance between accuracy and detail performance of the model needs to be considered. Excessive enhancement and optimization may result in the model losing its original details and features, while too simple enhancement and optimization may not meet the application requirements, so a suitable enhancement and optimization method needs to be selected according to the specific application scenario and requirements to obtain the best effect.
In use, the contents of steps 501 to 503 are combined:
by using a three-dimensional reconstruction algorithm to generate a three-dimensional real-scene point cloud model and performing detail enhancement and optimization, a high-quality and vivid three-dimensional real-scene model can be obtained, and better support is provided for subsequent three-dimensional modeling, visualization, space analysis and other applications.
Referring to fig. 3, the present invention further provides a system for generating a three-dimensional scenic spot cloud model, which includes a data acquisition module, an acquisition coordinate conversion module, a point cloud coordinate conversion module, and a model generation module; wherein:
the data acquisition module scans the target live-action through the laser scanner, acquires point cloud data, and moves to the next acquisition point to continuously acquire the point cloud data of the target live-action until the point cloud data acquisition of all the acquisition points is completed;
the acquisition coordinate conversion module is used for moving to the next acquisition point after the data acquisition of the initial acquisition point is completed, and calculating the coordinates of the next acquisition point in the world coordinate system by measuring the distance between the previous acquisition point and the next acquisition point and the angles to the three axes of the world coordinate system X, Y, Z;
the point cloud coordinate conversion module is used for performing a first transformation on the collected point cloud data coordinates through the angles between the laser scanner coordinate system and the three axes of the world coordinate system X, Y, Z, performing a second transformation on the collected point cloud data coordinates through the coordinates of the current acquisition point in the world coordinate system, and calculating the coordinates of each point in the world coordinate system;
the model generation module is used for generating a three-dimensional real scenic spot cloud model through the preprocessed point cloud data by using a three-dimensional reconstruction algorithm, carrying out detail enhancement on the generated three-dimensional real scenic spot cloud model, optimizing the enhanced three-dimensional real scenic spot cloud model, and exporting the optimized three-dimensional real scenic spot cloud model.
In the present application, the formulas involved are all numerical calculations after removing dimensions; each formula is obtained by collecting a large amount of data and performing software simulation so as to approximate the real situation, and the coefficients in the formulas are set by a person skilled in the art according to the actual situation.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints of the solution.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto; any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed herein is intended to fall within the protection scope of the present application.

Claims (8)

1. The method for generating the three-dimensional scenic spot cloud model is characterized by comprising the following steps of:
determining a target live-action, establishing an initial acquisition point due south of the target live-action, establishing a world coordinate system with the initial acquisition point as origin, scanning the target live-action by a laser scanner, and collecting point cloud data;
after the data acquisition of the initial acquisition point is completed, moving to the next acquisition point, and calculating the coordinates of the next acquisition point in the world coordinate system by measuring the distance between the previous acquisition point and the next acquisition point and the angles to the three axes of the world coordinate system X, Y, Z;
scanning the target live-action again by using a laser scanner, recording three-dimensional coordinates of each point in a laser scanner coordinate system, carrying out primary transformation on the acquired point cloud data coordinates through angles between the laser scanner coordinate system and the three axes of a world coordinate system X, Y, Z, carrying out secondary transformation on the acquired point cloud data coordinates through the coordinates of the current acquisition point in the world coordinate system, and calculating the coordinates of each point in the world coordinate system;
moving to the next acquisition point to continue to acquire point cloud data of the target live-action until the point cloud data acquisition of all the acquisition points is completed and the point cloud data are converted into coordinates of a world coordinate system, and combining all the point cloud data into a complete point cloud data set;
and generating a three-dimensional real-scene point cloud model by using a three-dimensional reconstruction algorithm through the preprocessed point cloud data, carrying out detail enhancement on the generated three-dimensional real-scene point cloud model, optimizing the enhanced three-dimensional real-scene point cloud model, and exporting the optimized three-dimensional real-scene point cloud model.
2. The method for generating the three-dimensional scenic spot cloud model according to claim 1, wherein:
determining a target real scene for generating the three-dimensional real-scene point cloud model; selecting an acquisition point due south of the target live-action as the initial acquisition point for establishing the world coordinate system; establishing the world coordinate system with the initial acquisition point as origin, the east-west direction as the X-axis direction, the north-south direction as the Y-axis direction, and the up-down direction as the Z-axis direction; and scanning the target live-action with a laser scanner, recording the three-dimensional coordinates of each point to form point cloud data.
3. The method for generating the three-dimensional scenic spot cloud model according to claim 1, wherein:
the coordinates of the next acquisition point in the world coordinate system are calculated from the distance between the previous acquisition point and the next acquisition point and the angles to the three axes of the world coordinate system X, Y, Z; the calculation formula is as follows:

$X_{n+1}=X_{n}+S\cos\alpha,\qquad Y_{n+1}=Y_{n}+S\cos\beta,\qquad Z_{n+1}=Z_{n}+S\cos\gamma$

wherein $X_{n+1}$, $Y_{n+1}$, $Z_{n+1}$ respectively represent the coordinates of the next acquisition point in the X, Y, Z axial directions, $X_{n}$, $Y_{n}$, $Z_{n}$ respectively represent the coordinates of the previous acquisition point, $S$ represents the distance between the previous acquisition point and the next acquisition point, and $\alpha$, $\beta$, $\gamma$ respectively represent the angles between the line from the previous acquisition point to the next acquisition point and the three axes of the world coordinate system X, Y, Z.
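The direction-cosine step of claim 3 can be sketched as follows; a minimal illustration, with the function name and argument order assumed, and angles taken in radians.

```python
import math

def next_acquisition_point(prev_xyz, S, alpha, beta, gamma):
    """Advance from the previous acquisition point by distance S along a
    line making angles alpha, beta, gamma with the world X, Y, Z axes
    (direction cosines)."""
    x0, y0, z0 = prev_xyz
    return (x0 + S * math.cos(alpha),
            y0 + S * math.cos(beta),
            z0 + S * math.cos(gamma))
```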
4. The method for generating the three-dimensional scenic spot cloud model according to claim 1, wherein:
the collected point cloud data coordinates are transformed once, taking the current acquisition point as origin and the coordinate axes of the world coordinate system as directions, through the angles between the laser scanner coordinate system and the three axes of the world coordinate system X, Y, Z; the calculation formula is as follows:

$x_{1}=x_{s}\cos\theta_{x},\qquad y_{1}=y_{s}\cos\theta_{y},\qquad z_{1}=z_{s}\cos\theta_{z}$

wherein $x_{1}$, $y_{1}$, $z_{1}$ respectively represent the coordinates of each point in the X, Y, Z axial directions after one transformation, $x_{s}$, $y_{s}$, $z_{s}$ respectively represent the coordinates of each point along the axes of the laser scanner coordinate system X, Y, Z, and $\theta_{x}$, $\theta_{y}$, $\theta_{z}$ respectively represent the angles between the laser scanner coordinate system and the three axes of the world coordinate system X, Y, Z.
5. The method for generating the three-dimensional scenic spot cloud model according to claim 4, wherein:
the acquired point cloud data coordinates are transformed a second time through the coordinates of the current acquisition point in the world coordinate system, and the coordinates of each point in the world coordinate system are calculated; the calculation formula is as follows:

$X=x_{1}+X_{0},\qquad Y=y_{1}+Y_{0},\qquad Z=z_{1}+Z_{0}$

wherein $X$, $Y$, $Z$ respectively represent the coordinates of each point along the axes of the world coordinate system X, Y, Z, $x_{1}$, $y_{1}$, $z_{1}$ respectively represent the coordinates of each point in the X, Y, Z axial directions after one transformation, and $X_{0}$, $Y_{0}$, $Z_{0}$ respectively represent the coordinates of the current acquisition point along the axes of the world coordinate system X, Y, Z.
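The two-stage conversion of claims 4 and 5 can be sketched together as follows. The per-axis cosine form mirrors the scalar formulas of the claims; a full rigid-body registration would use a 3x3 rotation matrix instead. Function and variable names are illustrative assumptions.

```python
import math

def point_to_world(scanner_xyz, axis_angles, acquisition_xyz):
    """Two-stage conversion: a first per-axis direction-cosine transform
    into the world axis directions, then a translation by the current
    acquisition point's world coordinates."""
    xs, ys, zs = scanner_xyz
    ax, ay, az = axis_angles  # angles to the world X, Y, Z axes (radians)
    # First transformation: project each scanner-frame coordinate onto
    # the corresponding world axis direction.
    x1, y1, z1 = xs * math.cos(ax), ys * math.cos(ay), zs * math.cos(az)
    # Second transformation: translate by the acquisition point's coordinates.
    X0, Y0, Z0 = acquisition_xyz
    return (x1 + X0, y1 + Y0, z1 + Z0)
```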
6. The method for generating the three-dimensional scenic spot cloud model according to claim 1, wherein:
moving to the next acquisition point to continuously acquire point cloud data of the target live-action until the point cloud data acquisition of all the acquisition points is completed, and converting the coordinates of the point cloud data into coordinates of a world coordinate system;
combining the point cloud data of all the acquired points into a complete point cloud data set, wherein the combined point cloud data set contains three-dimensional coordinate information of all the points in the target live-action; preprocessing the combined point cloud data, including removing noise, filtering redundant data and removing overlapped data.
7. The method for generating the three-dimensional scenic spot cloud model according to claim 1, wherein:
generating a three-dimensional real scenic spot cloud model from the preprocessed point cloud data by using a three-dimensional reconstruction algorithm; performing detail enhancement on the generated three-dimensional scenic spot cloud model, including adding surface detail and texture mapping operations;
adopting an iterative optimization algorithm, and optimizing the enhanced three-dimensional scenic spot cloud model by continuously adjusting parameters and structures of the model; and exporting the optimized three-dimensional scenic spot cloud model.
8. The system for generating the three-dimensional scenic spot cloud model, used for realizing the method of any one of claims 1 to 7, characterized by comprising a data acquisition module, an acquisition coordinate conversion module, a point cloud coordinate conversion module, and a model generation module; wherein:
the data acquisition module scans the target live-action through the laser scanner, acquires point cloud data, and moves to the next acquisition point to continuously acquire the point cloud data of the target live-action until the point cloud data acquisition of all the acquisition points is completed;
the acquisition coordinate conversion module is used for moving to the next acquisition point after the data acquisition of the initial acquisition point is completed, and calculating the coordinates of the next acquisition point in the world coordinate system by measuring the distance between the previous acquisition point and the next acquisition point and the angles to the three axes of the world coordinate system X, Y, Z;
the point cloud coordinate conversion module is used for performing a first transformation on the collected point cloud data coordinates through the angles between the laser scanner coordinate system and the three axes of the world coordinate system X, Y, Z, performing a second transformation on the collected point cloud data coordinates through the coordinates of the current acquisition point in the world coordinate system, and calculating the coordinates of each point in the world coordinate system;
the model generation module is used for generating a three-dimensional real scenic spot cloud model through the preprocessed point cloud data by using a three-dimensional reconstruction algorithm, carrying out detail enhancement on the generated three-dimensional real scenic spot cloud model, optimizing the enhanced three-dimensional real scenic spot cloud model, and exporting the optimized three-dimensional real scenic spot cloud model.
CN202410010426.9A 2024-01-04 2024-01-04 Method and system for generating three-dimensional scenic spot cloud model Active CN117523111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410010426.9A CN117523111B (en) 2024-01-04 2024-01-04 Method and system for generating three-dimensional scenic spot cloud model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410010426.9A CN117523111B (en) 2024-01-04 2024-01-04 Method and system for generating three-dimensional scenic spot cloud model

Publications (2)

Publication Number Publication Date
CN117523111A true CN117523111A (en) 2024-02-06
CN117523111B CN117523111B (en) 2024-03-22

Family

ID=89763093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410010426.9A Active CN117523111B (en) 2024-01-04 2024-01-04 Method and system for generating three-dimensional scenic spot cloud model

Country Status (1)

Country Link
CN (1) CN117523111B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809759A (en) * 2015-04-03 2015-07-29 哈尔滨工业大学深圳研究生院 Large-area unstructured three-dimensional scene modeling method based on small unmanned helicopter
WO2017197617A1 (en) * 2016-05-19 2017-11-23 深圳市速腾聚创科技有限公司 Movable three-dimensional laser scanning system and movable three-dimensional laser scanning method
CN109448034A (en) * 2018-10-24 2019-03-08 华侨大学 A kind of part pose acquisition methods based on geometric primitive
CN109631836A (en) * 2019-01-15 2019-04-16 山东省国土测绘院 A kind of height of cloud base method for fast measuring
CN111696141A (en) * 2020-05-22 2020-09-22 武汉天际航信息科技股份有限公司 Three-dimensional panoramic scanning acquisition method and device and storage device
CN112767464A (en) * 2020-12-28 2021-05-07 三峡大学 Ground laser scanning three-dimensional point cloud data registration method
CN113221648A (en) * 2021-04-08 2021-08-06 武汉大学 Fusion point cloud sequence image guideboard detection method based on mobile measurement system
CN114820474A (en) * 2022-04-11 2022-07-29 南京拓控信息科技股份有限公司 Train wheel defect detection method based on three-dimensional information
CN115018983A (en) * 2022-05-31 2022-09-06 广东电网有限责任公司 Phase-shifting transformer site selection method, device, electronic equipment and storage medium
CN115797563A (en) * 2022-12-09 2023-03-14 武汉港迪智能技术有限公司 Whole ship modeling method with cooperation of multiple gate machines
CN116721228A (en) * 2023-08-10 2023-09-08 山东省国土测绘院 Building elevation extraction method and system based on low-density point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAQIANG ZHOU .ETAL: "Sparse Point Cloud Generation Based on Turntable 2D Lidar and Point Cloud Assembly in Augmented Reality Environment", IEEE, 28 June 2021 (2021-06-28) *
XU Yuan et al.: "Design of a three-dimensional laser scanning *** with multi-view stereo vision", Computer and Digital Engineering, no. 11, 30 November 2018 (2018-11-30) *

Also Published As

Publication number Publication date
CN117523111B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
US10297074B2 (en) Three-dimensional modeling from optical capture
US20190026400A1 (en) Three-dimensional modeling from point cloud data migration
JP4685313B2 (en) Method for processing passive volumetric image of any aspect
WO2019127445A1 (en) Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
JP2020035448A (en) Method, apparatus, device, storage medium for generating three-dimensional scene map
CN111275750A (en) Indoor space panoramic image generation method based on multi-sensor fusion
CN113096250A (en) Three-dimensional building model library system construction method based on unmanned aerial vehicle aerial image sequence
CN112446844B (en) Point cloud feature extraction and registration fusion method
CN116524109B (en) WebGL-based three-dimensional bridge visualization method and related equipment
CN112270702A (en) Volume measurement method and device, computer readable medium and electronic equipment
CN110986888A (en) Aerial photography integrated method
CN116625354A (en) High-precision topographic map generation method and system based on multi-source mapping data
CN113608234A (en) City data acquisition system
CN112465849A (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
CN116030208A (en) Method and system for building scene of virtual simulation power transmission line of real unmanned aerial vehicle
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
CN117523111B (en) Method and system for generating three-dimensional scenic spot cloud model
CN115631317B (en) Tunnel lining ortho-image generation method and device, storage medium and terminal
CN114817439B (en) Holographic map construction method based on geographic information system
CN113532424B (en) Integrated equipment for acquiring multidimensional information and cooperative measurement method
CN116704112A (en) 3D scanning system for object reconstruction
Troccoli et al. A shadow based method for image to model registration
CN114187409A (en) Method for building ship model based on video image and laser radar point cloud fusion
Kim et al. Data simulation of an airborne lidar system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant