CN113034678A - Three-dimensional rapid modeling method for dam face of extra-high arch dam based on group intelligence - Google Patents
- Publication number: CN113034678A
- Application number: CN202110347485.1A
- Authority: CN (China)
- Prior art keywords: data, dimensional, image, point, point cloud
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G01B11/002—Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G06T7/33—Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06T2200/08—Indexing scheme involving all processing steps from image acquisition to 3D model generation
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses a swarm-intelligence-based method for rapid three-dimensional modeling of the dam face of an extra-high arch dam. An unmanned aerial vehicle sends the image control point data, aerial photography data and ground shooting data acquired by the data acquisition unit to the data processing unit through the wireless transmission unit; after data processing and registration, a three-dimensional live-action model of the extra-high arch dam is constructed fully automatically with Context Capture software. By combining the unmanned aerial vehicle oblique photography technique with three-dimensional laser scanning, the invention performs data acquisition and three-dimensional modeling in real time, with high modeling efficiency and high accuracy.
Description
Technical Field
The invention belongs to the technical field of three-dimensional modeling, and particularly relates to a swarm-intelligence-based method for rapid three-dimensional modeling of the dam face of an extra-high arch dam.
Background
In the traditional engineering construction field, three-dimensional modeling of a structure usually relies on manual measurement, which consumes much labor and has low efficiency. Sketches are drawn and models rendered by hand in three-dimensional modeling software, so the modeling cycle is long and the process complex, which hinders effective control of the construction schedule; manual measurement and drawing also easily lead to serious problems such as insufficient model accuracy.
Disclosure of Invention
The purpose of the invention is as follows: to overcome the problems in the prior art, the invention provides a swarm-intelligence-based method for rapid three-dimensional modeling of the dam face of an extra-high arch dam.
The technical scheme is as follows: to achieve the above purpose, the invention provides a swarm-intelligence-based method for rapid three-dimensional modeling of the dam face of an ultra-high arch dam, comprising the following steps:
(1) preparing data: comprises an information acquisition unit and a flight control unit;
(2) data transmission: comprises a wireless transmission unit; the wireless transmission unit is used for transmitting the image control point data, the POS data, the oblique images and the laser scanning data acquired by the information acquisition unit to the model construction stage;
(3) constructing a model: comprises a data processing unit and a three-dimensional modeling unit; the three-dimensional modeling unit is used for automatically constructing the three-dimensional live-action model of the ultra-high arch dam from the registered data using Context Capture software.
Further, the specific steps for implementing the flight control unit in the data preparation are as follows: and the flight control unit is used for controlling the unmanned aerial vehicle to carry out aerial photography according to the formulated flight plan.
Further, the step of implementing the information acquisition unit in the data preparation is as follows: the information acquisition unit is used for acquiring image control point data, unmanned aerial vehicle aerial data and ground shooting data; the unmanned aerial photography data are POS data and oblique images during oblique photography of the unmanned aerial vehicle; the ground shooting data are laser scanning data of a ground three-dimensional laser scanner.
Further, the information acquisition unit in the data preparation comprises the following specific steps: before data acquisition, image control points are distributed by utilizing a carrier phase difference technology, the image control points need to be uniformly distributed in the whole measuring area range and have a certain distance from the edge of the measuring area, and the plane position and the elevation measurement of the image control points are synchronously carried out by adopting an RTK (real time kinematic) method; according to the position of the survey area, an unmanned aerial vehicle flight plan is formulated, and flight parameters are set according to the flight plan, wherein the flight plan comprises a preset track route and a flight mode.
Further, the multi-view adjustment processing in the model construction comprises the following specific steps: perform feature extraction on the images with the SIFT feature extraction algorithm, establish the error equation of the multi-view image self-calibration block adjustment from the connection points, connection lines, control point coordinates and POS data, and obtain the exterior orientation elements of each image (namely the position and attitude data of the image) and the object space coordinates of all encrypted points by joint adjustment calculation.
Further, the multi-view image dense matching in the model construction comprises the following specific steps: using the image exterior orientation elements and the object space coordinates of the encrypted points obtained by the multi-view image joint adjustment, cluster the images with the CMVS (Cluster Multi-View Stereo) method to reduce the data volume.
Further, the step of feature-based point cloud registration in the model construction is as follows: extracting a certain number of characteristic point coordinates on the three-dimensional laser point cloud, associating the characteristic point coordinates with corresponding homonymous points on the oblique image, acquiring the oblique image three-dimensional point cloud under a laser point cloud coordinate system through absolute orientation, and then establishing a mapping relation between the two point clouds, thereby realizing registration between the point clouds.
Further, the specific steps of feature-based point cloud registration in the model construction are as follows: let the data sets of the oblique image point cloud and the ground laser point cloud be P and Q, with coordinate systems o-xyz and O-XYZ respectively; let Pi and Qi be corresponding (homonymous) feature points on the two point clouds, where Pi(x, y, z) ∈ P and Qi(X, Y, Z) ∈ Q. The relation between the two point sets can be regarded as a rigid-body transformation, and any pair of homonymous points in the two point sets satisfies:
[X, Y, Z]^T = μ · R(α, β, γ) · [x, y, z]^T + T
where μ is the scale factor between the coordinate systems of the two point sets, R(α, β, γ) is the rotation matrix composed of the rotations α, β, γ about the three coordinate axes, and T = [x0, y0, z0]^T is the translation matrix, with x0, y0, z0 the translation values along each coordinate axis. The registration model thus contains 3 rotation parameters α, β, γ, 3 translation parameters x0, y0, z0 and 1 scale parameter μ, and converts the oblique image point cloud in o-xyz into the ground laser point cloud coordinate system O-XYZ.
Further, the specific steps of constructing the data processing unit in the model construction in the step (3) are as follows: the data processing unit is used for carrying out multi-view image adjustment processing and multi-view image dense matching on the image control point data, the POS data and the oblique images to generate image point clouds, carrying out multi-station scanning and point cloud splicing on the laser scanning data to generate laser point clouds, and then carrying out data registration on the image point clouds and the laser point clouds by using point cloud registration based on features to lay a foundation for subsequent three-dimensional modeling.
Further, the specific steps of constructing the three-dimensional modeling unit in the model construction in the step (3) are as follows: perform space-three encryption (aerial triangulation) on the registered data to obtain a large amount of high-density point cloud data, cut and segment the point cloud data into blocks, construct an irregular triangulated network (TIN) from the dense point cloud in each block according to the set priority, automatically and quickly construct an untextured white-model three-dimensional model, perform texture mapping and model refinement on the white model, and finally generate a three-dimensional visualization of the dam face of the extra-high arch dam.
The invention utilizes the unmanned aerial vehicle as a convenient, efficient and low-cost flight platform, and makes up for the defects of high cost, high requirement on a flight window, difficult production organization and the like of the traditional aerial survey. The unmanned aerial vehicle oblique photogrammetry technique has the characteristics of flexibility, economy and portability, and can acquire spatial information from a vertical angle and four oblique angles, so that high-resolution texture information of ground objects can be quickly acquired, and the efficiency of acquiring the texture information of the ground objects is effectively improved. The ground shooting adopts a three-dimensional laser scanning technology, is an active measurement technology with high efficiency, high precision and non-contact, can carry out field scanning under various severe observation conditions, and can rapidly acquire the surface three-dimensional coordinate data of a target object in a large area. And finally, by using Context Capture software, fusion modeling can be performed on the measurement data of oblique photography and the point cloud data of three-dimensional laser scanning, so that the real scene of the building is truly restored to the maximum extent.
Has the advantages that: compared with the prior art, the invention has the following advantages:
(1) the invention carries out information acquisition and three-dimensional modeling in a complex environment by remotely controlling the unmanned aerial vehicle;
(2) the unmanned aerial vehicle oblique photography technology and the three-dimensional laser scanner technology are combined, so that the advantage complementation is realized, the limitation that the traditional aerial photography technology can only obtain a ground image from one angle is broken through, the integrity of the point cloud of the shielded area of oblique photography is ensured, and the fineness of the model is improved;
(3) the invention can realize real-time three-dimensional modeling and drawing establishment and can carry out post-processing.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flowchart of a SIFT (Scale-Invariant Feature Transform) method in an embodiment;
FIG. 3 is a flow chart of a CMVS (Cluster Multi-view Stereo) method in an exemplary embodiment;
FIG. 4 is a flowchart of a PMVS (Patch-based Multi-view Stereo) method in an embodiment;
FIG. 5 is a schematic diagram of a three-dimensional modeling unit in an embodiment.
Detailed Description
The present invention is further illustrated by the following examples, which are purely exemplary and do not limit the scope of the invention; various equivalent modifications of the invention will occur to those skilled in the art upon reading the present disclosure, and these fall within the scope of the appended claims.
The swarm-intelligence-based method for rapid three-dimensional modeling of the dam face of an extra-high arch dam comprises three parts: data preparation, data transmission and model construction. As shown in figure 1, the data preparation part comprises an information acquisition unit and a flight control unit; the information acquisition unit acquires the relevant data of the dam face of the extra-high arch dam, and the flight control unit controls the unmanned aerial vehicle to fly in the designated area. The data transmission part comprises a wireless transmission unit that transmits the dam-face data and the flight parameters of the unmanned aerial vehicle. The model construction part comprises a data processing unit and a three-dimensional modeling unit; the data processing unit processes the collected dam-face data so that the three-dimensional modeling unit can better perform rapid modeling of the dam face of the extra-high arch dam. The method specifically comprises the following steps:
step S1, data preparation;
s1-1, before data acquisition, an information acquisition unit utilizes a carrier phase difference technology to lay image control points which need to be uniformly laid in the whole range of the measurement area of the extra-high arch dam and have a certain distance from the edge of the measurement area;
s1-2, the plane position and elevation of the image control points are measured synchronously using the RTK (real-time kinematic) method;
s1-3, making a flight plan of the unmanned aerial vehicle according to the position of the measuring area, wherein the flight plan comprises a preset track route and a flight mode;
s1-4, setting flight parameters according to the flight plan;
s1-5, controlling the unmanned aerial vehicle to carry out aerial photography by the flight control unit according to the formulated flight plan;
s1-6, the information acquisition unit acquires image control point data, unmanned aerial vehicle aerial data and ground shooting data; the unmanned aerial data are POS data and oblique images during oblique photography of the unmanned aerial vehicle; the ground shooting data are laser scanning data of a ground three-dimensional laser scanner;
step S2, data transmission;
and S2-1, the wireless transmission unit transmits the image control point data, the POS data, the oblique image and the laser scanning data acquired by the information acquisition unit to the model construction part.
Step S3, constructing a model;
s3-1, the data processing unit performs multi-view image adjustment processing on the image control point data, the POS data and the oblique images transmitted by the wireless transmission unit, so as to eliminate geometric deformation and occlusion among dam-face images taken from different angles;
s3-1-1, perform feature extraction on the images with the SIFT (Scale-Invariant Feature Transform) algorithm; the method flow is shown in fig. 2. The scale space of an image is obtained by convolving a variable-scale Gaussian function G(x, y, σ) with the input image I(x, y):
L(x, y, σ) = G(x, y, σ) * I(x, y)
and the difference of two adjacent Gaussian scale spaces gives the difference-of-Gaussians (DoG) response image D(x, y, σ):
D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)
In the formulas: σ is the scale factor of the image (the smaller its value, the less the image is smoothed and the smaller the corresponding scale); * is the convolution operation; (x, y) is the pixel position; k is a constant representing the multiple between adjacent scale spaces.
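As an illustration of the formulas above, the DoG response can be sketched in plain Python: a separable Gaussian blur is applied at two adjacent scales σ and kσ, and the blurred images are subtracted. This is a minimal toy example of my own (function names are not from the patent), not a full SIFT scale-space pyramid.

```python
import math

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalized so the weights sum to 1."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    vals = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def convolve_1d(row, kernel):
    """Convolve one row with a kernel, clamping indices at the borders."""
    r = len(kernel) // 2
    n = len(row)
    return [sum(kernel[j + r] * row[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]

def gaussian_blur(img, sigma):
    """Separable blur, rows then columns: L(x, y, sigma) = G(x, y, sigma) * I(x, y)."""
    rows = [convolve_1d(row, gaussian_kernel(sigma)) for row in img]
    cols = list(map(list, zip(*rows)))                      # transpose
    cols = [convolve_1d(col, gaussian_kernel(sigma)) for col in cols]
    return list(map(list, zip(*cols)))                      # transpose back

def dog(img, sigma, k=math.sqrt(2)):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    l_fine = gaussian_blur(img, sigma)
    l_coarse = gaussian_blur(img, k * sigma)
    return [[a - b for a, b in zip(rc, rf)] for rf, rc in zip(l_fine, l_coarse)]

# A bright dot on a dark background: the DoG response is strongest at the dot.
img = [[0.0] * 9 for _ in range(9)]
img[4][4] = 1.0
d = dog(img, sigma=1.0)
```

A flat image produces a zero DoG response everywhere, since both blurs return the image unchanged; only intensity changes such as the dot above survive the subtraction.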
Detect extreme points and precisely locate key points: in the DoG scale space, each pixel is compared with its 26 neighbours (8 adjacent pixels in the same scale and 9 × 2 pixels in the two adjacent scales) to obtain local extreme points; a three-dimensional quadratic function is then fitted to determine the position and scale of each key point precisely, and key points with low contrast and unstable edge response points are removed to enhance matching stability and noise resistance.
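The 26-neighbour comparison can be sketched as follows (an illustrative toy example on a tiny DoG stack; the function name is my own, not from the patent):

```python
def is_local_extremum(dog_stack, s, y, x):
    """Check whether dog_stack[s][y][x] is a maximum or a minimum among its
    26 neighbours: 8 in the same scale plus 9 in each of the two adjacent scales."""
    v = dog_stack[s][y][x]
    neighbours = [dog_stack[s + ds][y + dy][x + dx]
                  for ds in (-1, 0, 1)
                  for dy in (-1, 0, 1)
                  for dx in (-1, 0, 1)
                  if not (ds == 0 and dy == 0 and dx == 0)]
    return all(v > n for n in neighbours) or all(v < n for n in neighbours)

# Three 3x3 DoG slices; the centre of the middle slice dominates all 26 neighbours.
stack = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
stack[1][1][1] = 5.0
```

A real detector would run this test at every interior pixel of every DoG level, then refine the surviving candidates by the quadratic fit described above.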
Determine the direction parameters of the key points: for each key point, the gradient magnitude and orientation are determined from the gradient distribution of its neighbourhood pixels, with the calculation formulas:
m(x, y) = sqrt[ (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² ]
θ(x, y) = arctan[ (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) ]
SIFT descriptor generation and image matching: each feature point is described by a 128-dimensional SIFT feature vector; after the descriptors of both images have been generated, matching between the two images is completed by comparing the Euclidean distances of the key-point feature vectors.
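Nearest-neighbour matching by Euclidean distance can be sketched as follows. This toy example uses 4-dimensional vectors instead of real 128-dimensional SIFT descriptors, and adds Lowe's ratio test, a common refinement that the patent does not mention explicitly:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """For each descriptor in desc_a, find its nearest neighbour in desc_b by
    Euclidean distance; keep the match only if the nearest distance is clearly
    smaller than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy "descriptors": each vector in a should match its perturbed copy in b.
a = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
b = [[0.9, 0.1, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.1, 0.9, 0.0, 0.0]]
pairs = match_descriptors(a, b)   # pairs of (index in a, index in b)
```

The ratio test discards ambiguous matches whose best and second-best candidates are nearly equidistant, which improves robustness on repetitive dam-face texture.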
S3-1-2, resolve the connection points and connection lines obtained by image feature matching and establish the error equation of the multi-view image self-calibration block adjustment from the connection points, connection lines, image control point data and POS data; the self-calibration parameters, image control point coordinates and exterior orientation elements are treated as weighted observations. The basic error equation of the adjustment is:
V = A1·X1 + A2·X2 + A3·X3 + A4·X4 − L, with weight matrix P
where A1, A2, A3, A4 are the coefficient matrices of the error equation; X1 is the correction vector of the encrypted point coordinates and P1 the weight of the image point observations; X2 is the correction vector of the image control point coordinates and P2 the weight of the image control point observations; X3 is the correction vector of the exterior orientation elements of the dam-face images and P3 the weight of the exterior orientation observations; X4 is the self-calibration parameter vector and P4 the weight of the self-calibration parameter observations; V is the residual vector and L the constant vector.
The above equation can be abbreviated as: V = AX − L, with weight P.
S3-1-3, performing joint adjustment calculation on the error equation to obtain the external orientation elements (namely the position and posture data of the photo) of each image and the object coordinates of all encrypted points;
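The joint adjustment minimizes VᵀPV for the error equation V = AX − L, whose solution is the normal-equation form X = (AᵀPA)⁻¹AᵀPL. The following pure-Python sketch solves a toy two-parameter weighted least-squares problem (my own illustration; the real adjustment has far more parameters and a sparse structure):

```python
def transpose(m):
    return [list(r) for r in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def solve_2x2(n, rhs):
    """Solve the 2x2 system N * X = rhs by Cramer's rule."""
    det = n[0][0] * n[1][1] - n[0][1] * n[1][0]
    return [(rhs[0][0] * n[1][1] - n[0][1] * rhs[1][0]) / det,
            (n[0][0] * rhs[1][0] - rhs[0][0] * n[1][0]) / det]

def weighted_lsq(a, p, l):
    """X = (A^T P A)^-1 A^T P L: the normal-equation solution minimizing
    V^T P V for the error equation V = A X - L."""
    at = transpose(a)
    atp = matmul(at, p)
    n = matmul(atp, a)
    rhs = matmul(atp, [[v] for v in l])
    return solve_2x2(n, rhs)

# Fit y = c0 + c1 * x to three weighted observations lying exactly on y = 1 + 2x.
a = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
p = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]  # diagonal weight matrix
l = [1.0, 3.0, 5.0]
x = weighted_lsq(a, p, l)
```

Since the toy observations are consistent, the solution recovers c0 = 1 and c1 = 2 exactly; with noisy observations the weights P decide how strongly each observation pulls on the result, exactly as P1 through P4 do in the block adjustment.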
s3-2, performing multi-view image dense matching by using the image exterior orientation element and the object space coordinates of the encrypted points obtained by multi-view image joint adjustment processing to generate an image point cloud;
s3-2-1, cluster the images with the CMVS (Cluster Multi-View Stereo) method; the flow is shown in figure 3. Redundant images are removed after SfM filtering, and the image-size and coverage constraints are enforced repeatedly until the size constraint is met. The purpose is to reduce the data volume and delete poor-quality dam-face images, thereby improving the efficiency of the subsequent dense matching;
s3-2-2, after clustering, run the PMVS (Patch-based Multi-View Stereo) method; the flow is shown in fig. 4. Dense matching of multiple images of the same dam-face position taken from different angles is completed through three steps of initial feature matching, expansion and filtering, so that information in the multi-view images fills in the features of dam-face blind areas;
s3-2-3, acquiring dense three-dimensional point cloud;
s3-3, the data processing unit performs multi-station scanning and point cloud splicing on the laser scanning data to generate laser point cloud;
s3-3-1, stations are arranged at different positions around the dam face, a multi-station multi-view scanning mode is adopted, different positions of the dam face are scanned respectively, and block point cloud data of each position of the dam face are obtained;
s3-3-2, splice the block point cloud data pairwise using an improved ICP (Iterative Closest Point) algorithm: for each point of the cloud to be spliced, find the closest point in the reference point set to establish point-pair correspondences, then solve the optimal coordinate transformation by least-squares iteration under the condition that the sum of squared point-pair distances is minimal, thereby completing point cloud splicing and generating the laser point cloud of each station;
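The closest-point pairing and least-squares update at the heart of ICP can be sketched as follows. This is a deliberately simplified, translation-only variant in plain Python (my own illustration; a full implementation, let alone the patent's "improved" one, would also estimate rotation, e.g. via an SVD of the cross-covariance of the point pairs):

```python
def closest_pairs(src, ref):
    """For each source point, find the closest reference point by squared distance."""
    pairs = []
    for p in src:
        q = min(ref, key=lambda r: sum((a - b) ** 2 for a, b in zip(p, r)))
        pairs.append((p, q))
    return pairs

def icp_translation(src, ref, iters=10):
    """Translation-only ICP: for fixed correspondences, the least-squares optimal
    translation is the mean of the point-pair differences (q - p)."""
    src = [list(p) for p in src]
    for _ in range(iters):
        pairs = closest_pairs(src, ref)
        t = [sum(q[d] - p[d] for p, q in pairs) / len(pairs) for d in range(3)]
        src = [[p[d] + t[d] for d in range(3)] for p in src]
    return src

# Reference cloud and the same cloud shifted by (-1, -1, -1).
ref = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]]
src = [[p[0] - 1.0, p[1] - 1.0, p[2] - 1.0] for p in ref]
aligned = icp_translation(src, ref)
```

Because the shift is small relative to the point spacing, the first closest-point pairing is already correct and the iteration snaps the cloud back onto the reference; with larger misalignments, wrong initial pairings are why ICP needs a good initial guess.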
s3-4, carrying out data registration on the image point cloud and the laser point cloud by using the point cloud registration based on the characteristics, and laying a foundation for subsequent three-dimensional modeling;
s3-4-1, extract a certain number of feature point coordinates from the three-dimensional laser point cloud and associate them with the corresponding homonymous points on the oblique images. Let the data sets of the oblique image point cloud and the ground laser point cloud be P and Q, with coordinate systems o-xyz and O-XYZ respectively, and let Pi and Qi be corresponding (homonymous) feature points on the two point clouds, where Pi(x, y, z) ∈ P and Qi(X, Y, Z) ∈ Q;
S3-4-2, obtain the oblique image three-dimensional point cloud in the laser point cloud coordinate system through absolute orientation, thereby establishing the mapping relation between the two point clouds and realizing registration. Any pair of homonymous points in the two point sets satisfies:
[X, Y, Z]^T = μ · R(α, β, γ) · [x, y, z]^T + T
where μ is the scale factor between the coordinate systems of the two point sets, R(α, β, γ) is the rotation matrix composed of the rotations α, β, γ about the three coordinate axes, and T = [x0, y0, z0]^T is the translation matrix, with x0, y0, z0 the translation values along each coordinate axis. The registration model thus contains 3 rotation parameters α, β, γ, 3 translation parameters x0, y0, z0 and 1 scale parameter μ, and converts the oblique image point cloud in o-xyz into the ground laser point cloud coordinate system O-XYZ.
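Applying the seven-parameter model Q = μ·R·P + T to a point can be sketched in plain Python. The rotation is built as Rz·Ry·Rx from the three angles, which is one common convention; the patent does not fix the rotation order, so treat that choice as an assumption of this illustration:

```python
import math

def rotation_matrix(alpha, beta, gamma):
    """R = Rz(gamma) * Ry(beta) * Rx(alpha): rotations about the x, y, z axes."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    rx = [[1, 0, 0], [0, ca, -sa], [0, sa, ca]]
    ry = [[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]]
    rz = [[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]]
    def mul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return mul(mul(rz, ry), rx)

def transform(p, mu, r, t):
    """Q = mu * R * P + T: map a point from o-xyz into O-XYZ."""
    return [mu * sum(r[i][k] * p[k] for k in range(3)) + t[i] for i in range(3)]

# 90-degree rotation about z, scale 2, translation (1, 0, 0):
r = rotation_matrix(0.0, 0.0, math.pi / 2)
q = transform([1.0, 0.0, 0.0], 2.0, r, [1.0, 0.0, 0.0])
```

In practice the seven parameters are estimated by least squares from at least three homonymous point pairs, and this transform is then applied to every point of the oblique image point cloud.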
S3-5, the three-dimensional modeling unit automatically constructs the three-dimensional live-action model of the ultra-high arch dam from the registered data using Context Capture software, as shown in FIG. 5;
s3-5-1, performing space-three encryption on the registered data;
s3-5-2, acquiring a large amount of high-density point cloud data after space-three encryption;
s3-5-3, cut and segment the point cloud data into blocks, and construct an irregular triangulated network (TIN) from the dense point cloud in each block according to the set priority;
s3-5-4, automatically and quickly constructing a white body three-dimensional model;
s3-5-5, performing texture mapping on the white body model;
s3-5-6, carrying out model refinement treatment;
and S3-5-7, generating a three-dimensional visual image of the dam face of the ultra-high arch dam.
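Real pipelines triangulate the dense point cloud into the TIN with Delaunay-type methods; as a minimal stand-in for steps S3-5-3 to S3-5-4, a TIN over regularly gridded points can be sketched by splitting each grid cell into two triangles (my own illustration, far simpler than what Context Capture actually does):

```python
def grid_tin(nx, ny):
    """Index triangles for an nx-by-ny grid of points stored row-major:
    each of the (nx-1)*(ny-1) cells is split into two triangles."""
    tris = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            a = j * nx + i          # lower-left corner of the cell
            b = a + 1               # lower-right
            c = a + nx              # upper-left
            d = c + 1               # upper-right
            tris.append((a, b, d))
            tris.append((a, d, c))
    return tris

# A 3x3 grid of points yields 4 cells and therefore 8 triangles.
triangles = grid_tin(3, 3)
```

The triangle index lists produced here are exactly the kind of connectivity a white model carries before texture mapping drapes the oblique imagery over it.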
Claims (10)
1. A three-dimensional rapid modeling method for an ultra-high arch dam face based on group intelligence is characterized by comprising the following steps:
(1) preparing data: comprises an information acquisition unit and a flight control unit;
(2) data transmission: comprises a wireless transmission unit; the wireless transmission unit is used for transmitting the image control point data, the POS data, the oblique images and the laser scanning data acquired by the information acquisition unit to the model construction stage;
(3) constructing a model: comprises a data processing unit and a three-dimensional modeling unit; the three-dimensional modeling unit is used for automatically constructing the three-dimensional live-action model of the ultra-high arch dam from the registered data using Context Capture software.
2. The three-dimensional rapid modeling method for the dam face of the ultra-high arch dam based on the swarm intelligence as claimed in claim 1, wherein the specific steps of implementing the flight control unit in the data preparation are as follows: and the flight control unit is used for controlling the unmanned aerial vehicle to carry out aerial photography according to the formulated flight plan.
3. The three-dimensional rapid modeling method for the dam face of the ultra-high arch dam based on the swarm intelligence as claimed in claim 1, wherein the step of implementing the information acquisition unit in the data preparation is as follows: the information acquisition unit is used for acquiring image control point data, unmanned aerial vehicle aerial data and ground shooting data; the unmanned aerial photography data are POS data and oblique images during oblique photography of the unmanned aerial vehicle; the ground shooting data are laser scanning data of a ground three-dimensional laser scanner.
4. The three-dimensional rapid modeling method for the dam face of the ultra-high arch dam based on the swarm intelligence as claimed in claim 1, wherein the information acquisition unit in the data preparation comprises the following specific steps: before data acquisition, image control points are distributed by utilizing a carrier phase difference technology, the image control points need to be uniformly distributed in the whole measuring area range and have a certain distance from the edge of the measuring area, and the plane position and the elevation measurement of the image control points are synchronously carried out by adopting an RTK (real time kinematic) method; according to the position of the survey area, an unmanned aerial vehicle flight plan is formulated, and flight parameters are set according to the flight plan, wherein the flight plan comprises a preset track route and a flight mode.
5. The three-dimensional rapid modeling method for the dam face of the extra-high arch dam based on the swarm intelligence as claimed in claim 1, wherein the multi-view adjustment processing in the model construction comprises the following specific steps: feature extraction is performed on the images with the SIFT feature extraction algorithm; an error equation of the self-calibrating block adjustment of the multi-view images is established from the connecting points, connecting lines, control point coordinates and POS data; and the joint adjustment calculation yields the position and attitude data of each image, namely the exterior orientation elements of the images, together with the object-space coordinates of all encrypted points.
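The core of the adjustment error equation in claim 5 is "observed image coordinates minus coordinates predicted from the current exterior orientation and object points". A minimal sketch with a simplified pinhole model (no self-calibration terms; all names are illustrative, not the patent's formulation):

```python
import numpy as np

def project(points3d, R, t, f):
    """Pinhole projection of object-space points into one image:
    camera coordinates Xc = R @ X + t, image coordinates x = f * Xc / Zc."""
    cam = points3d @ R.T + t
    return f * cam[:, :2] / cam[:, 2:3]

def reprojection_residuals(points3d, observations, R, t, f):
    """Residual vector of the error equation for one image: observed image
    coordinates minus those predicted by the current orientation (R, t)."""
    return (observations - project(points3d, R, t, f)).ravel()

# identity orientation, camera 10 units in front of two object points
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])
pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.0]])
obs = project(pts, R, t, f=100.0)            # synthetic, noise-free observations
res = reprojection_residuals(pts, obs, R, t, f=100.0)
```

The joint adjustment then minimizes the stacked residuals of all images over the exterior orientations and encrypted-point coordinates, typically with Gauss-Newton or Levenberg-Marquardt iterations.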
6. The three-dimensional rapid modeling method for the dam face of the extra-high arch dam based on the swarm intelligence as claimed in claim 1, wherein the multi-view dense matching in the model construction comprises the following specific steps: using the image exterior orientation elements and the object-space coordinates of the encrypted points obtained from the multi-view image joint adjustment, the images are cluster-classified with the CMVS (Cluster Multi-View Stereo) method to reduce the data volume.
7. The three-dimensional rapid modeling method for the dam face of the extra-high arch dam based on the swarm intelligence as claimed in claim 1, wherein the steps of feature-based point cloud registration in the model construction are as follows: extracting a certain number of feature point coordinates from the three-dimensional laser point cloud, associating them with the corresponding homonymous points on the oblique images, obtaining the oblique image three-dimensional point cloud in the laser point cloud coordinate system through absolute orientation, and then establishing the mapping relation between the two point clouds, thereby realizing the registration between the point clouds.
8. The three-dimensional rapid modeling method for the dam face of the extra-high arch dam based on the swarm intelligence as claimed in claim 1, wherein the specific steps of feature-based point cloud registration in the model construction are as follows: the data sets of the oblique image point cloud and the ground laser point cloud are denoted P and Q, with coordinate systems O-XYZ and o-xyz respectively; Pi and Qi are points on the two point clouds forming a pair of homonymous feature points, where Pi(x, y, z) ∈ P and Qi(X, Y, Z) ∈ Q; the relation between the two point sets can be regarded as a rigid-body transformation, and any pair of homonymous points satisfies:

Qi = μ · R(α, β, γ) · Pi + T

where μ is the scale factor between the coordinate systems of the two point sets, R(α, β, γ) is the rotation matrix composed of the rotations α, β and γ about the three coordinate axes, and T = (x0, y0, z0)^T is the translation vector, x0, y0 and z0 being the translation values along each coordinate axis; the registration model therefore comprises 3 rotation parameters α, β, γ, 3 translation parameters x0, y0, z0 and 1 scale parameter μ, and solving it converts the oblique image point cloud from O-XYZ into the ground laser point cloud coordinate system o-xyz.
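The seven-parameter model of claim 8 (μ, α, β, γ, x0, y0, z0) has a standard closed-form solution from matched homonymous points via SVD of the cross-covariance (Umeyama-style). The patent does not specify its solver, so the following is an illustrative sketch, not the patented method:

```python
import numpy as np

def estimate_similarity(P, Q):
    """Estimate scale mu, rotation R and translation T with
    Q_i ~= mu * R @ P_i + T from matched homonymous points."""
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mp, Q - mq
    U, S, Vt = np.linalg.svd(Qc.T @ Pc / len(P))   # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                  # guard against reflection
        D[2, 2] = -1
    R = U @ D @ Vt
    mu = (S * np.diag(D)).sum() / Pc.var(axis=0).sum()
    T = mq - mu * R @ mp
    return mu, R, T

# synthetic check: recover a known transform from four matched points
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
Q = 2.0 * P @ R_true.T + np.array([5.0, -1.0, 3.0])
mu, R, T = estimate_similarity(P, Q)
```

With noise-free homonymous points the seven parameters are recovered exactly; with real data the same solution minimizes the least-squares registration error.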
9. The three-dimensional rapid modeling method for the dam face of the extra-high arch dam based on the swarm intelligence as claimed in claim 1, wherein the data processing unit in the model construction of step (3) operates as follows: the data processing unit is used for performing multi-view image adjustment processing and multi-view image dense matching on the image control point data, POS data and oblique images to generate the image point cloud; performing multi-station scanning and point cloud splicing on the laser scanning data to generate the laser point cloud; and then registering the image point cloud with the laser point cloud by the feature-based point cloud registration, laying the foundation for the subsequent three-dimensional modeling.
10. The three-dimensional rapid modeling method for the dam face of the extra-high arch dam based on the swarm intelligence as claimed in claim 1, wherein the three-dimensional modeling unit in the model construction of step (3) operates as follows: performing aerial triangulation (space-three) encryption on the registered data to obtain a large amount of high-density point cloud data; cutting and dividing the point cloud data into blocks; constructing a triangulated irregular network (TIN) from the dense point cloud within each block according to the set priority, so that a white (untextured) three-dimensional model is constructed automatically and quickly; performing texture mapping and model refinement on the white model; and finally generating a three-dimensional visual image of the surface of the extra-high arch dam.
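The block-cutting step of claim 10 can be sketched as ground-plan grid binning, so that each block's dense points can then be triangulated into a TIN and textured independently. The function and parameters are illustrative, not the patent's implementation:

```python
import numpy as np

def cut_into_blocks(points, block_size):
    """Cut a dense point cloud into square ground-plan blocks keyed by
    integer grid cell, so each block can be meshed on its own."""
    keys = np.floor(points[:, :2] / block_size).astype(int)
    blocks = {}
    for key, pt in zip(map(tuple, keys), points):
        blocks.setdefault(key, []).append(pt)
    return {k: np.array(v) for k, v in blocks.items()}

pts = np.array([[0.5, 0.5, 1.0], [9.0, 0.2, 2.0],
                [10.5, 0.5, 3.0], [11.0, 9.0, 4.0]])
blocks = cut_into_blocks(pts, block_size=10.0)
```

In a production pipeline the blocks would overlap slightly so the per-block TINs stitch without seams; the sketch omits that margin.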
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110347485.1A CN113034678A (en) | 2021-03-31 | 2021-03-31 | Three-dimensional rapid modeling method for dam face of extra-high arch dam based on group intelligence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113034678A true CN113034678A (en) | 2021-06-25 |
Family
ID=76453091
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110347485.1A Pending CN113034678A (en) | 2021-03-31 | 2021-03-31 | Three-dimensional rapid modeling method for dam face of extra-high arch dam based on group intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113034678A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110136259A (en) * | 2019-05-24 | 2019-08-16 | 唐山工业职业技术学院 | A kind of dimensional Modeling Technology based on oblique photograph auxiliary BIM and GIS |
CN111458720A (en) * | 2020-03-10 | 2020-07-28 | 中铁第一勘察设计院集团有限公司 | Airborne laser radar data-based oblique photography modeling method for complex mountainous area |
CN112288848A (en) * | 2020-10-13 | 2021-01-29 | 中国建筑第八工程局有限公司 | Method for calculating engineering quantity through three-dimensional modeling of unmanned aerial vehicle aerial photography |
Non-Patent Citations (2)
Title |
---|
Liu Jing: "Research on real-scene 3D modeling of landslides and hazard assessment based on UAV aerial photography", China Masters' Theses Full-text Database, Basic Sciences * |
Cao Lin: "3D modeling based on UAV oblique photogrammetry technology and its accuracy analysis", China Masters' Theses Full-text Database, Basic Sciences * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114445581A (en) * | 2021-12-31 | 2022-05-06 | 杭州宏安四维科技有限公司 | Method for generating hydraulic engineering dam simulation three-dimensional model |
CN114332382A (en) * | 2022-03-10 | 2022-04-12 | 烟台市地理信息中心 | Three-dimensional model manufacturing method based on unmanned aerial vehicle proximity photogrammetry |
CN114332382B (en) * | 2022-03-10 | 2022-06-07 | 烟台市地理信息中心 | Three-dimensional model manufacturing method based on unmanned aerial vehicle proximity photogrammetry |
CN117215774A (en) * | 2023-08-21 | 2023-12-12 | 上海瞰融信息技术发展有限公司 | Engine system and method for automatically identifying and adapting live-action three-dimensional operation task |
CN117215774B (en) * | 2023-08-21 | 2024-05-28 | 上海瞰融信息技术发展有限公司 | Engine system and method for automatically identifying and adapting live-action three-dimensional operation task |
CN117128861A (en) * | 2023-10-23 | 2023-11-28 | 常州市建筑材料研究所有限公司 | Monitoring system and monitoring method for station-removing three-dimensional laser scanning bridge |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111473739B (en) | Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area | |
CN113034678A (en) | Three-dimensional rapid modeling method for dam face of extra-high arch dam based on group intelligence | |
CN109945853B (en) | Geographic coordinate positioning system and method based on 3D point cloud aerial image | |
CN112927360A (en) | Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data | |
CN107767456A (en) | A kind of object dimensional method for reconstructing based on RGB D cameras | |
Guo et al. | High-precision detection method for large and complex steel structures based on global registration algorithm and automatic point cloud generation | |
CN112929626B (en) | Three-dimensional information extraction method based on smartphone image | |
CN115937288A (en) | Three-dimensional scene model construction method for transformer substation | |
CN113642463B (en) | Heaven and earth multi-view alignment method for video monitoring and remote sensing images | |
CN112270698A (en) | Non-rigid geometric registration method based on nearest curved surface | |
CN112767461A (en) | Automatic registration method for laser point cloud and sequence panoramic image | |
CN114741768A (en) | Three-dimensional modeling method for intelligent substation | |
Liu et al. | A novel adjustment model for mosaicking low-overlap sweeping images | |
CN114463521B (en) | Building target point cloud rapid generation method for air-ground image data fusion | |
Barazzetti et al. | Automated and accurate orientation of complex image sequences | |
Gao et al. | Multi-source data-based 3D digital preservation of largescale ancient chinese architecture: A case report | |
Li et al. | Research on multiview stereo mapping based on satellite video images | |
Dahaghin et al. | 3D thermal mapping of building roofs based on fusion of thermal and visible point clouds in uav imagery | |
CN112767459A (en) | Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion | |
CN110969650B (en) | Intensity image and texture sequence registration method based on central projection | |
CN116563377A (en) | Mars rock measurement method based on hemispherical projection model | |
CN116597080A (en) | Complete scene 3D fine model construction system and method for multi-source spatial data | |
CN116258832A (en) | Shovel loading volume acquisition method and system based on three-dimensional reconstruction of material stacks before and after shovel loading | |
Fritsch et al. | Photogrammetric point cloud collection with multi-camera systems | |
Li et al. | Automatic Keyline Recognition and 3D Reconstruction For Quasi‐Planar Façades in Close‐range Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210625 |