CN111637850B - Self-splicing surface point cloud measuring method without active visual marker - Google Patents
Publication number: CN111637850B (application CN202010475819.9A; authority: CN, China)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G01B 11/2513 — Measuring contours or curvatures by projecting a pattern with several lines in more than one direction, e.g. grids
- G01B 11/2545 — Measuring contours or curvatures by projecting a pattern with one projection direction and several detection directions, e.g. stereo
- G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tessellation
- G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T 7/0002 — Inspection of images, e.g. flaw detection
- G06V 10/751 — Comparing pixel values or feature values having positional relevance, e.g. template matching
- G06T 2207/10028 — Range image; depth image; 3D point clouds
Abstract
The invention relates to a self-splicing surface point cloud measuring method without active visual markers. A camera and a projector each move independently and freely to acquire a series of modulated structured light images that together cover the entire surface to be measured. The images are decoded to recover the coded information, dense pixel matching is established from the coded information across the image series, and spatial geometric constraints among the images at different poses are built at the same time. The global poses of the camera and projector corresponding to each image and the spatial coordinates of the reconstructed three-dimensional points are then computed and optimized within a structure-from-motion framework, and point cloud data covering the whole measured surface are finally output in a unified world coordinate system. No point cloud markers need to be laid out in advance and no separate stitching post-processing algorithm is required; operation is flexible, and the method is suitable for accurate measurement of objects of different sizes and shapes.
Description
Technical Field
The invention belongs to the technical field of vision measurement, and in particular relates to a self-splicing surface point cloud measuring method without active visual markers.
Background
The structured light measurement method is widely used for the point cloud measurement of the object surface due to the advantages of high precision, non-contact, low cost and the like. A typical structured light measurement system consists of a computer, an industrial camera and a projector, wherein the camera and the projector need to be fixed together to ensure that the relative pose of the camera and the projector is unchanged. Before actual measurement, the system needs to be calibrated to establish a matching relationship between the projector and camera image planes through the encoded information in the structured light image. During the measurement process, the camera captures a structured light image projected by the projector and modulated by the object surface, and the computer decodes and resolves the captured image to obtain a dense three-dimensional point cloud.
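The decode-and-triangulate computation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: given assumed 3×4 projection matrices for the camera and the projector and one matched pixel pair, linear (DLT) triangulation recovers the corresponding 3-D point.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: solve A X = 0 for the homogeneous 3-D
    point X, given two 3x4 projection matrices and one matched pixel pair."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)      # null vector of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Toy check: two axis-aligned views of the point (0, 0, 5).
K = np.diag([800.0, 800.0, 1.0])                      # assumed intrinsics
P_cam = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_prj = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
Xw = np.array([0.0, 0.0, 5.0, 1.0])
x_cam = (P_cam @ Xw)[:2] / (P_cam @ Xw)[2]
x_prj = (P_prj @ Xw)[:2] / (P_prj @ Xw)[2]
```

With noise-free pixels the DLT solution is exact; in practice the reprojection error is minimized later in a global optimization.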
In the actual measurement process of an object, due to the limitation of the field of view of a camera and a projector, the shielding of the object itself, and the like, a single measurement of a common structured light measurement system can only acquire a local point cloud on the surface of the object. Therefore, the structured light measurement system needs to be moved around the object to perform multiple measurements, and the relative pose of the camera and the projector needs to be kept unchanged during the moving process. However, each measurement is performed in a different coordinate system, and in order to obtain a complete three-dimensional shape of the surface, the local measurement data need to be spliced into a uniform coordinate system.
At present, a great deal of research has been carried out on point cloud stitching, but problems remain in practical application. A common industrial method is to attach visual markers on or near the surface of the measured object and stitch two point clouds together by aligning the spatial coordinates of three or more visual markers shared by the two adjacent point clouds. However, as the number of stitching operations increases, the stitching errors accumulate. A marker-based three-dimensional data stitching method (CN201610221163.1) reconstructs the three-dimensional coordinates of all visual markers in the global coordinate system in advance; the spatial coordinates of the marker points then serve as a reference, and the local point cloud data are fused into the global coordinate system to reduce stitching errors. This method requires an additional camera to acquire the marker images, and the coordinates of the global reference points must be computed in advance.
Using visual markers to stitch multi-station point cloud data requires marker points to be pasted one by one onto the surface of the measured object: the preparation before measurement is tedious and time-consuming, the markers must be laboriously removed afterwards, and in some cases pasting markers onto the measured surface is not permitted at all. A further disadvantage of markers is that the surface point cloud in the areas they cover cannot be accurately obtained.
Nanjing University of Aeronautics and Astronautics proposed an industrial photogrammetry method without coded points (CN201910202543.4), in which a projector projects speckle images onto the surface of the measured object, a camera photographs the speckle-covered object from multiple poses, and matching relations among the different images are established from the speckle texture to reconstruct the three-dimensional point cloud. However, in this method only the camera may shoot from different poses, while the projector must remain stationary in a single pose. For most objects, self-occlusion and the limited field of view of the projector mean that a projector fixed in one pose cannot support point cloud measurement of the entire surface.
In addition, there are methods that stitch point cloud data measured by a structured light system at different stations purely in software. Such stitching algorithms extract common features in the overlapping region of two point clouds and generally comprise two steps: first, a rough coordinate transformation between the two point clouds is computed from the extracted common features; the result is then refined with the Iterative Closest Point (ICP) algorithm. The stitching quality of these software-only methods depends heavily on the shape of the measured object and on whether common features can be extracted from the different point cloud fragments, a requirement many industrial parts do not satisfy. This stitching approach is therefore unsuitable for many industrial measurement problems.
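For context, the ICP refinement mentioned above can be sketched in a few lines. This is an illustrative point-to-point ICP (nearest neighbours via a k-d tree, rigid update via the Kabsch/SVD solution) on synthetic data, not part of the patent's method:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP aligning src (N,3) to dst (M,3).
    Returns (R, t) such that src @ R.T + t approximately matches dst."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                 # nearest-neighbour matches
        m = dst[idx]
        mu_s, mu_d = cur.mean(0), m.mean(0)      # Kabsch: best rigid fit
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (m - mu_d))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_k = Vt.T @ D @ U.T                     # proper rotation only
        t_k = mu_d - R_k @ mu_s
        cur = cur @ R_k.T + t_k
        R, t = R_k @ R, R_k @ t + t_k            # accumulate the transform
    return R, t

# Synthetic check: a small known rotation/translation is recovered.
rng = np.random.default_rng(0)
src = rng.standard_normal((300, 3))
ang = 0.05                                       # 0.05 rad about z
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.02, -0.01, 0.03])
dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)
```

The synthetic case converges because the initial misalignment is small; as the passage notes, ICP fails when the overlap lacks distinctive geometry, which is exactly the limitation the invention avoids.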
Disclosure of Invention
The object of the invention is to provide a self-splicing surface point cloud measuring method without active visual markers that addresses the problems described in the background. Unlike traditional structured light measurement, in which the camera and projector are rigidly fixed together, in the present method both the camera and the projector move independently and freely to acquire a series of modulated structured light images that together cover the entire surface to be measured. From this image series alone, the measurement method directly outputs point cloud data of the whole measured surface in a unified world coordinate system. The method requires no pre-placed marker points and no separate point cloud stitching post-processing algorithm; it is flexible to operate and suitable for accurate measurement of objects of different sizes and shapes.
In order to achieve the technical purpose, the technical scheme adopted by the invention is as follows:
A self-splicing surface point cloud measurement method without active visual markers, comprising the following steps:
Step one, prepare a projector that can move independently and freely and project structured light onto the measured object, an industrial camera that can move independently and freely and photograph the measured object, and a computer; the projector projects structured light carrying coded information, and the computer controls the projection, the camera capture, and the analysis and calculation required for three-dimensional point cloud measurement. Calibrate the intrinsic parameters of the camera and the projector; both adopt an intrinsic parameter model based on perspective projection.
Step two, adjust the relative poses of the measured object, the projector and the camera so that the measured surface lies within the common field of view of the projector and the camera.
Step three, keeping the relative poses among the measured object, projector and camera fixed, the projector projects a group of structured light patterns P onto the measured surface. As each pattern is projected, the camera captures the structured light image modulated by the object surface and adds it to the object structured light image set S. If S does not yet cover the complete object surface, go to step four; otherwise go to step five.
Step four, keep the pose of the measured object unchanged, keep the pose of either the camera or the projector unchanged, and flexibly change the pose of the other so as to enlarge the measured area, while ensuring that the measured area still lies in the common field of view of the camera and the projector after the pose change; then return to step three.
Step five, decode all images in the object structured light image set S to obtain the coded information at each pixel. For an image group captured with the projector pose fixed and the camera moving, the projected code pattern is fixed on the object surface, so pixels of the different structured light images are matched by equal coded information. For two image groups captured with the camera pose fixed and the projector pose changed, the pixels seen by the camera are fixed, so the same pixel in the two groups directly forms a matching pair. These matches construct the spatial geometric constraint relations among the images corresponding to the camera and projector at all poses.
Step six, establish a unified coordinate system and solve all poses of the camera and the projector in it through a structure-from-motion framework.
Step seven, reconstruct the complete point cloud of the measured surface in the unified coordinate system from the dense pixel matching pairs and the poses of all camera and projector viewpoints.
Step eight, take the pre-calibrated camera and projector intrinsic parameters, all computed pose parameters and the three-dimensional coordinates of the spatial points as optimization variables, perform global optimization by bundle adjustment, and finally reconstruct the complete point cloud data of the measured object from the optimized parameters.
In order to optimize the structural form, the specific measures adopted further comprise:
In step five, the spatial geometric constraint relations among the images corresponding to the camera and projector at all poses are constructed as follows. Let N_P be the total number of projector poses, and P_i the pose of the projector after its i-th placement, i = 1, 2, ..., N_P. Let N_i be the total number of camera poses used while the projector stays at pose P_i, and C_i^j the pose of the camera after its j-th placement with the projector held at P_i, j = 1, 2, ..., N_i. Let S_i^j denote the modulated image set captured at camera pose C_i^j, and P_i the structured light pattern set projected at projector pose P_i. For any j, k ∈ {1, 2, ..., N_i}, i ∈ {1, 2, ..., N_P} with j ≠ k, the modulated image sets S_i^j and S_i^k and the pattern set P_i are matched through identical coded information. For any i ∈ {1, 2, ..., N_P − 1}, the two modulated image sets captured at the same camera pose under projector poses P_i and P_{i+1} are matched by taking the same pixel directly as a matching pixel pair.
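The equal-code matching just described can be sketched as follows. This is an illustrative implementation under stated assumptions (each decoded image is an (H, W, 2) map of horizontal/vertical code values, and the quantization tolerance `tol` is hypothetical), not the patent's own code:

```python
import numpy as np

def match_by_code(codeA, codeB, tol=0.5):
    """Match pixels of two decoded images carrying the same code.
    codeA, codeB: (H, W, 2) maps of (horizontal, vertical) code values,
    NaN where decoding failed.  Returns a list of ((rA, cA), (rB, cB))."""
    def keys(code):
        # Quantize so that equal-code pixels hash to the same integer key.
        q = np.round(np.nan_to_num(code, nan=1e9) / tol).astype(np.int64)
        return q[..., 0] * 1_000_003 + q[..., 1]
    kA, kB = keys(codeA), keys(codeB)
    validA = ~np.isnan(codeA).any(axis=-1)
    validB = ~np.isnan(codeB).any(axis=-1)
    lut = {k: rc for rc, k in np.ndenumerate(kB) if validB[rc]}
    return [(rc, lut[k]) for rc, k in np.ndenumerate(kA)
            if validA[rc] and k in lut]

# Toy check: image B is image A shifted one column to the right.
codeA = np.stack(np.meshgrid(np.arange(4.0), np.arange(4.0),
                             indexing="ij"), axis=-1)
codeB = np.roll(codeA, 1, axis=1)
pairs = match_by_code(codeA, codeB)
```

In practice the code values would be the decoded absolute phases, and sub-pixel matching would interpolate between neighbouring codes rather than hash exact values.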
In step six, the camera poses C_i^j and projector poses P_i, where j ∈ {1, 2, ..., N_i} and i ∈ {1, 2, ..., N_P}, are solved through the structure-from-motion framework. First, the dense pixel matches are discretely sampled to obtain a relatively sparse matching point set; spatial geometric constraints are established from this sparse set, and the structure-from-motion framework is then used to solve the poses of projector P_i and camera C_i^j in the unified coordinate system.
The structure-from-motion framework described above is an incremental three-dimensional reconstruction framework.
In step six, the coordinate system of projector P_1 is taken as the unified coordinate system. The poses of camera C_1^1 and projector P_1 are recovered first: the fundamental matrix F between projector and camera is estimated from their matched pixel pairs, the essential matrix E is computed from F, and E is decomposed to obtain the relative pose of camera C_1^1 and projector P_1. The poses of the camera and the projector at every shooting station are then solved incrementally in the unified coordinate system through the spatial geometric constraints.
In step six, after the fundamental matrix F of the projector and camera is estimated, the essential matrix E is obtained according to the formulas

X2^T F X1 = 0,  E = K2^T F K1,

and the camera pose C_1^1 and the projector pose P_1 are obtained by SVD decomposition of E; here X1 and X2 are the homogeneous coordinates of a pair of matching points on projector P_1 and camera C_1^1 respectively, F is the fundamental matrix, E is the essential matrix, K1 is the projector intrinsic matrix, and K2 is the camera intrinsic matrix.
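The relation E = K2^T F K1 and the SVD-based pose extraction can be sketched as follows. This is a minimal illustration of the standard four-candidate decomposition of an essential matrix; selecting the physically valid candidate by cheirality (points in front of both devices) is omitted:

```python
import numpy as np

def essential_from_fundamental(F, K_proj, K_cam):
    """E = K_cam^T @ F @ K_proj, following X_cam^T F X_proj = 0."""
    return K_cam.T @ F @ K_proj

def decompose_essential(E):
    """Return the four candidate (R, t) relative poses encoded in E."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:               # force proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = U[:, 2]                            # translation up to scale and sign
    return [(U @ W @ Vt,  t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

# Check: E built from a known pose contains that pose among its candidates.
R_true = np.eye(3)
t_true = np.array([1.0, 0.0, 0.0])
tx = np.array([[0.0, -t_true[2],  t_true[1]],
               [t_true[2], 0.0, -t_true[0]],
               [-t_true[1], t_true[0], 0.0]])   # [t]_x, so E = [t]_x R
E = tx @ R_true
cands = decompose_essential(E)
```

Resolving the fourfold ambiguity requires triangulating a point with each candidate and keeping the one that places it in front of both camera and projector.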
The encoded information is phase.
Unlike the traditional structured light measurement method in which a camera and a projector are fixedly connected together, in the method of the present invention, both the camera and the projector can move independently and freely to acquire a series of modulated structured light images, which collectively cover the entire surface to be measured. Only the series of images and the measurement method provided by the invention are needed to directly output the point cloud data on the whole surface to be measured under a unified world coordinate system. The method directly projects the structured light wrapped with the coded information through the projector, does not need to lay mark points in advance, does not need an independent point cloud splicing post-processing algorithm, is flexible to operate, and can be suitable for accurate measurement of objects with different sizes and shapes.
The invention has the following advantages:
(1) The three-dimensional point cloud measuring method of the invention allows the relative poses of the projector and the camera to be adjusted independently, under the given shooting rule, according to the measured object; this avoids the incomplete measurement data caused by a conventional fixed structure and suits accurate measurement of objects of different sizes and shapes.
(2) The self-splicing point cloud measuring method of the invention requires no visual marker points on the object surface and no additional steps or equipment for data stitching; it is convenient and fast, and achieves stable, reliable self-splicing of point cloud data even for measured objects without texture or distinctive features.
(3) The point cloud measuring method of the invention directly outputs point cloud data in a unified world coordinate system through global optimization, effectively reducing the accumulated errors caused by sequentially stitching fragment point cloud data.
Drawings
FIG. 1 is a schematic view of the measurement process of the method of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 shows part of the images acquired of the measured object in an embodiment of the method of the present invention;
FIG. 4 is a visualization of the camera and projector poses in an embodiment of the method of the present invention;
FIG. 5 shows reconstructed point cloud data and its surface reconstruction result in an embodiment of the method of the present invention.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The self-splicing surface point cloud measurement method without active visual markers of this embodiment comprises the following steps:
Step one, prepare a projector that can move independently and freely and project structured light onto the measured object, an industrial camera that can move independently and freely and photograph the measured object, and a computer; the projector projects structured light carrying coded information, and the computer controls the projection, the camera capture, and the analysis and calculation required for three-dimensional point cloud measurement. Calibrate the intrinsic parameters of the camera and the projector; both adopt an intrinsic parameter model based on perspective projection.
Step two, adjust the relative poses of the measured object, the projector and the camera so that the measured surface lies within the common field of view of the projector and the camera.
Step three, keeping the relative poses among the measured object, projector and camera fixed, the projector projects a group of structured light patterns P onto the measured surface. As each pattern is projected, the camera captures the structured light image modulated by the object surface and adds it to the object structured light image set S. If S does not yet cover the complete object surface, go to step four; otherwise go to step five.
Step four, keep the pose of the measured object unchanged, keep the pose of either the camera or the projector unchanged, and flexibly change the pose of the other so as to enlarge the measured area, while ensuring that the measured area still lies in the common field of view of the camera and the projector after the pose change; then return to step three.
Step five, decode all images in the object structured light image set S to obtain the coded information at each pixel. For an image group captured with the projector pose fixed and the camera moving, the projected code pattern is fixed on the object surface, so pixels of the different structured light images are matched by equal coded information. For two image groups captured with the camera pose fixed and the projector pose changed, the pixels seen by the camera are fixed, so the same pixel in the two groups directly forms a matching pair. These matches construct the spatial geometric constraint relations among the images corresponding to the camera and projector at all poses.
Step six, establish a unified coordinate system and solve all poses of the camera and the projector in it through a structure-from-motion framework.
Step seven, reconstruct the complete point cloud of the measured surface in the unified coordinate system from the dense pixel matching pairs and the poses of all camera and projector viewpoints.
Step eight, take the pre-calibrated camera and projector intrinsic parameters, all computed pose parameters and the three-dimensional coordinates of the spatial points as optimization variables, perform global optimization by bundle adjustment, and finally reconstruct the complete point cloud data of the measured object from the optimized parameters.
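The joint refinement in step eight is a bundle adjustment: all poses and points are optimized together by minimizing reprojection error. Below is a minimal sketch using SciPy with an illustrative axis-angle pose parameterization and synthetic data; it is a simplified stand-in, not the patent's implementation (which also refines the intrinsics):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project_all(params, K, n_views, n_pts):
    """Unpack [6 pose params per view | 3 coords per point] and project."""
    poses = params[:6 * n_views].reshape(n_views, 6)
    pts = params[6 * n_views:].reshape(n_pts, 3)
    out = []
    for p in poses:
        Xc = pts @ Rotation.from_rotvec(p[:3]).as_matrix().T + p[3:]
        uvw = Xc @ K.T
        out.append(uvw[:, :2] / uvw[:, 2:3])     # perspective division
    return np.stack(out)                         # (n_views, n_pts, 2)

def bundle_adjust(obs, K, params0, n_views, n_pts):
    """Refine all poses and points jointly against observed pixels."""
    def resid(p):
        return (project_all(p, K, n_views, n_pts) - obs).ravel()
    return least_squares(resid, params0).x

# Synthetic test: perturb ground truth; BA should drive residuals to ~0.
rng = np.random.default_rng(1)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
n_views, n_pts = 2, 20
pts_true = rng.uniform(-1.0, 1.0, (n_pts, 3))
poses_true = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 5.0],
                       [0.0, 0.1, 0.0, 0.2, 0.0, 5.0]])
params_true = np.concatenate([poses_true.ravel(), pts_true.ravel()])
obs = project_all(params_true, K, n_views, n_pts)
params0 = params_true + 1e-3 * rng.standard_normal(params_true.size)
opt = bundle_adjust(obs, K, params0, n_views, n_pts)
```

Real bundle adjustment problems exploit the sparse block structure of the Jacobian (e.g. via `jac_sparsity`) to stay tractable for thousands of views and points.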
In step five, the specific method for constructing the spatial geometric constraint relations among the images corresponding to the camera and projector at all poses is as follows. Let N_P be the total number of projector poses, and P_i the pose of the projector after its i-th placement, i = 1, 2, ..., N_P. Let N_i be the total number of camera poses used while the projector stays at pose P_i, and C_i^j the pose of the camera after its j-th placement with the projector held at P_i, j = 1, 2, ..., N_i. Let S_i^j denote the modulated image set captured at camera pose C_i^j, and P_i the structured light pattern set projected at projector pose P_i. For any j, k ∈ {1, 2, ..., N_i}, i ∈ {1, 2, ..., N_P} with j ≠ k, the modulated image sets S_i^j and S_i^k and the pattern set P_i are matched through identical coded information. For any i ∈ {1, 2, ..., N_P − 1}, the two modulated image sets captured at the same camera pose under projector poses P_i and P_{i+1} are matched by taking the same pixel directly as a matching pixel pair.
In step six, the camera poses C_i^j and projector poses P_i, where j ∈ {1, 2, ..., N_i} and i ∈ {1, 2, ..., N_P}, are solved through the structure-from-motion framework. First, the dense pixel matches are discretely sampled to obtain a relatively sparse matching point set; spatial geometric constraints are established from this sparse set, and the structure-from-motion framework is then used to solve the poses of projector P_i and camera C_i^j in the unified coordinate system.
The structure-from-motion framework is an incremental three-dimensional reconstruction framework.
In step six, the coordinate system of projector P_1 is taken as the unified coordinate system. The poses of camera C_1^1 and projector P_1 are recovered first: the fundamental matrix F between projector and camera is estimated from their matched pixel pairs, the essential matrix E is computed from F, and E is decomposed to obtain the relative pose of camera C_1^1 and projector P_1. The poses of the camera and the projector at every shooting station are then solved incrementally in the unified coordinate system through the spatial geometric constraints.
In step six, after the fundamental matrix F of the projector and camera is estimated, the essential matrix E is obtained according to the formulas

X2^T F X1 = 0,  E = K2^T F K1,

and the camera pose C_1^1 and the projector pose P_1 are obtained by SVD decomposition of E; here X1 and X2 are the homogeneous coordinates of a pair of matching points on projector P_1 and camera C_1^1 respectively, F is the fundamental matrix, E is the essential matrix, K1 is the projector intrinsic matrix, and K2 is the camera intrinsic matrix.
The encoded information is the phase.
A specific example of the method is given below:
The measured object is a ceramic vase; its surface is closed, so complete surface shape data cannot be obtained by measuring from a single pose. The structured light projected in this example consists of three-frequency, four-step phase-shifted fringe patterns in both the horizontal and vertical directions, whose coded information is the phase. The wrapped phase of each single frequency is first solved, the wrapped phases of different frequencies are differenced, and finally the horizontal and vertical absolute phases carried in the structured light are extracted.
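The four-step phase retrieval and the inter-frequency difference can be sketched as follows. This is a minimal illustration; the fringe model I_k = A + B·cos(φ + kπ/2) is the standard four-step convention and an assumption here, not taken from the patent text:

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Four-step phase shift with I_k = A + B*cos(phi + k*pi/2):
    I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi)."""
    return np.arctan2(I3 - I1, I0 - I2)          # wrapped to (-pi, pi]

def beat_phase(phi_hi, phi_lo):
    """Difference (heterodyne) of two wrapped phases: the wrapped
    difference varies with a longer equivalent wavelength, which makes
    the subsequent absolute-phase unwrapping unambiguous."""
    return np.mod(phi_hi - phi_lo, 2.0 * np.pi)

# Check on a synthetic fringe value.
phi = 0.7
A, B = 2.0, 1.0
imgs = [A + B * np.cos(phi + k * np.pi / 2.0) for k in range(4)]
```

Applied per pixel to the three fringe frequencies in each direction, this yields the horizontal and vertical absolute phases that serve as the coded information for matching.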
In this embodiment, an AVT Mako G-158B PoE camera is used with an imaging resolution of 2045 × 2045 pixels and a Schneider Kreuznach industrial lens with a focal length of 35 mm; the projector is a Texas Instruments DLP4500 with a resolution of 1240 × 912 pixels. The perspective projection parameters of the camera and the projector are calibrated with a plane-based calibration method.
In this embodiment the measured object is closed through 360 degrees. To reconstruct the complete object, the camera pose must be changed while the projector pose stays fixed; likewise, the projector pose must also be changed while the camera pose stays fixed, and a set of fringe images is captured before and after each projector move. The acquired images are shown in fig. 3.
After complete image data of the measured surface are acquired, the images are decoded to obtain the absolute phase at each pixel. The pixels of all structured light images captured under a fixed projector pose are matched by phase information; for images captured at the same camera pose before and after the projector moves, identical pixels are matched directly. These matches construct the spatial geometric constraints among the images at all viewing angles.
The poses of the cameras C_j^i and projectors P_i, where j ∈ {1, 2, ..., N_i} and i ∈ {1, 2, ..., N_P}, are then solved through the structure-from-motion framework. The result is shown in fig. 4, where the small spatial polygons represent the industrial camera and the large spatial polygons represent the projector. Specifically, the embodiment adopts an incremental three-dimensional reconstruction framework. First, the poses of camera C_1^1 and projector P_1 are recovered (the coordinate system of projector P_1 is taken as the unified coordinate system): the fundamental matrix F of the projector and the camera is estimated from the matched pixel pairs between them, the essential matrix E is then computed, and E is decomposed to obtain the poses of camera C_1^1 and projector P_1. The poses of the remaining cameras and projectors in the unified coordinate system are computed incrementally through the spatial geometric constraints. The intrinsic parameters of the cameras and projectors, the coordinates of all camera and projector poses, and the coordinates of all spatial points are then optimized by bundle adjustment, and the complete point cloud data of the object is finally reconstructed from the optimized parameters, as shown in fig. 5. The reconstruction shows that the self-splicing surface point cloud measuring method without active visual marker points is feasible in practical application, convenient to operate, and yields complete reconstruction data.
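The F → E → pose step of the incremental framework can be sketched with NumPy as follows. This is a hedged illustration, not the patent's implementation; it assumes the convention x_cam^T F x_proj = 0 for matched pixel pairs, and the function names are hypothetical:

```python
import numpy as np

def essential_from_fundamental(F, K_cam, K_proj):
    # With x_cam^T F x_proj = 0 for pixel correspondences,
    # the essential matrix is E = K_cam^T @ F @ K_proj.
    return K_cam.T @ F @ K_proj

def decompose_essential(E):
    # SVD-based decomposition of E into the four candidate
    # (R, t) pairs; the physically valid one is selected by
    # cheirality (triangulated points in front of both devices).
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = U[:, 2]
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]
```

The translation t is recovered only up to scale, which is the usual structure-from-motion ambiguity; subsequent poses and points share the scale fixed by the first pair.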
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical solutions within the inventive concept belong to the protection scope of the present invention. It should be noted that those skilled in the art may make modifications and refinements without departing from the principle of the invention, and these also fall within the protection scope of the invention.
Claims (6)
1. A self-splicing surface point cloud measuring method without active visual markers, characterized by comprising the following steps:
step one, preparing a projector which can move independently and freely and project structured light onto the measured object, an industrial camera which can move independently and freely and capture images of the measured object, and a computer, wherein the projector projects structured light carrying coded information, and the computer controls the projector's projection, the camera's capture, and the analysis and calculation required for three-dimensional point cloud measurement; calibrating the intrinsic parameters of the camera and the projector, both of which adopt an intrinsic parameter model based on perspective projection;
step two, adjusting the relative poses of the measured object, the projector, and the camera so that the surface of the measured object lies within the common field of view of the projector and the camera;
step three, keeping the relative poses of the measured object, the projector, and the camera fixed; projecting each structured-light pattern of a structured-light image set onto the surface of the measured object while the camera captures the structured-light field image modulated by the object surface and adds it to the object structured-light image set; determining whether the object structured-light image set covers the complete surface of the object; if not, executing step four; if yes, executing step five;
step four, keeping the pose of the measured object unchanged, keeping the pose of one of the camera and the projector unchanged while flexibly changing the pose of the other, so as to enlarge the measured area while ensuring that the measured area still lies within the common field of view of the camera and the projector after the pose change, and then returning to step three;
step five, decoding all images in the object structured-light image set to obtain the coded information corresponding to each pixel; for the object-surface-modulated structured-light image groups captured with the projector pose fixed and the camera moving, matching the camera image pixels to the projector image pixels according to the coded information; for the image groups captured with the camera pose fixed and the projector pose changed, directly matching identical pixels of the two groups of modulated structured-light images captured before and after the projector moves; thereby constructing the spatial geometric constraint relations between the corresponding images of the camera and the projector at all poses;
step six, establishing a unified coordinate system, and solving all poses of the camera and the projector in the unified coordinate system through the structure-from-motion framework;
step seven, reconstructing the complete point cloud data of the measured object's surface in the unified coordinate system from the dense pixel matching pairs and the poses of all viewing angles of the camera and the projector;
and step eight, taking the pre-calibrated camera and projector intrinsic parameters, together with all computed pose parameters and the three-dimensional coordinates of the spatial points, as optimization variables, performing global optimization via bundle adjustment, and finally reconstructing the complete point cloud data of the measured object from the optimized parameters.
2. The method of claim 1, wherein in step five the specific method for constructing the spatial geometric constraint relations between the corresponding images of the camera and the projector at all poses is as follows: denote N_P as the total number of projector movements and P_i as the projector pose after the i-th movement; denote N_i as the total number of camera movements while the projector pose P_i remains fixed, and C_j^i as the camera pose after the j-th camera movement while the projector pose P_i remains fixed; denote I_j^i as the modulation image set captured at camera pose C_j^i, and S_i as the structured-light image set projected at projector pose P_i; for any modulation image sets I_j^i and I_k^i under the same projector pose P_i, and between any I_j^i and S_i, matching between the images is performed using identical coding information; for any two groups of modulation image sets captured at the same camera pose before and after a projector movement, identical pixels are directly taken as matching pixel pairs.
3. The method of claim 2, wherein in step six the camera poses C_j^i and projector poses P_i, where j ∈ {1, ..., N_i} and i ∈ {1, ..., N_P}, are solved through the structure-from-motion framework: first, discrete sampling is performed on all matched pixels to obtain a relatively sparse matching point set containing no fewer than 8 matching point pairs; the spatial geometric constraints are established with this sparse matching point set; then the poses of the projectors P_i and cameras C_j^i in the unified coordinate system are computed with the structure-from-motion framework.
4. The method of claim 3, wherein the structure-from-motion framework is an incremental three-dimensional reconstruction framework.
5. The method of claim 4, wherein in step six the coordinate system of projector pose P_1 is taken as the unified coordinate system; the fundamental matrix F of the projector and the camera is estimated from the discretely sampled matched pixel pairs between camera pose C_1^1 and projector pose P_1, the essential matrix E is then computed, and E is decomposed to obtain the poses of camera C_1^1 and projector P_1 in the unified coordinate system; the remaining poses are then solved incrementally in turn through the spatial geometric constraints.
6. The method of claim 5, wherein in step six, after the fundamental matrix F of the projector and the camera is estimated, the essential matrix E is obtained according to the formula E = K_C^T F K_P, where K_C and K_P are the intrinsic parameter matrices of the camera and the projector respectively; E is then decomposed by SVD to obtain the poses of camera C_1^1 and projector P_1.
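Step seven's reconstruction of spatial points from dense pixel matches and the solved poses is classically done by linear (DLT) triangulation; the following sketch illustrates the standard approach (an assumption of this edit, not the patent's implementation; the function name is hypothetical):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each view contributes two rows
    # of the homogeneous system A X = 0 built from its 3x4
    # projection matrix P = K[R|t] and pixel (u, v); the 3D point
    # is the right singular vector with the smallest singular value.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applying this to every matched pixel pair under the poses solved in step six yields the point cloud that bundle adjustment (step eight) then refines jointly with the poses and intrinsics.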
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010475819.9A CN111637850B (en) | 2020-05-29 | 2020-05-29 | Self-splicing surface point cloud measuring method without active visual marker |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111637850A CN111637850A (en) | 2020-09-08 |
CN111637850B true CN111637850B (en) | 2021-10-26 |
Family
ID=72326861
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102483319A (en) * | 2009-09-11 | 2012-05-30 | 瑞尼斯豪公司 | Non-contact object inspection |
CN104299211A (en) * | 2014-09-25 | 2015-01-21 | 周翔 | Free-moving type three-dimensional scanning method |
CN206596100U (en) * | 2017-03-29 | 2017-10-27 | 武汉嫦娥医学抗衰机器人股份有限公司 | A kind of high definition polyphaser full-view stereo imaging system |
US9952036B2 (en) * | 2015-11-06 | 2018-04-24 | Intel Corporation | Systems, methods, and apparatuses for implementing maximum likelihood image binarization in a coded light range camera |
WO2018171851A1 (en) * | 2017-03-20 | 2018-09-27 | 3Dintegrated Aps | A 3d reconstruction system |
CN109945841A (en) * | 2019-03-11 | 2019-06-28 | 南京航空航天大学 | A kind of industrial photogrammetry method of no encoded point |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109727277B (en) * | 2018-12-28 | 2022-10-28 | 江苏瑞尔医疗科技有限公司 | Body surface positioning tracking method for multi-eye stereo vision |
CN111189416B (en) * | 2020-01-13 | 2022-02-22 | 四川大学 | Structural light 360-degree three-dimensional surface shape measuring method based on characteristic phase constraint |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||