CN112284293A - Method for measuring space non-cooperative target fine three-dimensional morphology - Google Patents
- Publication number
- CN112284293A CN112284293A CN202011552948.XA CN202011552948A CN112284293A CN 112284293 A CN112284293 A CN 112284293A CN 202011552948 A CN202011552948 A CN 202011552948A CN 112284293 A CN112284293 A CN 112284293A
- Authority
- CN
- China
- Prior art keywords
- tof
- frame
- camera
- cooperative target
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention relates to a method for measuring the fine three-dimensional morphology of a space non-cooperative target. A TOF camera and a monocular camera are first jointly calibrated, and the fixed connection relation of the cameras in the TOF-monocular camera fusion measurement system is determined, so that the high-resolution texture map and the low-resolution depth map can be aligned to the same viewing angle. The high-resolution texture map of the monocular camera then guides super-resolution of the TOF camera's low-resolution depth map of the same scene, and the resulting high-resolution texture map and high-resolution depth map are applied to three-dimensional morphology measurement of the space non-cooperative target. The invention integrates the imaging advantages of the TOF and monocular cameras, compensates for the shortcomings of each camera used alone, and realizes fine three-dimensional shape measurement of space non-cooperative targets. The method is efficient and accurate and can be applied to various on-orbit servicing tasks for space non-cooperative targets.
Description
Technical Field
The invention relates to the field of image measurement, in particular to a method for measuring the three-dimensional morphology of a space non-cooperative target.
Background
With the development of aerospace technology and the increasing frequency of human space activities, research related to space non-cooperative targets has become more active. Most objects encountered in space missions are space non-cooperative targets: artificial space objects that cannot actively provide any effective cooperative information, including space debris, defunct and abandoned spacecraft, hostile spacecraft, and space weapons launched by them. Because a space non-cooperative target lacks prior information, the space environment background is complex, and image data are difficult to obtain stably, acquiring image information and accurately measuring the three-dimensional morphology of such targets has become one of the key technologies for completing space missions.
At present, the main equipment used by various countries for space non-cooperative target measurement tasks includes lidar, visible-light cameras and laser scanners. Because space non-cooperative targets lack texture features and the space illumination environment is harsh, visible-light cameras struggle to extract features stably; lidar has low resolution and is unsuitable for fine measurement; laser scanners are expensive, must actively scan the target, consume considerable power, and have poor real-time performance.
Disclosure of Invention
To address these problems, the invention provides a method for measuring the fine three-dimensional morphology of a space non-cooperative target.
The technical scheme adopted by the invention to solve the technical problem is as follows: a method for measuring the fine three-dimensional morphology of a space non-cooperative target, comprising the following steps:
step 1, building a TOF-monocular camera fusion measurement system;
step 2, calibrating a TOF-monocular camera fusion measurement system;
step 3, collecting and processing image data by using the TOF-monocular camera fusion measurement system;
step 4, reconstructing a spatial non-cooperative target three-dimensional point cloud according to the 2D-3D image data;
and 5, repeating step 3 and step 4, continuously reconstructing the space non-cooperative target three-dimensional point cloud from the 2D-3D image data, and registering each frame of point cloud one by one to form a dense and complete space non-cooperative target three-dimensional point cloud.
Preferably, the step 2 specifically comprises:
step 2.1, respectively and independently calibrating the TOF camera and the monocular camera by adopting the Zhang calibration method, obtaining the internal parameters and external parameters of each camera;
Step 2.2, the external parameters of the TOF camera and the monocular camera are jointly calibrated, and the fixed connection relation between the TOF camera and the monocular camera is obtained through formula (1);
Preferably, the step 3 specifically comprises:
step 3.1, synchronously acquiring the nth frame depth map and texture map of the space non-cooperative target by utilizing the TOF-monocular camera fusion measurement system;
Step 3.2, given the nth frame depth map and texture map, according to the fixed connection relation between the TOF camera and the monocular camera and combined with the actual application scene, the texture map obtained by the monocular camera is converted to the TOF camera viewing angle through the image pixel coordinate correspondence of formula (2), obtaining the texture map under the depth map viewing angle;
3.3, guiding super-resolution of the low-resolution depth map acquired by the TOF camera with the high-resolution texture map acquired by the monocular camera to obtain a high-resolution depth map; the robust weighted least squares optimization framework applied to texture-guided depth map super-resolution is defined by formula (3), in which the high-resolution depth map is continuously updated during the iterative solution: a data term constrains the super-resolved depth value of each pixel i toward its measured depth value, while a depth smoothing term with an empirically set weight penalizes, over a neighborhood centered on pixel i, the difference between the depth value of pixel i and the depth value of each neighboring pixel j; the texture-guide weight of the smoothing term is defined by formulas (4) and (5): a Gaussian function of the pixel distance measures spatial proximity, a Gaussian function of the difference between the texture-map grey values at pixels i and j measures pixel similarity, and the weight constants and custom parameters are set empirically according to the smoothness characteristics of the depth map; the low-resolution depth map is continuously and iteratively updated under the guidance of the high-resolution texture map to obtain the high-resolution depth map;
Step 3.4, the high-resolution depth map is recovered into a three-dimensional point cloud; setting the world coordinate system to coincide with the camera coordinate system, the world coordinates of each point are determined by its pixel position together with the corresponding depth value of the high-resolution depth map, so the high-resolution depth map can be recovered into a three-dimensional point cloud through formula (6);
Preferably, the step 4 specifically comprises:
step 4.1, repeating step 3 to obtain the processed texture map and three-dimensional point cloud of the (n+1)th frame;
Step 4.2, calculating an initial pose value from the 2D-2D texture maps, as follows:
solving the pose from the feature matching point pairs using the epipolar geometric constraint: the pose relation between the nth frame and the (n+1)th frame is represented by a rotation matrix and a translation vector; for a three-dimensional point Q in space, with pixel coordinates expressed in the pixel coordinate systems of the nth frame and (n+1)th frame images, the epipolar line equation (7) of the epipolar geometric constraint holds, in which the constraint couples the cross-product operation of the translation vector, the rotation matrix, and the monocular camera intrinsic matrix (its inverse, and the inverse of its transpose); a linear equation system is constructed by the eight-point method, and the pose is solved through singular value decomposition (SVD) to obtain the initial pose value;
And 4.3, carrying out accurate registration with the ICP algorithm according to the 3D-3D point clouds:
substituting the solved initial pose value into the ICP algorithm for inter-frame point cloud registration; the nth frame and (n+1)th frame three-dimensional point clouds obtained through formula (6) are expressed as two point sets containing a and b points respectively, whose corresponding points to be matched lie in the nth frame and (n+1)th frame three-dimensional point clouds;
the problem of constructing the spatial non-cooperative target three-dimensional point cloud is thereby converted into solving the Euclidean transformation between two adjacent frames of 3D point clouds, namely a rotation matrix and a translation vector such that each matched point pair coincides after transformation;
substituting the initial pose value into formula (9) for iterative solution yields the accurate rotation matrix and translation vector; then, through formula (8), the (n+1)th frame three-dimensional point cloud is registered to the nth frame coordinate system, forming a local point cloud map of the space non-cooperative target;
and 5, repeating step 3 and step 4, continuously reconstructing the space non-cooperative target three-dimensional point cloud from the 2D-3D image data, and registering each frame of point cloud one by one to form a dense and complete space non-cooperative target three-dimensional point cloud, realizing the fine three-dimensional morphology measurement of the space non-cooperative target.
Preferably, the TOF-monocular camera fusion measurement system comprises a TOF camera, a monocular camera and a data processing computer, wherein the TOF camera and the monocular camera are fixedly connected in the left-right direction and are connected with the data processing computer.
Compared with the prior art, the invention has the following beneficial effects:
1. firstly, carrying out combined calibration on a TOF-monocular camera, determining a fixed connection relation of the cameras, aligning a high-resolution texture map and a low-resolution depth map to the same visual angle, guiding the super-resolution of the TOF camera low-resolution depth map by using the high-resolution texture map of the monocular camera in the same scene, and applying the obtained high-resolution texture map and the high-resolution depth map to spatial non-cooperative target three-dimensional topography measurement;
2. the method integrates the imaging advantages of the TOF-monocular camera, makes up the defects of the monocular camera and the TOF camera in application, and realizes the precise three-dimensional shape measurement of the space non-cooperative target;
3. the method has the advantages of high efficiency, high precision and the like, and can be applied to various tasks of space non-cooperative target on-orbit service.
Drawings
FIG. 1 is a schematic view of a camera system calibration of the present invention;
wherein, 14, the monocular camera; 15. a TOF camera.
Detailed Description
The invention will now be described in detail with reference to fig. 1; the exemplary embodiments and descriptions are provided to explain the invention, not to limit it. TOF (Time of Flight) refers to the time-of-flight principle: TOF camera imaging is an active imaging mode in which the camera system emits laser light toward the target and calculates the distance to the target by measuring the time at which the sensor receives the light reflected from the target.
A method for measuring a space non-cooperative target fine three-dimensional shape comprises the following steps:
step 1, a TOF-monocular camera fusion measurement system is built, comprising a TOF camera 15, a monocular camera 14 and a data processing computer. The TOF camera and the monocular camera are fixedly connected side by side and share a common field of view; both are connected to the data processing computer, which synchronously acquires and stores the image data of the two cameras and performs the subsequent image processing and other operations;
step 2, the TOF-monocular camera fusion measurement system is calibrated, the steps are as follows,
and 2.1, describing the characteristics of the TOF and monocular camera lenses with the pinhole camera model, taking a checkerboard image as the calibration plate. The TOF camera and the monocular camera are respectively and independently calibrated by the method of "A flexible new technique for camera calibration" (published in IEEE Transactions on Pattern Analysis and Machine Intelligence in 2000), that is, the Zhang calibration method, obtaining the internal parameters and external parameters of each camera;
Step 2.2, the external parameters of the TOF camera and the monocular camera are jointly calibrated, and the fixed connection relation between the TOF camera and the monocular camera is obtained through formula (1);
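The joint extrinsic step above can be sketched in a few lines of numpy, assuming Zhang calibration has already produced each camera's rotation and translation with respect to the same checkerboard pose. The function and variable names are illustrative, not the patent's notation; the composition below is the usual form a formula-(1)-style fixed connection relation takes:

```python
import numpy as np

def joint_extrinsic(R_tof, t_tof, R_mono, t_mono):
    """Fixed connection relation between the two cameras, given each
    camera's extrinsics with respect to the SAME checkerboard pose:
        X_tof  = R_tof  @ X_w + t_tof
        X_mono = R_mono @ X_w + t_mono
    Eliminating the world point X_w gives
        X_tof = R_rel @ X_mono + t_rel."""
    R_rel = R_tof @ R_mono.T
    t_rel = t_tof - R_rel @ t_mono
    return R_rel, t_rel
```

With exact inputs the recovered relation reproduces whatever rigid transform was used to relate the two cameras' extrinsics.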
And step 3, acquiring and processing image data by using the TOF-monocular camera fusion measurement system, wherein the steps are as follows,
step 3.1, synchronously acquiring the nth frame depth map and texture map of the space non-cooperative target by utilizing the TOF-monocular camera fusion measurement system;
Step 3.2, given the nth frame depth map and texture map, according to the fixed connection relation between the TOF camera and the monocular camera and combined with the actual application scene, the texture map obtained by the monocular camera is converted to the TOF camera viewing angle through the image pixel coordinate correspondence of formula (2), obtaining the texture map under the depth map viewing angle;
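A minimal sketch of this formula-(2)-style view conversion, under assumed pinhole intrinsics and a fixed relation (R, t) taken here to map TOF-camera coordinates into monocular-camera coordinates. Nearest-neighbour sampling stands in for whatever interpolation an implementation would use, and all names are illustrative:

```python
import numpy as np

def warp_texture_to_tof(depth, tex, K_tof, K_mono, R, t):
    """Warp the monocular texture map into the TOF (depth-map) viewing
    angle: back-project every TOF pixel with its depth, transform the
    3-D point into the monocular frame with the fixed relation (R, t)
    (TOF -> mono), reproject with K_mono and sample the texture.
    Nearest-neighbour sampling keeps the sketch short."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - K_tof[0, 2]) * depth / K_tof[0, 0]
    y = (v - K_tof[1, 2]) * depth / K_tof[1, 1]
    pts = np.stack([x, y, depth], axis=-1) @ R.T + t   # into mono frame
    um = K_mono[0, 0] * pts[..., 0] / pts[..., 2] + K_mono[0, 2]
    vm = K_mono[1, 1] * pts[..., 1] / pts[..., 2] + K_mono[1, 2]
    ui = np.clip(np.round(um).astype(int), 0, tex.shape[1] - 1)
    vi = np.clip(np.round(vm).astype(int), 0, tex.shape[0] - 1)
    return tex[vi, ui]
```

When the two cameras coincide (identity relation, equal intrinsics) the warp is the identity; a small lateral offset shifts the sampled texture by the corresponding disparity.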
3.3, guiding super-resolution of the low-resolution depth map acquired by the TOF camera with the high-resolution texture map acquired by the monocular camera to obtain a high-resolution depth map; the robust weighted least squares optimization framework applied to texture-guided depth map super-resolution is defined by formula (3), in which the high-resolution depth map is continuously updated during the iterative solution: a data term constrains the super-resolved depth value of each pixel i toward its measured depth value, while a depth smoothing term with an empirically set weight penalizes, over a neighborhood centered on pixel i, the difference between the depth value of pixel i and the depth value of each neighboring pixel j; the texture-guide weight of the smoothing term is defined by formulas (4) and (5): a Gaussian function of the pixel distance measures spatial proximity, a Gaussian function of the difference between the texture-map grey values at pixels i and j measures pixel similarity, and the weight constants and custom parameters are set empirically according to the smoothness characteristics of the depth map; the low-resolution depth map is continuously and iteratively updated under the guidance of the high-resolution texture map to obtain the high-resolution depth map;
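The iterative update of formulas (3)-(5) can be sketched as a Jacobi-style fixed-point iteration on an already-upsampled depth map. The 4-neighbourhood, the parameter defaults and the periodic border handling via np.roll are simplifying assumptions of this sketch, not the patent's choices:

```python
import numpy as np

def guided_wls_depth(d0, gray, lam=1.0, sigma_g=0.1, sigma_s=1.0, iters=50):
    """Jacobi-style fixed-point iteration for a texture-guided weighted
    least squares objective of the form
        min_D  sum_i (D_i - d0_i)^2
             + lam * sum_i sum_{j in N(i)} w_ij * (D_i - D_j)^2
    where w_ij multiplies a spatial Gaussian (unit pixel distance) by a
    Gaussian on the grey-value difference of the guiding texture.
    d0 is the low-resolution depth already upsampled to texture size;
    np.roll gives periodic borders (a simplification)."""
    d = d0.copy()
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    w_s = np.exp(-1.0 / (2.0 * sigma_s ** 2))  # spatial weight at distance 1
    for _ in range(iters):
        num = d0.copy()
        den = np.ones_like(d0)
        for dy, dx in shifts:
            gj = np.roll(gray, (dy, dx), axis=(0, 1))
            dj = np.roll(d, (dy, dx), axis=(0, 1))
            w = lam * w_s * np.exp(-(gray - gj) ** 2 / (2.0 * sigma_g ** 2))
            num += w * dj
            den += w
        d = num / den
    return d
```

Because the texture-similarity Gaussian suppresses smoothing across grey-value edges, depth discontinuities aligned with texture edges are preserved while noise inside flat regions is averaged away.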
Step 3.4, the high-resolution depth map is recovered into a three-dimensional point cloud; setting the world coordinate system to coincide with the camera coordinate system, the world coordinates of each point are determined by its pixel position together with the corresponding depth value of the high-resolution depth map, so the high-resolution depth map can be recovered into a three-dimensional point cloud through formula (6);
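Step 3.4's back-projection (formula (6) style) is the standard pinhole inversion; a sketch with an assumed intrinsic-matrix layout (fx, fy on the diagonal, principal point cx, cy in the last column):

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Back-project a depth map into a 3-D point cloud with the world
    frame coinciding with the camera frame (as in step 3.4):
        Z = depth(u, v), X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)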
Step 4, reconstructing a spatial non-cooperative target three-dimensional point cloud according to the 2D-3D image data, which comprises the following steps,
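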
step 4.1, repeating step 3 to obtain the processed texture map and three-dimensional point cloud of the (n+1)th frame;
Step 4.2, calculating an initial pose value from the 2D-2D texture maps, as follows:
extracting SIFT feature points from the texture maps, matching them, and eliminating bad points: SIFT feature points between two adjacent frames are matched through fast approximate nearest neighbour search (FLANN), mismatches are eliminated with the RANSAC algorithm, and the correct feature matching point pairs are retained; the pose is then solved from the correct feature matching point pairs using the epipolar geometric constraint. The pose relation between the nth frame and the (n+1)th frame is represented by a rotation matrix and a translation vector; for a three-dimensional point Q in space, with pixel coordinates expressed in the respective pixel coordinate systems of the nth frame and (n+1)th frame images, the epipolar line equation of the epipolar geometric constraint is formula (7), in which the constraint couples the cross-product operation of the translation vector, the rotation matrix, and the monocular camera intrinsic matrix (its inverse, and the inverse of its transpose); a linear equation system is constructed by the eight-point method, and the pose is solved through singular value decomposition (SVD) to obtain the initial pose value;
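The eight-point construction and SVD solve described above can be sketched for the essential matrix on normalized coordinates (the intrinsic matrix already divided out). This is a bare linear estimate without the FLANN/RANSAC front end, and the function name is illustrative:

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Linear eight-point estimate of the essential matrix E from N >= 8
    normalized correspondences, using the epipolar constraint
    x2^T E x1 = 0 and an SVD solve, then projecting E onto the
    essential manifold (two equal singular values, one zero).
    x1, x2: (N, 2) arrays of normalized image coordinates."""
    n = len(x1)
    a = np.stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(n),
    ], axis=1)
    _, _, vt = np.linalg.svd(a)
    e = vt[-1].reshape(3, 3)          # null vector of the linear system
    u, _, vt = np.linalg.svd(e)
    return u @ np.diag([1.0, 1.0, 0.0]) @ vt
```

Decomposing the estimated E (e.g. with a further SVD) then yields the rotation and translation used as the initial pose value.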
And 4.3, carrying out accurate registration with the ICP algorithm according to the 3D-3D point clouds:
substituting the solved initial pose value into the ICP algorithm for inter-frame point cloud registration; the nth frame and (n+1)th frame three-dimensional point clouds obtained through formula (6) are expressed as two point sets containing a and b points respectively, whose corresponding points to be matched lie in the nth frame and (n+1)th frame three-dimensional point clouds;
the problem of constructing the spatial non-cooperative target three-dimensional point cloud is thereby converted into solving the Euclidean transformation between two adjacent frames of 3D point clouds, namely a rotation matrix and a translation vector such that each matched point pair coincides after transformation;
substituting the initial pose value into formula (9) for iterative solution yields the accurate rotation matrix and translation vector; then, through formula (8), the (n+1)th frame three-dimensional point cloud is registered to the nth frame coordinate system, forming a local point cloud map of the space non-cooperative target;
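Each ICP iteration's closed-form pose solve, the formula-(9)-style minimization over matched points, is commonly done with an SVD (Kabsch) step. A sketch with known correspondences, omitting the nearest-neighbour re-matching loop that full ICP alternates with:

```python
import numpy as np

def rigid_align(p, q):
    """Closed-form (Kabsch/SVD) solve of
        min_{R, t} sum_k || q_k - (R @ p_k + t) ||^2
    for matched point sets p, q of shape (N, 3). Inside ICP this solve
    alternates with re-matching nearest neighbours (omitted here)."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    h = (p - cp).T @ (q - cq)               # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cq - R @ cp
    return R, t
```

With exact correspondences the solve recovers the inter-frame rotation and translation in one step; with noisy matches it returns the least-squares optimum for the current matching.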
and 5, repeating step 3 and step 4, continuously reconstructing the space non-cooperative target three-dimensional point cloud from the 2D-3D image data, and registering each frame of point cloud one by one to form a dense and complete space non-cooperative target three-dimensional point cloud, realizing the fine three-dimensional morphology measurement of the space non-cooperative target.
In the implementation process, as shown in fig. 1, the TOF camera 15 and the monocular camera 14 are fixedly mounted side by side and both connected to the data processing computer, forming the TOF-monocular camera fusion measurement system. Twenty to thirty checkerboard pictures are shot simultaneously by the two cameras, and the TOF camera and the monocular camera are each independently calibrated with the Zhang calibration method, obtaining their internal and external parameters. After these parameters are determined, the TOF camera and the monocular camera are jointly calibrated, and the fixed connection relation between them is obtained through formula (1). The nth frame depth map and texture map of the space non-cooperative target are then synchronously acquired with the TOF-monocular camera fusion measurement system; according to the fixed connection relation, combined with the actual application scene, the texture map obtained by the monocular camera is converted to the TOF camera viewing angle by the image pixel coordinate correspondence of formula (2), aligning texture and depth and yielding the texture map under the depth map viewing angle. The low-resolution depth map is then continuously and iteratively updated under the guidance of the high-resolution texture map, and the high-resolution depth map is calculated through formulas (3), (4) and (5).
The high-resolution texture map acquired by the monocular camera thus guides super-resolution of the low-resolution depth map acquired by the TOF camera, and the high-resolution depth map and high-resolution color map are further processed. The high-resolution depth map is recovered into a three-dimensional point cloud through formula (6); the initial pose value is calculated from the 2D-2D texture maps through formula (7); and the 3D point clouds obtained from the depth maps are matched by substituting the initial pose value into the ICP algorithm through formulas (8) and (9), forming a local point cloud map of the space non-cooperative target. By repeating acquisition and reconstruction, the space non-cooperative target three-dimensional point cloud is continuously reconstructed from the 2D-3D image data and each frame of point cloud is registered one by one, outputting a dense three-dimensional point cloud of the space non-cooperative target and realizing fine three-dimensional morphology measurement based on TOF-monocular camera fusion.
The technical solutions provided by the embodiments of the present invention are described in detail above, and the principles and embodiments of the present invention are explained herein by using specific examples, and the descriptions of the embodiments are only used to help understanding the principles of the embodiments of the present invention; meanwhile, for a person skilled in the art, according to the embodiments of the present invention, there may be variations in the specific implementation manners and application ranges, and in summary, the content of the present description should not be construed as a limitation to the present invention.
Claims (5)
1. A method for measuring the fine three-dimensional morphology of a space non-cooperative target, characterized by comprising the following steps:
step 1, building a TOF-monocular camera fusion measurement system;
step 2, calibrating a TOF-monocular camera fusion measurement system;
step 3, collecting and processing image data by using the TOF-monocular camera fusion measurement system;
step 4, reconstructing a spatial non-cooperative target three-dimensional point cloud according to the 2D-3D image data;
and 5, repeating step 3 and step 4, continuously reconstructing the space non-cooperative target three-dimensional point cloud from the 2D-3D image data, and registering each frame of point cloud one by one to form a dense and complete space non-cooperative target three-dimensional point cloud.
2. The method for measuring the fine three-dimensional morphology of the spatially non-cooperative target according to claim 1, wherein the step 2 specifically comprises:
step 2.1, respectively and independently calibrating the TOF camera and the monocular camera by adopting the Zhang calibration method, obtaining the internal parameters and external parameters of each camera;
Step 2.2, carrying out combined calibration on the external parameters of the TOF camera (15) and the monocular camera (14), and obtaining the fixed connection relation between the TOF camera and the monocular camera according to formula (1);
3. The method for measuring the fine three-dimensional morphology of the spatially non-cooperative target according to claim 2, wherein the step 3 specifically comprises:
step 3.1, synchronously acquiring the nth frame depth map and texture map of the space non-cooperative target by utilizing the TOF-monocular camera fusion measurement system;
Step 3.2, given the nth frame depth map and texture map, according to the fixed connection relation between the TOF camera and the monocular camera and combined with the actual application scene, the texture map obtained by the monocular camera is converted to the TOF camera viewing angle through the image pixel coordinate correspondence of formula (2), obtaining the texture map under the depth map viewing angle;
3.3, guiding super-resolution of the low-resolution depth map acquired by the TOF camera with the high-resolution texture map acquired by the monocular camera to obtain a high-resolution depth map; the robust weighted least squares optimization framework applied to texture-guided depth map super-resolution is defined by formula (3), in which the high-resolution depth map is continuously updated during the iterative solution: a data term constrains the super-resolved depth value of each pixel i toward its measured depth value, while a depth smoothing term with an empirically set weight penalizes, over a neighborhood centered on pixel i, the difference between the depth value of pixel i and the depth value of each neighboring pixel j; the texture-guide weight of the smoothing term is defined by formulas (4) and (5): a Gaussian function of the pixel distance measures spatial proximity, a Gaussian function of the difference between the texture-map grey values at pixels i and j measures pixel similarity, and the weight constants and custom parameters are set empirically according to the smoothness characteristics of the depth map; the low-resolution depth map is continuously and iteratively updated under the guidance of the high-resolution texture map to obtain the high-resolution depth map;
Step 3.4, the high-resolution depth map is recovered into a three-dimensional point cloud; setting the world coordinate system to coincide with the camera coordinate system, the world coordinates of each point are determined by its pixel position together with the corresponding depth value of the high-resolution depth map, so the high-resolution depth map can be recovered into a three-dimensional point cloud through formula (6);
4. The method for measuring the fine three-dimensional morphology of the spatially non-cooperative target according to claim 3, wherein the step 4 specifically comprises:
step 4.1, repeating step 3 to obtain the processed texture map and three-dimensional point cloud of the (n+1)th frame;
Step 4.2, calculating an initial pose value from the 2D-2D texture maps, as follows:
solving the pose by utilizing epipolar geometric constraint according to the characteristic matching point pairsnFrame and secondn+1 frame position and attitude relation rotation matrixAnd translation vectorRepresenting a three-dimensional point Q in space atnFrame and secondnIn the pixel coordinate system of the +1 frame image, the pixel coordinates are expressed asEpipolar line equation (7) for epipolar geometric constraints is as follows:
wherein ^∧ denotes the cross-product (skew-symmetric) operation, K denotes the intrinsic matrix of the monocular camera, K^{-1} and K^{-T} respectively denote the inverse of the camera intrinsic matrix and the transpose of that inverse, and E = t^∧ R denotes the essential matrix. A linear equation system can be constructed by the eight-point method and the pose solved through Singular Value Decomposition (SVD) to obtain the initial value;
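A sketch of the eight-point method with SVD described in step 4.2, assuming the matched points have already been normalized by K⁻¹ (function and variable names are illustrative, not from the patent; a robust implementation would add RANSAC and point conditioning):

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Estimate the essential matrix from >= 8 normalized point pairs.

    x1, x2: Nx2 arrays of matched points in normalized camera coordinates
    (pixel coordinates premultiplied by K^{-1}). Each pair contributes one
    row of the linear system A . vec(E) = 0, solved by SVD; the rank-2
    constraint is then enforced on the candidate E.
    """
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for k in range(n):
        u1, v1 = x1[k]
        u2, v2 = x2[k]
        # epipolar constraint [u2 v2 1] E [u1 v1 1]^T = 0, expanded row-wise
        A[k] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)                 # null vector -> candidate E
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0                  # enforce two equal singular values
    return U @ np.diag([s, s, 0.0]) @ Vt     # and rank 2
```

R and t would then be recovered by decomposing E (four candidate solutions, disambiguated by a cheirality check), which gives the initial value fed into ICP.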
Step 4.3, perform accurate registration with the ICP algorithm on the 3D-3D point clouds:
substitute the solved initial pose value (R_0, t_0) into the ICP algorithm for inter-frame point cloud registration. By formula (6), the three-dimensional point clouds P_n and P_{n+1} of frame n and frame n+1 are respectively expressed as:
wherein a and b respectively denote the numbers of points in the frame-n and frame-(n+1) three-dimensional point clouds, and p_i, q_i respectively denote a pair of corresponding points to be matched in the frame-n and frame-(n+1) three-dimensional point clouds;
the problem of constructing the spatial non-cooperative target three-dimensional point cloud is thereby converted into solving the Euclidean transformation between two adjacent frames of 3D point clouds: a rotation matrix R and a translation vector t such that, for each matched pair i:

p_i = R q_i + t    (8)
substitute the initial pose value (R_0, t_0) for iterative solution to obtain the accurate rotation matrix R and translation vector t; then, through formula (8), the frame-(n+1) three-dimensional point cloud P_{n+1} can be registered into the frame-n coordinate system, forming a local point cloud map of the space non-cooperative target;
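The Euclidean transformation solved inside each ICP iteration of step 4.3 has a well-known closed form via SVD (the Kabsch/Umeyama alignment). A sketch assuming correspondences are already known; a full ICP loop would re-match nearest neighbors and repeat until convergence:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form (R, t) minimizing sum_i || p_i - (R q_i + t) ||^2.

    P, Q: Nx3 arrays of corresponding points (frame n and frame n+1).
    This is the SVD-based alignment step run inside each ICP iteration.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)             # centroids
    H = (Q - cq).T @ (P - cp)                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cp - R @ cq
    return R, t
```

With a good initial pose from the 2D-2D step, ICP converges to the correct local minimum even when the inter-frame motion is not small.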
and step 5, repeating step 3 and step 4 to reconstruct the space non-cooperative target three-dimensional point clouds from the 2D-3D image data, registering each frame of point cloud one by one to form a dense and complete three-dimensional point cloud of the space non-cooperative target and realize the fine three-dimensional morphology measurement of the space non-cooperative target.
5. The method for measuring the fine three-dimensional morphology of the space non-cooperative target according to claim 1, wherein the TOF-monocular camera fusion measurement system comprises a TOF camera, a monocular camera and a data processing computer; the TOF camera and the monocular camera are fixedly mounted side by side, and both are connected to the data processing computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011552948.XA CN112284293B (en) | 2020-12-24 | 2020-12-24 | Method for measuring space non-cooperative target fine three-dimensional morphology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112284293A true CN112284293A (en) | 2021-01-29 |
CN112284293B CN112284293B (en) | 2021-04-02 |
Family
ID=74426167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011552948.XA Active CN112284293B (en) | 2020-12-24 | 2020-12-24 | Method for measuring space non-cooperative target fine three-dimensional morphology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112284293B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09100662A (en) * | 1995-07-28 | 1997-04-15 | Miwa Lock Co Ltd | Key information reading mechanism |
CN108399610A (en) * | 2018-03-20 | 2018-08-14 | 上海应用技术大学 | A kind of depth image enhancement method of fusion RGB image information |
CN111223059A (en) * | 2020-01-04 | 2020-06-02 | 西安交通大学 | Robust depth map structure reconstruction and denoising method based on guide filter |
US20200226824A1 (en) * | 2017-11-10 | 2020-07-16 | Guangdong Kang Yun Technologies Limited | Systems and methods for 3d scanning of objects by providing real-time visual feedback |
CN111982071A (en) * | 2019-05-24 | 2020-11-24 | Tcl集团股份有限公司 | 3D scanning method and system based on TOF camera |
Non-Patent Citations (3)
Title |
---|
He Ying: "Research on Point-Cloud-Based 3D Reconstruction and Close-Range Pose Measurement of Non-Cooperative Targets", China Doctoral Dissertations Full-text Database, Engineering Science & Technology II * |
Ye Qin et al.: "Kinect Point Cloud Registration Method Based on Epipolar and Coplanarity Constraints", Journal of Wuhan University, Information Science Edition * |
Li Sanchun: "Research on Preprocessing and Quantized Coding Methods for RGBD Video Sequences", China Master's Theses Full-text Database, Information Science & Technology * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113674353A (en) * | 2021-08-18 | 2021-11-19 | 中国人民解放军国防科技大学 | Method for measuring accurate pose of space non-cooperative target |
CN113674353B (en) * | 2021-08-18 | 2023-05-16 | 中国人民解放军国防科技大学 | Accurate pose measurement method for space non-cooperative target |
Also Published As
Publication number | Publication date |
---|---|
CN112284293B (en) | 2021-04-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||