CN112284293A - Method for measuring space non-cooperative target fine three-dimensional morphology - Google Patents

Method for measuring space non-cooperative target fine three-dimensional morphology

Info

Publication number
CN112284293A
Authority
CN
China
Prior art keywords
tof
frame
camera
cooperative target
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011552948.XA
Other languages
Chinese (zh)
Other versions
CN112284293B (en)
Inventor
刘海波
刘子宾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202011552948.XA
Publication of CN112284293A
Application granted
Publication of CN112284293B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for measuring the fine three-dimensional morphology of a space non-cooperative target. A TOF camera and a monocular camera are first jointly calibrated to determine the fixed connection relation between the two cameras of a TOF-monocular camera fusion measurement system; the high-resolution texture map and the low-resolution depth map are aligned to the same viewing angle, and the high-resolution texture map of the monocular camera is used to guide super-resolution of the low-resolution depth map of the TOF camera in the same scene; the resulting high-resolution texture map and high-resolution depth map are then applied to three-dimensional morphology measurement of the space non-cooperative target. The invention combines the imaging advantages of the TOF camera and the monocular camera, compensates for the shortcomings of each camera used alone, and realizes fine three-dimensional morphology measurement of space non-cooperative targets. The method is efficient and accurate, and can be applied to various on-orbit servicing tasks for space non-cooperative targets.

Description

Method for measuring space non-cooperative target fine three-dimensional morphology
Technical Field
The invention relates to the field of image measurement, in particular to a method for measuring the three-dimensional morphology of a space non-cooperative target.
Background
With the development of aerospace technology and the increasing frequency of human space activities, research on space non-cooperative targets has become more active. Most objects faced by space missions are space non-cooperative targets, that is, artificial space objects that cannot actively provide any effective cooperative information, including space debris, defunct and abandoned spacecraft, hostile spacecraft, and space weapons launched by them. Because a space non-cooperative target lacks prior information, the on-orbit background environment is complex, and image data are difficult to acquire stably, obtaining image information and accurately measuring the three-dimensional morphology of such targets has become one of the key technologies for completing space missions.
At present, the main equipment used for space non-cooperative target measurement tasks in various countries includes lidar, visible-light cameras, laser scanners and the like. Because space non-cooperative targets lack texture features and the space illumination environment is harsh, visible-light cameras have difficulty extracting features stably; lidar has low resolution and is not suited to fine measurement; laser scanners are expensive, require active scanning of the target, consume considerable power, and have poor real-time performance.
Disclosure of Invention
In view of these problems, the invention provides a method for measuring the fine three-dimensional morphology of a space non-cooperative target.
The technical solution adopted by the invention to solve the technical problem is as follows: a method for measuring the fine three-dimensional morphology of a space non-cooperative target comprises the following steps:
step 1, building a TOF-monocular camera fusion measurement system;
step 2, calibrating a TOF-monocular camera fusion measurement system;
step 3, collecting and processing image data by using the TOF-monocular camera fusion measurement system;
step 4, reconstructing a spatial non-cooperative target three-dimensional point cloud according to the 2D-3D image data;
step 5, repeating step 3 and step 4, continuously reconstructing the space non-cooperative target three-dimensional point cloud from the 2D-3D image data, and registering each frame of point cloud one by one to form a dense and complete space non-cooperative target three-dimensional point cloud.
Preferably, the step 2 specifically comprises:
Step 2.1, independently calibrating the TOF camera and the monocular camera by the Zhang calibration method to obtain the internal parameters $K_{tof}$, $K_{mono}$ and the external parameters $(R_{tof}, t_{tof})$, $(R_{mono}, t_{mono})$ of the TOF camera and the monocular camera respectively;
Step 2.2, jointly calibrating the external parameters of the TOF camera and the monocular camera, the fixed connection relation $(R_{rel}, t_{rel})$ between the TOF camera and the monocular camera being obtained through formula (1):
$R_{rel} = R_{mono} R_{tof}^{-1}, \qquad t_{rel} = t_{mono} - R_{mono} R_{tof}^{-1} t_{tof}$   (1)
Preferably, the step 3 specifically comprises:
Step 3.1, synchronously acquiring the $n$-th frame depth map $D_n$ and texture map $I_n$ of the space non-cooperative target with the TOF-monocular camera fusion measurement system;
Step 3.2, letting the homogeneous pixel coordinates of the $n$-th frame depth map $D_n$ and texture map $I_n$ be $p_{tof} = (u_{tof}, v_{tof}, 1)^T$ and $p_{mono} = (u_{mono}, v_{mono}, 1)^T$ respectively; according to the fixed connection relation $(R_{rel}, t_{rel})$ between the TOF camera and the monocular camera and in combination with the actual application scene, the texture map acquired by the monocular camera is converted to the TOF camera viewing angle through the image pixel coordinate correspondence of formula (2), giving the texture map under the depth-map viewing angle, recorded as $I_n^{tof}$:
$Z_{mono}\, p_{mono} = K_{mono}\left( R_{rel}\, Z_{tof}\, K_{tof}^{-1}\, p_{tof} + t_{rel} \right)$   (2)
where $Z_{tof}$ is the depth measured at the TOF pixel $p_{tof}$ and $Z_{mono}$ is the corresponding depth in the monocular camera frame;
Step 3.3, using the high-resolution texture map acquired by the monocular camera to guide super-resolution of the low-resolution depth map acquired by the TOF camera, so as to obtain a high-resolution depth map $D_n^H$; the robust weighted least-squares optimization framework applied to texture-map-guided depth-map super-resolution is defined as follows:
$\min_{D} \sum_{i} \left[ \left( D_i - g_i \right)^2 + \lambda \sum_{j \in N(i)} w_{i,j} \left( D_i - D_j \right)^2 \right]$   (3)
wherein $D$ represents the high-resolution depth map that is continuously updated during the iterative solution of formula (3), $g_i$ represents the depth value of pixel $i$ in the input depth map, $D_i$ represents the depth value of pixel $i$ after super-resolution, $\lambda$ represents the weight of the depth smoothing term and is set empirically, $N(i)$ represents the neighborhood centered on pixel $i$, $D_j$ represents the depth value of neighboring pixel $j$, and the weight $w_{i,j}$ is defined as follows:
$w_{i,j} = w^{I}_{i,j} \cdot w^{S}_{i,j}$   (4)
$w^{I}_{i,j} = \exp\left( -\frac{(I_i - I_j)^2}{2\sigma_1^2} \right), \qquad w^{S}_{i,j} = \exp\left( -\frac{\| i - j \|^2}{2\sigma_2^2} \right)$   (5)
wherein $w^{I}_{i,j}$ represents the texture-guide weight, $w^{S}_{i,j}$ is a Gaussian function of the pixel distance used to measure the similarity of pixels, $I_i$ and $I_j$ represent the gray values of the texture map at pixels $i$ and $j$ respectively, $\sigma_1$ and $\sigma_2$ represent weight constants, and $\varepsilon$ is a self-defined parameter that is adjusted according to the smoothness of the depth map and set empirically; the low-resolution depth map and the high-resolution texture map are iteratively updated to obtain the high-resolution depth map $D_n^H$;
Step 3.4, recovering the high-resolution depth map $D_n^H$ into a three-dimensional point cloud $P_n$; setting the world coordinate system to coincide with the camera coordinate system, the world coordinates of a point of the three-dimensional point cloud are $(X, Y, Z)$, with $d = D_n^H(u, v)$ denoting the depth value of the high-resolution depth map at pixel $(u, v)$; the high-resolution depth map can then be recovered into the three-dimensional point cloud $P_n$ through formula (6):
$X = (u - c_x)\, d / f_x, \qquad Y = (v - c_y)\, d / f_y, \qquad Z = d$   (6)
wherein $f_x$, $f_y$, $c_x$, $c_y$ are the focal lengths and principal-point coordinates of the TOF camera intrinsic matrix $K_{tof}$.
Preferably, the step 4 specifically comprises:
Step 4.1, repeating step 3 to obtain the processed $(n+1)$-th frame texture map $I_{n+1}^{tof}$ and three-dimensional point cloud $P_{n+1}$;
Step 4.2, calculating the initial pose value $(R_0, t_0)$ from the 2D-2D texture maps, the steps being as follows:
the pose is solved with the epipolar geometric constraint according to the feature matching point pairs; the pose relation between the $n$-th frame and the $(n+1)$-th frame is represented by a rotation matrix $R$ and a translation vector $t$; letting the pixel coordinates of a three-dimensional point $Q$ in space in the pixel coordinate systems of the $n$-th frame and $(n+1)$-th frame images be $p_n$ and $p_{n+1}$ respectively, the epipolar equation (7) of the epipolar geometric constraint is as follows:
$p_{n+1}^{T}\, K^{-T}\, [t]_{\times} R\, K^{-1}\, p_{n} = 0$   (7)
wherein $[t]_{\times}$ represents the cross-product (skew-symmetric matrix) operation, $K$ represents the intrinsic matrix of the monocular camera, $K^{-1}$ and $K^{-T}$ represent the inverse of the camera intrinsic matrix and the inverse of its transpose respectively, and $E = [t]_{\times} R$ represents the essential matrix; a linear equation system can be constructed by the eight-point method, and the pose is solved by singular value decomposition (SVD) to obtain the initial values $(R_0, t_0)$;
Step 4.3, performing accurate ICP registration according to the 3D-3D point clouds:
the solved initial pose value $(R_0, t_0)$ is substituted into the ICP algorithm for inter-frame point cloud registration; the $n$-th frame and $(n+1)$-th frame three-dimensional point clouds obtained by formula (6) are respectively expressed as
$P_n = \{ p_1, \dots, p_a \}, \qquad P_{n+1} = \{ q_1, \dots, q_b \}$
wherein $a$ and $b$ represent the numbers of points in the $n$-th frame and $(n+1)$-th frame three-dimensional point clouds respectively, and $p_i$ and $q_i$ represent corresponding points to be matched in the $n$-th frame and $(n+1)$-th frame three-dimensional point clouds; the problem of constructing the space non-cooperative target three-dimensional point cloud is thus converted into solving the Euclidean transformation between two adjacent frames of 3D point clouds, that is, a rotation matrix $R$ and a translation vector $t$ such that:
$p_i = R\, q_i + t$   (8)
a least-squares problem is constructed and solved so as to minimize the sum of squared errors $e$:
$\min_{R, t}\; \frac{1}{2} \sum_{i} \left\| p_i - ( R\, q_i + t ) \right\|_2^2$   (9)
the initial pose value $(R_0, t_0)$ is substituted into formula (9) for iterative solution to obtain the accurate rotation matrix $R$ and translation vector $t$; then, through formula (8), the $(n+1)$-th frame three-dimensional point cloud $P_{n+1}$ can be registered into the coordinate system of the $n$-th frame, forming a local point cloud map of the space non-cooperative target;
Step 5, repeating step 3 and step 4, reconstructing the space non-cooperative target three-dimensional point cloud from the 2D-3D image data, and registering each frame of point cloud one by one to form a dense and complete space non-cooperative target three-dimensional point cloud, realizing fine three-dimensional morphology measurement of the space non-cooperative target.
Preferably, the TOF-monocular camera fusion measurement system comprises a TOF camera, a monocular camera and a data processing computer, wherein the TOF camera and the monocular camera are fixedly mounted side by side and are both connected with the data processing computer.
Compared with the prior art, the invention has the following beneficial effects:
1. firstly, carrying out combined calibration on a TOF-monocular camera, determining a fixed connection relation of the cameras, aligning a high-resolution texture map and a low-resolution depth map to the same visual angle, guiding the super-resolution of the TOF camera low-resolution depth map by using the high-resolution texture map of the monocular camera in the same scene, and applying the obtained high-resolution texture map and the high-resolution depth map to spatial non-cooperative target three-dimensional topography measurement;
2. the method integrates the imaging advantages of the TOF-monocular camera, makes up the defects of the monocular camera and the TOF camera in application, and realizes the precise three-dimensional shape measurement of the space non-cooperative target;
3. the method has the advantages of high efficiency, high precision and the like, and can be applied to various tasks of space non-cooperative target on-orbit service.
Drawings
FIG. 1 is a schematic view of a camera system calibration of the present invention;
wherein: 14, monocular camera; 15, TOF camera.
Detailed Description
The invention will now be described in detail with reference to fig. 1; the exemplary embodiments and descriptions are provided to explain the invention, not to limit it. TOF (Time of Flight) means "time of flight": TOF camera imaging is an active imaging mode, i.e. the camera system emits laser light toward the target and calculates the distance to the target by measuring the time at which the sensor receives the light reflected from the target.
A method for measuring a space non-cooperative target fine three-dimensional shape comprises the following steps:
Step 1, a TOF-monocular camera fusion measurement system is built, comprising a TOF camera 15, a monocular camera 14 and a data processing computer. The TOF camera and the monocular camera are fixedly mounted side by side with a common field of view and are both connected with the data processing computer; the data processing computer synchronously acquires and stores the image data of the TOF camera and the monocular camera and performs the subsequent image processing and other operations;
Step 2, the TOF-monocular camera fusion measurement system is calibrated, as follows:
Step 2.1, the TOF and monocular camera lenses are described by the pinhole camera model, and a checkerboard image is used as the calibration board. The TOF camera and the monocular camera are independently calibrated by the method of "A flexible new technique for camera calibration" (IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000), i.e. the Zhang calibration method, to obtain the internal parameters $K_{tof}$, $K_{mono}$ and the external parameters $(R_{tof}, t_{tof})$, $(R_{mono}, t_{mono})$ of the TOF camera and the monocular camera respectively;
Step 2.2, the external parameters of the TOF camera and the monocular camera are jointly calibrated, and the fixed connection relation $(R_{rel}, t_{rel})$ between the TOF camera and the monocular camera is obtained through formula (1):
$R_{rel} = R_{mono} R_{tof}^{-1}, \qquad t_{rel} = t_{mono} - R_{mono} R_{tof}^{-1} t_{tof}$   (1)
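By way of illustration only (not part of the claimed method), the following minimal Python sketch composes per-camera extrinsics from the Zhang calibration, taken for the same checkerboard pose, into a fixed connection relation of the form of formula (1); the function name and variables are illustrative assumptions.

```python
import numpy as np

def fixed_connection(R_tof, t_tof, R_mono, t_mono):
    """Relative transform mapping TOF-camera coordinates to monocular-camera coordinates,
    assuming both extrinsics refer to the same checkerboard pose:
    P_mono = R_rel @ P_tof + t_rel."""
    R_rel = R_mono @ R_tof.T          # R_tof is orthonormal, so its inverse is its transpose
    t_rel = t_mono - R_rel @ t_tof
    return R_rel, t_rel

# Example with synthetic extrinsics of the two cameras for the same board pose
R_tof, t_tof = np.eye(3), np.zeros(3)
R_mono, t_mono = np.eye(3), np.array([-0.1, 0.0, 0.0])
R_rel, t_rel = fixed_connection(R_tof, t_tof, R_mono, t_mono)
print(R_rel)   # identity rotation
print(t_rel)   # [-0.1, 0, 0]
```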
Step 3, image data are acquired and processed with the TOF-monocular camera fusion measurement system, as follows:
Step 3.1, the $n$-th frame depth map $D_n$ and texture map $I_n$ of the space non-cooperative target are synchronously acquired with the TOF-monocular camera fusion measurement system;
Step 3.2, letting the homogeneous pixel coordinates of the $n$-th frame depth map $D_n$ and texture map $I_n$ be $p_{tof} = (u_{tof}, v_{tof}, 1)^T$ and $p_{mono} = (u_{mono}, v_{mono}, 1)^T$ respectively, and according to the fixed connection relation $(R_{rel}, t_{rel})$ between the TOF camera and the monocular camera and the actual application scene, the texture map acquired by the monocular camera is converted to the TOF camera viewing angle through the image pixel coordinate correspondence of formula (2), giving the texture map under the depth-map viewing angle, recorded as $I_n^{tof}$:
$Z_{mono}\, p_{mono} = K_{mono}\left( R_{rel}\, Z_{tof}\, K_{tof}^{-1}\, p_{tof} + t_{rel} \right)$   (2)
where $Z_{tof}$ is the depth measured at the TOF pixel $p_{tof}$ and $Z_{mono}$ is the corresponding depth in the monocular camera frame;
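As an illustration of the pixel correspondence of formula (2), the sketch below back-projects each TOF pixel with its measured depth, transforms it with the fixed connection relation, and projects it into the monocular image to sample a gray value. The nearest-neighbour sampling and all variable names are illustrative assumptions, not the exact implementation of the invention.

```python
import numpy as np

def texture_to_tof_view(depth, texture, K_tof, K_mono, R_rel, t_rel):
    """Resample the monocular texture map at the TOF (depth-map) viewing angle."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N homogeneous pixels
    rays = np.linalg.inv(K_tof) @ pix                                    # back-project to unit-depth rays
    P_tof = rays * depth.reshape(1, -1)                                  # 3D points in the TOF frame
    P_mono = R_rel @ P_tof + t_rel.reshape(3, 1)                         # into the monocular frame
    p_mono = K_mono @ P_mono                                             # project with mono intrinsics
    um = np.round(p_mono[0] / p_mono[2]).astype(int)                     # nearest-neighbour sampling
    vm = np.round(p_mono[1] / p_mono[2]).astype(int)
    tex_tof = np.zeros_like(depth, dtype=float)
    ok = (depth.reshape(-1) > 0) & (um >= 0) & (um < texture.shape[1]) & (vm >= 0) & (vm < texture.shape[0])
    tex_tof.reshape(-1)[ok] = texture[vm[ok], um[ok]]
    return tex_tof
```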
Step 3.3, the high-resolution texture map acquired by the monocular camera guides super-resolution of the low-resolution depth map acquired by the TOF camera, so as to obtain a high-resolution depth map $D_n^H$; the robust weighted least-squares optimization framework applied to texture-map-guided depth-map super-resolution is defined as follows:
$\min_{D} \sum_{i} \left[ \left( D_i - g_i \right)^2 + \lambda \sum_{j \in N(i)} w_{i,j} \left( D_i - D_j \right)^2 \right]$   (3)
wherein $D$ represents the high-resolution depth map that is continuously updated during the iterative solution of formula (3), $g_i$ represents the depth value of pixel $i$ in the input depth map, $D_i$ represents the depth value of pixel $i$ after super-resolution, $\lambda$ represents the weight of the depth smoothing term and is set empirically, $N(i)$ represents the neighborhood centered on pixel $i$, $D_j$ represents the depth value of neighboring pixel $j$, and the weight $w_{i,j}$ is defined as follows:
$w_{i,j} = w^{I}_{i,j} \cdot w^{S}_{i,j}$   (4)
$w^{I}_{i,j} = \exp\left( -\frac{(I_i - I_j)^2}{2\sigma_1^2} \right), \qquad w^{S}_{i,j} = \exp\left( -\frac{\| i - j \|^2}{2\sigma_2^2} \right)$   (5)
wherein $w^{I}_{i,j}$ represents the texture-guide weight, $w^{S}_{i,j}$ is a Gaussian function of the pixel distance used to measure the similarity of pixels, $I_i$ and $I_j$ represent the gray values of the texture map at pixels $i$ and $j$ respectively, $\sigma_1$ and $\sigma_2$ represent weight constants, and $\varepsilon$ is a self-defined parameter that is adjusted according to the smoothness of the depth map and set empirically; the low-resolution depth map and the high-resolution texture map are iteratively updated, and the resulting high-resolution depth map is recorded as $D_n^H$.
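By way of illustration only, one possible fixed-point solver for a weighted least-squares objective of the form of formula (3) over a 3x3 neighbourhood is sketched below; the parameter values, the neighbourhood size, the periodic boundary handling and the simple update rule are illustrative assumptions rather than the optimization actually claimed.

```python
import numpy as np

def guided_depth_sr(depth_up, texture, lam=10.0, sigma1=10.0, sigma2=1.5, n_iter=50):
    """Texture-guided depth refinement on the high-resolution grid.
    depth_up : low-resolution depth up-sampled to the texture size (the g term in (3)).
    texture  : gray-level texture map aligned to the depth view (the I term in (5))."""
    D = depth_up.astype(float).copy()
    g = depth_up.astype(float)
    I = texture.astype(float)
    offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    for _ in range(n_iter):
        num = g.copy()
        den = np.ones_like(g)
        for di, dj in offsets:
            I_j = np.roll(I, (-di, -dj), axis=(0, 1))      # neighbour gray values (wrap-around at borders)
            D_j = np.roll(D, (-di, -dj), axis=(0, 1))      # neighbour depth values
            w = np.exp(-(I - I_j) ** 2 / (2 * sigma1 ** 2)) \
                * np.exp(-(di ** 2 + dj ** 2) / (2 * sigma2 ** 2))
            num += lam * w * D_j
            den += lam * w
        # fixed-point update D_i = (g_i + lam * sum_j w_ij D_j) / (1 + lam * sum_j w_ij)
        D = num / den
    return D
```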
Step 3.4, the high-resolution depth map $D_n^H$ is recovered into a three-dimensional point cloud $P_n$; setting the world coordinate system to coincide with the camera coordinate system, the world coordinates of a point of the three-dimensional point cloud are $(X, Y, Z)$, with $d = D_n^H(u, v)$ denoting the depth value of the high-resolution depth map at pixel $(u, v)$; the high-resolution depth map can then be recovered into the three-dimensional point cloud $P_n$ through formula (6):
$X = (u - c_x)\, d / f_x, \qquad Y = (v - c_y)\, d / f_y, \qquad Z = d$   (6)
wherein $f_x$, $f_y$, $c_x$, $c_y$ are the focal lengths and principal-point coordinates of the TOF camera intrinsic matrix $K_{tof}$.
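Step 3.4 is the standard pinhole back-projection; the short sketch below applies formula (6) to every valid depth pixel. Variable names are assumptions for illustration.

```python
import numpy as np

def depth_to_cloud(depth_hr, K_tof):
    """Recover an N x 3 point cloud from a high-resolution depth map via formula (6)."""
    fx, fy = K_tof[0, 0], K_tof[1, 1]
    cx, cy = K_tof[0, 2], K_tof[1, 2]
    v, u = np.nonzero(depth_hr > 0)          # pixel coordinates of valid depths
    Z = depth_hr[v, u]
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.stack([X, Y, Z], axis=1)
```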
Step 4, the space non-cooperative target three-dimensional point cloud is reconstructed from the 2D-3D image data, as follows:
Step 4.1, step 3 is repeated to obtain the processed $(n+1)$-th frame texture map $I_{n+1}^{tof}$ and three-dimensional point cloud $P_{n+1}$;
Step 4.2, the initial pose value $(R_0, t_0)$ is calculated from the 2D-2D texture maps, as follows:
SIFT feature points are extracted, matched and cleaned of bad points: the SIFT feature points between two adjacent frames are matched through the fast nearest-neighbor search algorithm (FLANN), mismatches are eliminated with the RANSAC algorithm, and the correct feature matching point pairs are retained; the pose is then solved with the epipolar geometric constraint according to the correct feature matching point pairs; the pose relation between the $n$-th frame and the $(n+1)$-th frame is represented by a rotation matrix $R$ and a translation vector $t$; letting the pixel coordinates of a three-dimensional point $Q$ in space in the pixel coordinate systems of the $n$-th frame and $(n+1)$-th frame images be $p_n$ and $p_{n+1}$ respectively, the epipolar equation of the epipolar geometric constraint is as follows:
$p_{n+1}^{T}\, K^{-T}\, [t]_{\times} R\, K^{-1}\, p_{n} = 0$   (7)
wherein $[t]_{\times}$ represents the cross-product (skew-symmetric matrix) operation, $K$ represents the intrinsic matrix of the monocular camera, $K^{-1}$ and $K^{-T}$ represent the inverse of the camera intrinsic matrix and the inverse of its transpose respectively, and $E = [t]_{\times} R$ represents the essential matrix; a linear equation system can be constructed by the eight-point method, and the pose is solved by singular value decomposition (SVD) to obtain the initial values $(R_0, t_0)$.
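An illustrative sketch of step 4.2 using OpenCV: SIFT features are matched with FLANN, mismatches are rejected by RANSAC inside the essential-matrix estimation, and the relative pose is recovered by SVD-based decomposition. This is one common realization of the eight-point/SVD solution described above, not necessarily the exact procedure of the invention, and the returned translation is known only up to scale.

```python
import cv2
import numpy as np

def initial_pose(img_n, img_n1, K):
    """Initial rotation R0 and unit-scale translation t0 between the frame-n and frame-(n+1) textures."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_n, None)
    kp2, des2 = sift.detectAndCompute(img_n1, None)
    flann = cv2.FlannBasedMatcher({'algorithm': 1, 'trees': 5}, {'checks': 50})
    matches = flann.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]      # Lowe ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R0, t0, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R0, t0
```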
Step 4.3, accurate ICP registration is performed according to the 3D-3D point clouds:
the solved initial pose value $(R_0, t_0)$ is substituted into the ICP algorithm for inter-frame point cloud registration; the $n$-th frame and $(n+1)$-th frame three-dimensional point clouds obtained by formula (6) are respectively expressed as
$P_n = \{ p_1, \dots, p_a \}, \qquad P_{n+1} = \{ q_1, \dots, q_b \}$
wherein $a$ and $b$ represent the numbers of points in the $n$-th frame and $(n+1)$-th frame three-dimensional point clouds respectively, and $p_i$ and $q_i$ represent corresponding points to be matched in the $n$-th frame and $(n+1)$-th frame three-dimensional point clouds; the problem of constructing the space non-cooperative target three-dimensional point cloud is thus converted into solving the Euclidean transformation between two adjacent frames of 3D point clouds, that is, a rotation matrix $R$ and a translation vector $t$ such that:
$p_i = R\, q_i + t$   (8)
a least-squares problem is constructed and solved so as to minimize the sum of squared errors $e$:
$\min_{R, t}\; \frac{1}{2} \sum_{i} \left\| p_i - ( R\, q_i + t ) \right\|_2^2$   (9)
the initial pose value $(R_0, t_0)$ is substituted into formula (9) for iterative solution to obtain the accurate rotation matrix $R$ and translation vector $t$; then, through formula (8), the $(n+1)$-th frame three-dimensional point cloud $P_{n+1}$ can be registered into the coordinate system of the $n$-th frame, forming a local point cloud map of the space non-cooperative target;
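For step 4.3, the closed-form SVD solution of the least-squares problem (9) for a given set of correspondences is sketched below, wrapped in a simple point-to-point ICP loop that starts from the epipolar initial value $(R_0, t_0)$. The brute-force nearest-neighbour search and the function names are simplifications for illustration; in practice a KD-tree and convergence tests would be used.

```python
import numpy as np

def align_svd(P_n, Q_n1):
    """Closed-form R, t minimising sum ||p_i - (R q_i + t)||^2 for matched point pairs (formula (9))."""
    cp, cq = P_n.mean(axis=0), Q_n1.mean(axis=0)            # centroids
    H = (Q_n1 - cq).T @ (P_n - cp)                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                                 # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cp - R @ cq
    return R, t

def icp(P_n, Q_n1, R0, t0, n_iter=20):
    """Point-to-point ICP refining the epipolar initial value (R0, t0)."""
    R, t = R0, np.asarray(t0).reshape(3)
    for _ in range(n_iter):
        Q_trans = Q_n1 @ R.T + t                             # formula (8): bring frame n+1 into frame n
        # brute-force nearest neighbours (a KD-tree would be used in practice)
        d = np.linalg.norm(P_n[None, :, :] - Q_trans[:, None, :], axis=2)
        idx = d.argmin(axis=1)
        R, t = align_svd(P_n[idx], Q_n1)
    return R, t
```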
Step 5, step 3 and step 4 are repeated, the space non-cooperative target three-dimensional point cloud is continuously reconstructed from the 2D-3D image data, and each frame of point cloud is registered one by one to form a dense and complete space non-cooperative target three-dimensional point cloud, so as to realize fine three-dimensional morphology measurement of the space non-cooperative target.
In an implementation, as shown in fig. 1, the TOF camera 15 and the monocular camera 14 are fixedly mounted side by side and both connected with the data processing computer, so that the TOF-monocular camera fusion measurement system is built. 20-30 checkerboard pictures are taken with the TOF-monocular camera fusion measurement system, the TOF camera and the monocular camera are independently calibrated by the Zhang calibration method, and the internal parameters $K_{tof}$, $K_{mono}$ and the external parameters $(R_{tof}, t_{tof})$, $(R_{mono}, t_{mono})$ of the TOF camera and the monocular camera are obtained. After these parameters are determined, the TOF camera and the monocular camera are jointly calibrated, and the fixed connection relation $(R_{rel}, t_{rel})$ between the TOF camera and the monocular camera is obtained through formula (1). The $n$-th frame depth map $D_n$ and texture map $I_n$ of the space non-cooperative target are synchronously acquired with the TOF-monocular camera fusion measurement system; according to the fixed connection relation $(R_{rel}, t_{rel})$ and the actual application scene, the texture map acquired by the monocular camera is converted to the TOF camera viewing angle through the image pixel coordinate correspondence of formula (2), aligning the texture map with the TOF camera and giving the texture map $I_n^{tof}$ under the depth-map viewing angle. The low-resolution depth map and the high-resolution texture map are iteratively updated, and the high-resolution depth map $D_n^H$ is calculated through formulas (3), (4) and (5); that is, the high-resolution texture map acquired by the monocular camera guides the low-resolution depth map acquired by the TOF camera to achieve super-resolution. The high-resolution depth map and the high-resolution texture map are then processed, and the high-resolution depth map $D_n^H$ is recovered into the three-dimensional point cloud $P_n$ through formula (6). The initial pose value $(R_0, t_0)$ is calculated from the 2D-2D texture maps according to formula (7), the 3D point clouds are obtained from the depth maps, and the initial pose value is substituted into the ICP algorithm through formulas (8) and (9) for point cloud matching, forming a local point cloud map of the space non-cooperative target. With the calibrated TOF-monocular camera fusion measurement system, steps 3 and 4 are repeated, the space non-cooperative target three-dimensional point cloud is continuously reconstructed from the 2D-3D image data, each frame of point cloud is registered one by one, and a dense space non-cooperative target three-dimensional point cloud is output, realizing fine three-dimensional morphology measurement of the space non-cooperative target based on TOF-monocular camera fusion.
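Finally, an illustrative driver that strings the sketches above together in the order of steps 2-5; all function names refer to the illustrative sketches given earlier in this description and are not part of the invention itself.

```python
import numpy as np

def reconstruct_target(frames, K_tof, K_mono, R_rel, t_rel):
    """Illustrative pipeline: frames is a list of (up-sampled low-res depth, texture) pairs
    acquired synchronously; returns an accumulated point cloud in the first frame's system."""
    R_acc, t_acc = np.eye(3), np.zeros(3)            # pose of the current frame in the first frame
    global_cloud, prev_tex, prev_cloud = None, None, None
    for depth_up, texture in frames:
        tex_tof = texture_to_tof_view(depth_up, texture, K_tof, K_mono, R_rel, t_rel)  # formula (2)
        depth_hr = guided_depth_sr(depth_up, tex_tof)                                   # formulas (3)-(5)
        cloud = depth_to_cloud(depth_hr, K_tof)                                         # formula (6)
        if prev_cloud is None:
            global_cloud = cloud
        else:
            R0, t0 = initial_pose(prev_tex.astype(np.uint8),
                                  tex_tof.astype(np.uint8), K_mono)                     # formula (7)
            R, t = icp(prev_cloud, cloud, R0, t0)                # formulas (8)-(9): frame n+1 -> frame n
            R_acc, t_acc = R_acc @ R, R_acc @ t + t_acc          # chain into the first frame's system
            global_cloud = np.vstack([global_cloud, cloud @ R_acc.T + t_acc])
        prev_tex, prev_cloud = tex_tof, cloud
    return global_cloud
```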
The technical solutions provided by the embodiments of the present invention are described in detail above, and the principles and embodiments of the present invention are explained herein by using specific examples, and the descriptions of the embodiments are only used to help understanding the principles of the embodiments of the present invention; meanwhile, for a person skilled in the art, according to the embodiments of the present invention, there may be variations in the specific implementation manners and application ranges, and in summary, the content of the present description should not be construed as a limitation to the present invention.

Claims (5)

1. A method for measuring a space non-cooperative target fine three-dimensional shape is characterized by comprising the following steps:
step 1, building a TOF-monocular camera fusion measurement system;
step 2, calibrating a TOF-monocular camera fusion measurement system;
step 3, collecting and processing image data by using the TOF-monocular camera fusion measurement system;
step 4, reconstructing a spatial non-cooperative target three-dimensional point cloud according to the 2D-3D image data;
step 5, repeating step 3 and step 4, continuously reconstructing the space non-cooperative target three-dimensional point cloud from the 2D-3D image data, and registering each frame of point cloud one by one to form a dense and complete space non-cooperative target three-dimensional point cloud.
2. The method for measuring the fine three-dimensional morphology of a space non-cooperative target according to claim 1, wherein the step 2 specifically comprises:
step 2.1, independently calibrating the TOF camera and the monocular camera by the Zhang calibration method to obtain the internal parameters $K_{tof}$, $K_{mono}$ and the external parameters $(R_{tof}, t_{tof})$, $(R_{mono}, t_{mono})$ of the TOF camera and the monocular camera respectively;
step 2.2, jointly calibrating the external parameters of the TOF camera (15) and the monocular camera (14), and obtaining the fixed connection relation $(R_{rel}, t_{rel})$ between the TOF camera and the monocular camera through formula (1):
$R_{rel} = R_{mono} R_{tof}^{-1}, \qquad t_{rel} = t_{mono} - R_{mono} R_{tof}^{-1} t_{tof}$   (1).
3. The method for measuring the fine three-dimensional morphology of a space non-cooperative target according to claim 2, wherein the step 3 specifically comprises:
step 3.1, synchronously acquiring the $n$-th frame depth map $D_n$ and texture map $I_n$ of the space non-cooperative target with the TOF-monocular camera fusion measurement system;
step 3.2, letting the homogeneous pixel coordinates of the $n$-th frame depth map $D_n$ and texture map $I_n$ be $p_{tof} = (u_{tof}, v_{tof}, 1)^T$ and $p_{mono} = (u_{mono}, v_{mono}, 1)^T$ respectively; according to the fixed connection relation $(R_{rel}, t_{rel})$ between the TOF camera and the monocular camera and in combination with the actual application scene, converting the texture map acquired by the monocular camera to the TOF camera viewing angle through the image pixel coordinate correspondence of formula (2), giving the texture map under the depth-map viewing angle, recorded as $I_n^{tof}$:
$Z_{mono}\, p_{mono} = K_{mono}\left( R_{rel}\, Z_{tof}\, K_{tof}^{-1}\, p_{tof} + t_{rel} \right)$   (2)
where $Z_{tof}$ is the depth measured at the TOF pixel $p_{tof}$ and $Z_{mono}$ is the corresponding depth in the monocular camera frame;
step 3.3, using the high-resolution texture map acquired by the monocular camera to guide super-resolution of the low-resolution depth map acquired by the TOF camera, so as to obtain a high-resolution depth map $D_n^H$; the robust weighted least-squares optimization framework applied to texture-map-guided depth-map super-resolution is defined as follows:
$\min_{D} \sum_{i} \left[ \left( D_i - g_i \right)^2 + \lambda \sum_{j \in N(i)} w_{i,j} \left( D_i - D_j \right)^2 \right]$   (3)
wherein $D$ represents the high-resolution depth map that is continuously updated during the iterative solution of formula (3), $g_i$ represents the depth value of pixel $i$ in the input depth map, $D_i$ represents the depth value of pixel $i$ after super-resolution, $\lambda$ represents the weight of the depth smoothing term and is set empirically, $N(i)$ represents the neighborhood centered on pixel $i$, $D_j$ represents the depth value of neighboring pixel $j$, and the weight $w_{i,j}$ is defined as follows:
$w_{i,j} = w^{I}_{i,j} \cdot w^{S}_{i,j}$   (4)
$w^{I}_{i,j} = \exp\left( -\frac{(I_i - I_j)^2}{2\sigma_1^2} \right), \qquad w^{S}_{i,j} = \exp\left( -\frac{\| i - j \|^2}{2\sigma_2^2} \right)$   (5)
wherein $w^{I}_{i,j}$ represents the texture-guide weight, $w^{S}_{i,j}$ is a Gaussian function of the pixel distance used to measure the similarity of pixels, $I_i$ and $I_j$ represent the gray values of the texture map at pixels $i$ and $j$ respectively, $\sigma_1$ and $\sigma_2$ represent weight constants, and $\varepsilon$ is a self-defined parameter that is adjusted according to the smoothness of the depth map and set empirically; the low-resolution depth map and the high-resolution texture map are iteratively updated to obtain the high-resolution depth map $D_n^H$;
step 3.4, recovering the high-resolution depth map $D_n^H$ into a three-dimensional point cloud $P_n$; setting the world coordinate system to coincide with the camera coordinate system, the world coordinates of a point of the three-dimensional point cloud are $(X, Y, Z)$, with $d = D_n^H(u, v)$ denoting the depth value of the high-resolution depth map at pixel $(u, v)$; the high-resolution depth map can then be recovered into the three-dimensional point cloud $P_n$ through formula (6):
$X = (u - c_x)\, d / f_x, \qquad Y = (v - c_y)\, d / f_y, \qquad Z = d$   (6)
wherein $f_x$, $f_y$, $c_x$, $c_y$ are the focal lengths and principal-point coordinates of the TOF camera intrinsic matrix $K_{tof}$.
4. The method for measuring the fine three-dimensional morphology of a space non-cooperative target according to claim 3, wherein the step 4 specifically comprises:
step 4.1, repeating step 3 to obtain the processed $(n+1)$-th frame texture map $I_{n+1}^{tof}$ and three-dimensional point cloud $P_{n+1}$;
step 4.2, calculating the initial pose value $(R_0, t_0)$ from the 2D-2D texture maps, the steps being as follows:
solving the pose with the epipolar geometric constraint according to the feature matching point pairs; the pose relation between the $n$-th frame and the $(n+1)$-th frame is represented by a rotation matrix $R$ and a translation vector $t$; letting the pixel coordinates of a three-dimensional point $Q$ in space in the pixel coordinate systems of the $n$-th frame and $(n+1)$-th frame images be $p_n$ and $p_{n+1}$ respectively, the epipolar equation (7) of the epipolar geometric constraint is as follows:
$p_{n+1}^{T}\, K^{-T}\, [t]_{\times} R\, K^{-1}\, p_{n} = 0$   (7)
wherein $[t]_{\times}$ represents the cross-product (skew-symmetric matrix) operation, $K$ represents the intrinsic matrix of the monocular camera, $K^{-1}$ and $K^{-T}$ represent the inverse of the camera intrinsic matrix and the inverse of its transpose respectively, and $E = [t]_{\times} R$ represents the essential matrix; a linear equation system can be constructed by the eight-point method, and the pose is solved by singular value decomposition (SVD) to obtain the initial values $(R_0, t_0)$;
step 4.3, performing accurate ICP registration according to the 3D-3D point clouds:
substituting the solved initial pose value $(R_0, t_0)$ into the ICP algorithm for inter-frame point cloud registration; the $n$-th frame and $(n+1)$-th frame three-dimensional point clouds obtained by formula (6) are respectively expressed as
$P_n = \{ p_1, \dots, p_a \}, \qquad P_{n+1} = \{ q_1, \dots, q_b \}$
wherein $a$ and $b$ represent the numbers of points in the $n$-th frame and $(n+1)$-th frame three-dimensional point clouds respectively, and $p_i$ and $q_i$ represent corresponding points to be matched in the $n$-th frame and $(n+1)$-th frame three-dimensional point clouds; the problem of constructing the space non-cooperative target three-dimensional point cloud is thus converted into solving the Euclidean transformation between two adjacent frames of 3D point clouds, that is, a rotation matrix $R$ and a translation vector $t$ such that:
$p_i = R\, q_i + t$   (8)
a least-squares problem is constructed and solved so as to minimize the sum of squared errors $e$:
$\min_{R, t}\; \frac{1}{2} \sum_{i} \left\| p_i - ( R\, q_i + t ) \right\|_2^2$   (9)
the initial pose value $(R_0, t_0)$ is substituted into formula (9) for iterative solution to obtain the accurate rotation matrix $R$ and translation vector $t$; then, through formula (8), the $(n+1)$-th frame three-dimensional point cloud $P_{n+1}$ can be registered into the coordinate system of the $n$-th frame, forming a local point cloud map of the space non-cooperative target;
step 5, repeating step 3 and step 4, reconstructing the space non-cooperative target three-dimensional point cloud from the 2D-3D image data, and registering each frame of point cloud one by one to form a dense and complete space non-cooperative target three-dimensional point cloud, realizing fine three-dimensional morphology measurement of the space non-cooperative target.
5. The method for measuring the fine three-dimensional morphology of a space non-cooperative target according to claim 1, wherein the TOF-monocular camera fusion measurement system comprises a TOF camera, a monocular camera and a data processing computer, the TOF camera and the monocular camera being fixedly mounted side by side and both connected with the data processing computer.
CN202011552948.XA 2020-12-24 2020-12-24 Method for measuring space non-cooperative target fine three-dimensional morphology Active CN112284293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011552948.XA CN112284293B (en) 2020-12-24 2020-12-24 Method for measuring space non-cooperative target fine three-dimensional morphology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011552948.XA CN112284293B (en) 2020-12-24 2020-12-24 Method for measuring space non-cooperative target fine three-dimensional morphology

Publications (2)

Publication Number Publication Date
CN112284293A true CN112284293A (en) 2021-01-29
CN112284293B CN112284293B (en) 2021-04-02

Family

ID=74426167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011552948.XA Active CN112284293B (en) 2020-12-24 2020-12-24 Method for measuring space non-cooperative target fine three-dimensional morphology

Country Status (1)

Country Link
CN (1) CN112284293B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674353A (en) * 2021-08-18 2021-11-19 中国人民解放军国防科技大学 Method for measuring accurate pose of space non-cooperative target

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09100662A (en) * 1995-07-28 1997-04-15 Miwa Lock Co Ltd Key information reading mechanism
CN108399610A (en) * 2018-03-20 2018-08-14 上海应用技术大学 A kind of depth image enhancement method of fusion RGB image information
CN111223059A (en) * 2020-01-04 2020-06-02 西安交通大学 Robust depth map structure reconstruction and denoising method based on guide filter
US20200226824A1 (en) * 2017-11-10 2020-07-16 Guangdong Kang Yun Technologies Limited Systems and methods for 3d scanning of objects by providing real-time visual feedback
CN111982071A (en) * 2019-05-24 2020-11-24 Tcl集团股份有限公司 3D scanning method and system based on TOF camera

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09100662A (en) * 1995-07-28 1997-04-15 Miwa Lock Co Ltd Key information reading mechanism
US20200226824A1 (en) * 2017-11-10 2020-07-16 Guangdong Kang Yun Technologies Limited Systems and methods for 3d scanning of objects by providing real-time visual feedback
CN108399610A (en) * 2018-03-20 2018-08-14 上海应用技术大学 A kind of depth image enhancement method of fusion RGB image information
CN111982071A (en) * 2019-05-24 2020-11-24 Tcl集团股份有限公司 3D scanning method and system based on TOF camera
CN111223059A (en) * 2020-01-04 2020-06-02 西安交通大学 Robust depth map structure reconstruction and denoising method based on guide filter

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
He Ying: "Research on point-cloud-based three-dimensional reconstruction and close-range pose measurement of non-cooperative targets", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II *
Ye Qin et al.: "Kinect point cloud registration method based on epipolar and coplanarity constraints", Journal of Wuhan University (Information Science Edition) *
Li Sanchun: "Research on preprocessing and quantized coding methods for RGBD video sequences", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674353A (en) * 2021-08-18 2021-11-19 中国人民解放军国防科技大学 Method for measuring accurate pose of space non-cooperative target
CN113674353B (en) * 2021-08-18 2023-05-16 中国人民解放军国防科技大学 Accurate pose measurement method for space non-cooperative target

Also Published As

Publication number Publication date
CN112284293B (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN108470370B (en) Method for jointly acquiring three-dimensional color point cloud by external camera of three-dimensional laser scanner
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
Liu et al. Multiview geometry for texture mapping 2d images onto 3d range data
CN112102458A (en) Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN106960442A (en) Based on the infrared night robot vision wide view-field three-D construction method of monocular
CN103106688A (en) Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN110070598A (en) Mobile terminal and its progress 3D scan rebuilding method for 3D scan rebuilding
CN110211169B (en) Reconstruction method of narrow baseline parallax based on multi-scale super-pixel and phase correlation
CN112907631B (en) Multi-RGB camera real-time human body motion capture system introducing feedback mechanism
CN107038753B (en) Stereoscopic vision three-dimensional reconstruction system and method
CN110060304B (en) Method for acquiring three-dimensional information of organism
CN114782628A (en) Indoor real-time three-dimensional reconstruction method based on depth camera
CN112580683B (en) Multi-sensor data time alignment system and method based on cross correlation
CN113450416B (en) TCSC method applied to three-dimensional calibration of three-dimensional camera
Wendel et al. Automatic alignment of 3D reconstructions using a digital surface model
JP2024527156A (en) System and method for optimal transport and epipolar geometry based image processing - Patents.com
CN110580715B (en) Image alignment method based on illumination constraint and grid deformation
Wan et al. Drone image stitching using local mesh-based bundle adjustment and shape-preserving transform
CN108830921A (en) Laser point cloud reflected intensity correcting method based on incident angle
CN112284293B (en) Method for measuring space non-cooperative target fine three-dimensional morphology
CN112132971B (en) Three-dimensional human modeling method, three-dimensional human modeling device, electronic equipment and storage medium
Liu et al. Research on 3D reconstruction method based on laser rotation scanning
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN117237553A (en) Three-dimensional map mapping system based on point cloud image fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant