CN107478203A - 3D imaging device and imaging method based on laser scanning - Google Patents

3D imaging device and imaging method based on laser scanning

Info

Publication number
CN107478203A
CN107478203A (application CN201710682232.3A)
Authority
CN
China
Prior art keywords
camera
laser
point
angle
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710682232.3A
Other languages
Chinese (zh)
Other versions
CN107478203B
Inventor
王兴
Current Assignee
Li Jie
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201710682232.3A
Publication of CN107478203A
Application granted
Publication of CN107478203B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/02: Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C 11/025: Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures by scanning the object
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration


Abstract

A 3D imaging device and imaging method based on laser scanning. The invention addresses the low efficiency and accuracy of detection in the prior art. The device comprises: a laser (1), a camera (2), an auxiliary laser (3), a stepping motor (4), a motor controller (5), a connecting rod (6), and a scanned object (7). The method comprises: Step 1: calibrate the camera (2) to obtain its calibrated internal parameters; Step 2: using the internal parameters obtained in Step 1, calibrate the laser-scanning 3D imaging device to obtain the camera pose at each data acquisition; Step 3: from the data calibrated in Steps 1 and 2 and the camera pose at each acquisition, compute the scanned point cloud data and obtain the scan model. The invention is applied in the field of 3D scanning.

Description

3D imaging device and imaging method based on laser scanning
Technical Field
The invention relates to a 3D imaging apparatus and an imaging method.
Background
Given the ever-rising cost of labor and the ever-stricter demands on product quality, a wave of intelligent automation is sweeping through every industry, and the idea of using robots in place of people is being accepted by more and more sectors, particularly manufacturing.
Production involves much costly work: highly repetitive tasks, work in harsh environments, and work demanding high precision. Performing such work manually is inefficient and imprecise.
Machinery has developed alongside production, but the mechanical structures used in some production processes realize only a single function and impose strict input constraints. For example, when a manipulator grabs workpieces for assembly, the workpieces currently must be arranged in order, with strict requirements on each workpiece's position and posture. During production and transport, workpieces may fail to meet these input requirements, so the manipulator needs a certain intelligence: it must identify workpiece poses among scattered workpieces so that it can grab them normally.
Some production work relies on manual identification, such as defect detection and screening of workpieces. Because manual efficiency and error rates fluctuate over time, replacing manual work with machines would guarantee the efficiency and accuracy of detection.
In all the examples above, whether intelligent identification, grabbing, or defect detection of workpieces, 3D matching is required, and most current 3D matching techniques are based on 3D model point clouds. High-quality point cloud data supports operations such as surface fitting, overlap judgment, and pose identification.
Current popular point cloud scanning schemes fall into two main categories. The first is the translational scanning scheme: a lead screw, slide rail, or mechanical arm translates the whole structure, images are shot at specified intervals for processing, and the resulting point clouds are finally stitched together.
The other is the binocular (or multi-view) scheme: a calibrated binocular (multi-view) camera photographs the scanned object, the 3D coordinates of corresponding feature points are computed in a specific way, and the gaps between the computed 3D points are then filled. This scheme has a serious disadvantage: the generated point cloud has large errors, so it is unsuitable for industrial applications with high precision requirements.
Disclosure of Invention
The invention aims to solve the problems of low scanning speed and low point cloud accuracy in the prior art, and provides a 3D imaging device and imaging method based on laser scanning.
A 3D imaging apparatus based on laser scanning comprises: a laser, a camera, an auxiliary laser, a stepping motor, a motor controller, and a connecting rod;
the laser is arranged on one side of the camera and the auxiliary laser on the other side; after the laser projects laser light onto the scanned object, the camera photographs the scanned object on the scanning platform;
the camera is connected to the stepping motor through the connecting rod, and the stepping motor is in signal connection with the motor controller; the motor controller drives the stepping motor, which moves the camera via the connecting rod.
A 3D imaging method based on laser scanning comprises the following steps:
Step one: calibrate the camera to obtain calibrated camera internal parameters;
Step two: using the calibrated camera internal parameters from step one, calibrate the laser-scanning 3D imaging device to obtain the camera pose at each data acquisition;
Step three: from the data calibrated in steps one and two and the camera pose at each acquisition, compute the scanned point cloud data to obtain the scan model.
The beneficial effects of the invention are as follows:
the laser scanning device avoids the defects of the prior art, adopts the rotating motor to drive the laser to scan, does not need the integral translation of the laser (camera), reduces the size of the integral mechanical structure and the mechanical space required during scanning, and can be arranged on a mechanical arm or a specified position point (such as above a bin).
The mechanical structure of the invention has expandability, for example, the invention can add auxiliary laser, the same processing mode as the main laser, can improve the integral scanning speed by more than 40%, if the speed requirement can not be met in the real case, a plurality of auxiliary lasers can be added on the current mechanical structure to improve the speed. If the requirement can not be met, the camera can be replaced by a high-speed camera, and the processing frame rate can reach more than hundred frames per second.
The calculation mode of the scanning point position is the calculation of a derived mathematical formula, filling points do not need to be generated, and the precision of the point cloud obtained by scanning is similar to that of a translation scheme and is higher than that of a binocular (multi-view) scheme.
In the scheme of the invention, a basler industrial camera (resolution 2010 × 2046 and maximum frame rate 90) is used, a main laser and an auxiliary laser are used for scanning a 30cm × 30cm material box, the scanning time is about 3 seconds, the precision of the obtained point cloud is about x:0.0625mm, y.
Drawings
FIG. 1 is a representation of point cloud data 1;
FIG. 2 is a representation of point cloud data 2;
FIG. 3 is a representation of point cloud data 3;
FIG. 4 is a front view of the apparatus of the present invention;
FIG. 5 is a side view of the apparatus of the present invention;
FIG. 6 is a software architecture diagram;
FIG. 7 is a model of a camera coordinate to image coordinate transformation;
FIG. 8 is a physical image coordinate system and an image coordinate system;
FIG. 9 is a schematic diagram of image distortion;
FIG. 10 is a schematic view of pincushion distortion;
FIG. 11 is a schematic view of barrel distortion;
FIG. 12 is a schematic diagram of a preliminary calibration system;
FIG. 13 is a graph of angle 2 calculation 1;
FIG. 14 is a graph of angle 2 calculation 2 (in the figure, 2 denotes angle 2);
FIG. 15 is a schematic view of a rotated section;
FIG. 16 shows the solved XY coordinates of point P.
Detailed Description
The first embodiment: as shown in figs. 4 and 5, a 3D imaging apparatus based on laser scanning comprises a laser 1, a camera 2, an auxiliary laser 3, a stepping motor 4, a motor controller 5, and a connecting rod 6;
the laser 1 is arranged on one side of the camera 2 and the auxiliary laser 3 on the other side; after the laser 1 projects laser light onto the scanned object 7, the camera 2 photographs the scanned object 7 on the scanning platform;
the camera 2 is connected to the stepping motor 4 through the connecting rod 6, and the stepping motor 4 is in signal connection with the motor controller 5; the motor controller 5 drives the stepping motor 4, which moves the camera 2 via the connecting rod 6.
The second embodiment: a 3D imaging method based on laser scanning comprises the following steps:
Step one: calibrate the camera 2 to obtain its calibrated internal parameters;
Step two: using the calibrated internal parameters of camera 2 from step one, calibrate the laser-scanning 3D imaging device to obtain the camera pose at each data acquisition;
Step three: from the data calibrated in steps one and two and the camera pose at each acquisition, compute the scanned point cloud data to obtain the scan model.
The final objective of the invention is to generate point cloud data that satisfies the project requirements and can be used for further operations.
Point cloud data is model data composed of 3D points; the set of all points resembles a cloud, hence the name (the common point cloud file formats are *.pcd and *.xyz). Its appearance is shown in figs. 1 to 3 (viewed with the software CloudCompare).
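As a minimal illustration of the *.xyz format mentioned above (one "x y z" triple per line; the file name below is arbitrary):

```python
# Write and read back a tiny *.xyz point cloud: one "x y z" line per point.
points = [(0.0, 0.0, 0.0), (1.0, 0.5, 2.0), (-1.0, 2.0, 0.25)]

with open("demo_cloud.xyz", "w") as f:
    for x, y, z in points:
        f.write(f"{x} {y} {z}\n")

with open("demo_cloud.xyz") as f:
    loaded = [tuple(float(v) for v in line.split()) for line in f]

print(loaded == points)  # True: the round trip preserves the coordinates
```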
The quality of a point cloud mainly comprises its density and its precision: density is the number of points per unit area, and precision is the deviation of a measured point from its true position.
All points form a three-dimensional model of the workpiece, and the model meeting the requirements of density and precision can be subjected to relevant operations such as identification, matching, detection and the like.
The requirements of the invention
Functional requirements
The invention can scan the point cloud of the area with the appointed size, such as the fixed size of a bin.
The invention can accept certain environmental influences, such as light change, bin position change and the like.
The invention can accept certain depth of field changes, such as bin height changes.
Performance requirements
Time requirement: the scanning process must complete within a specified time.
Precision requirement: the scanned point cloud must meet a specified precision, for example a deviation of less than 0.5 mm.
Density requirement: the scanned point cloud must meet a specified density, for example more than 4 points per square millimeter.
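The density figure can be checked numerically. The patch size and grid pitch below are made-up illustration values, not the patent's measurements:

```python
import numpy as np

# Hypothetical 30 mm x 30 mm patch sampled on a uniform 0.4 mm grid.
xs, ys = np.meshgrid(np.arange(0.0, 30.0, 0.4), np.arange(0.0, 30.0, 0.4))
pts = np.column_stack([xs.ravel(), ys.ravel()])

area_mm2 = 30.0 * 30.0
density = len(pts) / area_mm2   # points per square millimetre
print(density, density > 4)     # 6.25 True: meets the example spec above
```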
Mechanical requirements
The mechanical structure is subject to size constraints: the overall structure cannot be too large, and it must be mountable at a designated position on a mechanical arm and able to scan in coordination with the arm's movement.
Software requirements
The software part must meet the performance requirements, must not depend on a fixed platform, and can use GPU programming to accelerate scanning. It must communicate with the other parts of the system through a specified interface (Ethernet, RS232 serial, USB 3.0, etc.) to acquire data from, or control, the other modules.
The software can be developed rapidly using open-source libraries (OpenCV, OpenGL, QT, etc.), but trial libraries, cracked libraries, and similar third-party libraries must be avoided as much as possible, to prevent program instability and copyright problems.
The software mainly comprises a central control module, a motor operation module, a camera operation module, an image processing module, a calculation module and the like, and a relational graph among the modules is shown in figure 6;
the central control module is a program central control module, plays a role of main control, and is responsible for the functions of controlling the whole program flow, communicating data, storing data, feeding back user input and the like.
The motor operation module is mainly used for receiving a control instruction of the central control module and controlling the starting, stopping, resetting and the like of the stepping motor.
The camera operation module mainly functions to circularly acquire camera images, and the camera images are acquired by the central control module.
The image processing module receives the image transmitted by the central control module, processes the image, extracts the position information of the laser line and sends the position information to the central control module.
The calculation module receives the laser line position information sent by the central control module, calculates three-dimensional point information corresponding to the laser line according to the calibrated internal parameters (angle, arm spread and the like) of the camera, and transmits the three-dimensional point information to the central control module.
Principle of the invention
Structure calibration
Camera calibration
The parameters a camera needs in measurement fall into two types, intrinsic parameters and extrinsic parameters, described below in turn.
Camera extrinsic parameters
We establish a world coordinate system Ow with the shooting point as origin, and a camera coordinate system Oc with the center of the camera lens as origin. Treating the camera as a rigid body, there exist a translation matrix T and a rotation matrix R such that a point P(Xw, Yw, Zw) in the world coordinate system Ow is transformed by these two matrices into a point Q(Xc, Yc, Zc) in the camera coordinate system Oc. That is:

Oc = R·Ow + T (1)

For convenient calculation, the coordinates can be converted to homogeneous form (0ᵀ denotes the row vector (0, 0, 0)):

[Xc, Yc, Zc, 1]ᵀ = [[R, T], [0ᵀ, 1]] · [Xw, Yw, Zw, 1]ᵀ (2)

Formula (2) is the conversion relation between the world coordinate system and the camera coordinate system;
the R and T variables are called external parameters of the camera, the pose of the camera in a world coordinate system is mainly described, and the external parameters of the camera change when the camera moves.
Camera intrinsic parameters
Once a shooting point's position has been converted from the world coordinate system to the camera coordinate system, the converted coordinates can be linked to image coordinates (see fig. 7). If the camera focal length f is known, the point's real physical position on the photosensitive element (CCD or CMOS) can be computed by similar triangles, and that physical position can then be converted to an image pixel position using the photosensitive element's parameters.
To facilitate understanding of the above transformation, two coordinate systems are defined,
the image physical coordinate system, i.e. the xoy plane in fig. 7, is parallel to the image coordinate system, but the origin of the coordinate system is the central position of the sensory element, and the data unit is the real physical size, generally mm.
The image is taken as a coordinate system, a uv plane in fig. 7 is a plane coordinate system seen by normally viewing the image, the origin of coordinates is at the upper left corner of the image, and the data unit is a pixel.
From the camera focal length f and similar triangles, the relationship between a point in the camera coordinate system and the image physical coordinate system is:

x = f·Xc/Zc, y = f·Yc/Zc

For convenience of calculation, the same is converted to homogeneous coordinates:

Zc·[x, y, 1]ᵀ = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] · [Xc, Yc, Zc, 1]ᵀ (5)

Formula (5) is the conversion relation between the camera coordinate system and the image physical coordinate system.
The conversion between the image physical coordinate system and the image coordinate system is obtained from the photosensitive element's parameters, chiefly the coordinates Cx, Cy of the principal point O (the position of the element's center in the image) and the single-pixel sizes Sx, Sy. (For exact calculation the skew factor γ should be considered, but γ is generally 0 and is ignored here; see fig. 8.) The corresponding formulas are:

u = Cx + x/Sx
v = Cy + y/Sy

Converted to homogeneous coordinates:

[u, v, 1]ᵀ = [[1/Sx, 0, Cx], [0, 1/Sy, Cy], [0, 0, 1]] · [x, y, 1]ᵀ (6)

Formula (6) is the conversion relation between the image physical coordinate system and the image coordinate system;
combining equations (2), (5), and (6) above, the following equations can be obtained:
simplifying the above formula yields the corresponding formula (8):
the formula (8) is a conversion relation between a world coordinate system and an image coordinate system;
distortion of camera
The formulas above are established for a pinhole model, in which light passes through a pinhole in straight lines to form an inverted image on the capture device behind it. In a real device, however, the amount of light entering through a pinhole is too small to capture an image quickly, so a set of convex and concave lenses is used as the lens.
Using a lens group does speed up image formation, but it also brings new problems:
The lens shape is not perfect, causing radial distortion of the image.
The lens is not perfectly parallel to the imaging plane, causing tangential distortion of the image.
Design defects of the lens and installation errors cause thin prism distortion.
Misalignment between the optical center and the geometric center of the lens group causes decentering distortion.
Because of these distortions, there is always some deviation between the observed position of a shot point and its pixel position under the pinhole model, and this deviation's effect on distance measurement is not negligible. The distortions affecting image-based measurement must therefore be modeled, the image calibrated with the distortion data, and the camera's mathematical model brought as close to the pinhole model as possible.
Of the distortions described above, only radial and tangential distortion have a large effect on the image; the other distortions are small and are ignored here, as shown in fig. 9.
Radial distortion displaces an image point radially; it is caused mainly by the lens shape and is symmetric about the main optical axis. Radial distortion includes positive distortion (pincushion) and negative distortion (barrel), as shown in figs. 10 and 11.
the distortion model formula for radial distortion is as follows:
x_correct = x(1 + k1·r² + k2·r⁴ + k3·r⁶ + …)
y_correct = y(1 + k1·r² + k2·r⁴ + k3·r⁶ + …)

Formula (9) is the mathematical model of radial distortion, where r is the distance from the image point to the image center. The higher-order coefficients beyond k3 have little influence, so only k1, k2, k3 are considered for radial distortion.
Tangential distortion arises because the main optical axes of the lens group are not on the same line (axis asymmetry). Its mathematical model is:

x_correct = x + 2p1·xy + p2·(r² + 2x²)
y_correct = y + p1·(r² + 2y²) + 2p2·xy

Formula (10) is the mathematical model of tangential distortion.
When applying the camera model, these distortion parameters are used to correct the image; calibration of ordinary accuracy needs only the five distortion parameters k1, k2, k3, p1, p2 above. For higher precision, thin prism distortion can also be computed; its mathematical model is:

x_correct = x + s1·r²
y_correct = y + s2·r²

Formula (11) is the mathematical model of thin prism distortion.
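Formulas (9) and (10) combine into the common five-coefficient model. A small sketch applying it to normalized image coordinates:

```python
def distort(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Apply radial (formula 9) and tangential (formula 10) distortion
    to normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

print(distort(0.1, 0.0))          # (0.1, 0.0): zero coefficients change nothing
print(distort(0.1, 0.0, k1=0.1))  # positive k1 pushes the point radially outward
```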
Camera calibration principle
Based on the above-described related parameter formula, if we can provide enough corresponding points (pixel coordinates-world coordinates), enough equations can be established to solve all unknown parameters.
The current camera calibration methods mainly include a traditional camera calibration method, an active vision camera calibration method and a camera self-calibration method.
The invention uses Zhang's calibration method (the single-plane checkerboard camera calibration method proposed by Zhang Zhengyou in 1998), which lies between the traditional calibration methods and the active-vision calibration methods; it is fast, simple to operate, and highly precise.
Zhang calibration uses a board printed with a black-and-white checkerboard at a specified interval. For convenience, the world coordinate system is established at the board's top-left corner point, with x positive to the right and y positive downward.
The measurement principle is to detect the corner points in several checkerboard pictures and to establish equations according to the modeling principle above; with enough equations, the intrinsic coefficients can be solved simultaneously.
We now derive the specific calibration form. For convenience of operation, formula (2) is transformed: Oc = R·Ow + T is written in another form, as follows:

Oc = [R T] · [Xw, Yw, Zw, 1]ᵀ (12)

Substituting this form into the model above gives the final transformation relation between the image coordinate system and the world coordinate system:

Zc·[u, v, 1]ᵀ = A·[R T]·[Xw, Yw, Zw, 1]ᵀ (13)

Note that because we changed the form of the RT matrix, the last column of the intrinsic matrix must be removed for the equation to remain true (the intrinsic matrix A becomes 3 × 3).
Since our world coordinate system is established on the calibration board, the Z value of every measurement point (the checkerboard corner points) is 0. Setting Zw = 0 simplifies the formula to:

Zc·[u, v, 1]ᵀ = A·[r1 r2 t]·[Xw, Yw, 1]ᵀ (14)

Formula (14) is the correspondence between the image coordinate system and the zero-plane world coordinate system;
according to the above formula, the pixel point and the scanning point can be homography changed, and the change matrix can be set as H, that is:
formula (15) is a homography matrix;
equation (14) can be written as follows:
the expression (16) is the homography corresponding relation between the image coordinate system and the zero plane world coordinate system;
analysis of equation (16) shows that H should be a 3 x 3 matrix, and since the equations left and right are polar coordinates, the last row of the H-fixed matrix should be 0,1, so H currently has 6 unknowns left. The image point uv is the pixel position of the image point, the scanning point XY is the physical position of the angular point of the calibration plate, the position where the user is calibrated and placed and the shape of the calibration plate are determined, and a pair of corresponding points ((u, v) - (x, y)) are substituted into the formula (16), so that two equations can be obtained, and the derivation process is as follows:
the expansion can result in the following equation:
Z c u=h 00 X w +h O1 Y w +h 02
Z c v=h 10 X w +h 11 Y w +h 12
If one calibration-board picture contains N corner points, 2N equations are obtained, and the homography matrix H can be solved from three corner points (for homogeneous coordinates a change of scale factor does not change the result, but the three chosen points must not be collinear).
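The 2N-equation system can be solved in the usual direct-linear-transform (DLT) way. The sketch below uses the general 8-degree-of-freedom formulation (a superset of the text's 6-unknown case) and SVD; the numbers are synthetic test values:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (3x3, scaled so H[2,2] = 1) such that dst ~ H @ src
    in homogeneous coordinates, from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null vector = stacked H, up to scale
    return H / H[2, 2]

# Synthetic check: generate dst from a known H, then recover it.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0), (37.0, 58.0)]
dst = []
for x, y in src:
    w = H_true @ np.array([x, y, 1.0])
    dst.append((w[0] / w[2], w[1] / w[2]))

H = homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-6))   # True
```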
After the homography matrix is solved, the camera's intrinsic and extrinsic parameters are computed from it. Recalling equation (15), let the camera's intrinsic matrix be A, write the rotation matrix R as a combination of rotation column vectors, the offset matrix T as a vector, and the homography matrix as a combination of column vectors, i.e. H = [h1 h2 h3], R = [r1 r2 r3], T = t.
Substituting into equation (15) gives the modified equation:

[h1 h2 h3] = Zc·A·[r1 r2 t]
in the above formula r 1 And r 2 Essentially a vector of rotation that is rotated perpendicularly about the x, y axes, respectively, so r 1 And r 2 There are some special properties:
r 1 and r 2 Orthogonal, i.e. r 1 ·r 2 =0。
r 1 And r 2 Modulo equal, all 1, i.e. | r 1 |=|r 2 I =1 or r 1 ·r 1 =r 2 ·r 2 =1。
R above is represented by H, then:
r 1 =h 1 A -1 ÷Z c
r 2 =h 2 A -1 ÷Z c
combining the above equations, the following equation can be obtained (the vector dot product A.B is the strain A in the matrix multiplication T B or AB T ):
In the above formulas, h1 and h2 are already known; the only unknown is the intrinsic matrix A. Recalling the matrix form of A, it contains four unknowns (f/Sx and f/Sy are each treated as a single parameter, which does not affect later computation, since f, Sx, Sy are never needed separately), so four equations suffice to solve them all. Each picture provides two equations, so at least two pictures are needed for a fixed-focus camera; if f, Sx, Sy must be solved separately, more pictures can be provided. The extrinsic matrices R and T are then computed from the homography matrix H and the intrinsics A (Zc is obtained from the unit norm of the rotation vectors).
All the camera parameters have now been solved, but they are only parameters of the mathematical model and are not yet precise. As Zhang Zhengyou notes in the original paper, the above solution is obtained by minimizing an algebraic distance that is not physically meaningful: the geometric derivation is only an algebraic fit. For accurate results, a maximum likelihood parameter estimation method is employed.
Maximum likelihood estimation is a method for estimating unknown population parameters, used mainly for point estimation, that is, estimating the true value of an unknown parameter from observed values of an estimator. In one sentence: choose the parameter in the parameter space that makes the observed sample most probable; briefly, the most probable result is the optimal estimate. The specific derivation involves relatively complicated mathematics and is not described here.
Camera calibration implementation
The camera calibration in the invention is implemented with the camera calibration functions in OpenCV (version 2.4.9); the flow matches the description above and is summarized briefly below.
The method comprises the steps of correctly operating a camera to obtain a shot picture, enabling the OPENCV to support various cameras, directly capturing the picture through a QueryFrame function of the OPENCV, and if the camera is not supported by the OPENCV, generating the picture by using an SDK carried by the camera, and converting the picture into a format supported by the OPENCV.
Next, prepare the required calibration board. The board used with OpenCV is a black-and-white checkerboard whose physical size and number of squares are chosen by the user; in theory there is no constraint on the corner count or side length, but for ease of orientation the number of horizontal corner points should differ from the number of vertical corner points.
After a picture has been captured, the FindChessboardCorners function can be called to search for the checkerboard corners. The corners found at this stage are only rough values; to obtain accurate values, cvFindCornerSubPix must then be called to refine the corner positions.
Once the chessboard corners have been found, they must be pushed into a matrix (cv::Mat), and the corresponding physical coordinates pushed into a matrix as well (note that the orientation of the detected corners is not fixed, so the physical coordinates must be arranged into the same order). The detected chessboard grid can also be drawn on the image with DrawChessboardCorners.
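The physical-coordinate matrix paired with the detected pixel corners is conventionally laid out in the board plane with Z = 0, one row per corner in detector order. A short sketch of how such a matrix might be built (the function name and the millimetre unit are illustrative assumptions, not from the patent):

```python
import numpy as np

def chessboard_object_points(cols, rows, square_mm):
    """Physical coordinates of the cols x rows inner chessboard corners,
    laid out in the board plane Z = 0, in the same order the corner
    detector reports the pixel positions."""
    pts = np.zeros((rows * cols, 3), np.float32)
    pts[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_mm
    return pts
```

One such array is stored per calibration picture, alongside that picture's refined pixel corners.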
After repeating steps 3-4 several times (each capture yields one group of corresponding data), call the CalibrateCamera2 function and pass in the matrices obtained above to obtain the camera parameters. Note that the distortion coefficients provided by OpenCV include only k1, k2, p1, p2 and k3.
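For reference, the five coefficients k1, k2, p1, p2, k3 enter OpenCV's distortion model on normalized image coordinates as radial terms (k1, k2, k3) and tangential terms (p1, p2). A small illustrative implementation of that model (not part of the patent's code):

```python
def distort(x, y, k1, k2, p1, p2, k3):
    """Apply OpenCV's distortion model to normalized image coordinates:
    radial terms k1, k2, k3 and tangential terms p1, p2."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd
```

With all five coefficients zero the mapping is the identity, which is a convenient sanity check.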
To check the calibration precision, the back-projection function projectPoints can compute the new coordinates at which the space points project onto the image; the deviation between these new coordinates and the detected corner coordinates is then calculated, and the smaller the deviation, the better the calibration result.
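The deviation check can be sketched as follows; this simplified version projects through an ideal pinhole model with distortion omitted, so it only illustrates the idea, not the exact OpenCV computation:

```python
import numpy as np

def rms_reprojection_error(observed, points_cam, fx, fy, cx, cy):
    """RMS pixel distance between detected corners and the projection of
    their space points (given in the camera frame) through an ideal
    pinhole model; smaller values indicate a better calibration."""
    u = fx * points_cam[:, 0] / points_cam[:, 2] + cx
    v = fy * points_cam[:, 1] / points_cam[:, 2] + cy
    diff = observed - np.stack([u, v], axis=1)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```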
Once the camera's distortion coefficients have been computed, OpenCV provides image rectification in two forms: undistort, which corrects a distorted image directly and is suitable for still pictures, and initUndistortRectifyMap + remap, where initUndistortRectifyMap computes the rectification maps once and remap applies them to each frame, which is suitable for video.
Camera calibration results
After the camera is successfully calibrated, the internal parameters, distortion parameters and a correction matrix of the camera can be obtained.
Other steps and parameters are the same as those in the first embodiment.
The third concrete implementation mode: the present embodiment differs from the first or second embodiment in that the camera (2) calibration in step one, i.e. the specific process of obtaining the calibrated camera (2) intrinsic parameters, is as follows:
step 1.1: determine the calibration parameters of the camera (2); the camera calibration parameters comprise the number of calibration pictures and the numbers of horizontal and vertical corner points of the calibration-board chessboard;
step 1.2: acquire an image captured by the camera (2);
step 1.3: search for the chessboard corner points in the image (the chessboard is a pattern of alternating black and white squares; the calibration board is a flat plate on which a chessboard of fixed square side length is drawn; a corner point is an intersection of black and white squares, four squares meeting at one corner point) and judge whether the required numbers of horizontal and vertical corner points are met;
step 1.4: store the pixel coordinates and physical coordinates of the chessboard corner points that meet the requirements;
step 1.5: repeat steps 1.2 to 1.4 until the number of stored pictures is at least 10 and at most 15;
step 1.6: from the calibration pictures obtained, calculate the correspondence between the physical corner positions and the pixel positions, and calculate the camera's intrinsic parameters and distortion matrix;
step 1.7: obtain the calibration coefficient; if the calibration coefficient is larger than the specified coefficient, recalibrate the camera (2); if it is less than or equal to the specified coefficient, execute step 1.8; the specified coefficient is the maximum allowable error;
step 1.8: store the intrinsic parameter matrix of the camera (2) and use it to rectify acquired images.
Other steps and parameters are the same as those in the first or second embodiment.
The fourth concrete implementation mode: this embodiment differs from the first to third embodiments in that the calibration of the laser-scanning-based 3D imaging device in step two, using the calibrated camera (2) intrinsics obtained in step one, proceeds as follows:
step 2.1: place the calibration plate at the central position of the calibration table (the calibration table is a height-adjustable horizontal platform; the calibration plate is placed on it, and the scanning equipment is mounted directly above it) and adjust the calibration plate; the stepping motor (4) drives the camera to rotate while the camera is calibrated in real time (once the camera intrinsics are determined, each camera position corresponds to a set of extrinsic parameters, i.e. the position and attitude of the camera, which must be determined by calibration; the attitude of a rigid body in three-dimensional space is described by yaw, roll and pitch) until the yaw angle, roll angle and pitch angle of the camera are all 0;
step 2.2: as shown in fig. 12, control the stepping motor (4) to drive the camera through at least 3 angles, calibrating the camera at each one to obtain 3 position points lying on a common circle (given a circle in three-dimensional space and the three-dimensional coordinates of points on it, the circle center can be solved; three points determine the circle uniquely), and obtain the circle center position and the circle radius r using the multi-point common-circle formula;
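Finding the unique circle through three points reduces to a small linear system. A 2-D sketch, assuming the three calibrated camera positions have first been projected into the rotation plane (a step the patent does not spell out):

```python
import numpy as np

def circle_from_3_points(p1, p2, p3):
    """Center and radius of the unique circle through three non-collinear
    2-D points, found by solving the two linear equations obtained from
    equating squared distances to the center."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([x2**2 - x1**2 + y2**2 - y1**2,
                  x3**2 - x1**2 + y3**2 - y1**2])
    cx, cy = np.linalg.solve(a, b)
    r = float(np.hypot(x1 - cx, y1 - cy))
    return (float(cx), float(cy)), r
```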
step 2.3: control the laser (1) to project onto the surface of the calibration table, take a photograph with the camera, process the laser line in the photograph, and obtain the center position of the laser line (for each row of pixels, the position of the laser point is solved; this point is a calculation point, and because the laser projects a vertical line, many rows of pixels contain calculation points; repeating the calculation row by row yields the laser-point positions of many calculation points, and together these positions form the center line of the laser stripe);
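The row-by-row extraction of the laser-line center might look like the following sketch; the brightness threshold and centroid window width are our assumptions, since the patent does not specify how the center position is computed:

```python
import numpy as np

def laser_line_centers(gray):
    """For each image row, estimate the sub-pixel column of the laser
    stripe as the intensity-weighted centroid around the brightest pixel
    of that row; rows with no clear peak return NaN."""
    h, w = gray.shape
    centers = np.full(h, np.nan)
    for row in range(h):
        line = gray[row].astype(np.float64)
        peak = int(line.argmax())
        if line[peak] < 50:          # assumed brightness threshold
            continue
        lo, hi = max(peak - 3, 0), min(peak + 4, w)
        window = line[lo:hi]
        centers[row] = (np.arange(lo, hi) * window).sum() / window.sum()
    return centers
```

Each finite entry of the result is one calculation point; stacked over the rows they form the laser-line center described above.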
step 2.4: process the relative offset between the position of each row's point in the picture and the image position corresponding to the center of the CCD (the CCD is the electronic imaging element of the camera), and calculate angle 2 from the camera intrinsics obtained in step one; angle 2 is the included angle between the camera optical axis and the line connecting the camera optical center to the intersection point of the laser line with the calculation row (an image exists in memory as a two-dimensional matrix, or two-dimensional array, so the pixel value at any position can be read by its row and column numbers);
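Under a pinhole model, angle 2 follows directly from the pixel offset and the focal length expressed in pixels. A minimal illustration (the symbol names are ours, not the patent's):

```python
import math

def pixel_to_ray_angle(u, cx, fx):
    """Angle (radians) between the camera optical axis and the ray
    through pixel column u, given the principal-point column cx and the
    focal length fx in pixels -- 'angle 2' in the text."""
    return math.atan((u - cx) / fx)
```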
step 2.5: from angle 2, the current camera height h and the circle radius r obtained in step 2.2, obtain the arm spread l and angle 1; angle 1 is the laser-line deflection angle, i.e. the included angle between the connecting rod (6) (the mechanical structure joining the camera and the laser; the connecting piece meets the camera at one point and the laser at another, and joining these two points gives a straight line) and the line connecting the calculation point to the laser;
as shown in figs. 13 and 14, the specific process is as follows:
the distance h from the camera optical center to the calibration platform and the circle radius r are related to the arm spread l and the laser-line offset angle 1 by:
l = l1 + l2 (3)
where l1 is the distance, measured along the connecting rod (6), from the camera optical axis to the foot of the perpendicular dropped from the calculation point onto the connecting piece, and l2 is the distance from that foot of the perpendicular to the laser;
combining the three formulas gives the relation between the arm spread l and the laser-line offset angle 1:
tan(1) × l − tan(1) × tan(2) × h = h + r (4)
changing the height of the calibration platform gives a new set of relations between the arm spread l and the laser-line offset angle 1; let
b = h + r (5)
c = tan(2) × h (6)
where b and c are intermediate variables; equation (4) then becomes:
tan(1) × l − tan(1) × c = b (7)
combining two sets of measurement data gives the simultaneous equations:
tan(1) × l − tan(1) × c1 = b1 (8)
tan(1) × l − tan(1) × c2 = b2 (9)
where b and c corresponding to the height h are b1 and c1, and b and c corresponding to the height h1 are b2 and c2;
solving these equations yields angle 1 and the arm spread l:
tan(1) = (b1 − b2) / (c2 − c1) (10)
l = b1 / tan(1) + c1 (11)
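Solving the simultaneous equations (8) and (9) is a two-unknown linear problem in tan(1) and l. A small sketch (the function name is illustrative):

```python
import math

def solve_arm_and_angle(b1, c1, b2, c2):
    """Solve tan(a)*l - tan(a)*c1 = b1 and tan(a)*l - tan(a)*c2 = b2
    (equations (8)-(9)) for the laser offset angle a and arm spread l."""
    tan_a = (b1 - b2) / (c2 - c1)   # subtract (9) from (8)
    l = b1 / tan_a + c1             # back-substitute into (8)
    return math.atan(tan_a), l
```

For example, with b1 = 8, c1 = 2, b2 = 6, c2 = 4 the solution is a 45° offset angle and l = 10.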
step 2.6: control the motor to move to the starting point (the leftmost scanning position), calibrate the camera and record the current camera pose; control the motor to move to the end point (the rightmost scanning position) and record the camera pose and the number of motor steps at that moment;
step 2.7: according to the density requirement of the scanning point cloud, set the motor to acquire data once every N steps (N is chosen by the user; the smaller N is, the denser the point cloud, and the larger N is, the sparser the points), and obtain the camera pose at each data acquisition.
Other steps and parameters are the same as those in one of the first to third embodiments.
The fifth concrete implementation mode is as follows: the difference between this embodiment and one of the first to fourth embodiments is: in the third step, the scanned point cloud data is calculated according to the data calibrated in the first step and the second step and the pose of the camera during data acquisition each time, and the specific process of obtaining the scanning model is as follows:
step 3.1: let the yaw angle of the current camera be angle 3, and calculate the vertical distance h′ from the scanning point to the arm span from angles 1, 2 and 3 and the arm spread l; angle 3 is the included angle between the camera optical axis and the perpendicular to the calibration platform (the platform used when the camera pose angles were zeroed), i.e. the current deflection angle of the camera, which, as shown in fig. 15, can be calculated from the current number of motor steps;
the specific process is as follows:
combining equations (12), (13) and (3) gives:
the distance Z from the camera optical center to the scanned object:
combining equations (12) to (15) yields the Z-value formula (16):
step 3.2: calculate the X and Y coordinates of the measurement point P from the Z value obtained in step 3.1, as shown in FIG. 16;
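Recovering X and Y once Z is known is standard pinhole back-projection. A minimal sketch (the patent's exact formulas (12)-(16) are not reproduced in the text, so this shows only the generic relation):

```python
def pixel_to_xy(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection: recover the X, Y coordinates of a point
    whose depth Z and pixel position (u, v) are known, given the
    intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y
```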
step 3.3: repeat steps 3.1 to 3.2 for each row's laser-line position point in the image to obtain the data points corresponding to one frame (calculating every row yields one calculation point per row, and splicing these points together forms a vertical line, which is the position line corresponding to the laser line); a frame contains many rows of data, each with one laser-line position point, and in the mathematical model every row is calculated, but in real scanning, because of imaging quality, camera distortion and other factors, the image may be cropped and the calculation performed on each row of the cropped image.
Step 3.4: control the motor to rotate, repeat steps 3.1 to 3.3, and splice the data obtained to produce the 3D point cloud of the scan.
Other steps and parameters are the same as in one of the first to fourth embodiments.
The first embodiment is as follows:
1. calibration process
1. Camera calibration
1) Determine the calibration parameters (the number of calibration pictures and the numbers of horizontal and vertical corner points of the calibration-board chessboard).
2) Acquire an image from the camera.
3) Search for the chessboard corner points in the image and judge whether they meet the requirements.
4) Store the pixel coordinates and physical coordinates of the qualifying chessboard corner points.
5) Repeat steps 2) to 4) until the number of stored pictures reaches the specified number.
6) Calibrate the camera's internal parameters.
7) Obtain the calibration coefficient; if it exceeds the specified value, recalibrate the camera.
8) Store the camera intrinsic matrix and use it to rectify acquired images.
2. Structure calibration
1) Level the calibration platform.
2) Place the calibration plate at the central position, keeping the direction of the calibration-plate coordinate system consistent with that of the camera image coordinate system as far as possible.
3) Acquire a camera image and obtain the camera pose using the camera intrinsics.
4) Adjust the motor to rotate the camera, adjusting the calibration plate at the same time, until the camera pose angles are all 0.
5) Control the motor to rotate several times (without moving the calibration plate), acquire the camera pose each time, and calculate the rotation radius R.
6) Adjust the camera to the vertical position and obtain the current camera height.
7) Acquire an image, process the position of one laser line, and record it.
8) Adjust the height of the calibration platform and repeat steps 6) to 7).
9) Combine the recorded data to obtain the laser offset angle and arm-spread length for the specified number of rows (the number of camera image rows).
10) Move the motor to the starting point, run it to the end point, and record the number of frames and the start-point and end-point camera poses.
2. Scanning process
1) Set the camera rotation start point, end point and shooting-signal step length.
2) Control the motor to return to the start point.
3) Control the motor to rotate at constant speed, driving the camera, while sending processing signals so that the program processes each image.
4) Calculate and store the point cloud corresponding to each frame of data.
5) Splice all the point clouds to obtain the final scanning model.
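The splicing step can be sketched as rotating each frame's points by the camera yaw recorded for that frame and concatenating the results; the choice of rotation axis here is an assumption, since the patent leaves the stitching transform implicit:

```python
import numpy as np

def splice_frames(frames, angles_rad):
    """Merge per-frame Nx3 point arrays into one cloud by rotating each
    frame's points about the motor (Y) axis by the camera yaw recorded
    for that frame, then stacking -- a simplified sketch of the
    stitching step."""
    clouds = []
    for pts, a in zip(frames, angles_rad):
        rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(a), 0.0, np.cos(a)]])
        clouds.append(pts @ rot.T)
    return np.vstack(clouds)
```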
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (5)

1. A3D imaging device based on laser scanning, characterized in that: the laser scanning-based 3D imaging device comprises: the device comprises a laser (1), a camera (2), an auxiliary laser (3), a stepping motor (4), a motor controller (5) and a connecting rod (6);
a laser (1) is arranged on one side of a camera (2), an auxiliary laser (3) is arranged on the other side of the camera, and after the laser (1) emits laser to a scanning object (7), the camera (2) shoots the scanning object (7) on a scanning platform;
the camera (2) is connected with the stepping motor (4) through a connecting rod (6), and the stepping motor (4) is in signal connection with the motor controller (5); the motor controller (5) controls the stepping motor (4) to move the camera (2) through the connecting rod (6).
2. A3D imaging method based on laser scanning is characterized in that: the laser scanning-based 3D imaging method comprises the following steps:
the method comprises the following steps: calibrating the camera (2) to obtain calibrated internal reference of the camera (2);
step two: calibrating the 3D imaging device based on laser scanning according to the calibrated internal reference of the camera (2) obtained in the first step to obtain the pose of the camera when data are acquired each time;
step three: and calculating the scanned point cloud data according to the data calibrated in the first step and the second step and the pose of the camera when acquiring the data each time, so as to obtain a scanning model.
3. A laser scanning based 3D imaging method according to claim 2, characterized in that: the camera (2) calibration in step one, i.e. the specific process of obtaining the calibrated camera (2) intrinsic parameters, is as follows:
step 1.1: determine the calibration parameters of the camera (2); the camera calibration parameters comprise the number of calibration pictures and the numbers of horizontal and vertical corner points of the calibration-board chessboard;
step 1.2: acquire an image captured by the camera (2);
step 1.3: search for the chessboard corner points in the image and judge whether the required numbers of horizontal and vertical corner points are met;
step 1.4: store the pixel coordinates and physical coordinates of the chessboard corner points that meet the requirements;
step 1.5: repeat steps 1.2 to 1.4 until the number of stored pictures is at least 10 and at most 15;
step 1.6: from the pictures obtained in step 1.5, calculate the correspondence between the physical corner positions and the pixel positions, and calculate the camera's intrinsic parameters and distortion matrix;
step 1.7: obtain the calibration coefficient; if the calibration coefficient is larger than the specified coefficient, recalibrate the camera (2); if it is less than or equal to the specified coefficient, execute step 1.8; the specified coefficient is the maximum allowable error;
step 1.8: store the intrinsic parameter matrix of the camera (2) and use it to rectify acquired images.
4. A laser scanning based 3D imaging method according to claim 2, characterized in that: the specific process of calibrating the laser-scanning-based 3D imaging device in step two, using the calibrated camera (2) intrinsics obtained in step one, is as follows:
step 2.1: place the calibration plate at the center of the calibration table and adjust it; the stepping motor (4) drives the camera to rotate while the camera is calibrated in real time, until the yaw angle, roll angle and pitch angle of the camera are all 0;
step 2.2: control the stepping motor (4) to drive the camera through at least 3 angles, calibrating the camera at each one to obtain 3 position points on a common circle, and obtain the circle center position and circle radius r using the multi-point common-circle formula;
step 2.3: control the laser (1) to project onto the surface of the calibration table, take a photograph with the camera, process the laser line in the photograph, and obtain the center position of the laser line;
step 2.4: process the relative offset between the position of each row's point in the picture and the image position corresponding to the center of the CCD, and calculate angle 2 from the camera intrinsics obtained in step one; angle 2 is the included angle between L′ and the camera optical axis, where L′ is the line connecting point O to the camera optical center, and O is the intersection point of the laser line with the calculation row;
step 2.5: from angle 2, the current camera height h and the circle radius r obtained in step 2.2, obtain the arm spread l and angle 1; angle 1 is the laser-line deflection angle, i.e. the included angle between S′ and the connecting rod (6), where S′ is the line connecting the calculation point to the laser;
the specific process is as follows:
the distance h from the camera optical center to the calibration platform and the circle radius r are related to the arm spread l and the laser-line offset angle 1 by:
l = l1 + l2 (3)
where l1 is the distance, measured along the connecting rod (6), from the camera optical axis to the foot of the perpendicular dropped from the calculation point onto the connecting piece, and l2 is the distance from that foot of the perpendicular to the laser;
combining the three formulas gives the relation between the arm spread l and the laser-line offset angle 1:
tan(1) × l − tan(1) × tan(2) × h = h + r (4)
changing the height of the calibration platform gives a new set of relations between the arm spread l and the laser-line offset angle 1; let:
b=h+r (5)
c=tan(2)*h (6)
b and c are intermediate variables;
equation (4) becomes:
tan(1)×l-tan(1)×c=b (7)
combining two sets of measurement data gives the simultaneous equations:
tan(1) × l − tan(1) × c1 = b1 (8)
tan(1) × l − tan(1) × c2 = b2 (9)
where b and c corresponding to the height h are b1 and c1, and b and c corresponding to the height h1 are b2 and c2;
solving these equations yields angle 1 and the arm spread l:
tan(1) = (b1 − b2) / (c2 − c1) (10)
l = b1 / tan(1) + c1 (11)
step two, step six: controlling the motor to move to a starting point, calibrating the camera, and recording the current pose of the camera; controlling the motor to move to the terminal point, and recording the camera pose and the motor moving steps at the moment;
step two, seven: and setting a motor to acquire data every N steps according to the density requirement of the scanning point cloud to obtain the pose of the camera when acquiring the data every time.
5. The laser scanning based 3D imaging method according to claim 4, characterized in that: in the third step, the scanned point cloud data is calculated according to the data calibrated in the first step and the second step and the pose of the camera during data acquisition each time, and the specific process of obtaining the scanning model is as follows:
step 3.1: let the yaw angle of the current camera be angle 3, and calculate the vertical distance h′ from the scanning point to the arm span from angles 1, 2 and 3 and the arm spread l; angle 3 is the included angle between the camera optical axis and the perpendicular to the calibration platform; the specific process is as follows:
combining equations (12), (13) and (3) gives:
the distance Z from the camera optical center to the scanned object:
combining equations (12) to (15) yields the Z-value formula (16):
step 3.2: calculate the X and Y coordinates of the measurement point P from the Z value obtained in step 3.1;
step 3.3: repeat steps 3.1 to 3.2 for each row's laser-line position point in the image to obtain the data points corresponding to one frame of image, i.e. the position line corresponding to the laser line;
step 3.4: control the motor to rotate, repeat steps 3.1 to 3.3, and splice the data obtained to produce the 3D point cloud of the scan.
CN201710682232.3A 2017-08-10 2017-08-10 3D imaging device and imaging method based on laser scanning Active CN107478203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710682232.3A CN107478203B (en) 2017-08-10 2017-08-10 3D imaging device and imaging method based on laser scanning


Publications (2)

Publication Number Publication Date
CN107478203A true CN107478203A (en) 2017-12-15
CN107478203B CN107478203B (en) 2020-04-24

Family

ID=60599455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710682232.3A Active CN107478203B (en) 2017-08-10 2017-08-10 3D imaging device and imaging method based on laser scanning

Country Status (1)

Country Link
CN (1) CN107478203B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1662228A1 (en) * 2004-11-19 2006-05-31 Harman Becker Automotive Systems GmbH Scanning of three-dimensional objects
CN101986350A (en) * 2010-10-22 2011-03-16 武汉大学 Monocular structured light-based three-dimensional modeling method
US20130278755A1 (en) * 2012-03-19 2013-10-24 Google, Inc Apparatus and Method for Spatially Referencing Images
CN106123798A (en) * 2016-03-31 2016-11-16 北京北科天绘科技有限公司 A kind of digital photography laser scanning device
KR20170037197A (en) * 2015-09-25 2017-04-04 한남대학교 산학협력단 Foldable frame for mobile mapping system with multi sensor module


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648237A (en) * 2018-03-16 2018-10-12 中国科学院信息工程研究所 A kind of space-location method of view-based access control model
CN108648237B (en) * 2018-03-16 2022-05-03 中国科学院信息工程研究所 Space positioning method based on vision
CN109146978A (en) * 2018-07-25 2019-01-04 南京富锐光电科技有限公司 A kind of high speed camera image deformation calibrating installation and method
CN109146978B (en) * 2018-07-25 2021-12-07 南京富锐光电科技有限公司 High-speed camera imaging distortion calibration device and method
CN108981719A (en) * 2018-10-12 2018-12-11 中国空气动力研究与发展中心超高速空气动力研究所 A kind of hypervelocity flight model pose measure of the change device and method
CN108981719B (en) * 2018-10-12 2024-03-01 中国空气动力研究与发展中心超高速空气动力研究所 Ultra-high-speed flight model pose change measuring device and method
CN111452034A (en) * 2019-01-21 2020-07-28 广东若铂智能机器人有限公司 Double-camera machine vision intelligent industrial robot control system and control method
CN110555872B (en) * 2019-07-09 2023-09-05 牧今科技 Method and system for performing automatic camera calibration of scanning system
CN110555872A (en) * 2019-07-09 2019-12-10 牧今科技 Method and system for performing automatic camera calibration of a scanning system
CN111199559A (en) * 2019-07-09 2020-05-26 牧今科技 Method and system for performing automatic camera calibration of a scanning system
US11967113B2 (en) 2019-07-09 2024-04-23 Mujin, Inc. Method and system for performing automatic camera calibration for a scanning system
US11074722B2 (en) 2019-07-09 2021-07-27 Mujin, Inc. Method and system for performing automatic camera calibration for a scanning system
CN111090103A (en) * 2019-12-25 2020-05-01 河海大学 Three-dimensional imaging device and method for dynamically and finely detecting underwater small target
CN113033270B (en) * 2019-12-27 2023-03-17 深圳大学 3D object local surface description method and device adopting auxiliary axis and storage medium
CN113033270A (en) * 2019-12-27 2021-06-25 深圳大学 3D object local surface description method and device adopting auxiliary axis and storage medium
CN115998045A (en) * 2023-01-13 2023-04-25 东莞市智睿智能科技有限公司 Shoe upper three-dimensional imaging device, calibration method and equipment

Also Published As

Publication number Publication date
CN107478203B (en) 2020-04-24

Similar Documents

Publication Publication Date Title
CN107478203B (en) 3D imaging device and imaging method based on laser scanning
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN105716542B (en) A kind of three-dimensional data joining method based on flexible characteristic point
CN111369630A (en) Method for calibrating multi-line laser radar and camera
WO2018076154A1 (en) Spatial positioning calibration of fisheye camera-based panoramic video generating method
US9679385B2 (en) Three-dimensional measurement apparatus and robot system
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN106097367B (en) A kind of scaling method and device of binocular solid camera
CN110827392B (en) Monocular image three-dimensional reconstruction method, system and device
CN111080705B (en) Calibration method and device for automatic focusing binocular camera
CN106340045B (en) Calibration optimization method in three-dimensional facial reconstruction based on binocular stereo vision
CN111854636B (en) Multi-camera array three-dimensional detection system and method
CN111028340A (en) Three-dimensional reconstruction method, device, equipment and system in precision assembly
CN107941153A (en) A kind of vision system of laser ranging optimization calibration
CN111145269A (en) Calibration method for external orientation elements of fisheye camera and single-line laser radar
CN115861445B (en) Hand-eye calibration method based on three-dimensional point cloud of calibration plate
CN113724337A (en) Camera dynamic external parameter calibration method and device without depending on holder angle
CN204854653U (en) Quick three -dimensional scanning apparatus
CN106813595B (en) Three-phase unit characteristic point matching method, measurement method and three-dimensional detection device
CN104167001A (en) Large-visual-field camera calibration method based on orthogonal compensation
CN102589529A (en) Scanning close-range photogrammetry method
CN105931177B (en) Image acquisition processing device and method under specific environment
CN109773589A (en) Method and device, the equipment of on-line measurement and processing guiding are carried out to workpiece surface
CN113658270A (en) Multi-view visual calibration method, device, medium and system based on workpiece hole center

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Jie

Inventor before: Wang Xing

TA01 Transfer of patent application right

Effective date of registration: 20180320

Address after: 150000 Nantong street, Nangang District, Harbin, Heilongjiang Province, No. 145-11

Applicant after: Li Jie

Address before: 150000 Harbin City, Heilongjiang 150000

Applicant before: Wang Xing

GR01 Patent grant