CN103278138B - Method for measuring three-dimensional position and posture of thin component with complex structure - Google Patents


Publication number
CN103278138B
Authority
CN
China
Legal status: Active (assumed; Google has not performed a legal analysis)
Application number
CN201310159555.6A
Other languages
Chinese (zh)
Other versions
CN103278138A (en
Inventor
贾立好 (Jia Lihao)
乔红 (Qiao Hong)
苏建华 (Su Jianhua)
Current Assignee (the listed assignee may be inaccurate)
Institute of Automation, Chinese Academy of Sciences
Original Assignee
Institute of Automation, Chinese Academy of Sciences
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Institute of Automation, Chinese Academy of Sciences
Priority to CN201310159555.6A priority Critical patent/CN103278138B/en
Publication of CN103278138A publication Critical patent/CN103278138A/en
Application granted granted Critical
Publication of CN103278138B publication Critical patent/CN103278138B/en

Classifications

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method for measuring the three-dimensional position and attitude of a thin component with a complex structure. The method comprises the following steps: establishing the mapping between the three-dimensional world coordinate system and the two-dimensional image coordinates of a binocular vision measurement system; extracting features from a binocular image of the component in a reference pose to build a prior feature library; extracting features from a binocular image of the component in the pose to be measured and matching them against the prior feature library, so as to obtain a set of matched feature pairs equal in number to, and in correspondence with, the library entries; computing the three-dimensional point cloud of the matched feature points; fitting the principal plane of the reference-pose point cloud and the principal plane of the point cloud of the component under measurement, and computing the spatial relation between the two planes, which yields the three-dimensional position and attitude of the component under measurement. The method supports accurate positioning of thin components in process steps such as automated manufacturing, assembly and welding with industrial robot arms; it is simple to implement, accurate, low-cost and convenient to operate.

Description

Method for measuring the three-dimensional position and attitude of a thin component with a complex structure
Technical field
The invention belongs to the field of vision measurement within information technology, and in particular relates to a vision-based method for measuring the three-dimensional position and attitude of a thin component with a complex structure.
Background technology
Measurement methods based on binocular vision can perform three-dimensional dimensional measurement, positioning and in-line quality control of parts on industrial production lines. They form an important research area in information technology and have been applied to the measurement and positioning of common small parts. However, for large thin components with complex structures, such as the side panels and doors of an automobile body-in-white, no established application to three-dimensional position and attitude measurement has been reported.
Summary of the invention
To address this problem, the invention provides a method for measuring the three-dimensional position and attitude of a thin component, comprising:
Step 1: calibrate the internal parameters and external parameters of the binocular cameras and the spatial relation between them, establishing the mapping between the three-dimensional world coordinate system and the two-dimensional image coordinates of the binocular vision measurement system;
Step 2: extract pairs of SIFT feature points from the binocular image of the component in the reference pose using the SIFT operator, generate the prior SIFT feature library, and take the three-dimensional plane on which these feature points lie as the reference plane of the component;
Step 3: for the stereo-rectified binocular image pair of the component to be measured, extract SIFT feature point pairs and match them against the prior SIFT feature library, obtaining a set of matched pairs equal in number to, and in correspondence with, the library entries;
Step 4: from the image coordinates of the matched feature points, compute the three-dimensional point cloud of the component to be measured, and from it the component's spatial plane;
Step 5: solve for the transformation matrix between this spatial plane and the reference plane, determining the three-dimensional position and attitude of the component to be measured.
The invention adopts binocular stereo vision measurement: (1) select suitable cameras and lenses for the application scenario and field of view and build the binocular system; calibrate the left and right cameras separately to obtain their internal parameters and lens distortion coefficients; obtain the external parameters of both cameras with respect to the world coordinate system defined on the reference plane; then perform binocular stereo calibration to obtain the spatial relation between the two cameras. (2) Apply lens-distortion correction and stereo rectification to each captured binocular image, yielding a distortion-corrected image pair and a stereo-rectified image pair. For the rectified pair of the component in the reference pose, extract SIFT features, manually select those at specific locations on the component (such as positioning holes and positioning pins), and form the set of matched binocular feature pairs, which serves as the prior SIFT feature library. For the rectified pair of the component to be measured, extract SIFT features and match them against the prior library, obtaining matched pairs equal in number to, and in correspondence with, the library. (3) Compute the three-dimensional coordinates of the matched pairs by binocular measurement, obtaining the three-dimensional point cloud of the feature points. (4) Fit, by least squares, the principal planes of the reference-pose point cloud and of the point cloud of the component to be measured; computing the spatial relation between the two principal planes yields the three-dimensional position and attitude of the component to be measured. The method proposed here supports accurate positioning of thin components in process steps such as automated manufacturing, assembly and welding with industrial robot arms, and offers high positioning accuracy, low cost and convenient operation.
Accompanying drawing explanation
Fig. 1 is the overall flowchart of the method for measuring the three-dimensional position and attitude of a complex thin component;
Fig. 2 shows the binocular camera imaging model with the physical imaging and normalized imaging coordinate systems;
Fig. 3 is a schematic diagram of binocular stereo calibration;
Fig. 4 is the flowchart of feature matching;
Fig. 5 is a schematic diagram of recovering missing SIFT features;
Fig. 6 is the flowchart of computing the three-dimensional point cloud of feature points.
Embodiment
To make the objectives, technical solutions and advantages of the invention clear, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
The invention discloses a vision-based method for measuring the three-dimensional position and attitude of a thin component with a complex structure.
Fig. 1 shows the overall flow of the method. As shown in Fig. 1, the method is divided into two stages: reference pose measurement and estimation of the pose under test. The reference stage comprises camera calibration, synchronous binocular image acquisition, distortion correction and stereo rectification of the binocular images, SIFT feature extraction, acquisition of the prior feature set, computation of the reference-pose point cloud, and estimation of the reference pose. The stage for the pose under test comprises synchronous binocular image acquisition, distortion correction and stereo rectification, SIFT feature extraction, acquisition of the feature set of the component under test, computation of its point cloud, and estimation of its pose.
Since the corresponding steps of the two stages are implemented identically, the method is summarized below as four major steps: camera calibration; feature extraction and component representation; binocular measurement and point-cloud computation; and estimation of the component's three-dimensional position and attitude. Each step is described in detail with reference to the relevant drawings.
The disclosed method for measuring the three-dimensional position and attitude of a complex thin component specifically comprises:
Step 1: camera calibration. Calibrate the internal parameters and lens distortion coefficients of each camera and the spatial relation between the two cameras, so as to establish the mapping between the three-dimensional world coordinate system on the reference plane and the two-dimensional image coordinates of the binocular measurement system.
Suppose the working range is 5 m × 5 m. For this field of view a lens of moderate focal length is appropriate; taking these factors into account, a 6 mm lens (viewing angle about 53 degrees) is preferably chosen. The two cameras are placed horizontally with a baseline of 60 cm, the left camera serving as the reference camera.
Camera calibration involves four coordinate systems: the three-dimensional world coordinate system X_w Y_w Z_w (origin on the reference plane, Z_w axis perpendicular to it); the three-dimensional camera coordinate system X_c Y_c Z_c (origin at the optical center of the lens, Z_c axis along the optical axis); the two-dimensional image physical coordinate system x O_1 y (origin at the image center, coordinates in physical units); and the two-dimensional image pixel coordinate system u O v (origin at the top-left corner of the image, coordinates in pixels).
Fig. 2 illustrates these coordinate systems and the associated normalized coordinate system. As shown in Fig. 2, adopting the linear pinhole camera model, let the coordinates of a spatial point P in the world coordinate system be [X_w Y_w Z_w]^T, with homogeneous coordinates P = [X_w Y_w Z_w 1]^T; its coordinates in the camera coordinate system are [X_c Y_c Z_c]^T, with homogeneous coordinates P_c = [X_c Y_c Z_c 1]^T. The projection of P onto the image plane is p', with physical image coordinates [x y]^T (unit: mm) and pixel coordinates [u v]^T, whose homogeneous form is p' = [u v 1]^T. The intersection O_1 of the optical axis with the image plane has pixel coordinates [u_0 v_0]^T; the physical size of a pixel along the x and y axes is dx and dy respectively. The mapping between the coordinates of P in the three-dimensional world coordinate system and the image coordinates of its projection p' is then:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 & 0 \\ 0 & f/dy & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ O^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 & 0 \\ 0 & a_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ O^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_1 M_2 P = MP \quad (1)$$
where P = [X_w Y_w Z_w 1]^T are the homogeneous coordinates in the world coordinate system; f is the lens focal length; a_x = f/dx and a_y = f/dy are the focal ratios of the camera; R and t are the rotation matrix and translation vector between the camera and world coordinate systems; O^T = [0 0 0]; M_1 and M_2 are the internal and external parameter matrices of the camera respectively, and M is the overall projection matrix of the camera.
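The projection of Eq. (1) can be sketched as follows; this is a minimal illustration with made-up parameter values (focal ratios, principal point and pose are not from the patent):

```python
import numpy as np

def project_point(P_w, R, t, fx, fy, u0, v0):
    """Project a 3-D world point to pixel coordinates with the pinhole
    model of Eq. (1): Z_c [u, v, 1]^T = M1 M2 [X_w, Y_w, Z_w, 1]^T."""
    # Extrinsics: world -> camera coordinates
    P_c = R @ np.asarray(P_w, dtype=float) + t
    # Intrinsics: camera -> pixel coordinates (fx = a_x = f/dx, fy = a_y = f/dy)
    u = fx * P_c[0] / P_c[2] + u0
    v = fy * P_c[1] / P_c[2] + v0
    return np.array([u, v])

# Illustrative values: identity rotation, camera 1 m in front of the origin.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
uv = project_point([0.1, 0.05, 0.0], R, t, fx=800, fy=800, u0=320, v0=240)
print(uv)  # [400. 280.]
```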
The full calibration must determine the internal parameters [f dx dy u_0 v_0]^T, the lens distortion coefficients [k_1 k_2 p_1 p_2 k_3]^T and the external parameters; the internal parameters comprise the focal ratios and the principal point of the camera, and the external parameters comprise the rotation matrix R and translation vector t between the camera and world coordinate systems. Calibration uses black-and-white checkerboard targets: a small board for the internal parameters, with 50 mm × 50 mm squares in an 8 (width) × 12 (length) grid, and a large board for the external parameters, with 100 mm × 100 mm squares in a 4 (width) × 6 (length) grid. Each corner on a board is detected with the Harris corner detection algorithm and refined to sub-pixel accuracy.
Step 1 specifically comprises the following steps:
Step 11: calibrate the left and right cameras separately, obtaining the internal parameter matrix and external parameter matrix of each camera.
(a) First obtain the initial internal parameters of the left and right cameras and each camera's rotation matrix and translation vector relative to the small calibration board.
Taking the reference camera as an example: to obtain its internal parameter matrix, lens distortion is first ignored and a calibration method based on the planar homography matrix is adopted. Holding the small calibration board and repeatedly changing its attitude (at least 3 attitudes), the plane correspondences between different viewpoints (direct linear transformation) are used to compute the camera's initial internal parameters and its rotation matrix R and translation vector t relative to the board, i.e. the internal parameters under the assumption of zero lens distortion. Let the plane of the small board be Z_w = 0; then:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 & 0 \\ 0 & a_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ O^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1 & r_2 & r_3 & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1 & r_2 & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix} = H \begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix} \quad (2)$$

where R = [r_1 r_2 r_3].
Therefore the calibration of the reference camera is completed by solving for the matrix H, which yields the internal parameters of the camera and its rotation matrix R and translation vector t relative to the small calibration board.
(b) Based on the initial internal parameters and the rotation matrix R and translation vector t relative to the small board obtained above, lens distortion is now taken into account: the lens distortion coefficients are estimated and the internal parameters are further optimized.
Let [u_raw v_raw]^T be the pixel coordinates of a detected corner in the raw lens-distorted image (the originally acquired image), and [u_und v_und]^T its pixel coordinates in the distortion-free image under the ideal pinhole imaging model. Then:
(1) Transform the world coordinates of each corner on the small board into the camera coordinate system; here the world coordinate system is defined with one corner of the board as origin, and the world coordinates of the remaining corners follow from the known side length of the black-and-white squares:
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t \quad (3)$$
where R and t are the rotation matrix and translation vector of the camera relative to the small calibration board.
(2) Project onto the image plane to obtain the undistorted physical image coordinates [x_und y_und]^T and pixel coordinates [u_und v_und]^T of the corner:
$$\begin{bmatrix} x_{und} \\ y_{und} \end{bmatrix} = \begin{bmatrix} f X_c / Z_c \\ f Y_c / Z_c \end{bmatrix} \quad (4)$$

$$\begin{bmatrix} u_{und} \\ v_{und} \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{und} \\ y_{und} \\ 1 \end{bmatrix} \quad (5)$$
(3) Convert the pixel coordinates [u_raw v_raw]^T of the corner detected in the raw distorted image into physical image coordinates [x_raw y_raw]^T, and introduce an initial estimate of the distortion coefficients [k_1 k_2 p_1 p_2 k_3]^T, obtaining the distortion-corrected corner pixel coordinates [u'_und v'_und]^T and physical coordinates [x'_und y'_und]^T:
$$\begin{bmatrix} x_{raw} \\ y_{raw} \\ 1 \end{bmatrix} = \begin{bmatrix} dx & 0 & -u_0\,dx \\ 0 & dy & -v_0\,dy \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_{raw} \\ v_{raw} \\ 1 \end{bmatrix} \quad (6)$$

$$\begin{bmatrix} x'_{und} \\ y'_{und} \end{bmatrix} = (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \begin{bmatrix} x_{raw} \\ y_{raw} \end{bmatrix} + \begin{bmatrix} 2 p_1 x_{raw} y_{raw} + p_2 (r^2 + 2 x_{raw}^2) \\ p_1 (r^2 + 2 y_{raw}^2) + 2 p_2 x_{raw} y_{raw} \end{bmatrix} \quad (7)$$

with $r^2 = x_{raw}^2 + y_{raw}^2$,

$$\begin{bmatrix} u'_{und} \\ v'_{und} \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x'_{und} \\ y'_{und} \\ 1 \end{bmatrix} \quad (8)$$
This process establishes, via the camera's internal parameters and distortion parameters, the relation between the distortion-corrected corner pixel coordinates and the pixel coordinates [u_raw v_raw]^T of the corners detected in the raw distorted image.
(4) For the N corners used in calibration, define the objective function:

$$\min F = \sum_{i=1}^{N} \left[ \left( u_{und}^{\,i} - u_{und}'^{\,i} \right)^2 + \left( v_{und}^{\,i} - v_{und}'^{\,i} \right)^2 \right] \quad (9)$$
This nonlinear least-squares problem is solved by iterating a nonlinear optimization algorithm to find the parameter values that minimize the objective, yielding the globally optimized camera internal parameters [f dx dy u_0 v_0]^T and distortion parameters [k_1 k_2 p_1 p_2 k_3]^T.
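The radial-plus-tangential distortion polynomial of Eq. (7) can be sketched as a small function; a minimal pure-Python illustration of the model itself, not the patent's optimization loop:

```python
def distort(x, y, k1, k2, p1, p2, k3):
    """Apply the radial + tangential lens-distortion model of Eq. (7)
    to a physical image coordinate (x, y), with r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = radial * x + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = radial * y + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the model reduces to the identity.
print(distort(0.3, -0.2, 0, 0, 0, 0, 0))  # (0.3, -0.2)
```

During calibration this function would be evaluated inside the objective of Eq. (9) and the coefficients adjusted by the nonlinear optimizer.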
(c) Calibrate the external parameters of the camera, i.e. the spatial relation between the image plane and the reference plane, obtaining the rotation matrix and translation vector of the camera relative to the reference plane.
The invention must measure the three-dimensional coordinates of feature points at specific locations on the complex thin component, so the external parameter matrix of each camera with the reference plane as world coordinate system is required. The large calibration board is placed on the reference plane; that is, in the scene the reference plane is the X_w O_w Y_w plane of the world coordinate system. The concrete steps are as follows:
(1) Determine the world coordinate system X_w Y_w Z_w: place the large calibration board on the reference plane, take one grid corner on the board as the coordinate origin, and let the Z_w axis be perpendicular to the board (the reference plane);
(2) Capture an image of the large board in this position and apply distortion correction using the previously obtained internal parameters and lens distortion coefficients, obtaining the corrected image;
(3) Detect the pixel coordinates of each corner in the distortion-corrected image with the Harris corner detection algorithm;
(4) From the world coordinates of the corners on the large board and their coordinates detected in the corrected image, solve for the rotation matrix and translation vector of the camera relative to the large board, i.e. the external parameters of the camera.
Step 12: binocular stereo calibration, establishing the spatial relation between the two cameras, i.e. the rotation matrix and translation vector of the right camera relative to the reference camera. The concrete steps are as follows:
Fig. 3 shows the binocular stereo calibration. From the rotation matrix and translation vector of each camera relative to the reference plane, the spatial relation between the right camera and the reference camera is solved with the large board lying on the reference plane; that is, with the target on the level ground, the rotation matrix and translation vector of the right camera relative to the left camera are obtained. As shown in Fig. 3, for a spatial point P in the world coordinate system, let P_Cl and P_Cr be its coordinates in the camera coordinate systems of the left and right cameras (optical centers O_Cl and O_Cr respectively), and let [R_l t_l] and [R_r t_r] be the external parameter matrices of the left and right cameras. Then from

P_Cl = R_l P + t_l
P_Cr = R_r P + t_r
P_Cl = R^T (P_Cr − t)

the spatial relation [R t] of the right camera relative to the left camera is obtained as

R = R_r R_l^T   (10)
t = t_r − R t_l   (11)
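Eqs. (10)-(11) can be checked with a synthetic rig; the random poses below are illustrative, not calibration results:

```python
import numpy as np

def relative_pose(R_l, t_l, R_r, t_r):
    """Given each camera's extrinsics w.r.t. the same world frame
    (P_c = R P + t), return the right camera relative to the left
    per Eqs. (10)-(11): R = R_r R_l^T, t = t_r - R t_l."""
    R = R_r @ R_l.T
    t = t_r - R @ t_l
    return R, t

# Synthetic rig: random orthonormal rotations and random translations.
rng = np.random.default_rng(0)
R_l, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_r, _ = np.linalg.qr(rng.normal(size=(3, 3)))
t_l, t_r = rng.normal(size=3), rng.normal(size=3)
R, t = relative_pose(R_l, t_l, R_r, t_r)

# A world point must satisfy P_Cl = R^T (P_Cr - t), the relation
# used in the derivation above.
P = rng.normal(size=3)
P_cl, P_cr = R_l @ P + t_l, R_r @ P + t_r
print(np.allclose(P_cl, R.T @ (P_cr - t)))  # True
```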
Step 13: stereo-rectify the binocular images so that they are coplanar and row-aligned. That is, the two optical axes are made parallel by re-projecting the image planes of the left and right cameras so that they fall accurately in the same plane, while the rows of the left and right images are aligned. From the rotation matrix and translation vector between the two cameras, stereo rectification is performed using the maximum-view-overlap criterion and the Bouguet algorithm, yielding the rectified internal matrix M_rect_l and row-alignment matrix R_rect_l of the left camera and the rectified internal matrix M_rect_r and row-alignment matrix R_rect_r of the right camera, and thus the stereo-rectified image pair. The concrete rectification procedure is:
Let a point P have image coordinates P_raw_l and P_raw_r in the raw binocular image pair, P_und_l and P_und_r in the distortion-corrected pair, and P_rect_l and P_rect_r in the stereo-rectified pair. The rectification formulas are then:

P_rect_l = M_rect_l R_rect_l (M_1l)^{-1} P_und_l   (12)
P_rect_r = M_rect_r R_rect_r (M_1r)^{-1} P_und_r   (13)

where R_rect_l and R_rect_r are the row-alignment rotation matrices of the left and right cameras, and

P_und_l = M_1l · UnDistort((M_1l)^{-1} P_raw_l)
P_und_r = M_1r · UnDistort((M_1r)^{-1} P_raw_r)

where UnDistort(·) is the pixel undistortion function.
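The per-pixel mapping of Eq. (12) can be sketched as follows, assuming Bouguet-style rectification in which the rotation applied between back-projection and re-projection is the row-alignment rotation; the intrinsic values below are illustrative:

```python
import numpy as np

def rectify_pixel(p_und, M1, R_rect, M_rect):
    """Map a distortion-corrected pixel (homogeneous [u, v, 1]) into the
    stereo-rectified image: back-project with the original intrinsics M1,
    rotate by the row-alignment matrix R_rect, re-project with the
    rectified intrinsics M_rect (cf. Eq. (12))."""
    p = M_rect @ R_rect @ np.linalg.inv(M1) @ np.asarray(p_und, dtype=float)
    return p / p[2]  # normalize the homogeneous coordinate

# With no rotation and identical intrinsics the mapping is the identity.
M1 = np.array([[800.0, 0.0, 320.0],
               [0.0, 800.0, 240.0],
               [0.0,   0.0,   1.0]])
out = rectify_pixel([400.0, 280.0, 1.0], M1, np.eye(3), M1)
print(out)  # [400. 280.   1.]
```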
Step 2: feature extraction and component representation. In the stereo-rectified image pair of the thin component, SIFT feature points and their neighborhood descriptors are used to represent the component images.
Fig. 4 shows the flow of feature extraction and component representation. As shown in Fig. 4, this step specifically comprises:
Step 21: capture the binocular images of the thin component and apply lens-distortion correction and stereo rectification, obtaining the distortion-corrected image pair and the stereo-rectified image pair;
Step 22: extract SIFT features (feature points and their neighborhood descriptors) from the stereo-rectified pair using the SIFT operator;
Step 23: acquire the prior feature set and represent the component.
In the reference-pose measurement stage, from the SIFT features detected in the left image of the rectified pair of the component, the features near several (at least 3) specific locations (such as positioning holes and positioning pins) are chosen by hand and matched against the SIFT features of the right image; mismatched pairs are rejected manually, giving a set of matched SIFT pairs that forms the prior SIFT feature library of the reference-pose component.
In the stage estimating the pose under test, the SIFT features extracted from the rectified pair of the component under test are matched against the prior SIFT feature library, with mismatches rejected by the RANSAC algorithm, forming matched SIFT pairs equal in number to, and in correspondence with, the library. The purpose of this matching is that, if fewer SIFT feature pairs are found in the rectified pair of the component under test than there are entries in the prior library, the missing SIFT features are recovered by transferring the corresponding library features through the homography matrix.
Fig. 5 illustrates how missing SIFT features are recovered. In the figure, the prior SIFT library consists of 8 SIFT pairs, while only 7 matched SIFT pairs were extracted from the rectified image pair of the component under test. To complete the missing match, the homography matrix H_1 between the two left images and the point P_6 are used to compute the missing SIFT feature point P_6' in the left image of the pair under test, finally giving a matched SIFT set for the component under test equal in number to, and in correspondence with, the prior SIFT feature library.
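The point transfer used to recover a missing feature can be sketched as follows; the homography below is a hypothetical stand-in for the H_1 estimated between the two left images, not a value from the patent:

```python
import numpy as np

def transfer_point(H, p):
    """Transfer an image point through a homography (p' ~ H p),
    as used to recover a missing SIFT feature from the prior library."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]  # de-homogenize

# Hypothetical homography: pure translation 10 px right, 5 px down.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
print(transfer_point(H, (100.0, 200.0)))  # [110. 205.]
```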
Step 3: binocular measurement and computation of the component's three-dimensional point cloud.
Fig. 6 shows the flow of binocular measurement and point-cloud computation. The three-dimensional world coordinates of the feature points at specific locations on the thin component are computed by the binocular measurement method, yielding the three-dimensional point cloud of the component.
The computation is illustrated with the three-dimensional coordinates of a specific point P on the component, whose images in the stereo-rectified pair form one matched SIFT pair. Let the coordinates of P in the camera coordinate systems of the left and right cameras be [X_cl Y_cl Z_cl]^T and [X_cr Y_cr Z_cr]^T, and let the image coordinates of this matched SIFT pair in the rectified images be [u_rect_l v_rect_l]^T and [u_rect_r v_rect_r]^T, with homogeneous coordinates P_rect_l = [u_rect_l v_rect_l 1]^T and P_rect_r = [u_rect_r v_rect_r 1]^T. As shown in Fig. 6, the binocular measurement and point-cloud computation specifically comprise:
Step 31: compute the image coordinates of the SIFT feature points in the distortion-corrected image pair. In the reference-pose stage these are the coordinates of the reference component's feature points in its distortion-corrected pair; in the stage for the pose under test, those of the feature points of the component under test.
Let the image coordinates of the point P in the left and right distortion-corrected images be [u_und_l v_und_l]^T and [u_und_r v_und_r]^T, with homogeneous coordinates P_und_l = [u_und_l v_und_l 1]^T and P_und_r = [u_und_r v_und_r 1]^T. From formula (1):
$$Z_{cl} \begin{bmatrix} u_{und\_l} \\ v_{und\_l} \\ 1 \end{bmatrix} = M_{1l} M_{2l} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_l \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad (14)$$

$$Z_{cr} \begin{bmatrix} u_{und\_r} \\ v_{und\_r} \\ 1 \end{bmatrix} = M_{1r} M_{2r} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_r \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad (15)$$

where M_1l, M_2l and M_1r, M_2r are the internal and external parameter matrices of the left and right cameras respectively, M_l and M_r their overall projection matrices, and

$$M_l = \begin{bmatrix} m_{l11} & m_{l12} & m_{l13} & m_{l14} \\ m_{l21} & m_{l22} & m_{l23} & m_{l24} \\ m_{l31} & m_{l32} & m_{l33} & m_{l34} \end{bmatrix}, \qquad M_r = \begin{bmatrix} m_{r11} & m_{r12} & m_{r13} & m_{r14} \\ m_{r21} & m_{r22} & m_{r23} & m_{r24} \\ m_{r31} & m_{r32} & m_{r33} & m_{r34} \end{bmatrix}$$
Since the image coordinates P_rect_l and P_rect_r in the stereo-rectified pair are known, the corresponding coordinates P_und_l and P_und_r in the distortion-corrected pair are obtained by inverting the rectification formulas (12)-(13):

P_und_l = M_1l (M_rect_l R_rect_l)^{-1} P_rect_l
P_und_r = M_1r (M_rect_r R_rect_r)^{-1} P_rect_r

Step 32: compute the three-dimensional coordinates of the SIFT feature points in the world coordinate system. In the reference-pose stage, the world coordinates are computed from the image coordinates of the reference component's feature points; in the stage for the pose under test, from those of the feature points of the component under test.
Substituting the image coordinates P_und_l and P_und_r into the pinhole projection formulas (14) and (15) respectively, combining the overall projection matrices M_l and M_r of the left and right cameras, and eliminating Z_cl and Z_cr gives the system of equations:
$$\begin{cases} (m_{l11}-m_{l31}u_{und\_l})X_w+(m_{l12}-m_{l32}u_{und\_l})Y_w+(m_{l13}-m_{l33}u_{und\_l})Z_w=m_{l34}u_{und\_l}-m_{l14} \\ (m_{l21}-m_{l31}v_{und\_l})X_w+(m_{l22}-m_{l32}v_{und\_l})Y_w+(m_{l23}-m_{l33}v_{und\_l})Z_w=m_{l34}v_{und\_l}-m_{l24} \\ (m_{r11}-m_{r31}u_{und\_r})X_w+(m_{r12}-m_{r32}u_{und\_r})Y_w+(m_{r13}-m_{r33}u_{und\_r})Z_w=m_{r34}u_{und\_r}-m_{r14} \\ (m_{r21}-m_{r31}v_{und\_r})X_w+(m_{r22}-m_{r32}v_{und\_r})Y_w+(m_{r23}-m_{r33}v_{und\_r})Z_w=m_{r34}v_{und\_r}-m_{r24} \end{cases}$$
Writing the system of equations in matrix form:

$$A\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}=B$$
where

$$A=\begin{bmatrix} m_{l11}-m_{l31}u_{und\_l} & m_{l12}-m_{l32}u_{und\_l} & m_{l13}-m_{l33}u_{und\_l} \\ m_{l21}-m_{l31}v_{und\_l} & m_{l22}-m_{l32}v_{und\_l} & m_{l23}-m_{l33}v_{und\_l} \\ m_{r11}-m_{r31}u_{und\_r} & m_{r12}-m_{r32}u_{und\_r} & m_{r13}-m_{r33}u_{und\_r} \\ m_{r21}-m_{r31}v_{und\_r} & m_{r22}-m_{r32}v_{und\_r} & m_{r23}-m_{r33}v_{und\_r} \end{bmatrix} \qquad B=\begin{bmatrix} m_{l34}u_{und\_l}-m_{l14} \\ m_{l34}v_{und\_l}-m_{l24} \\ m_{r34}u_{und\_r}-m_{r14} \\ m_{r34}v_{und\_r}-m_{r24} \end{bmatrix}$$
The least-squares method then yields the three-dimensional coordinates of the point in the world coordinate system:

$$[X_w \; Y_w \; Z_w]^T=(A^TA)^{-1}A^TB \qquad (16)$$
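The least-squares triangulation of formula (16) can be sketched with NumPy: build the $4\times 3$ matrix $A$ and the vector $B$ row by row from the two projection matrices, then solve the over-determined system. The function name `triangulate` and the test camera geometry are illustrative assumptions.

```python
import numpy as np

def triangulate(Ml, Mr, pl, pr):
    """Recover [Xw Yw Zw] from matched left/right pixels (ul,vl), (ur,vr)
    and the 3x4 overall projection matrices Ml, Mr, per formula (16)."""
    (ul, vl), (ur, vr) = pl, pr
    rows, rhs = [], []
    for M, u, v in ((Ml, ul, vl), (Mr, ur, vr)):
        # Each pixel coordinate eliminates one depth unknown and gives one row:
        # (m_i1 - m_31 u) Xw + (m_i2 - m_32 u) Yw + (m_i3 - m_33 u) Zw = m_34 u - m_i4
        rows.append(M[0, :3] - u * M[2, :3])
        rows.append(M[1, :3] - v * M[2, :3])
        rhs.append(u * M[2, 3] - M[0, 3])
        rhs.append(v * M[2, 3] - M[1, 3])
    A, B = np.array(rows), np.array(rhs)
    return np.linalg.lstsq(A, B, rcond=None)[0]  # least-squares (A^T A)^-1 A^T B
```

With exact, noise-free pixels the least-squares solution reproduces the original world point; with noisy SIFT matches it returns the point minimizing the algebraic residual.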
Step 4: three-dimensional position and attitude estimation of the thin component.
Using the least-squares method, the principal planes of the three-dimensional point clouds of the reference-pose thin component and of the thin component to be measured are calculated; the three-dimensional position and attitude of the thin component to be measured are then obtained by computing the spatial relation between the two principal planes.
Step 41: calculate the spatial principal plane of the three-dimensional point cloud. In the reference pose measurement stage, the three-dimensional plane containing the point cloud of the reference thin component's SIFT feature points is taken as the reference spatial principal plane π0; in the to-be-measured pose measurement stage, the plane containing the point cloud of the to-be-measured thin component's SIFT feature points is taken as the spatial principal plane π to be measured.

Using the least-squares method, the reference spatial principal plane π0 is calculated from the three-dimensional point cloud of the reference-pose thin component, and the spatial principal plane π is calculated from the point cloud of the thin component to be measured. Taking the plane π as an example, let its equation be ax + by + cz + 1 = 0, let its normal vector be n = (a, b, c), and let the three-dimensional coordinates of a feature point be $P_i = (x_i, y_i, z_i)^T$. The distance from $P_i$ to the fitted plane π is $d_i = nP_i + 1$. Following the least-squares principle, $S = \sum (d_i)^2$ is minimized, and the normal vector n is obtained by solving the following system of equations:
$$\frac{\partial S}{\partial a}=0,\qquad \frac{\partial S}{\partial b}=0,\qquad \frac{\partial S}{\partial c}=0 \qquad (17)$$
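Because the residual $d_i = nP_i + 1$ is linear in $(a, b, c)$, the normal equations of (17) reduce to an ordinary linear least-squares problem: solve $[x_i\; y_i\; z_i]\,n = -1$ over all points. A minimal NumPy sketch (the function name is an illustrative assumption):

```python
import numpy as np

def fit_principal_plane(points):
    """Fit a x + b y + c z + 1 = 0 to an Nx3 point cloud by minimizing
    S = sum((n . P_i + 1)^2), i.e. the least-squares condition of formula (17)."""
    A = np.asarray(points, dtype=float)        # N x 3 matrix of [x_i y_i z_i]
    b = -np.ones(len(A))                       # move the constant term to the right
    n, *_ = np.linalg.lstsq(A, b, rcond=None)  # solves the normal equations
    return n  # normal vector (a, b, c)
```

For instance, points lying on the plane z = 2 yield n = (0, 0, -0.5), since 0·x + 0·y - 0.5·z + 1 = 0 there. Note this formulation cannot represent planes through the origin (where the constant term would be 0), which is usually acceptable for measurement point clouds away from the world origin.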
Step 42: solve the transformation matrix between the spatial principal plane π to be measured and the reference spatial principal plane π0. The feature points of the reference thin component and of the thin component to be measured are projected onto their respective principal planes; from these projected points, the rotation matrix and translation vector between the two planes are calculated, thereby accurately determining the three-dimensional position and attitude of the thin component to be measured relative to the reference thin component.
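One common way to recover the rotation and translation between two matched point sets, such as the projected feature points of step 42, is the SVD-based (Kabsch) alignment. The patent does not specify this exact algorithm, so the sketch below is an assumption of one standard realization; the function name and test data are illustrative.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R P + t, computed
    from two matched Nx3 point sets via the SVD-based (Kabsch) alignment."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)            # centroids
    H = (P - cp).T @ (Q - cq)                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in degenerate configurations
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Applying a known rotation and translation to a non-degenerate point set and running `rigid_transform` on the pair recovers that same R and t, which is how the step's output can be validated.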
In the description of the above method steps, for simplicity, the reference pose measurement stage and the to-be-measured pose measurement stage are described in parallel. In practical application, camera calibration is performed first; then the reference pose measurement, the acquisition of the prior feature library, and the calculation of the reference plane are carried out; finally, the to-be-measured pose measurement stage is executed, in which the three-dimensional position and attitude information of the actual thin component to be measured is solved relative to the reference plane.
The specific embodiments described above further elaborate the objectives, technical solutions, and beneficial effects of the present invention. It should be understood that the foregoing is merely a specific embodiment of the present invention and is not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A method for measuring the three-dimensional position and attitude of a thin component to be measured, comprising:
Step 1: calibrating the intrinsic parameters and extrinsic parameters of the binocular cameras and the spatial relation between the binocular cameras, and establishing the mapping between the three-dimensional world coordinate system and the two-dimensional image coordinates in the binocular vision measurement system;
Step 2: extracting SIFT feature point pairs from the binocular images of the reference thin component using the SIFT operator, generating a SIFT prior feature library, and taking the three-dimensional plane containing the SIFT feature point pairs as the reference plane of the reference thin component;
Step 3: for the binocular stereo-rectified image pair of the thin component to be measured, extracting multiple SIFT feature point pairs and matching them against the SIFT feature point pairs in the SIFT prior feature library, so as to obtain a set of SIFT feature point pairs equal in number to, and in correspondence with, those in the SIFT prior feature library;
Step 4: according to the image coordinates of the feature points in the obtained set of SIFT feature point pairs, obtaining the three-dimensional point cloud of the thin component to be measured, and then obtaining the spatial plane of the thin component to be measured;
Step 5: determining the three-dimensional position and attitude information of the thin component to be measured by solving the transformation matrix between said spatial plane and the reference plane.
2. The measuring method of claim 1, wherein step 1 specifically comprises:
Step 11: establishing the mapping between the single-camera two-dimensional image coordinate system and the three-dimensional world coordinate system;
Step 12: calibrating each single camera to obtain the camera's intrinsic parameters and extrinsic parameters;
Step 13: performing stereo calibration according to the obtained intrinsic and extrinsic parameters, to establish the spatial relation between the binocular cameras.
3. The measuring method of claim 2, wherein the intrinsic parameters of the camera comprise the focal ratio and the optical center position of the camera, and the extrinsic parameters comprise the rotation matrix and translation vector between the three-dimensional camera coordinate system and the three-dimensional world coordinate system.
4. The measuring method of claim 2, wherein the spatial relation between the binocular cameras is represented by the rotation matrix and translation vector between the cameras.
5. The measuring method of claim 1, wherein the SIFT prior feature library in step 2 is obtained by the following specific steps:
Step 21: acquiring a binocular original image pair of the reference thin component, and correcting the binocular original image pair using the lens distortion coefficients and stereo rectification parameters of the binocular cameras, to obtain the binocular stereo-rectified image pair of the reference thin component;
Step 22: extracting SIFT features from the binocular stereo-rectified image pair of the reference thin component, to form the SIFT prior feature library of the reference thin component.
6. The measuring method of claim 1, wherein step 3 specifically comprises the steps of:
Step 31: acquiring a binocular original image pair of the thin component to be measured, and correcting the binocular original image pair using the lens distortion coefficients and stereo rectification parameters of the binocular cameras, to obtain the binocular stereo-rectified image pair of the thin component to be measured;
Step 32: extracting SIFT feature point pairs from the binocular stereo-rectified image pair of the thin component to be measured, and matching them against the SIFT feature point pairs in the SIFT prior feature library, thereby obtaining SIFT feature point pairs equal in number to, and in correspondence with, those in the SIFT prior feature library.
7. The measuring method of any one of claims 5-6, wherein the stereo rectification parameters comprise the stereo rectification intrinsic matrices and row-alignment matrices of the binocular cameras, obtained based on the rotation matrix and translation vector between the binocular cameras.
8. The measuring method of claim 1, wherein step 4 specifically comprises:
Step 41: according to the image coordinates of the SIFT feature point pairs on the thin component to be measured, obtaining the three-dimensional world coordinates of the SIFT feature point pairs using the established mapping between the three-dimensional world coordinate system and the two-dimensional image coordinates in the binocular vision measurement system;
Step 42: obtaining, from the three-dimensional world coordinates of the SIFT feature point pairs, the three-dimensional plane containing the SIFT feature point pairs, and taking it as the spatial plane of the thin component to be measured.
9. The measuring method of claim 8, wherein the image coordinates of the SIFT feature point pairs are binocular stereo-rectified image coordinates on the thin component to be measured, and the three-dimensional world coordinates of the SIFT feature points are obtained from their image coordinates on the binocular distortion-corrected images, wherein the image coordinates on the binocular distortion-corrected image pair are obtained as follows:

$$P_{und\_l}=M_{1l}(M_{rect\_l}R_l)^{-1}P_{rect\_l}$$

$$P_{und\_r}=M_{1r}(M_{rect\_r}R_r)^{-1}P_{rect\_r}$$

where $P_{und\_l} = [u_{und\_l}, v_{und\_l}, 1]^T$ and $P_{und\_r} = [u_{und\_r}, v_{und\_r}, 1]^T$ are the image coordinates of the binocular distortion-corrected image pair; $u_{und\_l}, v_{und\_l}, u_{und\_r}, v_{und\_r}$ are pixel coordinates; $P_{rect\_l}$ and $P_{rect\_r}$ are the image coordinates in the binocular stereo-rectified image pair; $M_{1l}$ and $M_{1r}$ are the intrinsic parameter matrices of the binocular cameras; and $M_{rect\_l}$ and $M_{rect\_r}$ are the stereo rectification parameter matrices of the binocular cameras.
10. The measuring method of claim 9, wherein the three-dimensional coordinates of the SIFT feature points are obtained as follows:

$$[X_w \; Y_w \; Z_w]^T=(A^TA)^{-1}A^TB$$

$$A=\begin{bmatrix} m_{l11}-m_{l31}u_{und\_l} & m_{l12}-m_{l32}u_{und\_l} & m_{l13}-m_{l33}u_{und\_l} \\ m_{l21}-m_{l31}v_{und\_l} & m_{l22}-m_{l32}v_{und\_l} & m_{l23}-m_{l33}v_{und\_l} \\ m_{r11}-m_{r31}u_{und\_r} & m_{r12}-m_{r32}u_{und\_r} & m_{r13}-m_{r33}u_{und\_r} \\ m_{r21}-m_{r31}v_{und\_r} & m_{r22}-m_{r32}v_{und\_r} & m_{r23}-m_{r33}v_{und\_r} \end{bmatrix}$$

$$B=\begin{bmatrix} m_{l34}u_{und\_l}-m_{l14} \\ m_{l34}v_{und\_l}-m_{l24} \\ m_{r34}u_{und\_r}-m_{r14} \\ m_{r34}v_{und\_r}-m_{r24} \end{bmatrix}$$

where $(X_w, Y_w, Z_w)$ are the three-dimensional world coordinates of the SIFT feature point, and $m_{lij}$ and $m_{rij}$ are the elements of the overall projection matrices $M_l$ and $M_r$ of the left and right binocular cameras, respectively.
CN201310159555.6A 2013-05-03 2013-05-03 Method for measuring three-dimensional position and posture of thin component with complex structure Active CN103278138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310159555.6A CN103278138B (en) 2013-05-03 2013-05-03 Method for measuring three-dimensional position and posture of thin component with complex structure

Publications (2)

Publication Number Publication Date
CN103278138A CN103278138A (en) 2013-09-04
CN103278138B true CN103278138B (en) 2015-05-06

Family

ID=49060727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310159555.6A Active CN103278138B (en) 2013-05-03 2013-05-03 Method for measuring three-dimensional position and posture of thin component with complex structure

Country Status (1)

Country Link
CN (1) CN103278138B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714571B (en) * 2013-09-23 2016-08-10 西安新拓三维光测科技有限公司 A kind of based on photogrammetric single camera three-dimensional rebuilding method
CN104636743B (en) * 2013-11-06 2021-09-03 北京三星通信技术研究有限公司 Method and device for correcting character image
CN104154875B (en) * 2014-08-20 2017-02-15 深圳大学 Three-dimensional data acquisition system and acquisition method based on two-axis rotation platform
CN104484870B (en) * 2014-11-25 2018-01-12 北京航空航天大学 Verify Plane location method
CN104880176B (en) * 2015-04-15 2017-04-12 大连理工大学 Moving object posture measurement method based on prior knowledge model optimization
CN105157592B (en) * 2015-08-26 2018-03-06 北京航空航天大学 The deformed shape of the deformable wing of flexible trailing edge and the measuring method of speed based on binocular vision
CN105741290B (en) * 2016-01-29 2018-06-19 中国人民解放军国防科学技术大学 A kind of printed circuit board information indicating method and device based on augmented reality
CN106352855A (en) * 2016-09-26 2017-01-25 北京建筑大学 Photographing measurement method and device
CN107423772A (en) * 2017-08-08 2017-12-01 南京理工大学 A kind of new binocular image feature matching method based on RANSAC
CN108010085B (en) * 2017-11-30 2019-12-31 西南科技大学 Target identification method based on binocular visible light camera and thermal infrared camera
CN108955685B (en) * 2018-05-04 2021-11-26 北京航空航天大学 Refueling aircraft taper sleeve pose measuring method based on stereoscopic vision
CN110858403B (en) * 2018-08-22 2022-09-27 杭州萤石软件有限公司 Method for determining scale factor in monocular vision reconstruction and mobile robot
CN109700465A (en) * 2019-01-07 2019-05-03 广东体达康医疗科技有限公司 A kind of mobile three-dimensional wound scanning device and its workflow
CN111179356A (en) * 2019-12-25 2020-05-19 北京中科慧眼科技有限公司 Binocular camera calibration method, device and system based on Aruco code and calibration board
CN113378606A (en) * 2020-03-10 2021-09-10 杭州海康威视数字技术股份有限公司 Method, device and system for determining labeling information
CN111536981B (en) * 2020-04-23 2023-09-12 中国科学院上海技术物理研究所 Embedded binocular non-cooperative target relative pose measurement method
CN114559131A (en) * 2020-11-27 2022-05-31 北京颖捷科技有限公司 Welding control method and device and upper computer
CN113822945A (en) * 2021-09-28 2021-12-21 天津朗硕机器人科技有限公司 Workpiece identification and positioning method based on binocular vision
CN114913246B (en) * 2022-07-15 2022-11-01 齐鲁空天信息研究院 Camera calibration method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065351A (en) * 2012-12-16 2013-04-24 华南理工大学 Binocular three-dimensional reconstruction method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07287764A (en) * 1995-05-08 1995-10-31 Omron Corp Stereoscopic method and solid recognition device using the method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065351A (en) * 2012-12-16 2013-04-24 华南理工大学 Binocular three-dimensional reconstruction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dang Le. Research on a three-dimensional reconstruction method based on binocular stereo vision. China Master's Theses Full-text Database (Information Science and Technology), 2010, pp. 72-80. *

Also Published As

Publication number Publication date
CN103278138A (en) 2013-09-04


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant