CN104048601B - Complete imaging mapping method based on coordinate transform - Google Patents

Complete imaging mapping method based on coordinate transform

Info

Publication number
CN104048601B
CN104048601B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410275210.1A
Other languages
Chinese (zh)
Other versions
CN104048601A (en)
Inventor
刘凌云 (Liu Lingyun)
罗敏 (Luo Min)
吴岳敏 (Wu Yuemin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University of Automotive Technology
Original Assignee
Hubei University of Automotive Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei University of Automotive Technology filed Critical Hubei University of Automotive Technology
Priority to CN201410275210.1A priority Critical patent/CN104048601B/en
Publication of CN104048601A publication Critical patent/CN104048601A/en
Application granted granted Critical
Publication of CN104048601B publication Critical patent/CN104048601B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Lenses (AREA)

Abstract

The present invention relates to a complete imaging mapping method based on coordinate transform, which comprises camera pose centering and the establishment of a backward mapping mathematical model of the undistorted image focal plane. In the camera pose centering, the camera performs visual pose detection of a 2D calibration template placed on the detection platform to obtain the inclination angle between the camera optical axis and the platform; on that basis the pitch and deflection angles of a two-degree-of-freedom precision adjusting device are adjusted, finally ensuring that the optical axis is perpendicular to the detection platform. The backward mapping mathematical model of the undistorted image focal plane is built on images acquired with the camera optical axis perpendicular to the detection plane. The mapping is constructed backward, i.e. as a coordinate mapping from the projection plane to the imaging plane. The mapping method of the invention is simply and reasonably conceived: it generates the output image pixel by pixel, produces no wasted computation, and readily accommodates high-precision interpolation algorithms.

Description

Complete imaging mapping method based on coordinate transform
Technical field
The present invention relates to a complete imaging mapping method, and more particularly to a complete imaging mapping method based on coordinate transform.
Background art
Vision detection technology is now widely applied across industry, and measurement means and methods based on machine vision have developed rapidly. Vision measurement research on geometric dimensions, however, has concentrated mainly on micro-structures and small parts, chiefly because the pixel-level relative accuracy of current CCD devices is only of the order of 10^-3, while the single-shot imaging area of a vision system is inversely related to its detection resolution. When measuring small objects the field of view is small, so the resolving power of the image measurement, and hence the measurement accuracy, can be raised accordingly; for the comprehensive dimensional inspection of large or slender parts, by contrast, the complete image acquired in a single shot has too low a resolution for the detection accuracy to meet application requirements.
The contradiction between field of view and image resolution in the vision measurement of large parts can be resolved by the basic idea of breaking the whole into parts and then reassembling the parts into a complete image. Document 1 [He Boxia, Zhang Zhisheng, Xu Sunhao, et al. Machine vision high-precision measurement method for large-size machine parts. China Mechanical Engineering, 2009, 20(1)] proposed a sequence-image calibration method based on texture features for machine parts with striped textured surfaces, but the method is helpless for objects with smooth surfaces or random surface texture, so its field of application is extremely limited.
Document 2 [Liu Lingyun, Luo Min, et al. Research on precision dimension detection algorithm based on image mosaic. Manufacturing Technology & Machine Tool, 2012(11)] applied image mosaic technology to vision measurement: an image projection model was established by calibrating the camera, an image mosaic algorithm based on pose transformation was proposed to achieve accurate registration between images, the image sequence was acquired by a positioning device driving the camera through precise pose changes, and experiments verified that the algorithm has high stitching accuracy. Mapping the collected sequence images onto the same distortion-free datum plane is the key step of that stitching algorithm; however, because the algorithm places no constraint on the camera's external parameters, the mathematical model it establishes for the undistorted image focal-plane mapping is rather complex.
The undistorted image focal-plane mapping model used in document 2 is shown in Fig. 1, where {C} is the camera coordinate system at the moment of actual imaging and {C'} is the undistorted virtual camera coordinate system, whose Xc' and Yc' axes have the same orientation as the corresponding axes of the world coordinate system {W}. The mapped point (u', v')^T and the actual imaging coordinates (u, v)^T then satisfy the following relation:
In the above formula, r_ij (i = 1, 2, 3; j = 1, 2, 3, 4) are the corresponding elements of the matrix M of the camera pinhole-imaging mathematical model. Since the mapping relation (1) represents a coordinate mapping from the imaging plane to the projection plane, it is a forward mapping method; the grey-level interpolation needed to obtain the grey value at each projection-plane point is considerably complicated in implementation, which inevitably increases the CPU computation time and makes the degradation of image detail more obvious.
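To make the contrast concrete, the following sketch (not taken from the patent; the rotation, image size and parameter names are purely illustrative) warps an image once by forward mapping and once by backward mapping. The forward version scatters source pixels onto rounded target positions and leaves output pixels that are never written, which is why scattered-data interpolation is needed; the backward version visits every output pixel exactly once and only needs a regular interpolation kernel at the pulled-back source coordinate.

```python
# Illustrative only: forward (scatter) vs. backward (gather) mapping for a 2-D rotation.
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

src = np.random.rand(100, 100).astype(np.float32)   # stand-in for a captured image
h, w = src.shape
center = np.array([h / 2.0, w / 2.0])
R = rotation(np.deg2rad(20.0))

# Forward mapping: push each source pixel to its rounded destination.
dst_fwd = np.zeros_like(src)
for y in range(h):
    for x in range(w):
        ty, tx = (R @ (np.array([y, x]) - center) + center).round().astype(int)
        if 0 <= ty < h and 0 <= tx < w:
            dst_fwd[ty, tx] = src[y, x]

# Backward mapping: for each output pixel, pull the source coordinate through the
# inverse transform and interpolate there (nearest neighbour shown; bilinear or
# higher-order kernels drop in without changing the loop structure).
dst_bwd = np.zeros_like(src)
R_inv = R.T
for y in range(h):
    for x in range(w):
        sy, sx = R_inv @ (np.array([y, x]) - center) + center
        if 0 <= sy < h - 1 and 0 <= sx < w - 1:
            dst_bwd[y, x] = src[int(round(sy)), int(round(sx))]

# Output pixels never written by the forward pass (holes plus out-of-range corners).
print(int((dst_fwd == 0).sum()), "output pixels left unwritten by forward mapping")
```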
Summary of the invention
To solve the problems of the existing complete imaging mapping method, namely its complexity, the added CPU computation time and the more pronounced degradation of image detail, the present invention proposes a complete imaging mapping method based on coordinate transform that generates the output image pixel by pixel, produces no wasted computation and readily accommodates high-precision interpolation algorithms.
The present invention is achieved by the following technical solutions:
In the above complete imaging mapping method based on coordinate transform, the method includes camera pose centering; the camera pose centering is performed with the camera mounted above the measurement plane by means of a two-degree-of-freedom precision adjusting device, and it comprises the following steps:
1) The camera shoots several images of the target plane in different poses; from the correspondences between the feature points on the target and their image points, the plane calibration method is applied and an optimization search accurately obtains the camera's internal parameters (an illustrative calibration sketch follows this list);
2) The target is laid flat in the measurement plane, and the world coordinate system {W1} is set with the target plane as its XY plane;
3) The camera again images the target plane; from the correspondences between the target feature points and their image points, and using the calibrated camera intrinsics, the homogeneous transformation matrix describing the pose of the world coordinate system {W1} relative to the camera coordinate system {C} is obtained from the pinhole-imaging formula; an Euler-angle transformation of this matrix about the fixed axes x-y-z of {C} yields the RPY angles;
4) When the measured angles by which the coordinate system {C} rotates successively about its own Y and Z axes are both below a set threshold ε, the vertical centering of the camera is complete; otherwise the pitch and deflection angles of the two-degree-of-freedom precision adjusting device are adjusted appropriately and step 3) is repeated to image and measure the target again.
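A minimal sketch of step 1), assuming a chessboard-style 2D target and OpenCV's planar (Zhang-type) calibration as one possible implementation; the board geometry, square size and file pattern are assumptions, not taken from the patent.

```python
# Illustrative sketch of step 1): planar calibration of the camera intrinsics.
import glob
import cv2
import numpy as np

board_cols, board_rows = 9, 6     # inner corners of the assumed 2D target
square_size = 5.0                 # mm, assumed

# 3-D coordinates of the target corners in the target plane (Z = 0).
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("target_pose_*.png"):   # target imaged in different poses
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Optimisation over all views yields the intrinsic matrix K and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsics K:\n", K)
```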
In the complete imaging mapping method based on coordinate transform, the mapping method further includes establishing a backward mapping mathematical model of the undistorted image focal plane;
The backward mapping mathematical model of the undistorted image focal plane is established on the basis of the camera pose centering: with the camera optical axis perpendicular to the measurement plane, the actual imaging mathematical model between a point P(X_w, Y_w, 0)^T in the measurement plane and its image coordinates (u, v)^T is established first, then the imaging mathematical model of the same point P in a virtual camera is established, and finally the backward mapping mathematical model of the undistorted image focal plane is established.
In the complete imaging mapping method based on coordinate transform, the imaging mathematical model in the virtual camera is established on the premise that the virtual camera coordinate system {C'} and the world coordinate system {W1} have the same orientation; the relation its mathematical model satisfies is:
In the above formula (4), f, Sx, Sy are camera intrinsics; Z_C' is the projection of the optical centre on the Z axis of the virtual camera coordinate system {C'}, determined by the image resolution δ set on the projection plane; u0', v0' are the coordinates of the optical centre in the mapped image.
In the complete imaging mapping method based on coordinate transform, the relation satisfied by the backward mapping of the undistorted image focal plane is:
In the above formula (5), ξ, f, Sx, Sy, u0, v0 are camera intrinsics; α and Z_C are camera extrinsics, where Z_C = ^C p_z - h, h is the target thickness and α is the angle α obtained by the Euler-angle transformation during centering; u0', v0' are the coordinates of the optical centre in the mapped image; Z_C' is the projection of the optical centre on the Z axis of the virtual camera coordinate system {C'}, determined by the image resolution δ set on the projection plane.
In the complete imaging mapping method based on coordinate transform, the image resolution δ is the size represented by a single pixel along the row or column direction; it satisfies the relation
δ = Hei/Row = Wid/Col = Z_C'/f, i.e. Z_C' = f·δ (6);
In the above formula (6), Hei and Wid are the length and width of the mapping area on the projection plane; Row and Col are the numbers of rows and columns of the mapped image; f is the calibrated focal length of the camera; Z_C' is the projection of the optical centre on the Z axis of the virtual camera coordinate system {C'}.
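A small numeric sketch of relation (6), assuming the relation reads δ = Hei/Row = Wid/Col = Z_C'/f as reconstructed above; every number below is made up for illustration.

```python
# Illustrative values only.
Hei, Wid = 200.0, 150.0   # mm, length/width of the mapping area on the projection plane (assumed)
Row, Col = 2000, 1500     # rows/columns of the mapping image (assumed)
f = 16.0                  # mm, calibrated focal length (assumed)

delta = Hei / Row                       # size represented by one pixel along the row direction
assert abs(delta - Wid / Col) < 1e-9    # square pixels: same resolution along the column direction
Z_C_prime = f * delta                   # Z_C' as given by relation (6)
print(delta, Z_C_prime)                 # 0.1 and 1.6 with these assumed values
```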
Beneficial effects:
The complete imaging mapping method based on coordinate transform of the present invention is simple and reasonable in conception. In the camera pose centering, the camera performs visual pose detection of a 2D calibration template on the detection platform to obtain the inclination angle between the camera optical axis and the platform; on that basis the pitch and deflection angles of the two-degree-of-freedom precision adjusting device are adjusted, finally ensuring that the camera optical axis is perpendicular to the detection platform.
Meanwhile using the coordinate position mapping relations from projection plane to imaging plane, establish the undistorted focal plane of image Mapping mathematical model backward, this reflection method backward will not produce calculate waste problem and conveniently using high-precision interpolation algorithm come Realize.
Brief description of the drawings
Fig. 1 is a schematic diagram of the backward mapping mathematical model of the undistorted image focal plane in the complete imaging mapping method based on coordinate transform of the present invention.
Embodiment
As shown in Fig. 1, the complete imaging mapping method based on coordinate transform of the present invention includes:
First, camera pose centering
1) The camera shoots several images of the target plane in different poses; from the correspondences between the feature points on the target and their image points, the plane calibration method is applied and an optimization search accurately obtains the camera's internal parameters.
2) The target is laid flat in the measurement plane, and the world coordinate system {W1} is set with the target plane as its XY plane.
3) The camera again images the target plane; from the correspondences between the target feature points and their image points, and using the calibrated camera intrinsics, the homogeneous transformation matrix describing the pose of the world coordinate system {W1} relative to the camera coordinate system {C} is obtained from the pinhole-imaging formula (1); an Euler-angle transformation of this matrix about the fixed axes x-y-z of {C} according to formula (2) yields the RPY angles.
In the above formula (2), (^C p_x, ^C p_y, ^C p_z) are the coordinates of the origin of the coordinate system {W1} in the coordinate system {C}; α, β, γ are the respective angles by which the coordinate system {C} rotates successively about its own X, Y and Z axes, such that after rotating by α, β and γ respectively, the X, Y and Z axes of {C} have the same orientation as those of {W1}.
4) When the measured angles β and γ are both below the set threshold ε, the vertical centering of the camera is complete, as sketched below; otherwise the pitch and deflection angles of the two-degree-of-freedom precision adjusting device are adjusted appropriately and step 3) is repeated to image and measure the target again.
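A sketch of the pose check in steps 3)-4), assuming OpenCV's solvePnP for the pose of {W1} in {C} and the fixed-axis x-y-z decomposition R = Rz(γ)·Ry(β)·Rx(α); the angle convention, the threshold value and the variable names K, dist, objp, corners (reused from the calibration sketch above) are assumptions rather than the patent's exact formulation.

```python
# Illustrative sketch of steps 3)-4): pose of {W1} in {C}, RPY angles, tilt test.
import cv2
import numpy as np

def rpy_from_rotation(R):
    """Angles of successive rotations about the fixed x, y, z axes, assuming R = Rz(g)Ry(b)Rx(a)."""
    beta = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
    alpha = np.arctan2(R[2, 1], R[2, 2])
    gamma = np.arctan2(R[1, 0], R[0, 0])
    return alpha, beta, gamma

def centering_angles(obj_points, img_points, K, dist):
    """Return (alpha, beta, gamma) in degrees for one image of the target in the measurement plane."""
    ok, rvec, tvec = cv2.solvePnP(obj_points, img_points, K, dist)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)          # rotation of {W1} expressed in {C}
    return np.degrees(rpy_from_rotation(R))

# epsilon = 0.05  # degrees, assumed threshold
# alpha, beta, gamma = centering_angles(objp, corners, K, dist)
# if max(abs(beta), abs(gamma)) < epsilon:
#     print("optical axis vertical to within epsilon: centering complete")
# else:
#     print("adjust the pitch/deflection of the two-degree-of-freedom stage and repeat step 3)")
```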
Second, establishing the backward mapping mathematical model of the undistorted image focal plane
As shown in Fig. 1, the backward mapping mathematical model of the undistorted image focal plane is established on the basis of the above camera pose centering. Specifically, with the camera optical axis kept perpendicular to the measurement plane throughout, the actual imaging mathematical model can be expressed as:
In the above formula (3), f, Sx, Sy, u0, v0 are camera intrinsics; Z_C and α are camera extrinsics, where Z_C = ^C p_z - h, h is the target thickness and α is the angle α obtained by the Euler-angle transformation during centering;
Similarly, when the same point P(X_w, Y_w, 0)^T in the measurement plane is imaged in the virtual camera (whose coordinate system {C'} has the same orientation as the world coordinate system {W}), the point and its image point (u', v')^T satisfy the relation:
In the above formula (4), f, Sx, Sy are camera intrinsics; Z_C' is the projection of the optical centre on the Z axis of the virtual camera coordinate system {C'}, determined by the image resolution δ set on the projection plane; u0', v0' are the coordinates of the optical centre in the mapped image;
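A quick numeric check of formula (4) with made-up intrinsics, confirming that it reduces to u' = (f/Sx)·X_W/Z_C' + u0' and v' = (f/Sy)·Y_W/Z_C' + v0' and that the homogeneous scale is exactly 1; all values are assumptions for illustration.

```python
# Illustrative check of the virtual-camera model (4).
import numpy as np

f, Sx, Sy = 16.0, 0.005, 0.005            # assumed intrinsics
u0p, v0p, Zc_prime = 1000.0, 750.0, 1.6   # assumed mapping-image centre and Z_C'

M = np.array([[f / Sx, 0.0,    u0p, u0p * Zc_prime],
              [0.0,    f / Sy, v0p, v0p * Zc_prime],
              [0.0,    0.0,    1.0, Zc_prime]])
P = np.array([10.0, -5.0, 0.0, 1.0])      # point P(X_W, Y_W, 0) in the measurement plane
u_p, v_p, scale = (M @ P) / Zc_prime
assert abs(scale - 1.0) < 1e-12           # third component is exactly 1 in (4)
print(u_p, v_p)                           # (f/Sx)*X_W/Z_C' + u0', (f/Sy)*Y_W/Z_C' + v0'
```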
The backward mapping mathematical model of the undistorted image focal plane can then be simplified to:
In the above formula (5), ξ, f, Sx, Sy, u0, v0 are camera intrinsics; α and Z_C are camera extrinsics, where Z_C = ^C p_z - h, h is the target thickness and α is the angle α obtained by the Euler-angle transformation during centering; u0', v0' are the coordinates of the optical centre in the mapped image; Z_C' is the projection of the optical centre on the Z axis of the virtual camera coordinate system {C'}, determined by the image resolution δ set on the projection plane (the size represented by a single pixel along the row or column direction);
where δ = Hei/Row = Wid/Col = Z_C'/f, i.e. Z_C' = f·δ (6);
In the above formula (6), Hei and Wid are the length and width of the mapping area on the projection plane; Row and Col are the numbers of rows and columns of the mapped image; f is the calibrated focal length of the camera; Z_C' is the projection of the optical centre on the Z axis of the virtual camera coordinate system {C'}.
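A sketch of how the backward mapping (5) can be applied pixel by pixel: the two 3×3 matrices of (5) collapse into a single matrix H, every pixel (u', v') of the undistorted projection-plane image is mapped back to (u, v) in the captured image, and the grey value is gathered there by bilinear interpolation. cv2.remap is used here as one convenient implementation; the function and parameter names other than the symbols of (5) are assumptions.

```python
# Sketch of the backward mapping (5), applied pixel by pixel to build the output image.
import cv2
import numpy as np

def backward_map(captured, Row, Col, f, Sx, Sy, u0, v0, u0p, v0p, xi, alpha, Zc, Zcp):
    ca, sa = np.cos(alpha), np.sin(alpha)
    A = np.array([[xi * f / Sx * ca, -xi * f / Sx * sa, u0 * Zc],
                  [xi * f / Sy * sa,  xi * f / Sy * ca, v0 * Zc],
                  [0.0,               0.0,              Zc]])
    B = np.array([[f / Sx, 0.0,    u0p * Zcp],
                  [0.0,    f / Sy, v0p * Zcp],
                  [0.0,    0.0,    Zcp]])
    H = (Zcp / Zc) * A @ np.linalg.inv(B)   # (u, v, 1)^T = H . (u', v', 1)^T per (5)

    # Regular grid of output pixels (u', v'), mapped back into the captured image.
    up, vp = np.meshgrid(np.arange(Col, dtype=np.float32),
                         np.arange(Row, dtype=np.float32))
    uv = np.einsum('ij,jrc->irc', H, np.stack([up, vp, np.ones_like(up)]))
    map_u = uv[0].astype(np.float32)        # third row of H is (0, 0, 1), so no division needed
    map_v = uv[1].astype(np.float32)

    # Pixel-by-pixel gather with bilinear interpolation; higher-order kernels
    # (e.g. cv2.INTER_CUBIC) drop in without changing the mapping itself.
    return cv2.remap(captured, map_u, map_v, cv2.INTER_LINEAR)
```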
The complete imaging mapping method based on coordinate transform of the present invention is simple and reasonable in conception. In the camera pose centering, the camera performs visual pose detection of a 2D calibration template on the detection platform to obtain the inclination angle between the camera optical axis and the platform; on that basis the pitch and deflection angles of the two-degree-of-freedom precision adjusting device are adjusted, finally ensuring that the camera optical axis is perpendicular to the detection platform.
Meanwhile using the coordinate position mapping relations from projection plane to imaging plane, establish the undistorted focal plane of image Mapping mathematical model backward, this reflection method backward will not produce calculate waste problem and conveniently using high-precision interpolation algorithm come Realize.

Claims (5)

1. A complete imaging mapping method based on coordinate transform, characterized in that it includes camera pose centering; the camera pose centering is performed with the camera mounted above the measurement plane by means of a two-degree-of-freedom precision adjusting device, and it comprises the following steps:
1) the camera shoots several images of the target plane in different poses; from the correspondences between the feature points on the target and their image points, the plane calibration method is applied and an optimization search accurately obtains the camera's internal parameters;
2) the target is laid flat in the measurement plane, and the world coordinate system {W1} is set with the target plane as its XY plane;
3) the camera again images the target plane; from the correspondences between the target feature points and their image points, and using the calibrated camera intrinsics, the homogeneous transformation matrix describing the pose of the world coordinate system {W1} relative to the camera coordinate system {C} is obtained from the pinhole-imaging formula; an Euler-angle transformation of this matrix about the fixed axes x-y-z of {C} yields the RPY angles;
4) when the measured angles by which the coordinate system {C} rotates successively about its own Y and Z axes are both below a set threshold ε, the vertical centering of the camera is complete; otherwise the pitch and deflection angles of the two-degree-of-freedom precision adjusting device are adjusted appropriately and step 3) is repeated to image and measure the target again.
2. The complete imaging mapping method based on coordinate transform as claimed in claim 1, characterized in that the mapping method further includes establishing a backward mapping mathematical model of the undistorted image focal plane;
the backward mapping mathematical model of the undistorted image focal plane is established on the basis of the camera pose centering, i.e. the actual imaging mathematical model between a point P(X_w, Y_w, 0)^T in the measurement plane and its image coordinates (u, v)^T is first established with the camera optical axis perpendicular to the measurement plane, then the imaging mathematical model of the same point P in the virtual camera is established, and finally the backward mapping mathematical model of the undistorted image focal plane is established.
3. The complete imaging mapping method based on coordinate transform as claimed in claim 2, characterized in that the imaging mathematical model in the virtual camera is established on the premise that the virtual camera coordinate system {C'} and the world coordinate system {W1} have the same orientation, and the relation its mathematical model satisfies is:
\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}
= \frac{1}{Z_{C'}} \cdot
\begin{bmatrix} \frac{f}{S_x} & 0 & u_0' & 0 \\ 0 & \frac{f}{S_y} & v_0' & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \cdot
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & Z_{C'} \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot
\begin{bmatrix} X_W \\ Y_W \\ 0 \\ 1 \end{bmatrix}
= \frac{1}{Z_{C'}} \cdot
\begin{bmatrix} \frac{f}{S_x} & 0 & u_0' & u_0' \cdot Z_{C'} \\ 0 & \frac{f}{S_y} & v_0' & v_0' \cdot Z_{C'} \\ 0 & 0 & 1 & Z_{C'} \end{bmatrix} \cdot
\begin{bmatrix} X_W \\ Y_W \\ 0 \\ 1 \end{bmatrix}
\qquad (4);
In the above formula (4), f, Sx, Sy are camera intrinsics; Z_C' is the projection of the optical centre on the Z axis of the virtual camera coordinate system {C'}, determined by the image resolution δ set on the projection plane; u0', v0' are the coordinates of the optical centre in the mapped image.
4. The complete imaging mapping method based on coordinate transform as claimed in claim 2, characterized in that the relation satisfied by the backward mapping of the undistorted image focal plane is:
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \frac{Z_{C'}}{Z_C} \cdot
\begin{bmatrix}
\frac{\xi \cdot f}{S_x} \cos\alpha & -\frac{\xi \cdot f}{S_x} \sin\alpha & u_0 \cdot Z_C \\
\frac{\xi \cdot f}{S_y} \sin\alpha & \frac{\xi \cdot f}{S_y} \cos\alpha & v_0 \cdot Z_C \\
0 & 0 & Z_C
\end{bmatrix} \cdot
\begin{bmatrix}
\frac{f}{S_x} & 0 & u_0' \cdot Z_{C'} \\
0 & \frac{f}{S_y} & v_0' \cdot Z_{C'} \\
0 & 0 & Z_{C'}
\end{bmatrix}^{-1} \cdot
\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}
\qquad (5);
In the above formula (5), ξ, f, Sx, Sy, u0, v0 are camera intrinsics; α and Z_C are camera extrinsics, where Z_C = ^C p_z - h, h is the target thickness, ^C p_z is the projection on the Z axis of the camera coordinate system {C} of an arbitrary point in the target plane, and α is the angle α obtained by the Euler-angle transformation during centering; (u', v') are the image coordinates in the virtual image of the point to which the actual image coordinates (u, v) are mapped; u0', v0' are the coordinates of the optical centre in the mapped image; Z_C' is the projection of the optical centre on the Z axis of the virtual camera coordinate system {C'}, determined by the image resolution δ set on the projection plane.
5. The complete imaging mapping method based on coordinate transform as claimed in claim 3 or 4, characterized in that the image resolution δ is the size represented by a single pixel along the row or column direction and satisfies the relation δ = Hei/Row = Wid/Col = Z_C'/f, i.e. Z_C' = f·δ (6);
In the above formula (6), Hei and Wid are the length and width of the mapping area on the projection plane; Row and Col are the numbers of rows and columns of the mapped image; f is the calibrated focal length of the camera; Z_C' is the projection of the optical centre on the Z axis of the virtual camera coordinate system {C'}.
CN201410275210.1A 2014-06-19 2014-06-19 Complete imaging mapping method based on coordinate transform Expired - Fee Related CN104048601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410275210.1A CN104048601B (en) 2014-06-19 2014-06-19 Complete imaging mapping method based on coordinate transform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410275210.1A CN104048601B (en) 2014-06-19 2014-06-19 Complete imaging mapping method based on coordinate transform

Publications (2)

Publication Number Publication Date
CN104048601A CN104048601A (en) 2014-09-17
CN104048601B true CN104048601B (en) 2018-01-23

Family

ID=51501766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410275210.1A Expired - Fee Related CN104048601B (en) 2014-06-19 2014-06-19 Complete imaging mapping method based on coordinate transform

Country Status (1)

Country Link
CN (1) CN104048601B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106403828B (en) * 2016-08-30 2020-03-20 成都唐源电气股份有限公司 Single-track contact line residual height measuring method and system based on checkerboard calibration
CN106648109A (en) * 2016-12-30 2017-05-10 南京大学 Real scene real-time virtual wandering system based on three-perspective transformation
CN107167116B (en) * 2017-03-13 2020-05-01 湖北汽车工业学院 Visual detection method for spatial arc pose
CN111922510B (en) * 2020-09-24 2021-10-01 武汉华工激光工程有限责任公司 Laser visual processing method and system
CN112102419B (en) * 2020-09-24 2024-01-26 烟台艾睿光电科技有限公司 Dual-light imaging equipment calibration method and system and image registration method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334267A (en) * 2008-07-25 2008-12-31 西安交通大学 Digital image feeler vector coordinate transform calibration and error correction method and its device
CN102788559A (en) * 2012-07-19 2012-11-21 北京航空航天大学 Optical vision measuring system with wide-field structure and measuring method thereof
CN103702607A (en) * 2011-07-08 2014-04-02 修复型机器人公司 Calibration and transformation of a camera system's coordinate system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0566110A (en) * 1991-09-06 1993-03-19 Honda Motor Co Ltd Calibration method for optical type measuring device
JP5013047B2 (en) * 2006-03-07 2012-08-29 日立造船株式会社 Correction method for displacement measurement using captured images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334267A (en) * 2008-07-25 2008-12-31 西安交通大学 Digital image feeler vector coordinate transform calibration and error correction method and its device
CN103702607A (en) * 2011-07-08 2014-04-02 修复型机器人公司 Calibration and transformation of a camera system's coordinate system
CN102788559A (en) * 2012-07-19 2012-11-21 北京航空航天大学 Optical vision measuring system with wide-field structure and measuring method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Lingyun et al. Research on precision dimension detection algorithm based on image mosaic. Manufacturing Technology & Machine Tool, 2012(11), pp. 106-110 *
He Boxia et al. Machine vision high-precision measurement method for large-size machine parts. China Mechanical Engineering, Jan. 2009, Vol. 20(1), pp. 5-10 *

Also Published As

Publication number Publication date
CN104048601A (en) 2014-09-17

Similar Documents

Publication Publication Date Title
CN104048601B (en) Complete imaging mapping method based on coordinate transform
CN104182982B (en) Overall optimizing method of calibration parameter of binocular stereo vision camera
CN104050650B (en) Integrally-imaging image splicing method based on coordinate transformation
CN103971353B (en) Splicing method for measuring image data with large forgings assisted by lasers
WO2019219013A1 (en) Three-dimensional reconstruction method and system for joint optimization of human body posture model and appearance model
CN103744086B (en) A kind of high registration accuracy method of ground laser radar and close-range photogrammetry data
CN104154875A (en) Three-dimensional data acquisition system and acquisition method based on two-axis rotation platform
CN106504321A (en) Method using the method for photo or video reconstruction three-dimensional tooth mould and using RGBD image reconstructions three-dimensional tooth mould
CN104034305B (en) A kind of monocular vision is the method for location in real time
CN102679959B (en) Omnibearing 3D (Three-Dimensional) modeling system based on initiative omnidirectional vision sensor
CN105698699A (en) A binocular visual sense measurement method based on time rotating shaft constraint
CN107507246A (en) A kind of camera marking method based on improvement distortion model
CN106780573B (en) A kind of method and system of panorama sketch characteristic matching precision optimizing
CN105931222A (en) High-precision camera calibration method via low-precision 2D planar target
CN111091076B (en) Tunnel limit data measuring method based on stereoscopic vision
CN111091599B (en) Multi-camera-projector system calibration method based on sphere calibration object
CN106500625A (en) A kind of telecentricity stereo vision measuring apparatus and its method for being applied to the measurement of object dimensional pattern micron accuracies
WO2019056782A1 (en) Sphere projection common tangent line-based multi-camera calibration and parameter optimization method
CN104807405B (en) Three-dimensional coordinate measurement method based on light ray angle calibration
CN113793270A (en) Aerial image geometric correction method based on unmanned aerial vehicle attitude information
CN104200476B (en) The method that camera intrinsic parameter is solved using the circular motion in bimirror device
CN107909543A (en) A kind of flake binocular vision Stereo matching space-location method
CN104123725B (en) A kind of computational methods of single line array camera homography matrix H
CN112270698A (en) Non-rigid geometric registration method based on nearest curved surface
CN116740288B (en) Three-dimensional reconstruction method integrating laser radar and oblique photography

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180123

Termination date: 20210619