CN1888814A - Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor

Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor

Info

Publication number
CN1888814A
CN1888814A, CNA2006100149045A, CN200610014904A
Authority
CN
China
Prior art keywords: image, center line, sensor, viewpoint, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2006100149045A
Other languages
Chinese (zh)
Other versions
CN100388319C (en)
Inventor
彭翔 (Peng Xiang)
丁雅斌 (Ding Yabin)
田劲东 (Tian Jindong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Esun Display Co., Ltd.
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CNB2006100149045A
Publication of CN1888814A
Application granted
Publication of CN100388319C
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

A multi-viewpoint pose estimation and self-calibration method for a three-dimensional active vision sensor, belonging to the field of 3-D digital imaging and modeling. The sensor consists of a digital projector and a camera. A texture image of the object is captured; two mutually orthogonal sets of fringe patterns are projected onto the object in sequence and the corresponding coded fringe images are captured; the two-dimensional coordinates of feature points are computed from the texture image, and their phase values from the coded fringe images. A phase-to-coordinate transform algorithm then establishes, for each feature point of the object, the correspondence between the projection plane of the digital projector and the imaging plane of the camera. The viewpoint is changed and the above steps are repeated; using the epipolar geometry constraint, an optimization equation is set up to estimate the pose of each viewpoint automatically and to self-calibrate the three-dimensional active vision sensor. The method is accurate, automatic, and particularly suited to on-site multi-viewpoint pose estimation and self-calibration of the vision sensor.

Description

Multi-viewpoint attitude (pose) estimation and self-calibration method for a three-dimensional active vision sensor
Technical field
The present invention relates to a multi-viewpoint pose estimation and self-calibration method for a three-dimensional active vision sensor, and belongs to the field of 3-D digital imaging and modeling.
Background art
Three-dimensional digital imaging and modeling (3DIM, 3-D Digital Imaging and Modeling) has been an active emerging interdisciplinary research field worldwide in recent years. It is widely applied in reverse engineering, cultural relic preservation, medical diagnosis, industrial inspection, virtual reality, and many other areas. As one of the main means of acquiring three-dimensional information, three-dimensional active vision sensors based on phase mapping offer high speed, high resolution, non-contact operation and full-field data acquisition, and have therefore received extensive attention and study. Solving for the motion of the sensor, that is, estimating its pose at different viewpoints, is an important task. Because the field of view of a depth sensor is limited, and is further restricted by the viewing direction and by the shape of the object itself, the complete information describing the object's shape cannot be obtained in a single acquisition. In practice the object is placed in the working volume in front of the sensor, a depth image describing part of the object's shape is acquired from one viewpoint, and the sensor is then moved to a new viewpoint to acquire depth information for other parts of the object. This process is repeated until the complete depth information of the object has been obtained. Accurately providing the pose of each viewpoint is therefore the key to registering and merging the depth images acquired from many viewpoints in a unified coordinate system.
In addition, before a three-dimensional vision sensor can perform three-dimensional measurement, it must be calibrated accurately to obtain the internal parameters and the structural parameters of the sensor. Traditional calibration methods have a low degree of automation: they require a two-dimensional calibration target with precisely known coordinates and a high-precision translation stage that moves the target along its normal direction to complete a three-dimensional calibration. Hu et al. (Optical Engineering, Vol. 42, No. 2, 2003, pp. 487-493) and Shao Shuanyun et al. (Acta Optica Sinica, Vol. 25, No. 2, 2005, pp. 203-206) both adopted this approach. With such methods, if the structural parameters of the measurement system change, for example the working distance, the whole sensor must be recalibrated, so convenient, mobile measurement cannot be realized. For applications that require the sensor position to be adjusted regularly for multi-viewpoint measurement, the frequent recalibration makes such sensors almost impractical.
We therefore want a three-dimensional vision sensor that can estimate its pose and calibrate itself automatically and simultaneously, without relying on standard calibration equipment to obtain external three-dimensional data. With such a capability, no matter which viewpoint the vision sensor is moved to, it can automatically recover the pose of that viewpoint and complete its own calibration, so the three-dimensional measurement can continue without interruption.
In recent years the self-calibration of vision sensors has received considerable attention, but most of the feasible methods were developed for calibrating passive vision sensors (e.g., stereo vision) and for motion-based depth perception. For three-dimensional active vision sensors based on phase mapping, the calibration methods are mostly static and manual, and existing camera self-calibration methods, such as that of Zhang et al. (IEEE Transactions on Robotics and Automation, Vol. 12, No. 1, 1996, pp. 103-113), cannot be applied directly.
The following five technical documents may be consulted for comparison:
[1] Invention patent: Application No. 02156599.6
[2] Invention patent: Application No. 200510061174.x
[3] Hu Q., Huang P. S., Fu Q., et al. Calibration of a three-dimensional shape measurement system. Opt. Eng., 2003, 42(2): 487-493
[4] Shao Shuanyun, Su Xianyu, Wang Hua, et al. System calibration of modulation measuring profilometry. Acta Optica Sinica, 2005, 25(2): 203-206
[5] Zhang Z., Luong Q., Faugeras O. Motion of an uncalibrated stereo rig: self-calibration and metric reconstruction. IEEE Transactions on Robotics and Automation, 1996, 12(1): 103-113
Summary of the invention
The object of the present invention is to provide a multi-viewpoint pose estimation and self-calibration method for a three-dimensional active vision sensor. The method improves the accuracy of multi-viewpoint pose estimation while simultaneously self-calibrating the sensor parameters, thereby improving the practicality of three-dimensional active vision sensors.
The present invention is realized by the following technical scheme: a multi-viewpoint pose estimation and self-calibration method for a three-dimensional active vision sensor, the vision sensor consisting of a digital projector and a camera whose relative positions are fixed, the object to be measured being placed within the measurement range; the method is characterized by the following steps:
1. With the sensor at viewpoint V1, carry out the following projection and acquisition process in order:
1) Capture a texture image of the measured object with the camera and store it in the computer;
2) Generate a set of vertical fringe patterns with the computer and project them onto the measured object with the digital projector; the depth variation of the object deforms the projected fringes into a coded fringe image. Capture the vertical coded fringe image with the camera, store it in the computer, and then decode it with automatic fringe analysis to obtain the relative phase image of the object;
3) Generate with the computer an image containing a single vertical line at the horizontal center of the image, project it onto the surface of the measured object with the digital projector, capture it with the camera and store it in the computer; with the aid of this auxiliary vertical center line, convert the relative phase image into an absolute phase image;
4) Rotate the vertical fringe pattern by 90 degrees to obtain a horizontal fringe pattern, project it onto the measured object with the digital projector, capture the horizontal coded fringe image with the camera and store it in the computer;
5) Likewise, rotate the vertical center line image by 90 degrees to obtain a horizontal center line image, project it onto the object surface with the digital projector, capture it with the camera and store it in the computer.
2. Process the texture image of the object, the vertical coded fringe image, the vertical center line image, the horizontal coded fringe image and the horizontal center line image obtained in step 1 as follows:
1) Perform feature analysis on the texture image: extract the two-dimensional image coordinates of the object's feature points, i.e., the coordinates (u, v) of each feature point on the camera imaging plane;
2) Perform automatic fringe analysis on the vertical coded fringe image: demodulate the wrapped phase (phase within its principal-value range) from the deformed fringe pattern with the FFT method, then apply a phase-unwrapping algorithm to obtain the unwrapped phase map $\Phi_u$;
3) Perform feature analysis on the vertical center line image: after binarizing the image, extract the two-dimensional image coordinates occupied by the center line;
4) Use the phase-to-coordinate transform algorithm to obtain the horizontal coordinate of each feature point on the projector's projection plane. The algorithm is: first combine the unwrapped phase map $\Phi_u$ of the vertical fringes with the vertical center line and convert it into the absolute phase map $\Phi_a$, where $\Phi_a(x,y) = \Phi_u(x,y) - \Phi_u^k(i,j)$ and $\Phi_u^k$ is the phase value at the pixel position (i, j) of the center line; then obtain the horizontal coordinate u′ of the position corresponding to any feature point M on the projection plane, $u' = \frac{\Phi_a^M(u,v)}{2\pi}\,p$, where $\Phi_a^M$ is the absolute phase of feature point M and p is the spatial period of the projected fringes;
5) Similarly, repeat sub-steps 2), 3) and 4), processing the horizontal coded fringe image and the horizontal center line image to obtain the vertical coordinate v′ of the position corresponding to feature point M on the projection plane, $v' = \frac{\Phi_a^M(u,v)}{2\pi}\,p$, where here $\Phi_a^M(u,v)$ is obtained from the horizontal coded fringe image and horizontal center line image;
6) At this point the corresponding coordinates (u, v) and (u′, v′) of any feature point M on the imaging plane and the projection plane have been obtained, completing the data preparation needed to apply the epipolar geometry constraint;
3. Within the sensor's measurement range, and on the premise that there is a region overlapping the images acquired at the previous viewpoint, move the sensor freely to the new viewpoint V2, repeat the projection and acquisition process of step 1 and the processing of step 2 on the object, and obtain at V2 the corresponding coordinates (u, v) and (u′, v′) of any feature point M on the imaging plane and the projection plane;
4. Apply the epipolar geometry constraint, i.e., the epipolar equation $\tilde{m}_2^T F \tilde{m}_1 = 0$, where $\tilde{m}_1$ and $\tilde{m}_2$ are the homogeneous coordinates of corresponding points on the imaging plane and the projection plane and F is the fundamental matrix, $F = K_p^{-T}[t]_\times R K_c^{-1}$. Taking the influence of data noise into account, set up the optimization equation

$$\min_{K_c,\,K_p,\,R^*,\,t^*,\,R,\,t}\ \sum_{j=1}^{p}\left(\tilde{m}_{2j}^T F_{12}\,\tilde{m}_{1j}\right)^2,$$

where p is the number of feature points and $F_{12}$ is the fundamental matrix between plane 1 and plane 2. Solving it yields the rotation matrix R and the translation vector t of this viewpoint with respect to the object coordinate system, the internal parameters ($K_c$, $K_p$) of the sensor, and the structural parameters ($R^*$, $t^*$) of the sensor. Solving for $K_c$, $K_p$, $R^*$, $t^*$ constitutes the self-calibration of the sensor parameters; solving for R, t is called the viewpoint pose estimation;
5. Store the poses of the viewpoints for use in the subsequent matching and merging of the multi-viewpoint depth images.
The advantages of the present invention are as follows. The present invention is the first to apply the epipolar geometry constraint to multi-viewpoint pose estimation with simultaneous automatic calibration of the parameters of a three-dimensional active vision sensor based on phase mapping. A "phase-to-coordinate" transform method is proposed which uses projected orthogonal fringe patterns to find the corresponding points on the imaging plane and the projection plane quickly. On this basis the epipolar constraint is applied to the active three-dimensional vision sensor and a mathematical optimization model is established, which solves for the sensor's motion (pose) at multiple viewpoints and for the structural parameters of the three-dimensional sensor. The method needs no auxiliary calibration equipment and improves calibration efficiency; it is particularly suited to on-site multi-viewpoint pose estimation and self-calibration of three-dimensional vision sensors. It is also significant for applications such as the automated inspection of complex objects and robot localization.
Description of drawings
Fig. 1 is a schematic layout of a three-dimensional active vision sensor implementing the method of the invention. In the figure, 101 is the digital projector, 103 is the camera, 102 is the exit pupil P of the projection lens of the digital projector 101, 104 is the entrance pupil E of the imaging lens of the camera 103, adjusting rod 105 is used to adjust the height and angle of the camera 103, and 106 is the computer.
Fig. 2 illustrates the epipolar geometry constraint in stereo vision.
Fig. 3 shows the vertical fringe pattern.
Fig. 4 shows the horizontal fringe pattern.
Fig. 5 shows the vertical center line image.
Fig. 6 shows the horizontal center line image.
Fig. 7 shows an example texture image.
Fig. 8 is a schematic diagram of measuring an object from two viewpoints. In the figure, V1 and V2 denote the two viewpoints. At the first viewpoint V1, the imaging plane of the camera and the projection plane of the projector are I1 and I2 respectively; at the second viewpoint V2 the corresponding planes are I3 and I4. C1 and C3 denote the entrance pupil of the camera, C2 and C4 the exit pupil of the projector, and $m_i$ (i = 1, ..., 4) the corresponding points of the space point M on the four planes. $(R^*, t^*)$ denotes the sensor structural parameters between the camera and the projector; $(R_l, t_l)$ denotes the change of position of the camera when the depth sensor moves from viewpoint V1 to viewpoint V2, and $(R_r, t_r)$ the change of position of the projector for the same move.
Fig. 9 shows the point-cloud image (depth image) acquired at viewpoint V1.
Fig. 10 shows the point-cloud image (depth image) acquired at viewpoint V2.
Fig. 11 shows the merged mesh image after transformation into the same coordinate system.
Detailed description of the embodiments
The method of the invention is described in further detail below. The epipolar constraint is one of the fundamental principles of stereo vision, but it cannot be applied directly to an active vision sensor. The present invention is the first to apply the epipolar geometry constraint to multi-viewpoint pose estimation with simultaneous automatic self-calibration of a phase-mapping-based three-dimensional active vision sensor. Fig. 2 illustrates the epipolar geometry constraint in binocular stereo vision. When two cameras image an object simultaneously, they produce images I1 and I2. If m1 and m2 are the projections of a space point M onto the two images, m1 and m2 are called corresponding points. Let C1 and C2 be the optical centers of the two cameras. The point m2 lies on a line l2, called the epipolar line in image I2 corresponding to the point m1 (located in image I1); likewise, m1 lies on the line l1, the epipolar line in image I1 corresponding to m2. Let (R, t) be the pose change of the second camera with respect to the first, and K1, K2 the intrinsic parameter matrices of the two cameras; R, K1, K2 are 3 x 3 matrices and t is a 3 x 1 vector. The pinhole model gives
$$\tilde{m}_2^T F \tilde{m}_1 = 0 \qquad (1)$$

$$F = K_2^{-T}\,[t]_\times\, R\, K_1^{-1} \qquad (2)$$
Equation (1) is called the epipolar equation, where $\tilde{m}_1$ and $\tilde{m}_2$ are the homogeneous representations of the corresponding points m1 and m2. The epipolar line l2 of m1 is expressed as $l_2 = F\tilde{m}_1 = (\alpha, \beta, \gamma)^T$, and the epipolar line l1 of m2 as $l_1 = F^T\tilde{m}_2$. It can be seen that as long as the corresponding points of arbitrary space points on the two camera imaging planes are known, the epipolar geometry constraint can be used to obtain the internal parameters K1, K2 of the two cameras and their relative pose (R, t).
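For illustration, the following minimal Python sketch (not part of the patent; function and variable names are ours) shows how Eqs. (1) and (2) can be evaluated numerically with NumPy. Here K1, K2, R and t are assumed known, whereas in the method below they are the unknowns being optimized.

```python
import numpy as np

def fundamental_matrix(K1, K2, R, t):
    """Build F = K2^{-T} [t]x R K1^{-1} as in Eq. (2)."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])          # skew-symmetric matrix [t]x
    return np.linalg.inv(K2).T @ tx @ R @ np.linalg.inv(K1)

def epipolar_residual(F, m1, m2):
    """Evaluate m2~^T F m1~ of Eq. (1); zero for an ideal correspondence."""
    m1h = np.append(np.asarray(m1, float), 1.0)  # homogeneous coordinates of m1
    m2h = np.append(np.asarray(m2, float), 1.0)  # homogeneous coordinates of m2
    return float(m2h @ F @ m1h)
```

The epipolar line of m1 in the second image is then simply `F @ m1h`, matching $l_2 = F\tilde{m}_1$ above.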
Phase-to-coordinate transform: because the projection process of a projector is the inverse of the imaging process of a camera, the epipolar geometry principle can be applied to the three-dimensional active vision sensor, and the corresponding points of arbitrary space points on the projection plane of the projector and the imaging plane of the camera can be obtained by the phase-to-coordinate transform method. Using five images (the texture image of the object, the vertical coded fringe image, the vertical center line image, the horizontal coded fringe image and the horizontal center line image), the corresponding points of arbitrary space points on the projection plane of the projector and the imaging plane of the camera can be determined.
The texture image provides the two-dimensional image coordinates (u, v) of the feature points on the object. From the vertical coded fringe image and the vertical center line image the absolute phase image $\Phi_a$ is computed, $\Phi_a(x,y) = \Phi_u(x,y) - \Phi_u^k(i,j)$, from which the horizontal coordinate of a feature point on the projector's projection plane follows, $u' = \frac{\Phi_a^M(u,v)}{2\pi}\,p$. Likewise, from the horizontal coded fringe image and the horizontal center line image the absolute phase image $\Phi_a$ is computed again, giving the vertical coordinate v′ of the feature point on the projection plane. The corresponding points (u, v) and (u′, v′) of an arbitrary space point on the imaging plane of the camera and the projection plane of the projector are thereby obtained.
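As a concrete illustration, the following is a minimal sketch of the phase-to-coordinate transform under stated assumptions: `phi_u` is the unwrapped phase map of the vertical fringes, `centerline_px` is a pixel (i, j) found on the projected center line, and `p` is the fringe period in projector pixels. Arrays follow the usual image convention `phi[row, col] = phi[v, u]`; all names are illustrative, not from the patent.

```python
import numpy as np

def projector_abscissa(phi_u, centerline_px, feature_px, p):
    """Phase-to-coordinate transform: camera pixel (u, v) -> projector u'."""
    ci, cj = centerline_px                  # pixel (i, j) on the vertical center line
    phi_a = phi_u - phi_u[ci, cj]           # absolute phase: Phi_a = Phi_u - Phi_u^k(i, j)
    u, v = feature_px                       # feature point on the camera imaging plane
    return phi_a[v, u] / (2.0 * np.pi) * p  # u' = Phi_a^M(u, v) * p / (2*pi)
```

Running the same function on the horizontal-fringe phase map (with the horizontal center line) yields the ordinate v′.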
Multi-viewpoint pose estimation and self-calibration: the unknowns are $K_c$, $K_p$, the internal parameters of the camera and the projector; $R^*$, $t^*$, the relative pose between the camera and the projector; and $R_i$, $t_i$, the motion parameters of each viewpoint. Take two viewpoints as in Fig. 8 as an example. There are in total two camera imaging planes and two projector projection planes, so any space point has corresponding points on all four planes, and an optimization equation based on the epipolar geometry constraint can be set up between any two of the planes:
$$\min_{K_c,\,K_p,\,R^*,\,t^*,\,R,\,t}\ \sum_{j=1}^{p}\left(\tilde{m}_{2j}^T F_{12}\,\tilde{m}_{1j}\right)^2 \qquad (3)$$
Combining the four image planes pairwise gives six optimization equations of the form (3); stacked together they form a system of equations that is solved with the Levenberg-Marquardt (LM) optimization method, yielding the pose of each viewpoint and the parameters of the sensor.
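A minimal sketch of such a solution with SciPy's Levenberg-Marquardt solver is given below. For brevity it minimizes only the single camera-projector term of Eq. (3); the full problem of Eq. (4) stacks all six pairwise epipolar terms into one residual vector in the same way. The parameterization (focal lengths and principal points for Kc and Kp, a Rodrigues vector for R*, skew terms omitted) is an assumption for illustration, not specified by the patent.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def unpack(x):
    """Parameter vector -> Kc, Kp, R*, t* (illustrative parameterization)."""
    fcx, fcy, ucx, vcy, fpx, fpy, upx, vpy = x[:8]
    Kc = np.array([[fcx, 0, ucx], [0, fcy, vcy], [0, 0, 1.0]])
    Kp = np.array([[fpx, 0, upx], [0, fpy, vpy], [0, 0, 1.0]])
    Rs = Rotation.from_rotvec(x[8:11]).as_matrix()   # structural rotation R*
    ts = x[11:14]                                    # structural translation t*
    return Kc, Kp, Rs, ts

def residuals(x, m_cam, m_proj):
    """One epipolar residual m2~^T F m1~ per feature point, as in Eq. (3)."""
    Kc, Kp, Rs, ts = unpack(x)
    tx = np.array([[0, -ts[2], ts[1]], [ts[2], 0, -ts[0]], [-ts[1], ts[0], 0]])
    F = np.linalg.inv(Kp).T @ tx @ Rs @ np.linalg.inv(Kc)
    m1 = np.column_stack([m_cam, np.ones(len(m_cam))])    # homogeneous (u, v, 1)
    m2 = np.column_stack([m_proj, np.ones(len(m_proj))])  # homogeneous (u', v', 1)
    return np.einsum('ij,jk,ik->i', m2, F, m1)

# m_cam, m_proj: (p, 2) arrays of corresponding points; x0: initial guess.
# 'lm' requires at least as many residuals as parameters (p >= 14 here).
# result = least_squares(residuals, x0, args=(m_cam, m_proj), method='lm')
```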
The concrete steps of the method are as follows. Taking a real object (Fig. 7) as an example, data are acquired from two viewpoints, the viewpoint poses are solved, and the sensor parameters are calibrated at the same time.
1. With the sensor at the first viewpoint, carry out the following projection and acquisition process in order:
1) Capture a texture image of the measured object with the camera and store it in the computer (Fig. 7);
2) Generate a set of vertical fringe patterns with the computer (Fig. 3) and project them onto the measured object with the digital projector; the depth variation of the object deforms the projected fringes into a coded fringe image. Capture the vertical coded fringe image with the camera and store it in the computer; the relative phase image of the object is then obtained by automatic fringe-analysis decoding;
3) Generate with the computer an image containing a single vertical line at the horizontal center of the image (Fig. 5), project it onto the object surface with the digital projector, capture it with the camera and store it in the computer; with this auxiliary center line, convert the relative phase image into an absolute phase image;
4) Rotate the vertical fringe pattern by 90 degrees into its horizontal, i.e. orthogonal, counterpart (Fig. 4), project it onto the measured object with the digital projector, capture the horizontal coded fringe image with the camera and store it in the computer;
5) Likewise, rotate the vertical center line image by 90 degrees into a horizontal center line image (Fig. 6), project it onto the object surface with the digital projector, capture it with the camera and store it in the computer. (A sketch of generating these four projected patterns follows.)
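The following sketch shows one way to generate the four projected patterns of Figs. 3-6 with NumPy, assuming a sinusoidal fringe profile; the resolution (1024 x 768) and the fringe period (16 projector pixels) are illustrative values, not taken from the patent.

```python
import numpy as np

H, W, p = 768, 1024, 16                      # projector resolution and fringe period (pixels)

x = np.arange(W)
row = (127.5 + 127.5 * np.cos(2 * np.pi * x / p)).astype(np.uint8)
vertical_fringes = np.tile(row, (H, 1))      # Fig. 3: intensity varies along u only

y = np.arange(H)
col = (127.5 + 127.5 * np.cos(2 * np.pi * y / p)).astype(np.uint8)
horizontal_fringes = np.tile(col[:, None], (1, W))  # Fig. 4: equivalent to the
                                                    # vertical pattern rotated by 90 degrees

vertical_centerline = np.zeros((H, W), np.uint8)
vertical_centerline[:, W // 2] = 255         # Fig. 5: one bright line at the image center
horizontal_centerline = np.zeros((H, W), np.uint8)
horizontal_centerline[H // 2, :] = 255       # Fig. 6: its horizontal counterpart
```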
2. Process the texture image, vertical coded fringe image, vertical center line image, horizontal coded fringe image and horizontal center line image obtained in step 1 as follows:
1) Perform feature analysis on the texture image: extract the two-dimensional image coordinates of the object's feature points, i.e., the coordinates (u, v) of each feature point on the camera imaging plane;
2) Perform automatic fringe analysis on the vertical coded fringe image: demodulate the wrapped phase from the deformed fringe pattern with the FFT method, then apply a phase-unwrapping algorithm to obtain the unwrapped phase map $\Phi_u$ (a sketch of this fringe analysis follows this list);
3) Perform feature analysis on the vertical center line image: after binarizing the image, extract the two-dimensional image coordinates occupied by the center line;
4) Use the phase-to-coordinate transform algorithm (sketched above) to obtain the horizontal coordinate of each feature point on the projector's projection plane. The algorithm is: first combine the unwrapped phase map $\Phi_u$ of the vertical fringes with the vertical center line and convert it into the absolute phase map $\Phi_a$, where $\Phi_a(x,y) = \Phi_u(x,y) - \Phi_u^k(i,j)$ and $\Phi_u^k$ is the phase value at the pixel position (i, j) of the center line; then obtain the horizontal coordinate u′ of the position corresponding to any feature point M on the projection plane, $u' = \frac{\Phi_a^M(u,v)}{2\pi}\,p$, where $\Phi_a^M$ is the absolute phase of feature point M and p is the spatial period of the projected fringes;
5) Similarly, repeat sub-steps 2), 3) and 4), processing the horizontal phase map and the horizontal center line to obtain the vertical coordinate v′ of the position corresponding to feature point M on the projection plane, $v' = \frac{\Phi_a^M(u,v)}{2\pi}\,p$, where here $\Phi_a^M(u,v)$ is obtained from the horizontal coded fringe image and horizontal center line image;
6) At this point the corresponding coordinates (u, v) and (u′, v′) of any feature point M on the imaging plane and the projection plane have been obtained, completing the data preparation needed to apply the epipolar geometry constraint.
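A minimal sketch of sub-steps 2) and 3) follows, assuming Fourier-transform fringe analysis with a simple rectangular band-pass filter around the carrier frequency 1/p and row-wise unwrapping with `np.unwrap`; a practical system would use a more careful filter and a robust two-dimensional phase-unwrapping algorithm.

```python
import numpy as np

def unwrapped_phase(fringe_img, p):
    """FFT fringe analysis: deformed fringe image -> unwrapped phase map Phi_u."""
    spec = np.fft.fft(fringe_img.astype(float), axis=1)   # row-wise spectrum
    freqs = np.fft.fftfreq(fringe_img.shape[1])           # cycles per pixel
    band = np.abs(freqs - 1.0 / p) < 0.5 / p              # keep only the +1 carrier lobe
    analytic = np.fft.ifft(spec * band, axis=1)
    wrapped = np.angle(analytic)                          # principal-value phase (-pi, pi]
    return np.unwrap(wrapped, axis=1)                     # row-wise phase unwrapping

def centerline_pixel(centerline_img, thresh=128):
    """Binarize the center line image and return one pixel (i, j) on the line."""
    rows, cols = np.where(centerline_img >= thresh)
    return rows[0], int(round(cols.mean()))               # any row, mean column
```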
3. Within the sensor's measurement range, and on the premise that there is a region overlapping the images acquired at the previous viewpoint, move the sensor freely to the new viewpoint V2, acquire the object again, and repeat steps 1 and 2 to obtain, at the new viewpoint, the corresponding coordinates (u, v) and (u′, v′) of the feature points on the imaging plane and the projection plane;
4. With two viewpoints there are in total two camera imaging planes and two projector projection planes, so any space point has corresponding points on all four planes. Using the epipolar geometry constraint, set up the optimization equation

$$\min_{K_c,\,K_p,\,R^*,\,t^*,\,R,\,t}\ \sum_{j=1}^{p}\left[\left(\tilde{m}_{2j}^T F_{12}\tilde{m}_{1j}\right)^2+\left(\tilde{m}_{3j}^T F_{13}\tilde{m}_{1j}\right)^2+\left(\tilde{m}_{4j}^T F_{14}\tilde{m}_{1j}\right)^2+\left(\tilde{m}_{3j}^T F_{23}\tilde{m}_{2j}\right)^2+\left(\tilde{m}_{4j}^T F_{24}\tilde{m}_{2j}\right)^2+\left(\tilde{m}_{4j}^T F_{34}\tilde{m}_{3j}\right)^2\right] \qquad (4)$$

where p is the number of feature points, $F_{12}$, $F_{13}$, $F_{14}$, $F_{23}$, $F_{24}$, $F_{34}$ are the fundamental matrices between the planes taken in pairs, and $\tilde{m}_{ij}$ are the homogeneous coordinates of the corresponding points. Solving it gives the poses $(R_1, t_1)$ and $(R_2, t_2)$ of the two viewpoints with respect to the object coordinate system, the internal parameters ($K_c$, $K_p$) of the sensor, and the structural parameters ($R^*$, $t^*$) of the sensor.
5. Store the two viewpoint poses for use in the subsequent matching and merging of the multi-viewpoint depth images.
Embodiment
The structure of the three-dimensional vision sensor actually designed is shown in Fig. 1: 101 is the digital projector and 103 the camera; 102 is the exit pupil P of the projection lens of the digital projector 101, and 104 the entrance pupil E of the imaging lens of the camera 103; adjusting rod 105 adjusts the height and angle of the camera 103, and 106 is the computer.
Following the steps described above, the real object (Fig. 7) was measured from two viewpoints; the viewpoint poses were solved and the sensor parameters calibrated simultaneously.
The calibration results are as follows:
(1) Internal parameters of the camera:

$$K_c = \begin{bmatrix} 3564.36 & -5.99209 & 252.127 \\ 0 & 3552.63 & 229.525 \\ 0 & 0 & 1 \end{bmatrix} \text{ (pixels)}$$

Internal parameters of the projector:

$$K_p = \begin{bmatrix} 3044.83 & -5.30216 & 366.425 \\ 0 & 3023.41 & 176.327 \\ 0 & 0 & 1 \end{bmatrix} \text{ (pixels)}$$

Sensor structural parameters between the projector and the camera:

$$R^* = \begin{bmatrix} 0.9896 & 0.0117 & 0.1437 \\ -0.0032 & 1 & -0.0585 \\ -0.1434 & -0.0524 & 0.9880 \end{bmatrix}, \qquad t^* = \begin{bmatrix} -15.0004 \\ 40.366 \\ 0.085 \end{bmatrix}$$

(2) The pose of viewpoint 1 is

$$R_1 = \begin{bmatrix} 0.991861 & -0.0128653 & 0.126676 \\ -0.00945275 & -0.999577 & -0.0275033 \\ 0.126977 & 0.026082 & -0.991563 \end{bmatrix}, \qquad t_1 = \begin{bmatrix} -26.3503 \\ -43.8632 \\ 1277.3 \end{bmatrix}$$

The pose of viewpoint 2 is

$$R_2 = \begin{bmatrix} 0.999781 & -0.0118122 & -0.017255 \\ -0.0123243 & -0.999478 & -0.0298786 \\ -0.0168931 & 0.0300847 & -0.999405 \end{bmatrix}, \qquad t_2 = \begin{bmatrix} -52.1421 \\ -41.2982 \\ 1332.25 \end{bmatrix}$$
(3) An accurate three-dimensional calibrated reference object was measured to check the calibration result. The standard deviations of the error in the x, y and z directions are 0.0927 mm, 0.0750 mm and 0.2562 mm respectively.
(4) The real object was measured from the two viewpoints and, using the estimated viewpoint poses, the images of the two viewpoints were transformed into the same coordinate system. Fig. 9 and Fig. 10 are the point-cloud images (depth images) acquired at the two viewpoints; Fig. 11 is the merged mesh image after transformation into the same coordinate system. (A sketch of applying the poses follows.)
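As an illustration of this step, the following sketch applies an estimated viewpoint pose to bring a camera-frame point cloud into the object coordinate system. It assumes the convention X_cam = R X_obj + t; with the opposite convention the transform is simply inverted. All names are illustrative.

```python
import numpy as np

def to_object_frame(points, R, t):
    """Map (N, 3) camera-frame points into the object frame: X_obj = R^T (X_cam - t)."""
    # (points - t) @ R computes R^T (x - t) row by row
    return (np.asarray(points, float) - np.asarray(t, float)) @ np.asarray(R, float)
```

Applying this with (R1, t1) and (R2, t2) to the point clouds of Figs. 9 and 10 places both in one coordinate system, as shown in Fig. 11.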

Claims (1)

1. A multi-viewpoint pose estimation and self-calibration method for a three-dimensional active vision sensor, the vision sensor consisting of a digital projector and a camera whose relative positions are fixed, the object to be measured being placed within the measurement range for measurement, characterized in that the method comprises the following steps:
1) At viewpoint V1, carry out the following projection and acquisition process in order:
(1) Capture a texture image of the measured object with the camera and store it in the computer;
(2) Generate a set of vertical fringe patterns with the computer and project them onto the measured object with the digital projector; the depth variation of the object deforms the projected fringes into a coded fringe image; capture the vertical coded fringe image with the camera, store it in the computer, and then decode it with automatic fringe analysis to obtain the relative phase image of the object;
(3) Generate with the computer an image containing a single vertical line at the horizontal center of the image, project it onto the object surface with the digital projector, capture it with the camera and store it in the computer; with the aid of this auxiliary vertical center line, convert the relative phase image into an absolute phase image;
(4) Rotate the vertical fringe pattern by 90 degrees to obtain a horizontal fringe pattern, project it onto the measured object with the digital projector, capture the horizontal coded fringe image with the camera and store it in the computer;
(5) Likewise, rotate the vertical center line image by 90 degrees to obtain a horizontal center line image, project it onto the object surface with the digital projector, capture it with the camera and store it in the computer;
2) Process the texture image of the object, the vertical coded fringe image, the vertical center line image, the horizontal coded fringe image and the horizontal center line image obtained in step 1) as follows:
(1) Perform feature analysis on the texture image: extract the two-dimensional image coordinates of the object's feature points, i.e., the coordinates (u, v) of each feature point on the camera imaging plane;
(2) Perform automatic fringe analysis on the vertical coded fringe image: demodulate the wrapped phase from the deformed fringe pattern with the FFT method, then apply a phase-unwrapping algorithm to obtain the unwrapped phase map $\Phi_u$;
(3) Perform feature analysis on the vertical center line image: after binarizing the image, extract the two-dimensional image coordinates occupied by the center line;
(4) Use the phase-to-coordinate transform algorithm to obtain the horizontal coordinate of each feature point on the projector's projection plane; the algorithm is: first combine the unwrapped phase map $\Phi_u$ of the vertical fringes with the vertical center line and convert it into the absolute phase map $\Phi_a$, where $\Phi_a(x,y) = \Phi_u(x,y) - \Phi_u^k(i,j)$ and $\Phi_u^k$ is the phase value at the pixel position (i, j) of the center line; then obtain the horizontal coordinate u′ of the position corresponding to any feature point M on the projection plane, $u' = \frac{\Phi_a^M(u,v)}{2\pi}\,p$, where $\Phi_a^M$ is the absolute phase of feature point M and p is the spatial period of the projected fringes;
(5) Similarly, repeat sub-steps (2), (3) and (4), processing the horizontal coded fringe image and the horizontal center line image to obtain the vertical coordinate v′ of the position corresponding to feature point M on the projection plane, $v' = \frac{\Phi_a^M(u,v)}{2\pi}\,p$, where here $\Phi_a^M(u,v)$ is obtained from the horizontal coded fringe image and horizontal center line image;
(6) At this point the corresponding coordinates (u, v) and (u′, v′) of any feature point M on the imaging plane and the projection plane have been obtained, completing the data preparation needed to apply the epipolar geometry constraint;
3) Within the sensor's measurement range, and on the premise that there is a region overlapping the images acquired at the previous viewpoint, move the sensor freely to a new viewpoint V2, repeat the projection and acquisition process of step 1) and the processing of step 2), and obtain at V2 the corresponding coordinates (u, v) and (u′, v′) of any feature point M on the imaging plane and the projection plane;
4) Apply the epipolar geometry constraint, i.e., the epipolar equation $\tilde{m}_2^T F \tilde{m}_1 = 0$, where $\tilde{m}_1$ and $\tilde{m}_2$ are the homogeneous coordinates of corresponding points on the imaging plane and the projection plane and F is the fundamental matrix, $F = K_p^{-T}[t]_\times R K_c^{-1}$; set up the optimization equation

$$\min_{K_c,\,K_p,\,R^*,\,t^*,\,R,\,t}\ \sum_{j=1}^{p}\left(\tilde{m}_{2j}^T F_{12}\,\tilde{m}_{1j}\right)^2,$$

where p is the number of feature points and $F_{12}$ is the fundamental matrix between plane 1 and plane 2; solving it yields the rotation matrix R and the translation vector t of this viewpoint with respect to the object coordinate system, the internal parameters ($K_c$, $K_p$) of the sensor, and the structural parameters ($R^*$, $t^*$) of the sensor; solving for $K_c$, $K_p$, $R^*$, $t^*$ is called the self-calibration of the sensor parameters, and solving for R, t is called the viewpoint pose estimation;
5) Store the poses of the viewpoints for use in the subsequent matching and merging of the multi-viewpoint depth images.
CNB2006100149045A 2006-07-25 2006-07-25 Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor Active CN100388319C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100149045A CN100388319C (en) 2006-07-25 2006-07-25 Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006100149045A CN100388319C (en) 2006-07-25 2006-07-25 Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor

Publications (2)

Publication Number Publication Date
CN1888814A true CN1888814A (en) 2007-01-03
CN100388319C CN100388319C (en) 2008-05-14

Family

ID=37578103

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100149045A Active CN100388319C (en) 2006-07-25 2006-07-25 Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor

Country Status (1)

Country Link
CN (1) CN100388319C (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100468457C (en) * 2007-02-08 2009-03-11 深圳大学 Method for matching depth image
CN102055982A (en) * 2011-01-13 2011-05-11 浙江大学 Coding and decoding methods and devices for three-dimensional video
CN102155909A (en) * 2010-12-17 2011-08-17 华中科技大学 Nano-scale three-dimensional shape measurement method based on scanning electron microscope
CN102222347A (en) * 2010-06-16 2011-10-19 微软公司 Creating range image through wave front coding
CN101582165B (en) * 2009-06-29 2011-11-16 浙江大学 Camera array calibration algorithm based on gray level image and spatial depth data
CN101350101B (en) * 2008-09-09 2011-12-07 北京航空航天大学 Method for auto-registration of multi-amplitude deepness image
CN102411779A (en) * 2011-08-19 2012-04-11 中国科学院西安光学精密机械研究所 Image-based object model matching posture measurement method
CN101320425B (en) * 2007-06-06 2012-05-16 夏普株式会社 Image processing apparatus, image forming apparatus, and image processing method
CN102622751A (en) * 2012-02-28 2012-08-01 南京理工大学常熟研究院有限公司 Image processing method of three-dimensional camera
CN102654391A (en) * 2012-01-17 2012-09-05 深圳大学 Stripe projection three-dimensional measurement system based on bundle adjustment principle and calibration method thereof
CN103267491A (en) * 2012-07-17 2013-08-28 深圳大学 Method and system for automatically acquiring complete three-dimensional data of object surface
CN103471500A (en) * 2013-06-05 2013-12-25 江南大学 Conversion method of plane coordinate and space three-dimensional coordinate point in vision of monocular machine
CN103884272A (en) * 2012-12-20 2014-06-25 联想(北京)有限公司 Method and device for determination of object position, and mobile electronic device
TWI493963B (en) * 2011-11-01 2015-07-21 Acer Inc Image generating device and image adjusting method
CN104864807A (en) * 2015-04-10 2015-08-26 深圳大学 Manipulator hand-eye calibration method based on active binocular vision
WO2015165181A1 (en) * 2014-04-28 2015-11-05 京东方科技集团股份有限公司 Method and apparatus for controlling projection of wearable device, and wearable device
CN106546192A (en) * 2016-10-12 2017-03-29 上海大学 A kind of high reflection Free-Form Surface and system
CN107270810A (en) * 2017-04-28 2017-10-20 深圳大学 The projector calibrating method and device of multi-faceted projection
CN107622262A (en) * 2017-11-06 2018-01-23 深圳市唯特视科技有限公司 A kind of posture estimation method based on overlapping limbs and adaptive viewpoint selection
CN110889845A (en) * 2019-11-29 2020-03-17 深圳市商汤科技有限公司 Measuring method and device, electronic device and storage medium
CN111351473A (en) * 2020-04-27 2020-06-30 华中科技大学无锡研究院 Viewpoint planning method, device and measuring system based on robot
CN111561871A (en) * 2019-02-14 2020-08-21 柯尼卡美能达株式会社 Data processing apparatus, data processing method, and storage medium
CN111739145A (en) * 2019-03-19 2020-10-02 上海汽车集团股份有限公司 Automobile model display system
CN115514877A (en) * 2021-06-22 2022-12-23 爱思开海力士有限公司 Apparatus and method for noise reduction from multi-view image

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101630406B (en) * 2008-07-14 2011-12-28 华为终端有限公司 Camera calibration method and camera calibration device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6858826B2 (en) * 1996-10-25 2005-02-22 Waveworx Inc. Method and apparatus for scanning three-dimensional objects
FI111755B (en) * 2001-11-23 2003-09-15 Mapvision Oy Ltd Method and system for calibrating an artificial vision system
CN1236277C (en) * 2002-12-17 2006-01-11 北京航空航天大学 Overall calibrating method for multi-vision sensor detecting system
CN1216273C (en) * 2002-12-17 2005-08-24 北京航空航天大学 Method for calibrating structure optical vision sensor
CN1251157C (en) * 2002-12-27 2006-04-12 中国科学院自动化研究所 Object three-dimensional model quick obtaining method based on active vision
CN1240992C (en) * 2004-07-13 2006-02-08 深圳大学 Multiple differentiation three-dimensional digital imaging method based on space orthogonal striped projection
CN100370220C (en) * 2005-10-19 2008-02-20 浙江工业大学 Single-image self-calibration for relative parameter of light structural three-dimensional system

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100468457C (en) * 2007-02-08 2009-03-11 深圳大学 Method for matching depth image
CN101320425B (en) * 2007-06-06 2012-05-16 夏普株式会社 Image processing apparatus, image forming apparatus, and image processing method
CN101350101B (en) * 2008-09-09 2011-12-07 北京航空航天大学 Method for auto-registration of multi-amplitude deepness image
CN101582165B (en) * 2009-06-29 2011-11-16 浙江大学 Camera array calibration algorithm based on gray level image and spatial depth data
CN102222347B (en) * 2010-06-16 2014-07-09 微软公司 Creating range image through wave front coding
CN102222347A (en) * 2010-06-16 2011-10-19 微软公司 Creating range image through wave front coding
CN102155909A (en) * 2010-12-17 2011-08-17 华中科技大学 Nano-scale three-dimensional shape measurement method based on scanning electron microscope
CN102155909B (en) * 2010-12-17 2012-12-19 华中科技大学 Nano-scale three-dimensional shape measurement method based on scanning electron microscope
CN102055982B (en) * 2011-01-13 2012-06-27 浙江大学 Coding and decoding methods and devices for three-dimensional video
CN102055982A (en) * 2011-01-13 2011-05-11 浙江大学 Coding and decoding methods and devices for three-dimensional video
CN102411779A (en) * 2011-08-19 2012-04-11 中国科学院西安光学精密机械研究所 Image-based object model matching posture measurement method
CN102411779B (en) * 2011-08-19 2014-12-10 中国科学院西安光学精密机械研究所 Image-based object model matching posture measurement method
TWI493963B (en) * 2011-11-01 2015-07-21 Acer Inc Image generating device and image adjusting method
CN102654391B (en) * 2012-01-17 2014-08-20 深圳大学 Stripe projection three-dimensional measurement system based on bundle adjustment principle and calibration method thereof
CN102654391A (en) * 2012-01-17 2012-09-05 深圳大学 Stripe projection three-dimensional measurement system based on bundle adjustment principle and calibration method thereof
CN102622751A (en) * 2012-02-28 2012-08-01 南京理工大学常熟研究院有限公司 Image processing method of three-dimensional camera
CN103267491A (en) * 2012-07-17 2013-08-28 深圳大学 Method and system for automatically acquiring complete three-dimensional data of object surface
CN103267491B (en) * 2012-07-17 2016-01-20 深圳大学 The method and system of automatic acquisition complete three-dimensional data of object surface
CN103884272A (en) * 2012-12-20 2014-06-25 联想(北京)有限公司 Method and device for determination of object position, and mobile electronic device
CN103884272B (en) * 2012-12-20 2016-10-05 联想(北京)有限公司 A kind of object space determines method, device and mobile electronic device
CN103471500B (en) * 2013-06-05 2016-09-21 江南大学 A kind of monocular camera machine vision midplane coordinate and the conversion method of 3 d space coordinate point
CN103471500A (en) * 2013-06-05 2013-12-25 江南大学 Conversion method of plane coordinate and space three-dimensional coordinate point in vision of monocular machine
WO2015165181A1 (en) * 2014-04-28 2015-11-05 京东方科技集团股份有限公司 Method and apparatus for controlling projection of wearable device, and wearable device
US9872002B2 (en) 2014-04-28 2018-01-16 Boe Technology Group Co., Ltd. Method and device for controlling projection of wearable apparatus, and wearable apparatus
CN104864807A (en) * 2015-04-10 2015-08-26 深圳大学 Manipulator hand-eye calibration method based on active binocular vision
CN106546192A (en) * 2016-10-12 2017-03-29 上海大学 A kind of high reflection Free-Form Surface and system
CN106546192B (en) * 2016-10-12 2019-08-06 上海大学 A kind of high reflection Free-Form Surface and system
CN107270810A (en) * 2017-04-28 2017-10-20 深圳大学 The projector calibrating method and device of multi-faceted projection
CN107270810B (en) * 2017-04-28 2018-06-22 深圳大学 The projector calibrating method and device of multi-faceted projection
WO2018196303A1 (en) * 2017-04-28 2018-11-01 深圳大学 Projector calibration method and apparatus based on multi-directional projection
CN107622262A (en) * 2017-11-06 2018-01-23 深圳市唯特视科技有限公司 A kind of posture estimation method based on overlapping limbs and adaptive viewpoint selection
CN111561871B (en) * 2019-02-14 2022-01-04 柯尼卡美能达株式会社 Data processing apparatus, data processing method, and storage medium
CN111561871A (en) * 2019-02-14 2020-08-21 柯尼卡美能达株式会社 Data processing apparatus, data processing method, and storage medium
CN111739145A (en) * 2019-03-19 2020-10-02 上海汽车集团股份有限公司 Automobile model display system
CN110889845A (en) * 2019-11-29 2020-03-17 深圳市商汤科技有限公司 Measuring method and device, electronic device and storage medium
CN110889845B (en) * 2019-11-29 2022-11-11 深圳市商汤科技有限公司 Measuring method and device, electronic device and storage medium
CN111351473A (en) * 2020-04-27 2020-06-30 华中科技大学无锡研究院 Viewpoint planning method, device and measuring system based on robot
CN111351473B (en) * 2020-04-27 2022-03-04 华中科技大学无锡研究院 Viewpoint planning method, device and measuring system based on robot
CN115514877A (en) * 2021-06-22 2022-12-23 爱思开海力士有限公司 Apparatus and method for noise reduction from multi-view image
CN115514877B (en) * 2021-06-22 2024-03-19 爱思开海力士有限公司 Image processing apparatus and noise reduction method

Also Published As

Publication number Publication date
CN100388319C (en) 2008-05-14

Similar Documents

Publication Publication Date Title
CN1888814A (en) Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor
CN103743352B (en) A kind of 3 D deformation measuring method based on polyphaser coupling
CN102032878B (en) Accurate on-line measurement method based on binocular stereo vision measurement system
CN105261060A (en) Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method
CN104315995B (en) TOF depth camera three-dimensional coordinate calibration device and method based on virtual multi-cube standard target
CN102679959B (en) Omnibearing 3D (Three-Dimensional) modeling system based on initiative omnidirectional vision sensor
CN102155923A (en) Splicing measuring method and system based on three-dimensional target
CN105115560B (en) A kind of non-contact measurement method of cabin volume of compartment
CN1975324A (en) Double-sensor laser visual measuring system calibrating method
CN103714571A (en) Single camera three-dimensional reconstruction method based on photogrammetry
Ahmadabadian et al. An automatic 3D reconstruction system for texture-less objects
CN101149836B (en) Three-dimensional reconfiguration double pick-up camera calibration method
CN104154875A (en) Three-dimensional data acquisition system and acquisition method based on two-axis rotation platform
CN104316083B (en) The TOF depth camera three-dimensional coordinate caliberating devices and method of a kind of virtual many spheroid centre of sphere positioning
Yang et al. Flexible and accurate implementation of a binocular structured light system
CN110940295B (en) High-reflection object measurement method and system based on laser speckle limit constraint projection
CN102184563A (en) Three-dimensional scanning method, three-dimensional scanning system and three-dimensional scanning device used for plant organ form
CN105374067A (en) Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof
CN102436676A (en) Three-dimensional reestablishing method for intelligent video monitoring
CN112254670B (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
CN112254680B (en) Multi freedom's intelligent vision 3D information acquisition equipment
CN111640156A (en) Three-dimensional reconstruction method, equipment and storage equipment for outdoor weak texture target
CN106489062A (en) System and method for measuring the displacement of mobile platform
CN105737849A (en) Calibration method of relative position between laser scanner and camera on tunnel car
Pahwa et al. Dense 3D reconstruction for visual tunnel inspection using unmanned aerial vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Assignee: Shenzhen Esun Display Co., Ltd.

Assignor: Shenzhen University

Contract record no.: 2010440020139

Denomination of invention: Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor

Granted publication date: 20080514

License type: Exclusive License

Open date: 20070103

Record date: 20100819

DD01 Delivery of document by public notice

Addressee: Gao Juan

Document name: Notification that Application Deemed not to be Proposed

ASS Succession or assignment of patent right

Owner name: SHENZHEN ESUN DISPLAY CO., LTD.

Free format text: FORMER OWNER: SHENZHEN UNIVERSITY

Effective date: 20121213

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 518060 SHENZHEN, GUANGDONG PROVINCE TO: 518048 SHENZHEN, GUANGDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20121213

Address after: 518048, B301, three floor, No. 4001, Fu Qiang Road, Futian District, Guangdong, Shenzhen, Shenzhen, China, AB

Patentee after: Shenzhen Esun Display Co., Ltd.

Address before: Nanshan District Nanyou 518060 Shenzhen Road, Guangdong No. 2336

Patentee before: Shenzhen University

CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: 518133 23rd floor, Yishang science and technology creative building, Jiaan South Road, Haiwang community Central District, Xin'an street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN ESUN DISPLAY Co.,Ltd.

Address before: 518048 B301, 3rd floor, block AB, 4001 Fuqiang Road, Futian District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN ESUN DISPLAY Co.,Ltd.