Detailed Description of the Embodiments
The preferred embodiments of the multi-camera calibration method and apparatus provided by the present invention are described in detail below in conjunction with the accompanying drawings.
Referring to FIG. 1, embodiment one of the multi-camera calibration method of the present invention comprises:
A1. Perform feature point detection on the image captured by a depth acquisition apparatus and on the image captured by a camera adjacent to the depth acquisition apparatus, respectively.
It can be understood that the depth acquisition apparatus in this embodiment refers to a device that obtains depth information either by physical means, such as a depth camera, or by software means, such as a conventional stereo matching method. Depth is the distance from the camera to the photographed object, and is a quantity related to parallax. In general, the larger the depth, the smaller the parallax; the smaller the depth, the larger the parallax. According to the parallax-depth model, the relation between the parallax p and the depth z_p can be obtained as:

p = x_B · z_p / (D + z_p)

where x_B is the baseline separation and D is the viewing distance. As can be seen, the parallax is a nonlinear function of the depth. In many applications D is much larger than z_p, so the linear model

p ≈ (x_B / D) · z_p

is applicable. The above nonlinear function can therefore be reduced to a linear function, improving computational efficiency.
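The linearization above can be checked numerically. The following sketch uses the reconstructed model with illustrative values for x_B and D (both are assumptions; the text does not give numbers) and confirms that the relative error of the linear model is small when D is much larger than z_p:

```python
# Reconstructed parallax-depth model: p = x_B * z_p / (D + z_p),
# with linear approximation p ≈ (x_B / D) * z_p when D >> z_p.
# The numeric values below are illustrative assumptions only.

def parallax_exact(z_p, x_b, d):
    return x_b * z_p / (d + z_p)

def parallax_linear(z_p, x_b, d):
    return x_b * z_p / d

x_b, d = 65.0, 3000.0   # e.g. 65 mm separation, 3 m viewing distance
z_p = 100.0             # depth much smaller than D
exact = parallax_exact(z_p, x_b, d)
approx = parallax_linear(z_p, x_b, d)
rel_err = abs(approx - exact) / exact   # equals z_p / D exactly
```

For this model the relative error of the linear approximation is exactly z_p/D, so the approximation is good precisely in the regime the text describes.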
In the embodiments of the present invention, the depth acquisition apparatus may be one or at least two depth cameras, or may comprise at least two ordinary cameras, for example a binocular camera. When the depth acquisition apparatus comprises a depth camera — taking as an example a camera that can obtain depth information directly — the camera measures the distance from the photographed object to the camera directly by physical means, such as radar or infrared, and represents it in the form of a gray-level image. The higher the gray level, the smaller the depth and the larger the parallax; the lower the gray level, the larger the depth and the smaller the parallax. When the depth acquisition apparatus comprises at least two ordinary cameras, a series of constraint relations between the binocular cameras is used to perform stereo matching on the images captured by the left and right cameras to obtain the parallax, and the depth map is then solved using the relation model between parallax and depth.
The camera adjacent to the depth acquisition apparatus mainly means a camera that can capture at least a portion of the scene captured by the depth acquisition apparatus.
The detected feature points are characteristic information of the image expressed in two-dimensional coordinates. The detected feature points should remain invariant to transformations such as image scaling, translation and rotation, i.e. they should be stable feature points throughout the detection process.
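The text does not name a specific detector; in practice scale- and rotation-invariant detectors such as SIFT satisfy the stability requirement. As a minimal, self-contained illustration of detecting stable corner-like points (translation-stable only, so weaker than what the patent requires), a toy Harris-style detector can be sketched:

```python
import numpy as np

def harris_corners(img, k=0.04, thresh=0.01):
    """Toy corner detector: returns a boolean mask of corner-like
    pixels. Real systems would use a scale/rotation-invariant
    detector (e.g. SIFT); this is only an illustrative sketch."""
    iy, ix = np.gradient(img.astype(float))     # image gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):                                 # 3x3 box sum
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    resp = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    return resp > thresh * resp.max()

# Toy scene: a bright square; its four corners are the stable points.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
mask = harris_corners(img)
```

The response fires near the square's corners but not in flat regions or along straight edges, which is the kind of repeatable point step A1 relies on.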
A2. Match the feature points detected in the image captured by the depth acquisition apparatus with the feature points detected in the image captured by the camera adjacent to the depth acquisition apparatus, and determine the matchable feature points between the image captured by the depth acquisition apparatus and the image captured by the camera adjacent to the depth acquisition apparatus.
The matchable feature points are points whose characteristic information is identical in the image captured by the depth acquisition apparatus and in the image captured by the camera adjacent to the depth acquisition apparatus.
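The patent does not fix a matching rule; one common way to realise the "identical characteristic information" criterion is mutual nearest-neighbour matching of feature descriptors, sketched below (descriptor contents are illustrative):

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Keep only descriptor pairs that are each other's nearest
    neighbour; these survive as the matchable feature points."""
    # Pairwise squared Euclidean distances, shape (Na, Nb).
    dist = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    a_to_b = dist.argmin(axis=1)   # best match in B for each A
    b_to_a = dist.argmin(axis=0)   # best match in A for each B
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

desc_depth = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 5.0]])
desc_adjacent = np.array([[10.0, 10.0], [0.0, 1.0]])
pairs = mutual_nn_matches(desc_depth, desc_adjacent)
```

The mutual constraint discards one-sided matches, which is one simple way of keeping only points present in both views.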
A3. Obtain the three-dimensional world coordinates of the matchable feature points based on the depth acquisition apparatus.
In this embodiment, the three-dimensional world coordinates of the matchable feature points can be determined from the depth information or parallax information obtained by the depth acquisition apparatus.
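Step A3 can be sketched as a back-projection under a pinhole model. The intrinsic matrix K and the convention of taking the depth camera's frame as the world frame are assumptions of this sketch, not statements from the text:

```python
import numpy as np

def backproject(u, v, depth, K):
    """Back-project pixel (u, v) with known depth into 3-D
    coordinates under a pinhole model: X = depth * K^{-1} [u, v, 1]^T.
    The depth camera frame is treated as the world frame here."""
    pix = np.array([u, v, 1.0])
    return depth * np.linalg.solve(K, pix)

# Illustrative intrinsics (assumed values).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
X = backproject(420.0, 340.0, 2.0, K)   # a point 2 m from the camera
```

Each matchable feature point with a valid depth sample yields one 3-D world coordinate in this way.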
A4. Determine the parameters of the camera adjacent to the depth acquisition apparatus according to the three-dimensional world coordinates of the matchable feature points and the correspondence with the image coordinates of the matchable feature points in the image captured by the camera adjacent to the depth acquisition apparatus.
Here, the parameters of the adjacent camera can be obtained in step A4 by an existing camera calibration method (for example, the Tsai calibration method or a planar calibration method), which is not described further here.
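As a sketch of how camera parameters can be recovered from the 3-D/2-D correspondences of step A4, the Direct Linear Transform (DLT) below estimates a 3x4 projection matrix. This is a stand-in for the cited classical methods (e.g. Tsai's), not the patent's own algorithm:

```python
import numpy as np

def dlt_projection(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >= 6 non-coplanar
    3-D points and their 2-D projections via the DLT: stack two
    linear equations per correspondence and take the SVD null vector."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    return vt[-1].reshape(3, 4)          # defined only up to scale

# Synthetic camera (assumed values) to exercise the routine.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])
P_true = K @ Rt
world = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 1], [2, 1, 3], [-1, 2, 1], [3, -1, 2]], float)
proj = (P_true @ np.hstack([world, np.ones((8, 1))]).T).T
image = proj[:, :2] / proj[:, 2:3]
P_est = dlt_projection(world, image)
P_est = P_est / P_est[2, 3] * P_true[2, 3]   # resolve the scale/sign
```

With exact correspondences the estimated matrix matches the true one up to the inherent scale ambiguity.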
As can be seen from the above technical solution, in the embodiments of the present invention the three-dimensional world coordinates of the feature points are obtained mainly by the depth acquisition apparatus, and calibration of the camera to be calibrated is achieved using the correspondence between the three-dimensional world coordinates of the feature points and their image coordinates in the image captured by that camera. Compared with the prior-art scheme of calibrating multiple cameras through homography matrices between images, the homography matrix between master and slave cameras does not need to be estimated, so the camera parameters obtained are more stable and accurate. Moreover, because the present invention can obtain the three-dimensional world coordinates of the feature points accurately, the calibration object need not move in a fixed direction during calibration, which improves the operability of camera calibration. In addition, the multi-camera calibration method of the present invention can apply the same calibration method to multiple adjacent cameras, simplifying the multi-camera calibration process.
In embodiment two of the multi-camera calibration method of the present invention, a depth camera is used as the depth acquisition apparatus. FIG. 2 shows a 2D/3D multi-viewpoint video conference system composed of one depth camera and an adjacent camera; the flow chart of the multi-camera calibration method of this embodiment is shown in FIG. 3 and comprises:
B1. Perform feature point detection on the two-dimensional image captured by the depth camera and on the image captured by the camera adjacent to the depth camera, respectively.
The depth camera captures an ordinary RGB two-dimensional image together with its corresponding depth map.
B2. Match the detected feature points and determine the matchable feature points between the two-dimensional image captured by the depth camera and the image captured by the camera adjacent to the depth camera.
In this embodiment, the depth camera may capture a single two-dimensional image under one group of parameter conditions, or may capture at least two two-dimensional images under at least two different groups of parameter conditions. In the latter case, the following two different matching modes can be adopted:
Mode one: after feature point detection is performed on each of the at least two two-dimensional images, the feature points of each of the at least two two-dimensional images are matched with the feature points of the image captured by the camera adjacent to the depth camera. In this way, the determined matchable feature points are the intersection of the feature points of the at least two two-dimensional images and of the image captured by the camera adjacent to the depth camera.
Mode two: after feature point detection is performed on each of the at least two two-dimensional images, each of the at least two two-dimensional images is matched separately with the image captured by the adjacent camera. In this way, the determined matchable feature points are the union of the feature points in which each of the at least two two-dimensional images matches the image captured by the adjacent camera.
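The difference between the two matching modes reduces to a set intersection versus a set union, which the following small sketch makes concrete (the feature identifiers are illustrative):

```python
# Mode one keeps points matched in EVERY 2-D image of the depth
# camera; mode two keeps points matched in AT LEAST ONE of them.
matches_img1 = {"p1", "p2", "p3"}   # matched against the adjacent camera
matches_img2 = {"p2", "p3", "p4"}

mode_one = matches_img1 & matches_img2   # intersection
mode_two = matches_img1 | matches_img2   # union
```

Mode one yields fewer but more widely observed points; mode two yields more points, some of which appear in only part of the depth camera's images, which is why step B3 treats the two modes differently.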
B3. Obtain the three-dimensional world coordinates of the matchable feature points according to the depth map captured by the depth camera.
For matching mode one of step B2, since the matchable feature points exist in every image captured by the depth camera under the different parameter conditions, their three-dimensional world coordinates can be obtained from any depth map captured by the depth camera. For matching mode two of step B2, since a matchable feature point may exist in only some of the images captured by the depth camera under the different parameter conditions, its three-dimensional world coordinates must be obtained from a depth map in which that matchable feature point exists.
B4. Obtain the parameters of the adjacent camera according to the three-dimensional world coordinates of the matchable feature points and the correspondence with the image coordinates of the matchable feature points in the image captured by the adjacent camera.
Here, an existing camera calibration method (for example, the Tsai calibration method, a checkerboard calibration method or a planar calibration method) can be adopted to obtain the parameters of the adjacent camera, which is not described further here.
This embodiment is a specific application of embodiment one. In this embodiment a depth camera is used as the depth acquisition apparatus, so that when the three-dimensional world coordinates of the matchable feature points are obtained, the depth map is obtained by a direct physical method, and the three-dimensional world coordinates are then obtained in combination with the information of the matchable feature points. This method is relatively simple and intuitive, and the three-dimensional world coordinates obtained are relatively accurate.
In embodiment three of the multi-camera calibration method of the present invention, at least two ordinary cameras are used as the depth acquisition apparatus. FIG. 4 shows a 2D/3D multi-viewpoint video conference system composed of two ordinary cameras, forming a binocular camera, and an adjacent camera; this embodiment takes the calibration of two cameras as an example, and its flow chart is shown in FIG. 5, comprising:
C1. Perform feature point detection on the image captured by the binocular camera and on the image captured by the camera adjacent to the binocular camera, respectively.
Here, the images captured by the binocular camera may be at least two images chosen respectively from at least two groups of images obtained by the binocular camera capturing, under different parameter conditions, a scene that includes a calibration template. When multiple groups of images are captured, if experimental conditions such as illumination and the sharpness of the captured images are the same, the choice of image within each group has little influence on the result. In general, the first image of each group can be chosen directly.
C2. Match the detected feature points and determine the matchable feature points between the image captured by the binocular camera and the image captured by the camera adjacent to the binocular camera.
In this embodiment, the feature points of each of the selected at least two images can be matched with the feature points of the image captured by the camera adjacent to the binocular camera, as in matching mode one of embodiment two. In this way, the determined matchable feature points are the intersection of the feature points of the at least two two-dimensional images and of the image captured by the camera adjacent to the binocular camera.
C3. Obtain the three-dimensional world coordinates of the matchable feature points according to at least two different groups of parameters of the binocular camera and the image coordinates of the matchable feature points in the images captured by the binocular camera.
Here, the at least two different groups of parameters of the binocular camera are parameters calibrated according to an existing calibration method. They can be obtained by capturing, with the binocular camera under different parameter conditions, at least two groups of images of the same scene that include the calibration template (the at least two images described in step C1 are chosen respectively from these at least two groups of images), and performing the following steps C31 to C34 on each of the at least two groups of images. The change in parameter conditions need not be large, so as to ensure that the common scene portion in the images captured under the different parameter conditions is not too small to describe conveniently. The case of two groups of different parameters is described below as an example:
C31. Detect, as feature points, the non-coplanar checkerboard corner points in one group of images captured by the binocular camera under the same group of parameter conditions. Here, the non-coplanar checkerboard corner points can be detected by moving the calibration template along the optical axis direction of the camera.
C32. Obtain the image coordinates (x_f, y_f) of the corner points. It can be understood that the image pixel coordinates of the corner points can be obtained according to the transformation formulas between the coordinate systems of the different levels of the image.
C33. Determine the three-dimensional world coordinates (x_w, y_w, z_w) of the corner points. Since in step C31 the non-coplanar checkerboard corner points are detected by moving the calibration template along the optical axis direction of the camera, the three-dimensional world coordinates of the corner points are fixed.
C34. Calibrate the binocular camera according to the image coordinates and three-dimensional world coordinates of the corner points, and determine the binocular camera parameters.
By the above method, two groups of camera parameters can be obtained respectively:

f^1, (C_x^1, C_y^1), k_1^1, s_x^1, r_11^1, r_12^1, r_13^1, r_21^1, r_22^1, r_23^1, r_31^1, r_32^1, r_33^1, T_x^1, T_y^1, T_z^1;
f^2, (C_x^2, C_y^2), k_1^2, s_x^2, r_11^2, r_12^2, r_13^2, r_21^2, r_22^2, r_23^2, r_31^2, r_32^2, r_33^2, T_x^2, T_y^2, T_z^2;

where superscripts 1 and 2 denote parameter group I and parameter group II respectively. Here f is the intrinsic focal length (mm); C_x and C_y are the pixel coordinates of the optical center; k_1 is the first-order coefficient of the lens radial distortion; and s_x is the uncertainty scale factor. The extrinsic parameters R and T are respectively the rotation matrix and the translation vector between the three-dimensional world coordinate system and the camera coordinate system, where T_x, T_y and T_z are the translations along the three coordinate axes of the transformation from the world coordinate system to the camera coordinate system.
If the orientation of the camera coordinate system with respect to the world coordinate system is given by a counter-clockwise rotation by angle α about the X axis, a counter-clockwise rotation by angle β about the Y axis and a counter-clockwise rotation by angle γ about the Z axis, then the rotation matrix is R = R_α R_β R_γ.
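The composition R = R_α R_β R_γ can be sketched directly from the standard single-axis rotation matrices (counter-clockwise, right-handed convention):

```python
import numpy as np

def rotation_xyz(alpha, beta, gamma):
    """Compose R = R_alpha R_beta R_gamma from counter-clockwise
    rotations about the X, Y and Z axes (angles in radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    r_a = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # about X
    r_b = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # about Y
    r_g = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])   # about Z
    return r_a @ r_b @ r_g

R = rotation_xyz(0.1, 0.2, 0.3)
```

The result is a proper rotation: orthogonal with determinant +1, as any valid extrinsic rotation matrix must be.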
After feature point detection and matching, the image coordinates (x_f^1, y_f^1) and (x_f^2, y_f^2) of the matchable feature points in the two images captured by the selected binocular camera (hereinafter referred to as image 1 and image 2) can be obtained, where superscripts 1 and 2 denote image 1 and image 2 respectively.
In this embodiment, distortion elimination can also be applied to (x_f^1, y_f^1) and (x_f^2, y_f^2). Specifically, for the image coordinates (x_f^1, y_f^1) and (x_f^2, y_f^2) of each matchable feature point in image 1 and image 2, the distortion-corrected ideal image coordinates (x_u^1, y_u^1) and (x_u^2, y_u^2) can be obtained by the following formulas (1) to (6):
x_d = d'_x (x_f − C_x) / s_x    (1)
y_d = d_y (y_f − C_y)    (2)
x_u = x_d (1 + k_1 r^2)    (3)
y_u = y_d (1 + k_1 r^2)    (4)
d'_x = d_x N_cx / N_fx    (5)
r^2 = x_d^2 + y_d^2    (6)

where (x_d, y_d) are the real image coordinates of the matchable feature point; d_x and d_y are respectively the distances (mm) between the centers of adjacent CCD sensor elements in the x direction (scan-line direction) and the y direction; N_cx is the number of sensor elements in the X direction (provided by the camera manufacturer); and N_fx is the number of pixels sampled per line by the computer, i.e. the X-direction size of the image in pixels.
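Formulas (1) to (6) can be applied directly as a short routine. The forms used for (5) and (6) are the standard Tsai-model relations consistent with the variable definitions above; the numeric camera parameters below are illustrative assumptions:

```python
def ideal_coords(x_f, y_f, cam):
    """Apply formulas (1)-(6): convert pixel coordinates (x_f, y_f)
    to distortion-corrected ideal coordinates (x_u, y_u)."""
    dx_p = cam["d_x"] * cam["N_cx"] / cam["N_fx"]        # (5)
    x_d = dx_p * (x_f - cam["C_x"]) / cam["s_x"]         # (1)
    y_d = cam["d_y"] * (y_f - cam["C_y"])                # (2)
    r2 = x_d ** 2 + y_d ** 2                             # (6)
    x_u = x_d * (1.0 + cam["k_1"] * r2)                  # (3)
    y_u = y_d * (1.0 + cam["k_1"] * r2)                  # (4)
    return x_u, y_u

# Illustrative (assumed) parameter group for one camera.
cam = {"d_x": 0.01, "d_y": 0.01, "N_cx": 640, "N_fx": 640,
       "C_x": 320.0, "C_y": 240.0, "s_x": 1.0, "k_1": 1e-4}
x_u, y_u = ideal_coords(420.0, 340.0, cam)
```

Running this once per matchable feature point in image 1 and image 2, with the corresponding parameter group, yields the ideal coordinates used in the triangulation of system (7).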
Since the camera parameters corresponding to image 1 and image 2, and the ideal image coordinates of the matchable feature points in image 1 and image 2, have been obtained, the three-dimensional world coordinates (x_w, y_w, z_w) can be computed for each matchable feature point. An approximate optimal solution of the three-dimensional world coordinates can be obtained by solving the following overdetermined system of linear equations (7); this system of equations can be solved by least-squares fitting.
C4. Obtain the parameters of the adjacent camera according to the three-dimensional world coordinates of the matchable feature points and the correspondence with the image coordinates of the matchable feature points in the image captured by the adjacent camera.
After feature point detection and matching, the image coordinates of the matchable feature points in the image captured by the adjacent camera can be obtained.
An existing camera calibration method (such as the Tsai calibration method or a planar calibration method) can be adopted to obtain the parameters of the adjacent camera, which is not described further here.
This embodiment is a specific application of embodiment one. In this embodiment a binocular camera is used as the depth acquisition apparatus, so that when the three-dimensional world coordinates of the matchable feature points are obtained, the parameters of the binocular camera obtained in advance are combined with the information of the matchable feature points to obtain the three-dimensional world coordinates. This method is relatively easy to implement: at least two ordinary cameras suffice to achieve the depth-information function of a depth camera.
In embodiment four of the multi-camera calibration method of the present invention, for the images captured by a single depth acquisition apparatus under at least two different groups of parameter conditions, the method of either embodiment two or embodiment three of the present invention is performed respectively to obtain at least two groups of parameters of the adjacent camera; the at least two groups of parameters obtained are then weighted and averaged to obtain the parameters of the adjacent camera.
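The final averaging step can be sketched as follows. The patent leaves the weights unspecified, so equal weights are assumed; the sketch averages only scalar intrinsic parameters (naively averaging rotation matrices elementwise would not generally yield a rotation, so extrinsics need more care in practice):

```python
import numpy as np

# One calibration result per image group for the same adjacent camera,
# here as illustrative (f, C_x, C_y) vectors.
params = np.array([[500.0, 319.0, 241.0],    # group I
                   [504.0, 321.0, 239.0]])   # group II
weights = np.array([0.5, 0.5])               # assumed equal weights

fused = weights @ params                     # -> [502.0, 320.0, 240.0]
```

Unequal weights could favor the image group with better matching quality; the text does not prescribe a choice.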
It can be understood that the single depth acquisition apparatus here may be a single depth camera, as described in embodiment two, or at least two ordinary cameras, as described in embodiment three.
In embodiment five of the multi-camera calibration method of the present invention, a 2D/3D multi-viewpoint video conference system composed of multiple depth acquisition apparatuses, as shown in FIG. 6, can be calibrated. For the images captured respectively by at least two depth acquisition apparatuses, the method of either embodiment two or embodiment three of the present invention is performed respectively to obtain at least two groups of parameters of the adjacent camera; the at least two groups of parameters obtained are then weighted and averaged to obtain the parameters of the adjacent camera.
It can be understood that the multiple depth acquisition apparatuses here may be at least two depth cameras, each depth camera calibrating its adjacent camera by the method described in embodiment two; or a combination of at least two ordinary cameras and at least one depth camera, the at least two ordinary cameras calibrating their adjacent camera by the method described in embodiment three.
One of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments can be accomplished by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, can comprise: performing feature point detection on the image captured by the depth acquisition apparatus and on the image captured by the camera adjacent to the depth acquisition apparatus, respectively; matching the detected feature points and determining the matchable feature points between the image captured by the depth acquisition apparatus and the image captured by the camera adjacent to the apparatus; obtaining the three-dimensional world coordinates of the matchable feature points based on the depth acquisition apparatus; and obtaining the parameters of the adjacent camera according to the three-dimensional world coordinates of the matchable feature points, the image coordinates of the matchable feature points in the image captured by the adjacent camera, and the correspondence between the two kinds of coordinates. The storage medium referred to here may be, for example, a ROM/RAM, a magnetic disk or an optical disc.
Embodiment six of the multi-camera calibration method of the present invention provides a multi-camera calibration method for a camera not adjacent to the depth acquisition apparatus. Referring to FIG. 7, the method of this embodiment comprises:
D1. Perform feature point detection on the image captured by the camera adjacent to the depth acquisition apparatus and on the image captured by the camera not adjacent to the depth acquisition apparatus, respectively.
Here, the camera not adjacent to the depth acquisition apparatus is adjacent to the camera that is adjacent to the depth acquisition apparatus: the non-adjacent camera can capture at least a portion of the scene captured by the camera adjacent to the depth acquisition apparatus.
D2. Match the feature points detected in the image captured by the camera adjacent to the depth acquisition apparatus with the feature points detected in the image captured by the camera not adjacent to the depth acquisition apparatus, and determine the matchable feature points between the image captured by the camera adjacent to the depth acquisition apparatus and the image captured by the camera not adjacent to the depth acquisition apparatus.
D3. Obtain the three-dimensional world coordinates of the matchable feature points according to the predetermined parameters of the camera adjacent to the depth acquisition apparatus.
In this embodiment, the parameters of the camera adjacent to the depth acquisition apparatus can be obtained by the method of any one of embodiments one to five of the multi-camera calibration method of the present invention.
D4. Obtain the parameters of the non-adjacent camera according to the three-dimensional world coordinates of the matchable feature points and the correspondence with the image coordinates of the matchable feature points in the image captured by the non-adjacent camera.
Here, an existing camera calibration method (such as the Tsai calibration method or a planar calibration method) can be adopted to obtain the parameters of the non-adjacent camera, which is not described further here.
This embodiment provides a simple way to obtain the parameters of a farther camera to be calibrated: the predetermined parameters of the camera adjacent to the depth acquisition apparatus are used, in combination with the matched feature points, to obtain the parameters of the non-adjacent camera.
Referring to FIG. 8, embodiment one of the multi-camera calibration apparatus of the present invention comprises a feature point detection unit 110, a feature point matching unit 120, a feature point three-dimensional coordinate obtaining unit 130 and a calibration unit 140:
The feature point detection unit 110 is configured to perform feature point detection on the image captured by the depth acquisition apparatus and on the image captured by the camera adjacent to the depth acquisition apparatus, respectively.
In the embodiments of the present invention, the depth acquisition apparatus may be one or at least two depth cameras, may comprise at least two ordinary cameras, or may be a combination of at least one depth camera and at least one ordinary camera.
The image captured by the depth acquisition apparatus may be a single image, or two or more images.
The feature point matching unit 120 is configured to match the feature points, detected by the feature point detection unit 110, of the image captured by the depth acquisition apparatus with those of the image captured by the camera adjacent to the depth acquisition apparatus, and to determine the matchable feature points between the image captured by the depth acquisition apparatus and the image captured by the camera adjacent to the depth acquisition apparatus.
The feature point three-dimensional coordinate obtaining unit 130 is configured to obtain, based on the depth acquisition apparatus, the three-dimensional world coordinates of the matchable feature points obtained by the matching of the feature point matching unit 120.
The calibration unit 140 is configured to obtain the parameters of the adjacent camera according to the three-dimensional world coordinates, obtained by the feature point three-dimensional coordinate obtaining unit 130, of the matchable feature points and the correspondence with the image coordinates, in the image captured by the adjacent camera, of the matchable feature points obtained by the matching of the feature point matching unit 120.
In this embodiment the three-dimensional world coordinates of the matchable feature points are obtained from the depth acquisition apparatus through the feature point three-dimensional coordinate obtaining unit 130, so the parameters obtained by the calibration unit 140 from the three-dimensional world coordinates of the matchable feature points and their two-dimensional coordinates in the image captured by the camera to be calibrated are more accurate, and the poor operability of the prior art is overcome.
As shown in FIG. 9, embodiment two of the multi-camera calibration apparatus of the present invention is similar to embodiment one of the multi-camera calibration apparatus of the present invention, the difference being that in this embodiment the depth acquisition apparatus is specifically a binocular camera, and the feature point three-dimensional coordinate obtaining unit 130 further comprises:
A parameter obtaining unit 131, configured to obtain at least two different groups of parameters of the binocular camera.
A three-dimensional coordinate obtaining unit 132, configured to obtain the three-dimensional world coordinates of the matchable feature points according to the at least two different groups of parameters of the binocular camera obtained by the parameter obtaining unit 131 and the image coordinates, in the images captured by the binocular camera, of the matchable feature points obtained by the matching of the feature point matching unit 120.
In this case the calibration unit 140 is configured to obtain the parameters of the adjacent camera according to the three-dimensional world coordinates of the matchable feature points obtained by the three-dimensional coordinate obtaining unit 132 of the feature point three-dimensional coordinate obtaining unit 130 and the correspondence with the image coordinates of the matchable feature points in the image captured by the adjacent camera.
Here, the images captured by the depth acquisition apparatus specifically refer to at least two images chosen respectively from at least two groups of images obtained by the binocular camera capturing, under the at least two different groups of parameter conditions, a scene that includes a calibration template.
As shown in FIG. 10, embodiment three of the multi-camera calibration apparatus of the present invention is similar to embodiment two of the multi-camera calibration apparatus of the present invention, the difference being that in this embodiment the parameter obtaining unit 131 further comprises:
A corner detection unit 141, configured to detect the non-coplanar checkerboard corner points of each image in one group of images obtained by the binocular camera capturing, under the same group of parameter conditions, a scene that includes a calibration template.
A corner coordinate obtaining unit 151, configured to obtain the image coordinates of the corner points detected by the corner detection unit 141 and to specify the three-dimensional world coordinates of the corner points.
A binocular camera parameter obtaining unit 161, configured to calibrate the binocular camera according to the image coordinates and three-dimensional world coordinates of the corner points obtained by the corner coordinate obtaining unit 151, and to determine the binocular camera parameters corresponding to this group of images.
In this case the three-dimensional coordinate obtaining unit 132 obtains the three-dimensional world coordinates of the matchable feature points according to the at least two different groups of parameters of the binocular camera obtained by the binocular camera parameter obtaining unit 161 in the parameter obtaining unit 131 and the image coordinates, in the images captured by the binocular camera, of the matchable feature points obtained by the matching of the feature point matching unit 120.
In the foregoing description of camera calibration device of the present invention, if what described feature point detection unit 110 detected is the unique points of two width of cloth of described degree of depth deriving means shooting with epigraph, the unique point that described Feature Points Matching unit 120 also is used for the image that two width of cloth that described degree of depth deriving means is taken take with the contiguous video camera of the unique point of epigraph and described degree of depth deriving means is mated, but determined matching characteristic point is that described two width of cloth are with the common factors of epigraph with the unique point of the image of the video camera shooting of described degree of depth deriving means vicinity; Perhaps,
the feature point matching unit 120 is further configured to match each of the two or more images taken by the depth acquiring device, separately, against the image taken by the camera adjacent to the depth acquiring device; the matchable feature points so determined are the union of the feature points of the two or more images that match the image taken by the adjacent camera.
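The difference between the two matching strategies (intersection versus union of matchable feature points) can be illustrated with plain Python sets; the feature identifiers below are hypothetical:

```python
# Hypothetical feature identifiers found in two depth-device images
# (img_a, img_b) and in the adjacent camera's image (img_adj).
img_a = {"f1", "f2", "f3", "f5"}
img_b = {"f2", "f3", "f4"}
img_adj = {"f1", "f2", "f3", "f4", "f6"}

# First variant: keep only points present in ALL images.
intersection = img_a & img_b & img_adj

# Second variant: match each depth-device image against the adjacent
# camera's image separately, then take the union of the results.
union = (img_a & img_adj) | (img_b & img_adj)

print(sorted(intersection))  # → ['f2', 'f3']
print(sorted(union))         # → ['f1', 'f2', 'f3', 'f4']
```

The intersection variant yields fewer but more reliably observed points; the union variant yields more correspondences for the subsequent calibration.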
As shown in Figure 11, embodiment four of the camera calibration apparatus of the present invention can calibrate a camera that is not adjacent to the depth acquiring device. This apparatus comprises a feature point detection unit 410, a feature point matching unit 420, a feature point three-dimensional coordinate acquiring unit 430, and a calibration unit 440:
The feature point detection unit 410 is configured to perform feature point detection separately on the image taken by the camera adjacent to the depth acquiring device and on the image taken by the camera not adjacent to the depth acquiring device.
The feature point matching unit 420 is configured to match the feature points, detected by the feature point detection unit 410, of the image taken by the camera adjacent to the depth acquiring device against those of the image taken by the non-adjacent camera, thereby determining the matchable feature points between the image taken by the adjacent camera and the image taken by the non-adjacent camera.
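Feature point matching such as unit 420 performs is commonly implemented as a nearest-neighbour search over feature descriptors. A minimal NumPy sketch with a Lowe-style ratio test follows; the descriptors are synthetic and the 0.8 threshold is an assumption of this sketch, not specified by the embodiment:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """For each descriptor in desc_a, find its nearest neighbour in
    desc_b; keep the pair only if the best distance is clearly smaller
    than the second-best (ratio test against ambiguous matches)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Synthetic descriptors: rows 0 and 1 of A have distinctive
# counterparts in B; row 2 is ambiguous and is rejected.
A = np.array([[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
B = np.array([[0.99, 0.01], [0.01, 0.99], [0.5, 0.5]])
print(match_descriptors(A, B))  # → [(0, 0), (1, 1)]
```

The resulting index pairs are the matchable feature points passed on to the three-dimensional coordinate acquisition and calibration steps.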
The feature point three-dimensional coordinate acquiring unit 430 is configured to obtain, according to the predetermined parameters of the adjacent camera, the three-dimensional world coordinates of the matchable feature points matched by the feature point matching unit 420.
The calibration unit 440 is configured to obtain the parameters of the camera not adjacent to the depth acquiring device, according to the correspondence between the three-dimensional world coordinates of the matchable feature points obtained by the feature point three-dimensional coordinate acquiring unit 430 and the image coordinates of those matchable feature points in the image taken by the non-adjacent camera.
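Recovering camera parameters from 3-D world / 2-D image correspondences, as unit 440 does, amounts to estimating a projection matrix (for instance by a DLT) and then separating the intrinsic parameters from the extrinsic ones. A NumPy sketch of that separation step via an RQ decomposition follows; the function name and the synthetic ground-truth camera are illustrative assumptions:

```python
import numpy as np

def decompose_projection(P):
    """Split a 3x4 projection matrix P = K [R | t] into the intrinsic
    matrix K, rotation R and translation t, using an RQ decomposition
    built from NumPy's QR (illustrative sketch)."""
    M = P[:, :3]
    Q, U = np.linalg.qr(np.flipud(M).T)      # QR of the flipped transpose
    K = np.fliplr(np.flipud(U.T))            # upper-triangular factor
    R = np.flipud(Q.T)                       # orthogonal factor
    S = np.diag(np.sign(np.diag(K)))         # resolve sign ambiguity
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, P[:, 3])          # since P[:, 3] = K t
    return K / K[2, 2], R, t

# Synthetic ground truth: known intrinsics, a small rotation about y,
# and a translation; recover all three from P alone.
K_true = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
c, s = np.cos(0.1), np.sin(0.1)
R_true = np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])
t_true = np.array([0.1, -0.2, 2.0])
P = K_true @ np.hstack([R_true, t_true[:, None]])

K, R, t = decompose_projection(P)
print(np.allclose(K, K_true) and np.allclose(R, R_true)
      and np.allclose(t, t_true))  # → True
```

In the context of this embodiment, the depth acquiring device supplies the world coordinates, so the non-adjacent camera can be calibrated without it ever observing the calibration template directly.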
In the present embodiment, the camera calibration apparatus may further comprise an adjacent-camera parameter acquiring unit 450, configured to obtain the parameters of the camera adjacent to the depth acquiring device by using the camera calibration method of any one of embodiments one to six of the present invention.
In summary, in the embodiments of the present invention, the three-dimensional world coordinates of feature points are obtained mainly by means of the depth acquiring device, and the correspondence between those three-dimensional world coordinates and the image coordinates of the feature points in the image taken by the camera to be calibrated is used to calibrate that camera. Compared with prior-art schemes that calibrate multiple cameras through homography matrices between images, the camera parameters obtained are more stable and accurate, because the homography matrices between master and slave cameras need not be estimated. Moreover, since the present invention can obtain the three-dimensional world coordinates of feature points accurately, the calibration object need not move in a fixed direction at calibration time, which improves the operability of camera calibration. In addition, the camera calibration method of the present invention can apply the same calibration method to multiple adjacent cameras, simplifying the camera calibration process.
The camera calibration method and apparatus provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to set forth the principles and embodiments of the present invention, and the description of the above embodiments is intended only to help in understanding the method of the present invention and the ideas behind it. Meanwhile, those of ordinary skill in the art may, according to the ideas of the present invention, make changes in the specific embodiments and the scope of application. In summary, this description should not be construed as limiting the present invention.