CN101630406A - Camera calibration method and camera calibration device - Google Patents


Info

Publication number: CN101630406A
Authority: CN (China)
Prior art keywords: image, deriving means, camera, video camera, degree
Legal status: Granted
Application number: CN200810130737A
Other languages: Chinese (zh)
Other versions: CN101630406B (en)
Inventors: 李凯, 刘源
Current Assignee: Global Innovation Polymerization LLC; Tanous Co
Original Assignee: Shenzhen Huawei Communication Technologies Co Ltd
Application filed by Shenzhen Huawei Communication Technologies Co Ltd
Priority to CN200810130737XA (granted as CN101630406B)
Publication of CN101630406A
Application granted
Publication of CN101630406B
Current legal status: Expired - Fee Related

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a camera calibration method and a camera calibration device, applied in the technical field of image and video processing. The method and device mainly acquire the three-dimensional world coordinates of feature points through a depth acquiring device, and calibrate the camera to be calibrated by using the correspondence between the three-dimensional world coordinates of the feature points and their image coordinates in the image captured by the camera to be calibrated. Compared with the prior-art scheme that calibrates multiple cameras through homography matrices between images, the method and device do not need to estimate a homography matrix between a master camera and a slave camera, and therefore obtain stable and accurate camera parameters. The method and device can accurately acquire the three-dimensional world coordinates of feature points, and the calibration object does not have to move in a fixed direction during calibration, which improves the operability of camera calibration. In addition, the same calibration method can be applied to several adjacent cameras, which simplifies the camera calibration process.

Description

Camera calibration method and camera calibration device
Technical field
The present invention relates to the field of image and video technology, and in particular to a camera calibration method and a camera calibration device.
Background
In computer vision and photogrammetry, camera calibration must be performed in order to obtain the correspondence between pixels of a computer image and points in physical space. Camera calibration refers to the process of solving for the intrinsic and extrinsic parameters of a camera model by processing images taken with the camera and applying a series of mathematical transformations and computational methods. A large number of camera calibration methods already exist. They include traditional methods that solve for the camera model parameters using a calibrated reference object whose shape and size are known under specific experimental conditions, as well as camera self-calibration methods that do not rely on a reference object and calibrate the camera using only the correspondences between images of the surrounding environment captured while the camera moves. Traditional methods that use a reference object such as a calibration template, typically Tsai's two-step method, have been widely applied; however, because a reference object must always be used during shooting and calibration, these methods bring considerable inconvenience to the shooting operation and to the use of the calibration procedure.
In free-viewpoint video applications, obtaining the intrinsic and extrinsic parameters of multiple cameras is very important: epipolar rectification of images, acquisition of depth/disparity maps, and interpolation of free-viewpoint video all rely on the intrinsic and extrinsic parameters of nearly every camera. If a traditional calibration method is used for multiple cameras, every camera has to be calibrated one by one with a reference object, which is complicated and tedious to carry out; in particular, when the camera parameters change, how to re-calibrate every camera quickly and effectively becomes a problem that must be solved.
The prior art provides a scheme that calibrates multiple cameras through homography matrices between images without using a calibration template. When calibrating multiple cameras, the scheme comprises: each camera in a group of cameras synchronously captures an image, one camera being the master camera and the others being slave cameras; feature points are extracted from each image captured by each camera; the extracted feature points are used to estimate the homography matrices between the group of images; and the homography matrices are used to obtain a linear solution for the extrinsic parameters of each camera.
In the course of making the present invention, the inventors found that the above prior-art camera calibration scheme has at least the following problems: the scheme that calibrates multiple cameras through homography matrices between images must estimate the homography matrix between the master and slave cameras before the extrinsic parameters of the cameras can be obtained; since the estimation of the homography matrix is neither stable nor reliable, the extrinsic parameters derived from it are also poor. In addition, when a calibration-object method is used, the calibration object must be moved in a fixed direction, so the operability is poor and the complexity of the calibration process is high.
Summary of the invention
Embodiments of the invention provide a camera calibration method and a camera calibration device. The camera calibration method of the invention can obtain camera parameters more accurately and improves the operability of camera calibration.
A camera calibration method provided by an embodiment of the invention comprises:
performing feature point detection on an image captured by a depth acquiring device and on an image captured by a camera adjacent to the depth acquiring device, respectively;
matching the detected feature points of the image captured by the depth acquiring device with the detected feature points of the image captured by the camera adjacent to the depth acquiring device, to determine matchable feature points between the image captured by the depth acquiring device and the image captured by the camera adjacent to the depth acquiring device;
obtaining, based on the depth acquiring device, the three-dimensional world coordinates of the matchable feature points;
determining the parameters of the camera adjacent to the depth acquiring device according to the correspondence between the three-dimensional world coordinates of the matchable feature points and the image coordinates of the matchable feature points in the image captured by the camera adjacent to the depth acquiring device.
Another camera calibration method provided by the invention comprises:
performing feature point detection on an image captured by a camera adjacent to a depth acquiring device and on an image captured by a camera not adjacent to the depth acquiring device, respectively;
matching the detected feature points of the image captured by the camera adjacent to the depth acquiring device with the detected feature points of the image captured by the camera not adjacent to the depth acquiring device, to determine matchable feature points between the two images;
obtaining the three-dimensional world coordinates of the matchable feature points according to the predetermined parameters of the camera adjacent to the depth acquiring device;
obtaining the parameters of the camera not adjacent to the depth acquiring device according to the correspondence between the three-dimensional world coordinates of the matchable feature points and the image coordinates of the matchable feature points in the image captured by the non-adjacent camera.
A camera calibration device provided by an embodiment of the invention comprises:
a feature point detection unit, configured to perform feature point detection on an image captured by a depth acquiring device and on an image captured by a camera adjacent to the depth acquiring device, respectively;
a feature point matching unit, configured to match the feature points detected by the feature point detection unit in the image captured by the depth acquiring device with those detected in the image captured by the camera adjacent to the depth acquiring device, to determine matchable feature points between the two images;
a feature point three-dimensional coordinate acquiring unit, configured to obtain, based on the depth acquiring device, the three-dimensional world coordinates of the matchable feature points;
a calibration unit, configured to obtain the parameters of the adjacent camera according to the correspondence between the three-dimensional world coordinates of the matchable feature points and the image coordinates of the matchable feature points in the image captured by the camera adjacent to the depth acquiring device.
It can be seen from the above technical solutions that, in the embodiments of the invention, the three-dimensional world coordinates of feature points are mainly obtained by means of a depth acquiring device, and the camera to be calibrated is calibrated using the correspondence between the three-dimensional world coordinates of the feature points and their image coordinates in the image captured by that camera. Compared with the prior-art scheme that calibrates multiple cameras through homography matrices between images, the homography matrix between master and slave cameras does not need to be estimated, so the camera parameters obtained are more stable and accurate. Moreover, the invention can accurately obtain the three-dimensional world coordinates of feature points, and the calibration object does not have to move in a fixed direction during calibration, which improves the operability of camera calibration. In addition, the camera calibration method of the invention can apply the same calibration procedure to several adjacent cameras, which simplifies the camera calibration process.
Description of drawings
Fig. 1 is a flowchart of Embodiment 1 of the camera calibration method of the present invention;
Fig. 2 is a schematic structural diagram of a 2D/3D multi-viewpoint video conference system composed of a depth camera and its adjacent cameras in Embodiment 2 of the camera calibration method of the present invention;
Fig. 3 is a flowchart of Embodiment 2 of the camera calibration method of the present invention;
Fig. 4 is a schematic structural diagram of a 2D/3D multi-viewpoint video conference system composed of a binocular camera and its adjacent cameras in Embodiment 3 of the camera calibration method of the present invention;
Fig. 5 is a flowchart of Embodiment 3 of the camera calibration method of the present invention;
Fig. 6 is a schematic structural diagram of a 2D/3D multi-viewpoint video conference system composed of several depth acquiring devices in Embodiment 5 of the camera calibration method of the present invention;
Fig. 7 is a flowchart of Embodiment 6 of the camera calibration method of the present invention;
Fig. 8 is a schematic structural diagram of Embodiment 1 of the camera calibration device of the present invention;
Fig. 9 is a schematic structural diagram of Embodiment 2 of the camera calibration device of the present invention;
Fig. 10 is a schematic structural diagram of Embodiment 3 of the camera calibration device of the present invention;
Fig. 11 is a schematic structural diagram of Embodiment 4 of the camera calibration device of the present invention.
Detailed description of the embodiments
Preferred embodiments of the camera calibration method and device provided by the invention are described in detail below with reference to the accompanying drawings.
With reference to Fig. 1, Embodiment 1 of the camera calibration method of the present invention comprises:
A1. Perform feature point detection on an image captured by a depth acquiring device and on an image captured by a camera adjacent to the depth acquiring device, respectively.
It should be understood that the depth acquiring device in this embodiment refers to a device that obtains depth information either by physical means, such as a depth camera, or by software means, such as a traditional stereo matching method. Depth is the distance from the camera to the photographed object, and is a quantity related to parallax. In general, when the depth is large the parallax is small, and when the depth is small the parallax is large. From the parallax-depth model, the relation between the parallax p and the depth z_p can be obtained (with x_B the baseline between the two viewpoints and D the distance to the zero-parallax plane):
x_L / D = x_p / (D - z_p),  (x_R - x_B) / D = (x_p - x_B) / (D - z_p)
⇒ (x_L - x_R + x_B) / D = x_B / (D - z_p)
⇒ p = x_B (1 - D / (D - z_p)) = -x_B z_p / (D - z_p)
It can be seen that the parallax is a nonlinear function of the depth. In many applications D is much larger than z_p, so the linear model p ≈ -(x_B / D) z_p is applicable; the nonlinear function above can therefore be reduced to a linear function, which improves computational efficiency.
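As an illustrative sketch (not part of the patented method), the exact parallax-depth relation above and its linear approximation can be compared numerically; the baseline x_B and convergence distance D below are example values chosen only for the demonstration.

```python
import numpy as np

def parallax_exact(z_p, x_B, D):
    """Exact parallax from the nonlinear model p = x_B * (1 - D / (D - z_p))."""
    return x_B * (1.0 - D / (D - z_p))

def parallax_linear(z_p, x_B, D):
    """Linear approximation p ~= -(x_B / D) * z_p, valid when D >> z_p."""
    return -x_B * z_p / D

x_B, D = 0.1, 10.0                 # example baseline (m) and zero-parallax distance (m)
for z_p in (0.05, 0.2, 0.5):       # depths much smaller than D
    print(z_p, parallax_exact(z_p, x_B, D), parallax_linear(z_p, x_B, D))
```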
In the embodiments of the invention, the depth acquiring device may be one or at least two depth cameras, or may comprise at least two ordinary cameras, i.e. a binocular camera. When the depth acquiring device comprises a depth camera, taking as an example a camera that can obtain depth information directly, such a camera measures the distance from the photographed object to the camera directly by physical means, for example by radar or infrared, and represents it in the form of a gray-level image. A higher gray level indicates a smaller depth and a larger parallax; a lower gray level indicates a larger depth and a smaller parallax. When the depth acquiring device comprises at least two ordinary cameras, a series of constraints between the binocular cameras is used to perform stereo matching on the images captured by the left and right cameras to obtain the parallax, and the relation between parallax and depth is then used to solve for the depth map.
A camera adjacent to the depth acquiring device mainly means a camera that can capture at least part of the scene captured by the depth acquiring device.
The detected feature points are characteristic image information expressed in two-dimensional coordinates. It must be ensured that the detected feature points are invariant to transformations such as image scaling, translation and rotation, i.e. that they are stable feature points of the detection process.
A2. Match the detected feature points of the image captured by the depth acquiring device with the detected feature points of the image captured by the camera adjacent to the depth acquiring device, to determine the matchable feature points between the image captured by the depth acquiring device and the image captured by the camera adjacent to the depth acquiring device.
The matchable feature points are points whose characteristic information is identical in the image captured by the depth acquiring device and in the image captured by the camera adjacent to the depth acquiring device.
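The patent does not mandate a particular detector or matcher. As one possible realization of steps A1-A2, the sketch below uses SIFT features (invariant to scaling, translation and rotation, as required above) with a ratio test, assuming OpenCV and NumPy are available; the file names and the ratio threshold are illustrative only.

```python
import cv2

def match_feature_points(img_depth_dev, img_neighbor, ratio=0.75):
    """Detect feature points in both images (A1) and keep mutually matchable ones (A2)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_depth_dev, None)
    kp2, des2 = sift.detectAndCompute(img_neighbor, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]  # Lowe ratio test
    pts_dev = [kp1[m.queryIdx].pt for m in good]   # coordinates in the depth-device image
    pts_cam = [kp2[m.trainIdx].pt for m in good]   # coordinates in the adjacent-camera image
    return pts_dev, pts_cam

img1 = cv2.imread("depth_device_view.png", cv2.IMREAD_GRAYSCALE)    # illustrative file names
img2 = cv2.imread("adjacent_camera_view.png", cv2.IMREAD_GRAYSCALE)
pts_dev, pts_cam = match_feature_points(img1, img2)
```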
A3. Obtain, based on the depth acquiring device, the three-dimensional world coordinates of the matchable feature points.
In this embodiment, the three-dimensional world coordinates of the matchable feature points can be determined from the depth information or parallax information obtained by the depth acquiring device.
A4. Determine the parameters of the camera adjacent to the depth acquiring device according to the three-dimensional world coordinates of the matchable feature points and their correspondence with the image coordinates of the matchable feature points in the image captured by the camera adjacent to the depth acquiring device.
Existing camera calibration methods (e.g. the Tsai calibration method, the planar calibration method, etc.) can be used to obtain the parameters of the adjacent camera in step A4, which is not described further here.
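Step A4 leaves the choice of calibration algorithm open. As a hedged alternative to the methods the text cites (it is not the Tsai two-step method, only a common substitute), the sketch below estimates the 3x4 projection matrix of the adjacent camera from at least six 3D-2D correspondences by a basic Direct Linear Transform, assuming only NumPy.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P with x ~ P X from >= 6 correspondences (DLT)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    P = vt[-1].reshape(3, 4)                   # null-space vector = least-squares solution
    return P / np.linalg.norm(P[2, :3])        # normalize so the third rotation row has unit norm

# Intrinsic and extrinsic parameters can then be separated from P,
# e.g. by an RQ decomposition of its left 3x3 block.
```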
It can be seen from the above technical solution that, in this embodiment of the invention, the three-dimensional world coordinates of feature points are mainly obtained by means of a depth acquiring device, and the camera to be calibrated is calibrated using the correspondence between the three-dimensional world coordinates of the feature points and their image coordinates in the image captured by that camera. Compared with the prior-art scheme that calibrates multiple cameras through homography matrices between images, the homography matrix between master and slave cameras does not need to be estimated, so the camera parameters obtained are more stable and accurate. Moreover, the three-dimensional world coordinates of the feature points can be obtained accurately, and the calibration object does not have to move in a fixed direction during calibration, which improves the operability of camera calibration. In addition, the same calibration procedure can be applied to several adjacent cameras, which simplifies the camera calibration process.
In Embodiment 2 of the camera calibration method of the present invention, a depth camera is used as the depth acquiring device. Fig. 2 shows a 2D/3D multi-viewpoint video conference system composed of one depth camera and its adjacent cameras. The flowchart of the camera calibration method of this embodiment, shown in Fig. 3, comprises:
B1. Perform feature point detection on the two-dimensional image captured by the depth camera and on the image captured by a camera adjacent to this depth camera, respectively.
A depth camera can capture an ordinary RGB two-dimensional image and its corresponding depth map.
B2. Match the detected feature points to determine the matchable feature points between the two-dimensional image captured by the depth camera and the image captured by the camera adjacent to this depth camera.
In this embodiment, the depth camera may capture a single two-dimensional image under one group of parameter conditions, or may capture at least two two-dimensional images under at least two different groups of parameter conditions. For the latter case, the following two different matching modes can be used:
Mode one: after feature point detection has been performed on each of the at least two two-dimensional images, the feature points of each of those images are matched with the feature points of the image captured by the camera adjacent to the depth camera; the matchable feature points so determined are the intersection of the feature points matched between the at least two two-dimensional images and the image captured by the camera adjacent to the depth camera.
Mode two: after feature point detection has been performed on each of the at least two two-dimensional images, each of those images is matched separately with the image captured by the adjacent camera; the matchable feature points so determined are the union of the feature points that each of the at least two two-dimensional images matches with the image captured by the adjacent camera.
B3. Obtain the three-dimensional world coordinates of the matchable feature points according to the depth map captured by the depth camera.
For matching mode one in B2, since the matchable feature points exist in every image captured by the depth camera under the different parameter conditions, their three-dimensional world coordinates can be obtained from any of the depth maps captured by the depth camera; for matching mode two in B2, since a matchable feature point may exist in only some of the images captured by the depth camera under the different parameter conditions, its three-dimensional world coordinates must be obtained from a depth map in which that matchable feature point exists.
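The patent does not spell out how a depth-map value is turned into a three-dimensional world coordinate in step B3. A common way, shown below as an assumption rather than the patent's prescription, is to back-project the pixel through the depth camera's intrinsic matrix K; here the world frame is taken to coincide with the depth camera frame, and the intrinsic values are placeholders.

```python
import numpy as np

def backproject(u, v, depth, K):
    """Back-project pixel (u, v) with metric depth into a 3D point in the depth-camera frame."""
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

K = np.array([[800.0, 0.0, 320.0],     # illustrative intrinsics of the depth camera
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
point_3d = backproject(400.0, 260.0, 2.5, K)   # e.g. a matchable feature point at 2.5 m
```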
B4. Obtain the parameters of the adjacent camera according to the three-dimensional world coordinates of the matchable feature points and their correspondence with the image coordinates of the matchable feature points in the image captured by the adjacent camera.
Existing camera calibration methods (for example, the Tsai calibration method, the checkerboard calibration method and the planar calibration method) can be used to obtain the parameters of the adjacent camera, which is not described further here.
This embodiment is a concrete application of Embodiment 1. In this embodiment a depth camera is used as the depth acquiring device, so that when obtaining the three-dimensional world coordinates of the matchable feature points, the depth map is obtained by an intuitive physical method and the three-dimensional world coordinates are then obtained by combining it with the information of the matchable feature points. This method is fairly simple and intuitive, and the three-dimensional world coordinates obtained are relatively accurate.
In Embodiment 3 of the camera calibration method of the present invention, at least two ordinary cameras are used as the depth acquiring device. Fig. 4 shows a 2D/3D multi-viewpoint video conference system composed of a binocular camera, i.e. two ordinary cameras, and its adjacent cameras. This embodiment takes the two-camera calibration method as an example; its flowchart, shown in Fig. 5, comprises:
C1. Perform feature point detection on the images captured by the binocular camera and on the image captured by a camera adjacent to the binocular camera, respectively.
The images captured by the binocular camera may be at least two images, chosen one from each of at least two groups of images obtained by the binocular camera photographing, under different parameter conditions, a scene containing a calibration template. If environmental conditions such as the illumination of the experiment and the sharpness of the captured images are the same when the groups of images are taken, the choice of image within a group has little influence on the result; in general, the first image of each group can simply be chosen.
C2. Match the detected feature points to determine the matchable feature points between the images captured by the binocular camera and the image captured by the camera adjacent to the binocular camera.
In this embodiment, the feature points of each of the selected at least two images can be matched with the feature points of the image captured by the camera adjacent to the binocular camera, as in matching mode one of Embodiment 2; the matchable feature points so determined are the intersection of the feature points matched between the at least two two-dimensional images and the image captured by the camera adjacent to the binocular camera.
C3. Obtain the three-dimensional world coordinates of the matchable feature points according to at least two different groups of parameters of the binocular camera and the image coordinates of the matchable feature points in the images captured by the binocular camera.
The at least two different groups of parameters of the binocular camera are parameters calibrated according to an existing calibration method. They can be obtained by the binocular camera photographing, under different parameter conditions, at least two groups of images of the same scene containing a calibration template (the at least two images mentioned in step C1 are chosen one from each of these groups), and performing the following steps C31 to C34 on each group of images. The change in parameter conditions need not be large, as long as the portion of the scene common to the images captured under the different parameter conditions is not too small. For convenience of description, the case of two groups of different parameters is taken as an example below:
C31. Detect, as feature points, the non-coplanar checkerboard corner points in the group of images captured by the binocular camera under the same group of parameter conditions. Here, the non-coplanar checkerboard corner points can be detected in images obtained by moving the calibration template along the optical axis of the camera.
C32. Obtain the image coordinates (x_f, y_f) of the corner points. It should be understood that, according to the transformation formulas between the coordinate systems of the different levels of the image, the image pixel coordinates of the corner points can be obtained.
C33. Determine the three-dimensional world coordinates (x_w, y_w, z_w) of the corner points. Since in step C31 the non-coplanar checkerboard corner points are detected by moving the calibration template along the optical axis of the camera, the three-dimensional world coordinates of the corner points are determined.
C34. Calibrate the binocular camera according to the image coordinates and the three-dimensional world coordinates of the corner points, and determine the binocular camera parameters.
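One way steps C31-C33 could look in practice is sketched below, assuming OpenCV, a 9x6 inner-corner checkerboard of known square size, and that the offset of each template position along the optical axis is known; all of these values are illustrative assumptions, not requirements of the patent. The calibration itself (C34) can then reuse any standard method, e.g. the DLT sketch given after step A4.

```python
import cv2
import numpy as np

def checkerboard_world_and_image_points(gray, z_offset, cols=9, rows=6, square=0.03):
    """Detect checkerboard corners (C31), refine their image coordinates (C32) and
    assign the known 3D world coordinates of the template at this position (C33)."""
    ok, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if not ok:
        return None, None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    world = np.zeros((cols * rows, 3), np.float32)
    world[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square
    world[:, 2] = z_offset            # template moved along the optical axis by a known amount
    return world, corners.reshape(-1, 2)
```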
By the above method, two groups of camera parameters can be obtained: f^1, (C_x^1, C_y^1), k_1^1, s_x^1, r_11^1, r_12^1, r_13^1, r_21^1, r_22^1, r_23^1, r_31^1, r_32^1, r_33^1, T_x^1, T_y^1, T_z^1 and f^2, (C_x^2, C_y^2), k_1^2, s_x^2, r_11^2, r_12^2, r_13^2, r_21^2, r_22^2, r_23^2, r_31^2, r_32^2, r_33^2, T_x^2, T_y^2, T_z^2, where the superscripts 1 and 2 denote parameter group I and parameter group II respectively. Here f is the intrinsic focal length (mm); C_x, C_y are the pixel coordinates of the optical center; k_1 is the first-order coefficient of the radial lens distortion; s_x is the uncertainty scale factor. The extrinsic parameters R and T are, respectively, the rotation matrix and the translation vector between the three-dimensional world coordinate system and the camera coordinate system, T_x, T_y, T_z being the translations along the three axes of the transformation from the world coordinate system to the camera coordinate system.
If the orientation of the camera coordinate system with respect to the world coordinate system is given by a counter-clockwise rotation by angle α about the X axis, a counter-clockwise rotation by angle β about the Y axis, and a counter-clockwise rotation by angle γ about the Z axis, then the rotation matrix is R = R_α R_β R_γ.
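A small sketch of the stated composition R = R_α R_β R_γ (counter-clockwise rotations about X, Y and Z); the exact sign and ordering convention used here is an assumption consistent with the text, and NumPy is assumed.

```python
import numpy as np

def rotation_from_euler(alpha, beta, gamma):
    """R = R_alpha * R_beta * R_gamma for counter-clockwise rotations about X, Y, Z."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    R_alpha = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # rotation about X
    R_beta  = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # rotation about Y
    R_gamma = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])   # rotation about Z
    return R_alpha @ R_beta @ R_gamma
```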
After feature point detection and matching, the image coordinates (x_f^1, y_f^1) and (x_f^2, y_f^2) of each matchable feature point in the two selected images captured by the binocular camera (hereinafter referred to as image 1 and image 2) can be obtained, where the superscripts 1 and 2 denote image 1 and image 2 respectively.
In this embodiment, distortion removal can also be applied to (x_f^1, y_f^1) and (x_f^2, y_f^2). Specifically, for the image coordinates (x_f^1, y_f^1) and (x_f^2, y_f^2) of each matchable feature point in image 1 and image 2, the ideal, distortion-corrected image coordinates (x_u^1, y_u^1) and (x_u^2, y_u^2) are obtained by the following formulas (1) to (6):
x_d = d'_x (x_f - C_x) / s_x    (1)
y_d = d_y (y_f - C_y)    (2)
x_u = x_d (1 + k_1 r^2)    (3)
y_u = y_d (1 + k_1 r^2)    (4)
d'_x = d_x N_cx / N_fx    (5)
r^2 = x_d^2 + y_d^2    (6)
where (x_d, y_d) are the real (distorted) image coordinates of the matchable feature point; d_x and d_y are, respectively, the distances (mm) between the centers of adjacent CCD sensor elements in the x direction (the scan-line direction) and in the y direction; N_cx is the number of sensor elements in the x direction (provided by the camera manufacturer); and N_fx is the number of pixels sampled per row by the computer, i.e. the size of the image in the x direction (in pixels).
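A direct transcription of formulas (1)-(6) into code, assuming the Tsai-style parameters named in the text are already known; the function name is illustrative and not from the patent.

```python
def undistort_tsai(x_f, y_f, C_x, C_y, s_x, k_1, d_x, d_y, N_cx, N_fx):
    """Map a raw image coordinate (x_f, y_f) to the ideal coordinate (x_u, y_u), formulas (1)-(6)."""
    d_x_prime = d_x * N_cx / N_fx            # (5)
    x_d = d_x_prime * (x_f - C_x) / s_x      # (1)
    y_d = d_y * (y_f - C_y)                  # (2)
    r2 = x_d ** 2 + y_d ** 2                 # (6)
    x_u = x_d * (1 + k_1 * r2)               # (3)
    y_u = y_d * (1 + k_1 * r2)               # (4)
    return x_u, y_u
```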
Since the camera parameters corresponding to image 1 and image 2, and the ideal image coordinates of the matchable feature points in image 1 and image 2, have been obtained, the three-dimensional world coordinates (x_w, y_w, z_w) can be computed for each matchable feature point. An approximate optimal solution of the three-dimensional world coordinates can be obtained by solving the following overdetermined system of linear equations (7), which can be solved by least-squares fitting.
(f^1 r_11^1 - x_u^1 r_31^1) x_w + (f^1 r_12^1 - x_u^1 r_32^1) y_w + (f^1 r_13^1 - x_u^1 r_33^1) z_w = x_u^1 T_z^1 - f^1 T_x^1
(f^1 r_21^1 - y_u^1 r_31^1) x_w + (f^1 r_22^1 - y_u^1 r_32^1) y_w + (f^1 r_23^1 - y_u^1 r_33^1) z_w = y_u^1 T_z^1 - f^1 T_y^1
(f^2 r_11^2 - x_u^2 r_31^2) x_w + (f^2 r_12^2 - x_u^2 r_32^2) y_w + (f^2 r_13^2 - x_u^2 r_33^2) z_w = x_u^2 T_z^2 - f^2 T_x^2
(f^2 r_21^2 - y_u^2 r_31^2) x_w + (f^2 r_22^2 - y_u^2 r_32^2) y_w + (f^2 r_23^2 - y_u^2 r_33^2) z_w = y_u^2 T_z^2 - f^2 T_y^2    (7)
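System (7) can be solved per matchable feature point by linear least squares. The sketch below assumes NumPy and that each parameter group is given as a focal length f, a 3x3 rotation matrix R and a translation vector T; the function name is illustrative.

```python
import numpy as np

def triangulate(f1, R1, T1, xu1, yu1, f2, R2, T2, xu2, yu2):
    """Solve the overdetermined system (7) for (x_w, y_w, z_w) by least squares."""
    A, b = [], []
    for f, R, T, xu, yu in ((f1, R1, T1, xu1, yu1), (f2, R2, T2, xu2, yu2)):
        A.append(f * R[0] - xu * R[2]); b.append(xu * T[2] - f * T[0])
        A.append(f * R[1] - yu * R[2]); b.append(yu * T[2] - f * T[1])
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol   # approximate (x_w, y_w, z_w)
```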
C4. Obtain the parameters of the adjacent camera according to the three-dimensional world coordinates of the matchable feature points and their correspondence with the image coordinates of the matchable feature points in the image captured by the adjacent camera.
After feature point detection and matching, the image coordinates of the matchable feature points in the image captured by the adjacent camera can be obtained.
Existing camera calibration methods (e.g. the Tsai calibration method, the planar calibration method, etc.) can be used to obtain the parameters of the adjacent camera, which is not described further here.
This embodiment is a concrete application of Embodiment 1. In this embodiment a binocular camera is used as the depth acquiring device, so that when obtaining the three-dimensional world coordinates of the matchable feature points, the parameters of the binocular camera obtained in advance are combined with the information of the matchable feature points. This approach is relatively easy to implement: only at least two ordinary cameras are needed to achieve the depth-information-acquisition function of a depth camera.
In Embodiment 4 of the camera calibration method of the present invention, for the images captured by a single depth acquiring device under at least two different groups of parameter conditions, the method of Embodiment 2 or Embodiment 3 is carried out separately for each, obtaining at least two groups of parameters for the adjacent camera; the at least two groups of parameters so obtained are then weighted and averaged to obtain the parameters of the adjacent camera.
It should be understood that the single depth acquiring device here may be a single depth camera, as described in Embodiment 2, or at least two ordinary cameras, as described in Embodiment 3.
In Embodiment 5 of the camera calibration method of the present invention, a 2D/3D multi-viewpoint video conference system composed of several depth acquiring devices, as shown in Fig. 6, can be calibrated. For the images captured by each of at least two depth acquiring devices, the method of Embodiment 2 or Embodiment 3 is carried out separately, obtaining at least two groups of parameters for the adjacent camera; the at least two groups of parameters so obtained are then weighted and averaged to obtain the parameters of the adjacent camera.
It should be understood that the several depth acquiring devices here may be at least two depth cameras, each depth camera calibrating its adjacent cameras by the method described in Embodiment 2; or a combination of at least two ordinary cameras and at least one depth camera, the at least two ordinary cameras calibrating their adjacent cameras by the method described in Embodiment 3.
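A minimal sketch of the weighted averaging used in Embodiments 4 and 5. The patent does not fix the weights, so equal weights are used here as an assumption, and each parameter estimate is represented as a flat NumPy vector.

```python
import numpy as np

def fuse_parameter_sets(param_sets, weights=None):
    """Weighted average of several estimates of the same camera's parameter vector."""
    params = np.asarray(param_sets, dtype=float)          # shape: (n_sets, n_params)
    if weights is None:
        weights = np.full(len(params), 1.0 / len(params)) # equal weights by default
    # Note: rotation parameters may need a proper rotation average (e.g. via quaternions);
    # this flat element-wise average is only illustrative.
    return np.average(params, axis=0, weights=weights)
```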
A person of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments can be implemented by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and when executed it can comprise: performing feature point detection on an image captured by a depth acquiring device and on an image captured by a camera adjacent to the depth acquiring device, respectively; matching the detected feature points to determine the matchable feature points between the image captured by the depth acquiring device and the image captured by the camera adjacent to this device; obtaining, based on the depth acquiring device, the three-dimensional world coordinates of the matchable feature points; and obtaining the parameters of the adjacent camera according to the three-dimensional world coordinates of the matchable feature points, the image coordinates of the matchable feature points in the image captured by the adjacent camera, and the correspondence between these two kinds of coordinates. The storage medium referred to here may be, for example, a ROM/RAM, a magnetic disk or an optical disc.
Embodiment 6 of the camera calibration method of the present invention provides a calibration method for a camera that is not adjacent to the depth acquiring device. With reference to Fig. 7, the method of this embodiment comprises:
D1. Perform feature point detection on the image captured by a camera adjacent to the depth acquiring device and on the image captured by a camera not adjacent to the depth acquiring device, respectively.
Here, the camera that is not adjacent to the depth acquiring device is adjacent to the camera that is adjacent to the depth acquiring device: the non-adjacent camera can capture at least part of the scene captured by the camera adjacent to the depth acquiring device.
D2. Match the detected feature points of the image captured by the camera adjacent to the depth acquiring device with the detected feature points of the image captured by the camera not adjacent to the depth acquiring device, to determine the matchable feature points between these two images.
D3. Obtain the three-dimensional world coordinates of the matchable feature points according to the predetermined parameters of the camera adjacent to the depth acquiring device.
In this embodiment, the parameters of the camera adjacent to the depth acquiring device can be obtained by any of the methods described in Embodiments 1 to 5 of the camera calibration method of the present invention.
D4. Obtain the parameters of the non-adjacent camera according to the three-dimensional world coordinates of the matchable feature points and their correspondence with the image coordinates of the matchable feature points in the image captured by the non-adjacent camera.
Existing camera calibration methods (e.g. the Tsai calibration method, the planar calibration method, etc.) can be used to obtain the parameters of the non-adjacent camera, which is not described further here.
This embodiment provides a simple way of obtaining the parameters of a camera to be calibrated that is farther away: the parameters of the camera adjacent to the depth acquiring device are used, in combination with the matchable feature points between the adjacent camera and the non-adjacent camera, to calibrate the non-adjacent camera.
With reference to Fig. 8, Embodiment 1 of the camera calibration device of the present invention comprises a feature point detection unit 110, a feature point matching unit 120, a feature point three-dimensional coordinate acquiring unit 130 and a calibration unit 140:
The feature point detection unit 110 is configured to perform feature point detection on an image captured by a depth acquiring device and on an image captured by a camera adjacent to the depth acquiring device, respectively.
In the embodiments of the invention, the depth acquiring device may be one or at least two depth cameras, may comprise at least two ordinary cameras, or may be a combination of at least one depth camera and at least one ordinary camera.
The image captured by the depth acquiring device may be a single image or two or more images.
The feature point matching unit 120 is configured to match the feature points detected by the feature point detection unit 110 in the image captured by the depth acquiring device with those detected in the image captured by the camera adjacent to the depth acquiring device, to determine the matchable feature points between the image captured by the depth acquiring device and the image captured by the camera adjacent to the depth acquiring device.
The feature point three-dimensional coordinate acquiring unit 130 is configured to obtain, based on the depth acquiring device, the three-dimensional world coordinates of the matchable feature points determined by the feature point matching unit 120.
The calibration unit 140 is configured to obtain the parameters of the adjacent camera according to the correspondence between the three-dimensional world coordinates of the matchable feature points obtained by the feature point three-dimensional coordinate acquiring unit 130 and the image coordinates, in the image captured by the adjacent camera, of the matchable feature points determined by the feature point matching unit 120.
In this embodiment, the three-dimensional world coordinates of the matchable feature points are obtained by the feature point three-dimensional coordinate acquiring unit 130 based on the depth acquiring device, so the parameters obtained by the calibration unit 140 from the three-dimensional world coordinates of the matchable feature points and their two-dimensional coordinates in the image captured by the camera to be calibrated are more accurate, and the poor operability of the prior art is overcome.
As shown in Fig. 9, Embodiment 2 of the camera calibration device of the present invention is similar to Embodiment 1, the difference being that in this embodiment the depth acquiring device is specifically a binocular camera, and the feature point three-dimensional coordinate acquiring unit 130 further comprises:
a parameter acquiring unit 131, configured to obtain at least two different groups of parameters of the binocular camera;
a three-dimensional coordinate acquiring unit 132, configured to obtain the three-dimensional world coordinates of the matchable feature points according to the at least two different groups of parameters of the binocular camera obtained by the parameter acquiring unit 131 and the image coordinates, in the images captured by the binocular camera, of the matchable feature points determined by the feature point matching unit 120.
In this case the calibration unit 140 is configured to obtain the parameters of the adjacent camera according to the correspondence between the three-dimensional world coordinates of the matchable feature points obtained by the three-dimensional coordinate acquiring unit 132 of the feature point three-dimensional coordinate acquiring unit 130 and the image coordinates of the matchable feature points in the image captured by the adjacent camera.
Here, the images captured by the depth acquiring device specifically mean: at least two images, chosen one from each of at least two groups of images obtained by the binocular camera photographing, under the at least two different groups of parameter conditions, a scene containing a calibration template.
As shown in Fig. 10, Embodiment 3 of the camera calibration device of the present invention is similar to Embodiment 2, the difference being that in this embodiment the parameter acquiring unit 131 further comprises:
a corner detection unit 141, configured to detect the non-coplanar checkerboard corner points of each image in a group of images obtained by the binocular camera photographing, under the same group of parameter conditions, a scene containing a calibration template;
a corner coordinate acquiring unit 151, configured to obtain the image coordinates of the corner points detected by the corner detection unit 141 and to determine the three-dimensional world coordinates of the corner points;
a binocular camera parameter acquiring unit 161, configured to calibrate the binocular camera according to the image coordinates and the three-dimensional world coordinates of the corner points obtained by the corner coordinate acquiring unit 151, and to determine the binocular camera parameters corresponding to this group of images.
In this case the three-dimensional coordinate acquiring unit 132 obtains the three-dimensional world coordinates of the matchable feature points according to the at least two different groups of binocular camera parameters obtained by the binocular camera parameter acquiring unit 161 of the parameter acquiring unit 131 and the image coordinates, in the images captured by the binocular camera, of the matchable feature points determined by the feature point matching unit 120.
In the above embodiments of the camera calibration device of the present invention, if what the feature point detection unit 110 detects are the feature points of two or more images captured by the depth acquiring device, the feature point matching unit 120 is further configured to match the feature points of the two or more images captured by the depth acquiring device with the feature points of the image captured by the camera adjacent to the depth acquiring device, the matchable feature points so determined being the intersection of the feature points matched between the two or more images and the image captured by the camera adjacent to the depth acquiring device; or,
the feature point matching unit 120 is further configured to match each of the two or more images captured by the depth acquiring device separately with the image captured by the camera adjacent to the depth acquiring device, the matchable feature points so determined being the union of the feature points that each of the two or more images matches with the image captured by the camera adjacent to the depth acquiring device.
As shown in Fig. 11, Embodiment 4 of the camera calibration device of the present invention can calibrate a camera that is not adjacent to the depth acquiring device. The device comprises a feature point detection unit 410, a feature point matching unit 420, a feature point three-dimensional coordinate acquiring unit 430 and a calibration unit 440:
The feature point detection unit 410 is configured to perform feature point detection on the image captured by a camera adjacent to the depth acquiring device and on the image captured by a camera not adjacent to the depth acquiring device, respectively.
The feature point matching unit 420 is configured to match the feature points detected by the feature point detection unit 410 in the image captured by the camera adjacent to the depth acquiring device with those detected in the image captured by the camera not adjacent to the depth acquiring device, to determine the matchable feature points between these two images.
The feature point three-dimensional coordinate acquiring unit 430 is configured to obtain the three-dimensional world coordinates of the matchable feature points determined by the feature point matching unit 420 according to the predetermined parameters of the adjacent camera.
Demarcate unit 440, but but be used for the three-dimensional world coordinate of the matching characteristic point that obtains according to described unique point three-dimensional coordinate acquiring unit 430 and described matching characteristic point corresponding relation in the image coordinate of the image of the video camera shooting of the non-vicinity of described degree of depth deriving means, obtain the parameter of the video camera of described vicinity.
In this embodiment, the camera calibration device may further comprise an adjacent-camera parameter acquiring unit 450, configured to obtain the parameters of the camera adjacent to the depth acquiring device by any of the calibration methods described in Embodiments 1 to 6 of the camera calibration method of the present invention.
In summary, in the embodiments of the invention, the three-dimensional world coordinates of feature points are mainly obtained by means of a depth acquiring device, and the camera to be calibrated is calibrated using the correspondence between the three-dimensional world coordinates of the feature points and their image coordinates in the image captured by that camera. Compared with the prior-art scheme that calibrates multiple cameras through homography matrices between images, the homography matrix between master and slave cameras does not need to be estimated, so the camera parameters obtained are more stable and accurate. Moreover, the invention can accurately obtain the three-dimensional world coordinates of feature points, and the calibration object does not have to move in a fixed direction during calibration, which improves the operability of camera calibration. In addition, the camera calibration method of the invention can apply the same calibration procedure to several adjacent cameras, which simplifies the camera calibration process.
The camera calibration method and device provided by the embodiments of the invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the invention; the description of the above embodiments is only intended to help in understanding the method of the invention and its underlying idea. Meanwhile, a person of ordinary skill in the art may, according to the idea of the invention, make changes to the specific implementations and to the scope of application. In summary, the content of this description should not be construed as limiting the invention.

Claims (17)

1. A camera calibration method, characterized by comprising:
performing feature point detection on an image captured by a depth acquiring device and on an image captured by a camera adjacent to the depth acquiring device, respectively;
matching the detected feature points of the image captured by the depth acquiring device with the detected feature points of the image captured by the camera adjacent to the depth acquiring device, to determine matchable feature points between the image captured by the depth acquiring device and the image captured by the camera adjacent to the depth acquiring device;
obtaining, based on the depth acquiring device, the three-dimensional world coordinates of the matchable feature points;
determining the parameters of the camera adjacent to the depth acquiring device according to the correspondence between the three-dimensional world coordinates of the matchable feature points and the image coordinates of the matchable feature points in the image captured by the camera adjacent to the depth acquiring device.
2. The method according to claim 1, characterized in that the depth acquiring device is a single depth camera; or at least two depth cameras; or at least two ordinary cameras.
3. The method according to claim 1, characterized in that, if the depth acquiring device captures two or more images, matching the detected feature points of the image captured by the depth acquiring device with the detected feature points of the image captured by the camera adjacent to the depth acquiring device comprises:
matching the feature points of the two or more images captured by the depth acquiring device with the feature points of the image captured by the camera adjacent to the depth acquiring device, the matchable feature points so determined being the intersection of the feature points matched between the two or more images captured by the depth acquiring device and the image captured by the camera adjacent to the depth acquiring device; or,
matching each of the two or more images captured by the depth acquiring device separately with the image captured by the camera adjacent to the depth acquiring device, the matchable feature points so determined being the union of the feature points that each of the two or more images captured by the depth acquiring device matches with the image captured by the camera adjacent to the depth acquiring device.
4. The method according to claim 1, characterized in that obtaining, based on the depth acquiring device, the three-dimensional world coordinates of the matchable feature points comprises:
obtaining the three-dimensional world coordinates of the matchable feature points according to at least two different groups of parameters of a binocular camera and the image coordinates of the matchable feature points in the images captured by the binocular camera;
the images captured by the depth acquiring device specifically meaning: at least two images, chosen one from each of at least two groups of images obtained by the binocular camera photographing, under the at least two different groups of parameter conditions, a scene containing a calibration template.
5. The method according to claim 4, characterized in that the at least two different groups of parameters of the binocular camera are obtained by performing the following steps on each of at least two groups of images obtained by the binocular camera photographing, under at least two different groups of parameter conditions, a scene containing a calibration template:
detecting the non-coplanar checkerboard corner points of the group of images obtained by the binocular camera photographing, under the same group of parameter conditions, the scene containing the calibration template;
obtaining the image coordinates of the corner points;
determining the three-dimensional world coordinates of the corner points;
calibrating the binocular camera according to the image coordinates and the three-dimensional world coordinates of the corner points, and determining the parameters of the binocular camera.
6. The method according to claim 4, characterized in that the image coordinates of the matchable feature points in the images captured by the binocular camera are image coordinates that have undergone distortion removal.
7. A camera calibration method, comprising:
for images captured by a single depth acquiring device under at least two groups of different parameter conditions, respectively performing the method of any one of claims 1 to 6 to obtain at least two groups of parameters of the adjacent camera;
performing weighted averaging on the at least two groups of parameters obtained, to obtain the parameters of the adjacent camera.
8. A camera calibration method, comprising:
for images captured respectively by at least two depth acquiring devices, respectively performing the method of any one of claims 1 to 6 to obtain at least two groups of parameters of the adjacent camera;
performing weighted averaging on the at least two groups of parameters obtained, to obtain the parameters of the adjacent camera.
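Claims 7 and 8 both end with a weighted average of several parameter groups obtained for the same adjacent camera. The claims do not prescribe the weights; the sketch below assumes, purely for illustration, that each group is weighted by the inverse of its reprojection error, and only the intrinsic matrices are averaged. For a full parameter set, rotations would normally be averaged via quaternions rather than element-wise.

    import numpy as np

    # Two intrinsic-matrix estimates for the same adjacent camera (assumed values),
    # e.g. from two parameter settings of one depth acquiring device (claim 7)
    # or from two depth acquiring devices (claim 8).
    K_estimates = [
        np.array([[802.0, 0.0, 321.0], [0.0, 798.0, 239.0], [0.0, 0.0, 1.0]]),
        np.array([[796.0, 0.0, 318.0], [0.0, 801.0, 242.0], [0.0, 0.0, 1.0]]),
    ]
    reproj_errors = np.array([0.42, 0.61])   # per-group RMS errors (assumed)

    weights = 1.0 / reproj_errors
    weights /= weights.sum()                 # normalize so the weights sum to 1

    K_fused = sum(w * K for w, K in zip(weights, K_estimates))
    print(K_fused)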
9. A camera calibration method, comprising:
performing feature point detection, respectively, on an image captured by a camera adjacent to a depth acquiring device and an image captured by a camera not adjacent to the depth acquiring device;
matching the detected feature points of the image captured by the camera adjacent to the depth acquiring device against those of the image captured by the non-adjacent camera, and determining the matchable feature points between the image captured by the adjacent camera and the image captured by the non-adjacent camera;
obtaining the three-dimensional world coordinates of the matchable feature points according to predetermined parameters of the camera adjacent to the depth acquiring device;
obtaining the parameters of the camera not adjacent to the depth acquiring device according to the correspondence between the three-dimensional world coordinates of the matchable feature points and the image coordinates of the matchable feature points in the image captured by the non-adjacent camera.
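For the last step of claim 9, once the matchable feature points carry three-dimensional world coordinates obtained through the already calibrated adjacent camera, the non-adjacent camera can be solved from the resulting 3D-2D correspondences. The sketch below assumes, for simplicity, that the intrinsics of the non-adjacent camera are already known, so only its rotation and translation are recovered with solvePnP; all numbers are illustrative assumptions, and the sample image points were generated with an identity pose, so the recovered rotation is close to identity and the translation close to zero.

    import numpy as np
    import cv2

    # 3D world coordinates of matchable feature points (assumed) and their pixel
    # coordinates in the image of the non-adjacent camera.
    object_points = np.array([[0.0, 0.0, 4.0], [0.5, 0.0, 4.2], [0.0, 0.5, 3.8],
                              [0.5, 0.5, 4.1], [0.2, -0.3, 4.5], [-0.4, 0.1, 3.9]],
                             dtype=np.float32)
    image_points = np.array([[320.0, 240.0], [415.2, 240.0], [320.0, 345.3],
                             [417.6, 337.6], [355.6, 186.7], [237.9, 260.5]],
                            dtype=np.float32)
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # Recover the pose of the non-adjacent camera; intrinsics K are assumed known.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)
    print("R ~ identity:\n", R, "\nt ~ zero:", tvec.ravel())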
10. The camera calibration method of claim 9, wherein the parameters of the camera adjacent to the depth acquiring device are obtained by using the method of any one of claims 1 to 8.
11. A camera calibration device, comprising:
a feature point detection unit, configured to perform feature point detection, respectively, on an image captured by a depth acquiring device and an image captured by a camera adjacent to the depth acquiring device;
a feature point matching unit, configured to match the feature points, detected by the feature point detection unit, of the image captured by the depth acquiring device against the feature points of the image captured by the camera adjacent to the depth acquiring device, and to determine the matchable feature points between the image captured by the depth acquiring device and the image captured by the adjacent camera;
a feature point three-dimensional coordinate acquiring unit, configured to obtain, based on the depth acquiring device, the three-dimensional world coordinates of the matchable feature points;
a calibration unit, configured to obtain the parameters of the adjacent camera according to the correspondence between the three-dimensional world coordinates of the matchable feature points and the image coordinates of the matchable feature points in the image captured by the camera adjacent to the depth acquiring device.
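One standard way to realize what the calibration unit of claim 11 computes, solving a camera from the correspondence between three-dimensional world coordinates and image coordinates, is a direct linear transform (DLT) that estimates the 3x4 projection matrix. The patent does not prescribe this particular solver; the sketch below uses synthetic correspondences generated from an assumed intrinsic matrix at an identity pose, so the recovered matrix should be proportional to [K | 0]. The projection matrix can afterwards be decomposed into intrinsic and extrinsic parameters.

    import numpy as np

    def dlt_projection_matrix(world_pts, image_pts):
        # Solve the 3x4 projection matrix from >= 6 non-coplanar 3D-2D
        # correspondences by taking the null space of the DLT system.
        rows = []
        for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 4)

    # Synthetic correspondences: random world points projected by an assumed
    # intrinsic matrix K at an identity pose.
    rng = np.random.default_rng(0)
    world_pts = rng.uniform([-0.5, -0.5, 3.5], [0.5, 0.5, 4.5], size=(8, 3))
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    proj = (K @ world_pts.T).T
    image_pts = proj[:, :2] / proj[:, 2:]

    P = dlt_projection_matrix(world_pts, image_pts)
    P /= np.linalg.norm(P[2, :3])    # fix the overall scale
    if P[2, 2] < 0:                  # and the overall sign
        P = -P
    print(P)                         # should be close to [K | 0]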
12. The camera calibration device of claim 11, wherein the depth acquiring device is a single depth camera; or at least two depth cameras; or at least two ordinary cameras.
13. The camera calibration device of claim 11, wherein, if what the feature point detection unit detects are the feature points of two or more images captured by the depth acquiring device, the feature point matching unit is configured to match the feature points of the two or more images captured by the depth acquiring device against the feature points of the image captured by the camera adjacent to the depth acquiring device, the determined matchable feature points being the intersection of the feature points of the two or more images that match feature points of the image captured by the adjacent camera; or,
the feature point matching unit is configured to match each of the two or more images captured by the depth acquiring device separately against the image captured by the camera adjacent to the depth acquiring device, the determined matchable feature points being the union of the feature points of the two or more images that match feature points of the image captured by the adjacent camera.
14. The camera calibration device of claim 11, wherein the depth acquiring device is specifically a binocular camera, and the feature point three-dimensional coordinate acquiring unit comprises:
a parameter acquiring unit, configured to obtain at least two groups of different parameters of the binocular camera;
a three-dimensional coordinate acquiring unit, configured to obtain the three-dimensional world coordinates of the matchable feature points according to the at least two groups of different parameters obtained by the parameter acquiring unit and the image coordinates, in the images captured by the binocular camera, of the matchable feature points obtained by the feature point matching unit;
wherein the images captured by the depth acquiring device specifically refer to: at least two images respectively chosen from at least two groups of images obtained by the binocular camera capturing, under the at least two groups of different parameter conditions, a scene containing a calibration template.
15. The camera calibration device of claim 14, wherein the parameter acquiring unit specifically comprises:
a corner detection unit, configured to detect the non-coplanar checkerboard corner points in a group of images obtained by the binocular camera capturing a scene containing a calibration template under the same group of parameter conditions;
a corner coordinate acquiring unit, configured to obtain the image coordinates of the corner points detected by the corner detection unit, and to specify three-dimensional world coordinates for the corner points;
a binocular camera parameter acquiring unit, configured to calibrate the binocular camera according to the image coordinates and the three-dimensional world coordinates of the corner points obtained by the corner coordinate acquiring unit, and to determine the parameters of the binocular camera.
16. A camera calibration device, comprising:
a feature point detection unit, configured to perform feature point detection, respectively, on an image captured by a camera adjacent to a depth acquiring device and an image captured by a camera not adjacent to the device;
a feature point matching unit, configured to match the feature points, detected by the feature point detection unit, of the image captured by the camera adjacent to the depth acquiring device against those of the image captured by the non-adjacent camera, and to determine the matchable feature points between the image captured by the adjacent camera and the image captured by the non-adjacent camera;
a feature point three-dimensional coordinate acquiring unit, configured to obtain the three-dimensional world coordinates of the matchable feature points according to predetermined parameters of the camera adjacent to the depth acquiring device;
a calibration unit, configured to obtain the parameters of the non-adjacent camera according to the correspondence between the three-dimensional world coordinates of the matchable feature points and the image coordinates of the matchable feature points in the image captured by the camera not adjacent to the depth acquiring device.
17. The camera calibration device of claim 16, further comprising: an adjacent-camera parameter acquiring unit, configured to obtain the parameters of the camera adjacent to the depth acquiring device by using the camera calibration method of any one of claims 1 to 8.
CN200810130737XA 2008-07-14 2008-07-14 Camera calibration method and camera calibration device Expired - Fee Related CN101630406B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810130737XA CN101630406B (en) 2008-07-14 2008-07-14 Camera calibration method and camera calibration device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810130737XA CN101630406B (en) 2008-07-14 2008-07-14 Camera calibration method and camera calibration device

Publications (2)

Publication Number Publication Date
CN101630406A true CN101630406A (en) 2010-01-20
CN101630406B CN101630406B (en) 2011-12-28

Family

ID=41575507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810130737XA Expired - Fee Related CN101630406B (en) 2008-07-14 2008-07-14 Camera calibration method and camera calibration device

Country Status (1)

Country Link
CN (1) CN101630406B (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2991163B2 (en) * 1997-07-23 1999-12-20 日本電気株式会社 Camera calibration device
US6826299B2 (en) * 2000-07-31 2004-11-30 Geodetic Services, Inc. Photogrammetric image correlation and measurement system and method
CN1243324C (en) * 2003-09-29 2006-02-22 上海交通大学 Precision-adjustable neural network camera calibrating method
CN100388319C (en) * 2006-07-25 2008-05-14 深圳大学 Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor
CN100554873C (en) * 2007-07-11 2009-10-28 华中科技大学 A kind of based on two-dimensional encoded 3 D measuring method

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789123B (en) * 2010-01-27 2011-12-07 中国科学院半导体研究所 Method for creating distance map based on monocular camera machine vision
CN101783027A (en) * 2010-02-26 2010-07-21 浙江大学 Dynamic scene three-dimensional recording method based on multiple image sensors
CN101783027B (en) * 2010-02-26 2012-02-29 浙江大学 Dynamic scene three-dimensional recording method based on multiple image sensors
CN102253570B (en) * 2010-06-09 2014-05-07 微软公司 Thermal turning depth camera light source
CN102253570A (en) * 2010-06-09 2011-11-23 微软公司 Thermal turning depth camera light source
CN102479220A (en) * 2010-11-30 2012-05-30 财团法人资讯工业策进会 Image retrieval system and method thereof
CN102075686B (en) * 2011-02-10 2013-10-30 北京航空航天大学 Robust real-time on-line camera tracking method
CN102075686A (en) * 2011-02-10 2011-05-25 北京航空航天大学 Robust real-time on-line camera tracking method
CN103813151B (en) * 2012-11-02 2017-09-26 索尼公司 Image processing apparatus and method, image processing system
CN103813151A (en) * 2012-11-02 2014-05-21 索尼公司 Image processing apparatus and method, image processing system and program
CN103035008A (en) * 2012-12-15 2013-04-10 北京工业大学 Multi-camera system weighting calibrating method
CN105378794A (en) * 2013-06-04 2016-03-02 特斯托股份公司 3d recording device, method for producing 3d image, and method for setting up 3d recording device
CN103578109A (en) * 2013-11-08 2014-02-12 中安消技术有限公司 Method and device for monitoring camera distance measurement
CN103578109B (en) * 2013-11-08 2016-04-20 中安消技术有限公司 A kind of CCTV camera distance-finding method and device
CN103824278A (en) * 2013-12-10 2014-05-28 清华大学 Monitoring camera calibration method and system
US9684962B2 (en) 2013-12-10 2017-06-20 Tsinghua University Method and system for calibrating surveillance cameras
WO2015085779A1 (en) * 2013-12-10 2015-06-18 Tsinghua University Method and system for calibrating surveillance cameras
CN103824278B (en) * 2013-12-10 2016-09-21 清华大学 The scaling method of CCTV camera and system
CN103745474B (en) * 2014-01-21 2017-01-18 南京理工大学 Image registration method based on inertial sensor and camera
CN103745474A (en) * 2014-01-21 2014-04-23 南京理工大学 Image registration method based on inertial sensor and camera
CN104933755B (en) * 2014-03-18 2017-11-28 华为技术有限公司 A kind of stationary body method for reconstructing and system
CN104933755A (en) * 2014-03-18 2015-09-23 华为技术有限公司 Static object reconstruction method and system
CN104240233A (en) * 2014-08-19 2014-12-24 长春理工大学 Method for solving camera homography matrix and projector homography matrix
CN104464173A (en) * 2014-12-03 2015-03-25 国网吉林省电力有限公司白城供电公司 Power transmission line external damage protection system based on space image three-dimensional measurement
CN104574425A (en) * 2015-02-03 2015-04-29 中国人民解放军国防科学技术大学 Calibration and linkage method for primary camera system and secondary camera system on basis of rotary model
CN104680535A (en) * 2015-03-06 2015-06-03 南京大学 Calibration target, calibration system and calibration method for binocular direct-vision camera
WO2017008516A1 (en) * 2015-07-15 2017-01-19 华为技术有限公司 Two-camera relative position calculation system, device and apparatus
US10559090B2 (en) 2015-07-15 2020-02-11 Huawei Technologies Co., Ltd. Method and apparatus for calculating dual-camera relative position, and device
WO2018014730A1 (en) * 2016-07-18 2018-01-25 华为技术有限公司 Method for adjusting parameters of camera, broadcast-directing camera, and broadcast-directing filming system
CN106251334A (en) * 2016-07-18 2016-12-21 华为技术有限公司 A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system
CN106251334B (en) * 2016-07-18 2019-03-01 华为技术有限公司 A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system
CN107016707A (en) * 2017-04-13 2017-08-04 四川大学 A kind of integration imaging super large three-dimensional scenic shooting image bearing calibration
CN107016707B (en) * 2017-04-13 2019-09-03 四川大学 A kind of integration imaging super large three-dimensional scenic shooting method for correcting image
CN107705565A (en) * 2017-09-27 2018-02-16 于贵庆 A kind of control method for stopping based on data analysis
WO2019109226A1 (en) * 2017-12-04 2019-06-13 深圳市沃特沃德股份有限公司 Binocular camera calibration method and device
CN109961482A (en) * 2017-12-22 2019-07-02 比亚迪股份有限公司 Camera calibration method, device and vehicle
CN109961484A (en) * 2017-12-22 2019-07-02 比亚迪股份有限公司 Camera calibration method, device and vehicle
CN108765495A (en) * 2018-05-22 2018-11-06 山东大学 A kind of quick calibrating method and system based on binocular vision detection technology
CN108765495B (en) * 2018-05-22 2021-04-30 山东大学 Rapid calibration method and system based on binocular vision detection technology
CN109118563A (en) * 2018-07-13 2019-01-01 深圳供电局有限公司 A method of digital orthophoto map is extracted from LOD paging Surface texture model
CN109118563B (en) * 2018-07-13 2023-05-02 深圳供电局有限公司 Method for extracting digital orthographic image from LOD paging surface texture model
CN109272453A (en) * 2018-08-31 2019-01-25 盎锐(上海)信息科技有限公司 Model building device and localization method based on 3D video camera
CN109272453B (en) * 2018-08-31 2023-02-10 上海盎维信息技术有限公司 Modeling device and positioning method based on 3D camera
CN109559353B (en) * 2018-11-30 2021-02-02 Oppo广东移动通信有限公司 Camera module calibration method and device, electronic equipment and computer readable storage medium
CN109559353A (en) * 2018-11-30 2019-04-02 Oppo广东移动通信有限公司 Camera module scaling method, device, electronic equipment and computer readable storage medium
CN110209997A (en) * 2019-06-10 2019-09-06 成都理工大学 Depth camera automatic Calibration algorithm based on three-dimensional feature point
CN110458897B (en) * 2019-08-13 2020-12-01 北京积加科技有限公司 Multi-camera automatic calibration method and system and monitoring method and system
CN110458897A (en) * 2019-08-13 2019-11-15 北京积加科技有限公司 Multi-cam automatic calibration method and system, monitoring method and system
CN110966734A (en) * 2019-11-11 2020-04-07 珠海格力电器股份有限公司 Air conditioner air supply control method based on three-dimensional space, computer readable storage medium and air conditioner
CN111161339B (en) * 2019-11-18 2020-11-27 珠海随变科技有限公司 Distance measuring method, device, equipment and computer readable medium
CN111161339A (en) * 2019-11-18 2020-05-15 珠海随变科技有限公司 Distance measuring method, device, equipment and computer readable medium
CN114782549A (en) * 2022-04-22 2022-07-22 南京新远见智能科技有限公司 Camera calibration method and system based on fixed point identification
CN114782549B (en) * 2022-04-22 2023-11-24 南京新远见智能科技有限公司 Camera calibration method and system based on fixed point identification
CN115457147A (en) * 2022-09-16 2022-12-09 北京的卢深视科技有限公司 Camera calibration method, electronic device and storage medium

Also Published As

Publication number Publication date
CN101630406B (en) 2011-12-28

Similar Documents

Publication Publication Date Title
CN101630406B (en) Camera calibration method and camera calibration device
US11570423B2 (en) System and methods for calibration of an array camera
US10334168B2 (en) Threshold determination in a RANSAC algorithm
US10846885B2 (en) Methods and computer program products for calibrating stereo imaging systems by using a planar mirror
US9117269B2 (en) Method for recognizing objects in a set of images recorded by one or more cameras
KR101666959B1 (en) Image processing apparatus having a function for automatically correcting image acquired from the camera and method therefor
CN101661617B (en) Method and device for camera calibration
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
CN108805921A (en) Image-taking system and method
Liao et al. A calibration method for uncoupling projector and camera of a structured light system
Skuratovskyi et al. Outdoor mapping framework: from images to 3d model
Gao et al. A novel Kinect V2 registration method for large-displacement environments using camera and scene constraints
Liu et al. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system
WO2024043055A1 (en) Camera calibration device, camera calibration method, and recording medium
Lasang et al. Combining high resolution color and depth images for dense 3D reconstruction
Brückner et al. Self-calibration of camera networks: Active and passive methods
Fortenbury et al. Robust 2D/3D calibration using RANSAC registration
Molana et al. A Single-perspective Novel Panoramic View from Radially Distorted Non-central Images.
Wang et al. Investigation of Factors Influencing Calibration Accuracy of Camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20180212

Address after: California, USA

Patentee after: Global innovation polymerization LLC

Address before: California, USA

Patentee before: Tanous Co.

Effective date of registration: 20180212

Address after: California, USA

Patentee after: Tanous Co.

Address before: 518129 Longgang District, Guangdong, Bantian HUAWEI base B District, building 2, building No.

Patentee before: HUAWEI DEVICE Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111228