WO2007007924A1 - Method for calibrating distortion of multi-view image - Google Patents
Method for calibrating distortion of multi-view image
- Publication number
- WO2007007924A1 (PCT/KR2005/002228)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- cameras
- images
- brightness
- points
- compensating
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 49
- 239000011159 matrix material Substances 0.000 claims description 31
- 230000009466 transformation Effects 0.000 claims description 12
- 238000000354 decomposition reaction Methods 0.000 claims description 4
- 230000001131 transforming effect Effects 0.000 claims description 2
- 208000003464 asthenopia Diseases 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 4
- 239000013598 vector Substances 0.000 description 3
- 210000004556 brain Anatomy 0.000 description 1
- 238000005094 computer simulation Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 208000002173 dizziness Diseases 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/133—Equalising the characteristics of different image components, e.g. their average brightness or colour balance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/25—Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
Definitions
- The present invention relates to a multi-view image distortion correcting method and, more particularly, to a multi-view image distortion correcting method capable of effectively providing three-dimensional stereoscopic image display by compensating for brightness uniformities of cameras, lens distortion, inter-camera errors and image sizes, and inter-camera brightness and color uniformities occurring between multi-view images picked up with at least four cameras.
- Background Art
- The two eyes sense the object from different view points, and the brain can recognize the object as a three-dimensional stereoscopic object by combining the two images sensed with the two eyes.
- As disclosed in Korean Patent Registration Nos. 397511 and 433635, various binocular-type three-dimensional display systems modeling such a human viewing system have been proposed.
- Methods of obtaining the multi-view image are classified into a toed-in type and a parallel type according to a construction of the multi-view camera.
- In the toed-in type, the optical axes of all the cameras with respect to the object are rotated to converge to one point.
- In the parallel type, several cameras are disposed to be parallel to each other, and the lenses of the cameras are driven to move horizontally so as to form a convergent point on the object.
- CCD (Charge Coupled Device)
- The present invention provides a multi-view image distortion correcting method of compensating for distortion and uniformity between multi-view images by using at least four cameras.
- The present invention also provides a multi-view image distortion correcting method of compensating for brightness uniformities of cameras, lens distortion, inter-camera errors and image sizes, and inter-camera brightness and color uniformities occurring between four-view images picked up with four cameras.
- A multi-view image distortion correcting method comprising steps of: (A) generating a brightness difference map for images picked up with at least four cameras and compensating for brightness uniformities of the cameras by using the picked-up images and the brightness difference map; (B) detecting patterns of the images picked up with the cameras, obtaining lens distortion coefficients of the detected patterns of the images through a Hough transformation, and compensating for lens distortion of the cameras by using the lens distortion coefficients; (C) extracting first correspondence information including characteristic points and correspondence points for the images picked up with the at least four cameras, calculating a covariance matrix and a rotational matrix from the first correspondence information, and compensating for errors and image sizes between the at least four cameras by using the rotational matrix; and (D) extracting second correspondence information on the images picked up with the at least four cameras and compensating for brightness and color uniformities between the at least four cameras by performing homography and affine transformation using the second correspondence information.
- The step (A) may comprise steps of: (A-1) picking up the images with the at least four cameras so that each image is fully filled with the same plane; (A-2) transforming each of the picked-up images into a YCbCr space and extracting a Y channel; (A-3) calculating reference brightness values in the Y channel with respect to the cameras; (A-4) generating a brightness difference map by using the reference brightness values; and (A-5) compensating for the brightness uniformities of the cameras by summing the Y channel and the brightness difference map.
- The step (B) may comprise steps of: (B-1) picking up the patterns with the cameras and detecting corner points from the patterns of the picked-up images; (B-2) grouping the detected corner points into points of a straight line in a Hough space; (B-3) estimating coefficients of the straight line by using the grouped points; (B-4) obtaining the lens distortion coefficients of the cameras by using the estimated coefficients of the straight line; and (B-5) compensating for the lens distortion of the cameras by using the lens distortion coefficients.
- The step (C) may comprise steps of: (C-1) picking up the same patterns with the at least four cameras; (C-2) detecting the characteristic points from the pattern of image picked up with a specific one of the at least four cameras; (C-3) extracting the characteristic points and the correspondence points of other cameras by using an SSD (Sum of Squared Difference) method; (C-4) calculating the covariance matrix from the first correspondence information including the characteristic points and the correspondence points; (C-5) calculating an orthogonal matrix from the covariance matrix by using an EVD (Eigen Value Decomposition) method; (C-6) calculating a rotational matrix by using the orthogonal matrix; and (C-7) compensating for errors and image sizes between the at least four cameras by using the rotational matrix.
- SSD (Sum of Squared Difference)
- The step (D) may comprise steps of: (D-1) extracting second correspondence information between the images picked up with the at least four cameras; (D-2) extracting the correspondence points of the entire images by applying the second correspondence information to the homography; and (D-3) compensating for the brightness and color uniformities between the at least four cameras by applying the correspondence points of the entire images to the affine transformation.
- In the multi-view image distortion correcting method, since the patterns of images are detected by using at least four cameras to compensate for lens distortion of the cameras, it is possible to obtain images of which barrel distortion, pincushion distortion, or other radial distortion is removed.
- FIG. 1 is a flowchart showing a multi-view image distortion correcting method according to the present invention.
- FIG. 2 is a detailed flowchart showing a step of compensating brightness uniformities of cameras in the multi-view image distortion correcting method shown in FIG. 1.
- FIG. 3 is a schematic view showing an image generation model with respect to occurrence of lens distortion of the cameras.
- FIG. 4 is a detailed flowchart showing a step of compensating for the lens distortion of the cameras in the multi-view image distortion correcting method shown in FIG. 1.
- FIG. 5 is a detailed flowchart showing a step of compensating for inter-camera error and image sizes in the multi-view image distortion correcting method shown in FIG. 1.
- FIG. 6 is a detailed flowchart showing a step of compensating for inter-camera brightness and color uniformity in the multi-view image distortion correcting method shown in FIG. 1.
- FIGs. 7A and 7B show images before and after the step of compensating for the brightness uniformities of the cameras according to the embodiment of the present invention.
- FIGs. 8A and 8B show images before and after the step of compensating for the lens distortion of the cameras according to the embodiment of the present invention.
- FIGs. 9A and 9B show images before and after the step of compensating for the inter-camera error and image sizes according to the embodiment of the present invention.
- FIGs. 10A and 10B show images before and after the step of compensating for the brightness and color uniformities of the cameras according to the embodiment of the present invention.
- The multi-view image distortion correcting method includes a step S100 of compensating for brightness uniformities of the cameras, a step S200 of compensating for lens distortion of the cameras, a step S300 of compensating for inter-camera errors and image sizes, and a step S400 of compensating for inter-camera brightness and color uniformities.
- The step S100 of compensating for brightness uniformities of the cameras is described in more detail with reference to FIG. 2.
- The picked-up images are transformed into a YCbCr space to extract a Y (luminance) channel (S120).
- Reference brightness values in the Y channel for the cameras, such as the maximum brightness value Imax and the average brightness value Iaver, are calculated from the following Equation 1.
- I(x,y) denotes a brightness value at a coordinate (x,y) of a pixel in a picked-up image
- h denotes the number of pixels in the Y-axis direction in the picked-up image
- w denotes the number of pixels in the X-axis direction in the picked-up image.
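The body of Equation 1 is not reproduced in this text; under the definitions above (I(x,y) the Y-channel brightness, h and w the pixel counts), the reference values would be computed along these lines. This is a sketch, not the patent's exact formula:

```python
# Sketch of the reference-brightness computation described for Equation 1.
# I is assumed to be the Y (luminance) channel stored as a 2-D list;
# h and w are its pixel counts in the Y-axis and X-axis directions.

def reference_brightness(I):
    h = len(I)         # number of pixels in the Y-axis direction
    w = len(I[0])      # number of pixels in the X-axis direction
    i_max = max(max(row) for row in I)              # maximum brightness Imax
    i_aver = sum(sum(row) for row in I) / (h * w)   # average brightness Iaver
    return i_max, i_aver
```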
- A brightness difference map Id(x, y) is generated from the following Equation 2 by using the calculated reference brightness values such as the maximum brightness value Imax and the average brightness value Iaver (S140).
- In the step S100 of compensating for the brightness uniformities of the cameras according to the present invention, since all the pixels of the images for the cameras are processed by using the Y channel and the brightness difference map, very uniform brightness can be obtained for each camera.
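The body of Equation 2 is likewise not reproduced here, so the sketch below assumes the difference map has the form Id(x, y) = Imax - Iflat(x, y), computed from the flat-plane reference image of the pick-up step; compensation then sums the Y channel and the map as described above. The assumed map form is hypothetical:

```python
# Hedged sketch of the brightness-uniformity compensation: the exact form
# of Equation 2 is not given in this text, so the difference map is assumed
# to be Id(x, y) = Imax - Iflat(x, y), built from the flat reference image.

def brightness_difference_map(I_flat):
    i_max = max(max(row) for row in I_flat)
    return [[i_max - v for v in row] for row in I_flat]

def compensate_brightness(Y, Id):
    # Sum the Y channel and the brightness difference map, pixel by pixel.
    return [[y + d for y, d in zip(yr, dr)] for yr, dr in zip(Y, Id)]
```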
- A point P in the real space corresponds to a point p in the image of the camera.
- The point P may instead correspond to a point p' which is shifted due to camera lens distortion such as barrel distortion and pincushion distortion.
- The shift from the point p to the point p' is proportional to the distance from the optical center. More specifically, the shift can be represented by the following Equation 3, which is a quadratic equation with respect to the distance d from the optical center.
- x and y are the coordinates of the undistorted point p when the point P is projected onto the image of the camera
- x' and y' are the coordinates of the distorted point p' shifted due to the lens distortion when the point P is projected onto the image of the camera
- The differences f(d) of the distances in the respective directions are represented by the following Equation 4.
- k1 and k2 are lens distortion coefficients. As shown in FIGs. 3 and 4, if there are at least two pairs of the distorted point p' and the distortion-removed (undistorted) point p, the distortion caused by the lens can be removed by using the lens distortion coefficients k1 and k2.
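The radial shift of Equations 3 and 4 can be illustrated as follows. The quadratic form f(d) = k1*d + k2*d**2 is an assumption read from the text above (Equations 3 and 4 are not reproduced here); widely used lens models instead use even powers of d, so treat this only as a sketch of the mechanism:

```python
import math

# Hedged sketch of Equations 3-4: the shift from the undistorted point p
# to the distorted point p' grows with the distance d from the optical
# center (cx, cy). The form f(d) = k1*d + k2*d**2 is assumed from the
# text; many lens models use even powers (k1*d**2 + k2*d**4) instead.

def distort(x, y, cx, cy, k1, k2):
    d = math.hypot(x - cx, y - cy)       # distance from the optical center
    scale = 1.0 + k1 * d + k2 * d * d    # 1 + f(d)
    # Shift the point radially away from (or toward) the optical center.
    return cx + (x - cx) * scale, cy + (y - cy) * scale
```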
- The same patterns are picked up with the cameras, and corner points are defined and detected from the patterns of the picked-up images (S210).
- Preferably, the patterns are lattice patterns so as to allow the corner points to be easily detected.
- The detected corner points for the cameras are grouped into points on a straight line in the Hough space (S220). Since the resolution in the Hough space determines the accuracy of the straight line, it is preferable that the points on the straight line are roughly grouped at a low resolution.
- Coefficients of the straight line are estimated by using the grouped points (S230).
- The straight line can be estimated by using at least two points.
- When there are three or more points, the coefficients of the straight line are obtained by using the following Equation 5 so that the distances between the three or more points and the straight line are minimized.
- The unknowns in Equation 5 are the coefficients of the straight line and can be estimated from Equation 5 by using a singular value decomposition method.
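The minimization above can be sketched as a total-least-squares line fit. The patent estimates the coefficients with a singular value decomposition; for 2-D points the same minimizer has the closed form below (the normalization of the coefficients is an assumption here):

```python
import math

# Total-least-squares fit of a line a*x + b*y + c = 0 to grouped points
# (step S230). The line passes through the centroid, and its normal is
# the minor axis of the point scatter; this closed form matches what an
# SVD of the stacked point matrix would return for 2-D points.

def fit_line(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # Angle of the direction of largest scatter; (a, b) is its normal.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    a, b = -math.sin(theta), math.cos(theta)
    c = -(a * mx + b * my)
    return a, b, c
```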
- By substituting the coordinates (x1, y1), ..., (xn, yn) of the grouped points into Equation 6, the following Equation 7 is obtained in the form of a matrix equation.
- The lens distortion coefficients k1 and k2 can be calculated from Equation 7 by using the least square method. By substituting the lens distortion coefficients k1 and k2 obtained from Equation 7 into Equations 3 and 4, the coordinates of the lens-distortion-removed point p are obtained from the coordinates of the distorted point p' (S250).
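Assuming Equation 7 has been assembled as an over-determined linear system A times (k1, k2) equals b (the construction of the rows of A and b from Equations 5 and 6 is not reproduced in this text), the least-square step reduces to the 2x2 normal equations; a minimal sketch:

```python
# Hedged sketch of step S240: solve the over-determined system
# A @ [k1, k2]^T = b in the least-squares sense via the 2x2 normal
# equations A^T A k = A^T b. How A and b are built from the grouped
# line points is an assumption left out of this text.

def least_squares_2(A, b):
    s11 = sum(r[0] * r[0] for r in A)
    s12 = sum(r[0] * r[1] for r in A)
    s22 = sum(r[1] * r[1] for r in A)
    t1 = sum(r[0] * y for r, y in zip(A, b))
    t2 = sum(r[1] * y for r, y in zip(A, b))
    det = s11 * s22 - s12 * s12
    k1 = (s22 * t1 - s12 * t2) / det
    k2 = (s11 * t2 - s12 * t1) / det
    return k1, k2
```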
- The inter-camera error and image size compensating step S300 is described in more detail with reference to FIG. 5.
- The same patterns are picked up with the cameras (S310).
- Preferably, the patterns are lattice patterns so as to allow the characteristic points and correspondence points to be easily detected in the following steps. More preferably, the patterns are variously-colored lattice patterns.
- A characteristic point is defined and detected from the pattern of the image picked up with a specific one of the cameras (S320).
- The correspondence points are extracted by using the characteristic points detected in the step S320 and an SSD (Sum of Squared Difference) method represented by the following Equation 8 (S330).
- f(d) = Σ (k = −W, ..., W) Σ (l = −W, ..., W) Φ( I1(i+k, j+l), I2(i+k−dx, j+l−dy) )
- Φ(u, v) = −(u − v)²
- I(x, y) is a brightness value at the x and y positions of an image I
- (dx, dy) is a disparity of the correspondence point
- W is a size of a characteristic region around the characteristic point.
- The correspondence points can be extracted from Equation 8 by finding the value d corresponding to the maximum value of the function f(d).
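Based on the form of Equation 8, the SSD search can be sketched as follows. Since Φ(u, v) = −(u − v)², maximizing f(d) minimizes the sum of squared differences over a window of half-size W; the disparity search range passed in here is an assumption:

```python
# Sketch of the SSD matching of Equation 8: for a characteristic point
# (i, j) in image I1, the disparity d = (dx, dy) maximizing f(d) is
# found by exhaustive search, so the correspondence point in I2 is
# (i - dx, j - dy). Images are 2-D lists; `search` is the assumed range
# of candidate disparities.

def ssd_match(I1, I2, i, j, W, search):
    def f(dx, dy):
        total = 0.0
        for k in range(-W, W + 1):
            for l in range(-W, W + 1):
                u = I1[i + k][j + l]
                v = I2[i + k - dx][j + l - dy]
                total += -(u - v) ** 2    # Phi(u, v) = -(u - v)^2
        return total
    return max(((dx, dy) for dx in search for dy in search),
               key=lambda d: f(*d))
```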
- A covariance matrix Cov represented by the following Equation 9 is calculated from the correspondence information on the extracted characteristic points and the correspondence points (S340).
- In Equation 9, (xaver, yaver) denotes the average point and is obtained by dividing the sum of the coordinate values along each axis by the number of points n.
- An orthogonal matrix U is calculated from the covariance matrix by using an EVD (Eigen Value Decomposition) method as shown in the following Equation 10 (S350).
- D is a diagonal matrix.
- In Equation 10, the column vectors of U represent the long and short axes, and the elements of D represent the sizes of the long and short axes of the point distribution.
- A rotational matrix R can be calculated by using the orthogonal matrix U as shown in the following Equation 11 (S360).
- The image of the second camera is applied to the image of the first camera, so that the inter-camera error caused by the distortion of the housing, sensor position, and lens mount, and the inter-camera image size error caused by the difference of focal lengths, are compensated (S370).
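Steps S340 to S360 can be sketched as follows for 2-D point sets. The eigen-decomposition of the 2x2 covariance (Equation 10) is written in closed form, and the composition R = U1 @ U2^T, which aligns the axes of the second camera with those of the first, is an assumption, since Equation 11 is not reproduced in this text:

```python
import math

# Hedged sketch of Equations 9-11. For each camera the 2x2 covariance of
# its matched points is eigen-decomposed in closed form; the columns of U
# are the long and short axes of the point distribution (Equation 10).

def principal_axes(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # angle of the long axis
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]  # columns: long axis, short axis

def rotation_between(points1, points2):
    U1, U2 = principal_axes(points1), principal_axes(points2)
    # Assumed composition R = U1 @ U2^T: maps the axes of camera 2
    # onto the axes of camera 1.
    return [[sum(U1[i][k] * U2[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
```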
- The correspondence information between the images of the cameras is extracted by using a pre-set image (S410).
- A homography step of applying the extracted correspondence information to the projective transformation is performed as shown in the following Equation 12, so that correspondence points on the entire images are extracted (S420).
- λ is a scale factor.
- In Equation 14, all the elements h11, ..., h33 can be obtained from at least four correspondence points, and the scale factor λ can be eliminated from the projective transformation matrix.
- Once the projective transformation matrix of Equation 12 is estimated, a correspondence point with respect to any point can be easily set by using the estimated projective transformation matrix. Therefore, the correspondence points on the entire images can be extracted.
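With exactly four correspondence points, the elements h11, ..., h32 of Equation 12 (h33 fixed to 1, which eliminates the scale factor) follow from a linear system; a minimal sketch using plain Gaussian elimination:

```python
# Sketch of homography estimation (Equation 12, step S420): each pair
# (x, y) -> (u, v) contributes two linear equations in the eight
# unknowns h11..h32, with h33 normalized to 1.

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def homography(src, dst):
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```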
- The affine transformation coefficients aR, aG, aB, bR, bG, and bB can be obtained by using the correspondence points.
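A minimal sketch of the per-channel fit: assuming the affine transformation has the form I1 = a * I2 + b for each of the R, G, and B channels (the exact equation is not reproduced in this text), ordinary least squares over the correspondence-point brightness pairs yields each (a, b):

```python
# Hedged sketch of the affine color compensation (step D): for one
# channel, fit I1 = a * I2 + b over the correspondence points by
# ordinary least squares. Applying this to R, G, and B gives the
# coefficient pairs (aR, bR), (aG, bG), (aB, bB).

def fit_affine(values2, values1):
    n = len(values2)
    mx = sum(values2) / n
    my = sum(values1) / n
    var = sum((x - mx) ** 2 for x in values2)
    cov = sum((x - mx) * (y - my) for x, y in zip(values2, values1))
    a = cov / var
    return a, my - a * mx   # slope a, offset b
```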
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Studio Devices (AREA)
Abstract
There is provided a multi-view image distortion correcting method capable of effectively providing three-dimensional stereoscopic image display by compensating for brightness uniformities of cameras, lens distortion, inter-camera errors and image sizes, and inter-camera brightness and color uniformities occurring between multi-view images picked up with at least four cameras.
Description
METHOD FOR CALIBRATING DISTORTION OF MULTI-VIEW IMAGE
Technical Field
[1] The present invention relates to a multi-view image distortion correcting method and, more particularly, to a multi-view image distortion correcting method capable of effectively providing three-dimensional stereoscopic image display by compensating for brightness uniformities of cameras, lens distortion, inter-camera errors and image sizes, and inter-camera brightness and color uniformities occurring between multi-view images picked up with at least four cameras. Background Art
[2] Humans can see an object stereoscopically due to the binocular disparity of the two eyes.
The two eyes sense the object from different view points, and the brain can recognize the object as a three-dimensional stereoscopic object by combining the two images sensed with the two eyes. As disclosed in Korean Patent Registration Nos. 397511 and 433635, various binocular-type three-dimensional display systems modeling such a human viewing system have been proposed.
[3] However, in the conventional binocular-type three-dimensional display system, the viewing points are limited to the left and right eyes. Therefore, when an observer departs from the limited viewing region or an object is defocused, the stereoscopic image cannot be recognized, and eye fatigue and dizziness may occur, so that the aforementioned method has a limitation in actual applications.
[4] In order to solve the problems of the conventional binocular-type stereoscopic image display, various multi-view stereoscopic three-dimensional display systems have been actively developed, as disclosed in "Multi-view autostereoscopic 3D display", by N. A. Dodgson, J. R. Moore, and S. R. Lang (International Broadcasting Convention '99, 10th-14th, pp. 497-502, September 1999). In the multi-view display system, since the multi-view image is obtained and displayed by using a multi-lens stereoscopic camera, the number of viewing points increases, so that the viewing range can be widened and a more natural three-dimensional display can be obtained.
[5] Methods of obtaining the multi-view image are classified into a toed-in type and a parallel type according to a construction of the multi-view camera. In the toed-in type, the optical axes of all the cameras with respect to the object are rotated to converge to one point. In the parallel type, several cameras are disposed to be parallel to each other, and the lenses of the cameras are driven to horizontally move so as to form a convergent point on the object.
[6] In a CCD (Charge Coupled Device) camera currently produced, distortion does not greatly occur. However, in a camera using a particular lens, distortion greatly occurs. The image picked up with such a camera shows distortion caused by internal or external influences, so that the image is different from the real object. In addition, although a camera is produced to have very small distortion, there is a need to correct various types of distortion according to the environments where the camera is used. Particularly, the correction of camera distortion is very important in image processing applications which require accurate images.
[7] On the other hand, in the multi-view display system, characteristics of human recognition need to be considered. Particularly, in order to obtain an image the same as the real image seen by a human by taking into consideration binocular disparity according to the distance from the object, accurate control and mechanical drive techniques are required. In addition, since two or more cameras cooperate with each other, a control technique for maintaining uniform characteristics of the cameras is required.
[8] However, in the conventional multi-view camera system, since the image is obtained without correction, there is a problem in that human may experience eye fatigue.
[9] In order to remove the eye fatigue, two-eye factors such as binocular disparity, single-eye factors such as light, shade, advancing color, and receding color, and other eye-fatigue factors need to be analyzed, so a technique for correcting distortion between two or more images and maintaining uniformity has been required.
[10] In correcting the distortion and maintaining the uniformities between the multi-view images, in case of a binocular (that is, two-view) image, the distortion and uniformities between only two cameras are corrected. In case of a three-view image, the distortion and uniformities are corrected by applying a two-view correcting method on the two images at the left and right of the intermediate image.
[11] However, in case of a multi-view image picked up with at least four cameras, the distance between the left and right end cameras is longer than that in case of the binocular image or the three-view image. Therefore, there is a problem in that it is difficult to compensate for the distortion and the uniformities by using the two-view correcting method. Disclosure of Invention Technical Problem
[12] In order to solve the aforementioned problems, the present invention provides a multi-view image distortion correcting method of compensating for distortion and uniformity between multi-view images by using at least four cameras.
[13] The present invention also provides a multi-view image distortion correcting
method of compensating for brightness uniformities of cameras, lens distortion, inter-camera errors and image sizes, and inter-camera brightness and color uniformities occurring between four-view images picked up with four cameras. Technical Solution
[14] According to an aspect of the present invention, there is provided a multi-view image distortion correcting method comprising steps of: (A) generating a brightness difference map for images picked up with at least four cameras and compensating for brightness uniformities of the cameras by using the picked-up images and the brightness difference map; (B) detecting patterns of the images picked up with the cameras, obtaining lens distortion coefficients of the detected patterns of the images through a Hough transformation, and compensating for lens distortion of the cameras by using the lens distortion coefficients; (C) extracting first correspondence information including characteristic points and correspondence points for the images picked up with the at least four cameras, calculating a covariance matrix and a rotational matrix from the first correspondence information, and compensating for errors and image sizes between the at least four cameras by using the rotational matrix; and (D) extracting second correspondence information on the images picked up with the at least four cameras and compensating for brightness and color uniformities between the at least four cameras by performing homography and affine transformation using the second correspondence information.
[15] In the aforementioned aspect, the step (A) may comprise steps of: (A-1) picking up the images with the at least four cameras so that each image is fully filled with the same plane; (A-2) transforming each of the picked-up images into a YCbCr space and extracting a Y channel; (A-3) calculating reference brightness values in the Y channel with respect to the cameras; (A-4) generating a brightness difference map by using the reference brightness values; and (A-5) compensating for the brightness uniformities of the cameras by summing the Y channel and the brightness difference map.
[16] In addition, the step (B) may comprise steps of: (B-1) picking up the patterns with the cameras and detecting corner points from the patterns of the picked-up images; (B-2) grouping the detected corner points into points of a straight line in a Hough space; (B-3) estimating coefficients of the straight line by using the grouped points; (B-4) obtaining the lens distortion coefficients of the cameras by using the estimated coefficients of the straight line; and (B-5) compensating for the lens distortion of the cameras by using the lens distortion coefficients.
[17] In addition, the step (C) may comprise steps of: (C-1) picking up the same patterns with the at least four cameras; (C-2) detecting the characteristic points from the pattern of image picked up with a specific one of the at least four cameras; (C-3) extracting the characteristic points and the correspondence points of other cameras by using an SSD (Sum of Squared Difference) method; (C-4) calculating the covariance matrix from the first correspondence information including the characteristic points and the correspondence points; (C-5) calculating an orthogonal matrix from the covariance matrix by using an EVD (Eigen Value Decomposition) method; (C-6) calculating a rotational matrix by using the orthogonal matrix; and (C-7) compensating for errors and image sizes between the at least four cameras by using the rotational matrix.
[18] In addition, the step (D) may comprise steps of: (D-1) extracting second correspondence information between the images picked up with the at least four cameras; (D-2) extracting the correspondence points of the entire images by applying the second correspondence information to the homography; and (D-3) compensating for the brightness and color uniformities between the at least four cameras by applying the correspondence points of the entire images to the affine transformation.
Advantageous Effects
[19] As described above, in the multi-view image distortion correcting method according to the present invention, since the brightness difference maps of images are generated by using at least four cameras to compensate for the brightness uniformities of the cameras, it is possible to obtain images having very uniform brightness.
[20] In addition, in the multi-view image distortion correcting method according to the present invention, since the patterns of images are detected by using at least four cameras to compensate for lens distortion of the cameras, it is possible to obtain images of which barrel distortion, pincushion distortion, or other radial distortion is removed.
[21] In addition, in the multi-view image distortion correcting method according to the present invention, since the inter-camera errors and image sizes are compensated by using the characteristic points and correspondence points detected from the images picked up with at least four cameras, it is possible to obtain images of which tilt and error are removed.
[22] In addition, in the multi-view image distortion correcting method according to the present invention, since the inter-camera brightness and color uniformity are compensated through color transformation using correspondence information on the four-or-more-view images picked up with at least four cameras, it is possible to maintain very uniform color between the four-or-more-view images.
[23] In addition, in the multi-view image distortion correcting method according to the present invention, since the brightness uniformities of cameras, lens distortion, inter-camera errors and image sizes, and inter-camera brightness and color uniformities occurring between the multi-view images are compensated, it is possible to effectively
implement three-dimensional stereoscopic display. Brief Description of the Drawings
[24] FIG. 1 is a flowchart showing a multi-view image distortion correcting method according to the present invention.
[25] FIG. 2 is a detailed flowchart showing a step of compensating brightness uniformities of cameras in the multi-view image distortion correcting method shown in
FIG. 1. [26] FIG. 3 is a schematic view showing an image generation model with respect to occurrence of lens distortion of the cameras. [27] FIG. 4 is a detailed flowchart showing a step of compensating for the lens distortion of the cameras in the multi-view image distortion correcting method shown in FIG. 1. [28] FIG. 5 is a detailed flowchart showing a step of compensating for inter-camera error and image sizes in the multi-view image distortion correcting method shown in FIG. 1. [29] FIG. 6 is a detailed flowchart showing a step of compensating for inter-camera brightness and color uniformity in the multi-view image distortion correcting method shown in FIG. 1. [30] FIGs. 7A and 7B show images before and after the step of compensating for the brightness uniformities of the cameras according to the embodiment of the present invention. [31] FIGs. 8A and 8B show images before and after the step of compensating for the lens distortion of the cameras according to the embodiment of the present invention. [32] FIGs. 9A and 9B show images before and after the step of compensating for the inter-camera error and image sizes according to the embodiment of the present invention. [33] FIGs. 10A and 10B show images before and after the step of compensating for the brightness and color uniformities of the cameras according to the embodiment of the present invention.
Best Mode for Carrying Out the Invention

[34] Hereinafter, a multi-view image distortion correcting method according to the present invention will be described in detail with reference to the accompanying drawings.

[35] FIG. 1 is a flowchart showing a multi-view image distortion correcting method according to the present invention. FIG. 2 is a detailed flowchart showing a step of compensating brightness uniformities of cameras in the multi-view image distortion correcting method shown in FIG. 1. FIG. 3 is a schematic view showing an image generation model with respect to occurrence of lens distortion of the cameras. FIG. 4 is a detailed flowchart showing a step of compensating for the lens distortion of the cameras in the multi-view image distortion correcting method shown in FIG. 1. FIG. 5 is a detailed flowchart showing a step of compensating for inter-camera error and image sizes in the multi-view image distortion correcting method shown in FIG. 1. FIG. 6 is a detailed flowchart showing a step of compensating for inter-camera brightness and color uniformity in the multi-view image distortion correcting method shown in FIG. 1.
[36] As shown in FIG. 1, the multi-view image distortion correcting method according to the present invention includes a step S100 of compensating for brightness uniformities of the cameras, a step S200 of compensating for lens distortion of the cameras, a step S300 of compensating for inter-camera errors and image sizes, and a step S400 of compensating for inter-camera brightness and color uniformities.
[37] Firstly, the step S100 of compensating for brightness uniformities of the cameras is described in more detail with reference to FIG. 2.
[38] Images are picked up with the cameras so that each of the images is fully filled with the same plane with the same brightness (S110).
[39] The picked-up images are transformed into a YCbCr space to extract a Y (luminance) channel (S120).
[40] Reference brightness values in the Y channel for the cameras, such as the maximum brightness value Imax and the average brightness value Iaver, are calculated from the following Equation 1 (S130).
[41] [Equation 1]

[42] Imax = max(I(x, y))

[43] Iaver = (Σ_x Σ_y I(x, y)) / (w · h)
[44] Here, I(x, y) denotes a brightness value at coordinates (x, y) of a pixel in a picked-up image, h denotes the number of pixels in the Y-axis direction of the picked-up image, and w denotes the number of pixels in the X-axis direction of the picked-up image.
[45] A brightness difference map Id(x, y) is generated from the following Equation 2 by using the calculated reference brightness value, such as the maximum brightness value Imax or the average brightness value Iaver (S140).
[46] [Equation 2]

[47] Id(x, y) = Imax − I(x, y)

[48]
[49] The brightness uniformity of each camera is compensated for by summing the brightness difference map Id(x, y) generated in the step S140 and the Y channel extracted in the step S120 (S150).
[50] As described above, in the step S100 of compensating for the brightness uniformities of the cameras according to the present invention, since all the pixels of the images for the cameras are processed by using the Y channel and the brightness difference map, very uniform brightness can be obtained for each camera.
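Steps S110 to S150 can be sketched in Python with NumPy. This is a minimal sketch that assumes the Y channel has already been extracted; the function and variable names are illustrative and do not come from the patent.

```python
import numpy as np

def compensate_brightness(y_channel):
    # S130: reference brightness value Imax (Equation 1)
    i_max = y_channel.max()
    # S140: brightness difference map Id(x, y) = Imax - I(x, y) (Equation 2)
    diff_map = i_max - y_channel
    # S150: sum the Y channel and the difference map
    return y_channel + diff_map

# A vignetted flat field: every pixel is pulled up to Imax = 200.
y = np.array([[200.0, 190.0],
              [185.0, 180.0]])
print(compensate_brightness(y))   # all pixels become 200.0
```

With the maximum value as the reference, the compensated flat field is exactly uniform, since I(x, y) + (Imax − I(x, y)) = Imax at every pixel.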
[51] Next, referring to FIG. 3, assuming that each of the cameras is an ideal pinhole camera, a point P in the real space corresponds to a point p in the image of the camera. However, since a real camera is not an ideal pinhole camera, the point P may correspond to a point p' which is shifted due to camera lens distortion such as barrel distortion and pincushion distortion. The shift from the point p to the point p' depends on the distance from the optical center. More specifically, the shift can be represented by the following Equation 3 with respect to the distance d from the optical center.
[52] [Equation 3]

[53] x' = x + fx(d), y' = y + fy(d)
[54] Here, x and y are the coordinates of the undistorted point p when the point P is projected on the image of the camera, and x' and y' are the coordinates of the distorted point p' shifted due to the lens distortion. In addition, d is the distance from the optical center to the point p, that is, d = (x² + y²)^(1/2), and the differences fx(d) and fy(d) of the distances in the respective directions are represented by the following Equation 4.
[55] [Equation 4]

[56] fx(d) = k1·x·d² + k2·x·d⁴, fy(d) = k1·y·d² + k2·y·d⁴
[57] Here, k1 and k2 are the lens distortion coefficients. As shown in FIGs. 3 and 4, if there are at least two pairs of the distorted point p' and the distortion-removed (undistorted) point p, the distortion caused by the lens can be removed by using the lens distortion coefficients k1 and k2.
[58] Now, the step of compensating for lens distortion of the cameras to obtain the lens distortion coefficients k1 and k2 is described in more detail with reference to FIG. 4.
[59] The same patterns are picked up with the cameras, and corner points are defined and detected from the patterns of the picked-up images (S210). Preferably, the patterns are lattice patterns so as to allow the corner points to be easily detected.
[60] The detected corner points for the cameras are grouped into points on a straight line in the Hough space (S220). Since the resolution in the Hough space determines the accuracy of the straight line, it is preferable that the points on the straight line be roughly grouped with a low resolution.
[61] Coefficients of the straight line are estimated by using the grouped points (S230). The straight line can be estimated by using at least two points. In a case where the straight line is estimated by using at least three points, the three or more points may not lie exactly on the straight line. Therefore, it is preferable that the coefficients of the straight line be obtained by using the following Equation 5 so that the distances between the three or more points and the straight line are minimized.
[62] [Equation 5]

[63] minimize Σ(i=1..n) (a·xi + b·yi + c)², subject to a² + b² + c² = 1
[64] Here, a, b, and c are the coefficients of the straight line and can be estimated from Equation 5 by using a singular value decomposition method.

[65] The lens distortion coefficients k1 and k2 are obtained by using the estimated coefficients a, b, and c of the straight line (S240). More specifically, by substituting Equations 3 and 4 into the equation of the straight line ax' + by' + c = 0, the following Equation 6 is obtained.
[66] [Equation 6]

[67] a·x + a·x·k1·d² + a·x·k2·d⁴ + b·y + b·y·k1·d² + b·y·k2·d⁴ + c = 0
[68] By substituting the coordinates (x1, y1), ..., (xn, yn) of the grouped points into Equation 6, the following Equation 7 is obtained in the form of a matrix equation.
[69] [Equation 7]

[70]

| (a·x1 + b·y1)·d1²   (a·x1 + b·y1)·d1⁴ | |k1|   | −a·x1 − b·y1 − c |
| (a·x2 + b·y2)·d2²   (a·x2 + b·y2)·d2⁴ | |k2| = | −a·x2 − b·y2 − c |
|        ...                  ...       |        |        ...        |
| (a·xn + b·yn)·dn²   (a·xn + b·yn)·dn⁴ |        | −a·xn − b·yn − c |
[71] The lens distortion coefficients k1 and k2 can be calculated from Equation 7 by using the least square method.

[72] By substituting the lens distortion coefficients k1 and k2 obtained from Equation 7 into Equations 3 and 4, the coordinates of the lens-distortion-removed point p are obtained from the coordinates of the distorted point p' (S250).
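The line fit of step S230 and the least-squares solve of Equation 7 can be sketched together in Python. This is a sketch under the assumption that the optical center is at the origin of the image coordinates; the function names and synthetic data are illustrative, not from the patent.

```python
import numpy as np

def fit_line(points):
    # Step S230: fit a*x + b*y + c = 0 by singular value decomposition,
    # minimizing the algebraic distance of the points to the line
    # (Equation 5); the null-space vector of [x y 1] gives (a, b, c).
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    return np.linalg.svd(A)[2][-1]          # (a, b, c), unit norm

def solve_distortion_coeffs(points, line):
    # Step S240: stack the rows of Equation 7,
    #   (a*x + b*y)*d^2 * k1 + (a*x + b*y)*d^4 * k2 = -(a*x + b*y + c),
    # and solve for (k1, k2) by least squares.
    a, b, c = line
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    d2 = x**2 + y**2                        # d^2, optical center at (0, 0)
    s = a * x + b * y
    A = np.column_stack([s * d2, s * d2**2])
    k, *_ = np.linalg.lstsq(A, -(s + c), rcond=None)
    return k                                # (k1, k2)
```

Feeding the grouped point coordinates and the fitted line coefficients into `solve_distortion_coeffs` yields the pair (k1, k2), which can then be used in Equations 3 and 4 to undo the shift (step S250).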
[73] Next, the inter-camera error and image size compensating step S300 is described in more detail with reference to FIG. 5.

[74] The same patterns are picked up with the cameras (S310). Preferably, the patterns are lattice patterns so as to allow the characteristic points and correspondence points to be easily detected in the following steps. More preferably, the patterns are variously-colored lattice patterns.
[75] A characteristic point is defined and detected from the pattern of the image picked up with a specific one of the cameras (S320).

[76] In order to set correspondence information between the image picked up with the specific camera and the images picked up with the other cameras, the correspondence points are extracted by using the characteristic points detected in the step S320 and an SSD (Sum of Squared Difference) method represented by the following Equation 8 (S330).
[77] [Equation 8]

[78] f(d) = Σ(k=−W..W) Σ(l=−W..W) ψ(I1(i + k, j + l), I2(i + k − dx, j + l − dy))
[79] Here, ψ(u, v) = −(u − v)², I(x, y) is a brightness value at position (x, y) of an image I, (dx, dy) is the disparity of the correspondence point, and W is the size of a characteristic region around the characteristic point. The correspondence points can be extracted from Equation 8 by finding the value d corresponding to the maximum value of the function f(d).
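Maximizing f(d) in Equation 8 amounts to finding the window shift with the smallest sum of squared differences. A one-dimensional (column-only) Python sketch follows; the names, the restriction to a single disparity component, and the sign convention are illustrative simplifications, not from the patent.

```python
import numpy as np

def best_disparity(img1, img2, i, j, w, search):
    # Score each candidate column disparity dx with
    # f(d) = sum of psi over a (2w+1) x (2w+1) window around (i, j),
    # where psi(u, v) = -(u - v)^2 (Equation 8), and keep the maximum.
    patch1 = img1[i - w:i + w + 1, j - w:j + w + 1].astype(float)
    best_score, best_dx = -np.inf, 0
    for dx in range(-search, search + 1):
        patch2 = img2[i - w:i + w + 1, j - dx - w:j - dx + w + 1].astype(float)
        score = -np.sum((patch1 - patch2) ** 2)
        if score > best_score:
            best_score, best_dx = score, dx
    return best_dx
```

The correspondence point in the second image is then (i, j − dx); a full implementation would search over dy as well and guard the window against the image borders.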
[80] A covariance matrix Cov represented by the following Equation 9 is calculated from the correspondence information on the extracted characteristic points and the correspondence points (S340).
[81] [Equation 9]

[82] Cov = (1/n) Σ(i=1..n) (pi − paver)(pi − paver)ᵀ, where pi = (xi, yi)ᵀ and paver = (xaver, yaver)ᵀ

[83] Here, (xaver, yaver) denotes the average point and is obtained by dividing the sum of the values along each axis by the number of points n.

[84] The covariance matrix of Equation 9 is decomposed by using an EVD (Eigen Value Decomposition) method, so that an orthogonal matrix U is calculated as shown in the following Equation 10 (S350).
[85] [Equation 10]

[86] EVD(Cov) = U·D·Uᵀ
[87] Here, D is a diagonal matrix. In Equation 10, the column vectors of U represent the long and short axes of the point distribution, and the elements of D represent the sizes of the long and short axes.
[88] A rotational matrix R can be calculated by using the orthogonal matrices U1 and U2 of the first and second cameras as shown in the following Equation 11 (S360).
[89] [Equation 11]

[90] R = U1·U2⁻¹
[91] By using the rotational matrix R of Equation 11, the image of the second camera is aligned to the image of the first camera, so that the inter-camera error caused by distortion of the housing, sensor position, and lens mount, and the inter-camera image size error caused by the difference in focal lengths, are compensated for (S370).
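Steps S340 to S370 can be sketched as follows. The eigenvector sign normalization is an implementation detail added so the comparison between the two cameras is deterministic; the names are illustrative, not from the patent.

```python
import numpy as np

def rotation_between(pts1, pts2):
    # S340-S360: covariance matrix (Equation 9), eigen-decomposition
    # Cov = U D U^T (Equation 10), and rotational matrix R = U1 U2^-1
    # (Equation 11) between two cameras' correspondence-point sets.
    def principal_axes(pts):
        pts = np.asarray(pts, dtype=float)
        centered = pts - pts.mean(axis=0)
        cov = centered.T @ centered / len(pts)
        w, u = np.linalg.eigh(cov)
        u = u[:, np.argsort(w)[::-1]]               # long axis first
        return u * np.where(u[0] < 0, -1.0, 1.0)    # fix eigenvector signs
    u1, u2 = principal_axes(pts1), principal_axes(pts2)
    return u1 @ np.linalg.inv(u2)
```

Applying the returned R to the second camera's points aligns the principal axes of its point distribution with those of the first camera (step S370).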
[92] Next, the inter-camera brightness and color uniformity compensating method is described in more detail with reference to FIG. 6.

[93] The correspondence information between the images of the cameras is extracted by using a pre-set image (S410).

[94] A homography step of applying the extracted correspondence information to the projective transformation is performed as shown in the following Equation 12, so that correspondence points on the entire images are extracted (S420).
[95] [Equation 12]

[96] λ·(x', y', 1)ᵀ = H·(x, y, 1)ᵀ, where H = [h11 h12 h13; h21 h22 h23; h31 h32 h33]
[97] Here, λ is a scale factor. In Equation 12, all the elements h11, ..., h33 can be obtained from at least four correspondence points, and the scale factor λ can be eliminated in the projective transformation matrix. When the projective transformation matrix of Equation 12 is estimated, a correspondence point with respect to a point can be easily set by using the estimated projective transformation matrix. Therefore, the correspondence points on the entire images can be extracted.
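The projective transformation of Equation 12 can be estimated from four or more correspondence points with a direct linear transformation, in which the scale λ drops out by taking the null vector of the stacked constraints. A sketch with illustrative names (not from the patent):

```python
import numpy as np

def estimate_homography(src, dst):
    # Each correspondence (x, y) -> (x', y') gives two linear
    # constraints on h11..h33 after eliminating the scale lambda.
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        rows.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    h = np.linalg.svd(np.asarray(rows, dtype=float))[2][-1]
    H = h.reshape(3, 3)
    return H / H[2, 2]

def transfer(H, pt):
    # Map a point through Equation 12 and divide out the scale.
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

Once H is estimated, `transfer` maps any point of the first image to its correspondence point in the second image, which is how the correspondence points on the entire images are obtained in step S420.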
[98] By applying the correspondence points on the entire images to an affine transformation represented by the following Equation 13, the inter-camera brightness and color uniformity is compensated (S430).
[99] [Equation 13]

[100] R' = aR·R + bR, G' = aG·G + bG, B' = aB·B + bB

[101] Here, aR, aG, aB, bR, bG, and bB can be obtained by using the correspondence points.

[102]
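The per-channel coefficients of the affine transformation in Equation 13 can be fitted by least squares from the brightness values at the correspondence points. A sketch for one channel (names illustrative, not from the patent):

```python
import numpy as np

def fit_gain_offset(src_vals, ref_vals):
    # Solve ref = a * src + b for one color channel (Equation 13);
    # repeating per channel yields (aR, bR), (aG, bG), (aB, bB).
    A = np.column_stack([src_vals, np.ones(len(src_vals))])
    sol, *_ = np.linalg.lstsq(A, ref_vals, rcond=None)
    return sol                      # (a, b)
```

Applying the fitted gain and offset to each channel of the second camera's image brings its brightness and color into agreement with the reference camera (step S430).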
Claims
[1] A multi-view image distortion correcting method comprising steps of:
(A) generating a brightness difference map for images picked up with at least four cameras and compensating for brightness uniformities of the cameras by using the picked-up images and the brightness difference map;
(B) detecting patterns of the images picked up with the cameras, and obtaining lens distortion coefficients of the detected patterns of the images through a Hough transformation, and compensating for lens distortion of the cameras by using the lens distortion coefficients;
(C) extracting first correspondence information including characteristic points and correspondence points for the images picked up with the at least four cameras, calculating a covariance matrix and a rotational matrix from the first correspondence information, and compensating for errors and image sizes between the at least four cameras by using the rotational matrix; and
(D) extracting second correspondence information on the images picked up with the at least four cameras and compensating for brightness and color uniformities between the at least four cameras by performing homography and affine transformation using the second correspondence information.
[2] The multi-view image distortion correcting method according to claim 1, wherein the step (A) comprises steps of:
(A-1) picking up the images with the at least four cameras so that each image is fully filled with the same plane;
(A-2) transforming each of the picked-up images into a YCbCr space and extracting a Y channel;
(A-3) calculating reference brightness values in the Y channel with respect to the cameras;
(A-4) generating a brightness difference map by using the reference brightness values; and
(A-5) compensating for the brightness uniformities of the cameras by summing the Y channel and the brightness difference map.

[3] The multi-view image distortion correcting method according to claim 2, wherein the reference brightness value is a maximum brightness value.

[4] The multi-view image distortion correcting method according to claim 2, wherein the reference brightness value is an average brightness value.

[5] The multi-view image distortion correcting method according to claim 1, wherein the step (B) comprises steps of:
(B-1) picking up the patterns with the cameras and detecting corner points from the patterns of the picked-up images;
(B-2) grouping the detected corner points into points on a straight line in a Hough space;
(B-3) estimating coefficients of the straight line by using the grouped points;
(B-4) obtaining the lens distortion coefficients of the cameras by using the estimated coefficients of the straight line; and
(B-5) compensating for the lens distortion of the cameras by using the lens distortion coefficients.

[6] The multi-view image distortion correcting method according to claim 5, wherein, in the step (B-1), the patterns picked up with the cameras are lattice patterns.

[7] The multi-view image distortion correcting method according to claim 1, wherein the step (C) comprises steps of:
(C-1) picking up the same patterns with the at least four cameras;
(C-2) detecting the characteristic points from the pattern of the image picked up with a specific one of the at least four cameras;
(C-3) extracting the characteristic points and the correspondence points of the other cameras by using an SSD (Sum of Squared Difference) method;
(C-4) calculating the covariance matrix from the first correspondence information including the characteristic points and the correspondence points;
(C-5) calculating an orthogonal matrix from the covariance matrix by using an EVD (Eigen Value Decomposition) method;
(C-6) calculating a rotational matrix by using the orthogonal matrix; and
(C-7) compensating for errors and image sizes between the at least four cameras by using the rotational matrix.

[8] The multi-view image distortion correcting method according to claim 1, wherein the step (D) comprises steps of:
(D-1) extracting second correspondence information between the images picked up with the at least four cameras;
(D-2) extracting the correspondence points of the entire images by applying the second correspondence information to the homography; and
(D-3) compensating for the brightness and color uniformities between the at least four cameras by applying the correspondence points of the entire images to the affine transformation.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2005-0062061 | 2005-07-11 | ||
KR1020050062061A KR100668073B1 (en) | 2005-07-11 | 2005-07-11 | Method for calibrating distortion of multi-view image |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007007924A1 true WO2007007924A1 (en) | 2007-01-18 |
Family
ID=37637280
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2005/002228 WO2007007924A1 (en) | 2005-07-11 | 2005-07-11 | Method for calibrating distortion of multi-view image |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR100668073B1 (en) |
WO (1) | WO2007007924A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2466898A1 (en) | 2010-12-20 | 2012-06-20 | Vestel Elektronik Sanayi ve Ticaret A.S. | A method and apparatus for calibration of stereo images |
EP2493204A1 (en) * | 2011-02-24 | 2012-08-29 | Tektronix, Inc. | Stereoscopic image registration and color balance evaluation display |
US8750641B2 (en) | 2010-12-30 | 2014-06-10 | Postech Academy—Industry Foundation | Apparatus and method for correcting distortion of image |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101227936B1 (en) * | 2011-05-31 | 2013-01-30 | 전자부품연구원 | Method for compensating colour of corresponding image and recording medium thereof |
KR101918030B1 (en) * | 2012-12-20 | 2018-11-14 | 삼성전자주식회사 | Method and apparatus for rendering hybrid multi-view |
KR102128336B1 (en) | 2018-04-26 | 2020-06-30 | 한국전자통신연구원 | 3d image distortion correction system and method |
KR20200057287A (en) * | 2018-11-16 | 2020-05-26 | (주)리플레이 | Camera calibration method for multi view point shooting and apparatus for the same |
CN111507924B (en) * | 2020-04-27 | 2023-09-29 | 北京百度网讯科技有限公司 | Video frame processing method and device |
US11122248B1 (en) * | 2020-07-20 | 2021-09-14 | Black Sesame International Holding Limited | Stereo vision with weakly aligned heterogeneous cameras |
KR102412275B1 (en) * | 2021-05-27 | 2022-06-24 | 한국과학기술원 | Image Distortion Correction Method and Apparatus in Measurement of Three Dimensional Shape Information using Stereo Method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5652616A (en) * | 1996-08-06 | 1997-07-29 | General Instrument Corporation Of Delaware | Optimal disparity estimation for stereoscopic video coding |
US6064424A (en) * | 1996-02-23 | 2000-05-16 | U.S. Philips Corporation | Autostereoscopic display apparatus |
US6459532B1 (en) * | 1999-07-24 | 2002-10-01 | Sharp Kabushiki Kaisha | Parallax autostereoscopic 3D picture and autostereoscopic 3D display |
2005
- 2005-07-11 KR KR1020050062061A patent/KR100668073B1/en active IP Right Grant
- 2005-07-11 WO PCT/KR2005/002228 patent/WO2007007924A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6064424A (en) * | 1996-02-23 | 2000-05-16 | U.S. Philips Corporation | Autostereoscopic display apparatus |
US5652616A (en) * | 1996-08-06 | 1997-07-29 | General Instrument Corporation Of Delaware | Optimal disparity estimation for stereoscopic video coding |
US6459532B1 (en) * | 1999-07-24 | 2002-10-01 | Sharp Kabushiki Kaisha | Parallax autostereoscopic 3D picture and autostereoscopic 3D display |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2466898A1 (en) | 2010-12-20 | 2012-06-20 | Vestel Elektronik Sanayi ve Ticaret A.S. | A method and apparatus for calibration of stereo images |
US8750641B2 (en) | 2010-12-30 | 2014-06-10 | Postech Academy—Industry Foundation | Apparatus and method for correcting distortion of image |
EP2493204A1 (en) * | 2011-02-24 | 2012-08-29 | Tektronix, Inc. | Stereoscopic image registration and color balance evaluation display |
CN102740115A (en) * | 2011-02-24 | 2012-10-17 | 特克特朗尼克公司 | Stereoscopic image registration and color balance evaluation display |
US9307227B2 (en) | 2011-02-24 | 2016-04-05 | Tektronix, Inc. | Stereoscopic image registration and color balance evaluation display |
CN102740115B (en) * | 2011-02-24 | 2016-12-07 | 特克特朗尼克公司 | Stereoscopic image registration and color balance evaluation show |
Also Published As
Publication number | Publication date |
---|---|
KR100668073B1 (en) | 2007-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230336707A1 (en) | Systems and Methods for Dynamic Calibration of Array Cameras | |
WO2007007924A1 (en) | Method for calibrating distortion of multi-view image | |
US8897502B2 (en) | Calibration for stereoscopic capture system | |
EP3163535B1 (en) | Wide-area image acquisition method and device | |
CN109767474B (en) | Multi-view camera calibration method and device and storage medium | |
KR101761751B1 (en) | Hmd calibration with direct geometric modeling | |
CN110809786B (en) | Calibration device, calibration chart, chart pattern generation device, and calibration method | |
EP0680014B1 (en) | Image processing method and apparatus | |
EP2194725B1 (en) | Method and apparatus for correcting a depth image | |
US20180122099A1 (en) | Image processing apparatus having automatic compensation function for image obtained from camera, and method thereof | |
US20110080466A1 (en) | Automated processing of aligned and non-aligned images for creating two-view and multi-view stereoscopic 3d images | |
US20070165942A1 (en) | Method for rectifying stereoscopic display systems | |
US20110149031A1 (en) | Stereoscopic image, multi-view image, and depth image acquisition apparatus and control method thereof | |
US20070189599A1 (en) | Apparatus, method and medium displaying stereo image | |
US20130272600A1 (en) | Range image pixel matching method | |
CN107545586B (en) | Depth obtaining method and system based on light field polar line plane image local part | |
CN108269234B (en) | Panoramic camera lens attitude estimation method and panoramic camera | |
JP2005142957A (en) | Imaging apparatus and method, and imaging system | |
Nozick | Multiple view image rectification | |
TW202029056A (en) | Disparity estimation from a wide angle image | |
JP7033294B2 (en) | Imaging system, imaging method | |
KR20110025083A (en) | Apparatus and method for displaying 3d image in 3d image system | |
Kumar et al. | Stereo image rectification using focal length adjustment | |
CN112017138B (en) | Image splicing method based on scene three-dimensional structure | |
Stankowski et al. | Application of epipolar rectification algorithm in 3D Television |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1), EPO FORM 1205A SENT ON 09/04/08 . |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 05774416 Country of ref document: EP Kind code of ref document: A1 |