JP4970118B2 - Camera calibration method, program thereof, recording medium, and apparatus - Google Patents


Info

Publication number: JP4970118B2 (granted patent)
Application/priority number: JP2007102346A
Other versions: JP2008262255A (application publication)
Authority: JP (Japan)
Prior art keywords: camera, lens, distortion, eccentricity, imaging surface
Legal status: Expired - Fee Related
Inventors: 雅浩 上野, 豊 國田
Original/Current Assignee: Nippon Telegraph and Telephone Corp


Landscapes

  • Image Processing (AREA)
  • Studio Devices (AREA)

Description

The present invention relates to a camera calibration technique for estimating the position and orientation of a camera, the internal parameters of the camera, and the like from a plurality of images taken by the camera.

Consider an ideal system whose optics are free of distortion. A point m = [u, v]^T on the camera image sensor and a point M = [X, Y, Z]^T in space are then related through camera internal parameters, such as the lens focal length, the pixel pitch of the sensor (in the horizontal and vertical scanning directions), and the coordinates of the intersection of the optical axis with the sensor, and camera external parameters, such as the position and orientation of the camera. Using these, the following relation holds:

s m̃ = A_C [R_C  t_C] M̃,    A_C = [ α_C  γ_C  u_C,0
                                      0    β_C  v_C,0
                                      0    0    1    ]

where m̃ = [u, v, 1]^T and M̃ = [X, Y, Z, 1]^T are homogeneous coordinates and s is a scale factor.

Here, α_C is the reciprocal of the product of the sensor's horizontal-scan pixel pitch and the lens focal length; β_C is the reciprocal of the product of the sensor's vertical-scan pixel pitch and the lens focal length, divided by the sine (sin) of the angle between the horizontal and vertical scanning directions; γ_C is the reciprocal of the product of the sensor's vertical pixel pitch and the lens focal length, divided by the tangent (tan) of that angle; u_C,0 and v_C,0 are the horizontal- and vertical-scan coordinates of the intersection of the optical axis with the sensor; R_C is the camera rotation matrix; and t_C is the camera translation vector. Note that the symbols m and M in m = [u, v]^T and M = [X, Y, Z]^T should properly be set in bold to denote vectors; since bold cannot be used in this text, they are written as plain "m" and "M". The same applies to the symbols in the expressions below.
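As a sanity check of this distortion-free model, the projection can be sketched in Python with NumPy. The function name and the numeric values below are illustrative, not from the patent:

```python
import numpy as np

def project_ideal(A, R, t, X):
    """Ideal (distortion-free) pinhole projection: s * m~ = A [R | t] X~.

    A is the 3x3 internal matrix built from alpha_C, gamma_C, beta_C,
    u_C0, v_C0; R is the camera rotation matrix; t the translation vector.
    """
    Xc = R @ np.asarray(X, float) + np.asarray(t, float)  # camera coordinates
    m = A @ Xc                                            # homogeneous image point
    return m[:2] / m[2]                                   # divide out the scale s

# Example with made-up parameters: focal terms 500 px, principal point (320, 240)
A = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
print(project_ideal(A, R, t, [1.0, 2.0, 5.0]))  # the point lands at (370, 340)
```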

In practice, however, the optical system generally exhibits distortion known as "barrel distortion" or "pincushion distortion", so the ideal projection point m of a point M in space onto the camera sensor and the coordinates at which the point is actually imaged on the sensor (that is, the coordinates formed when the image is distorted by the lens),

m̆ = [ŭ, v̆]^T

do not coincide. (The symbol shown as image data in Equation 2 is written "breve m" in the text.)

In general, the exact values of the camera internal and external parameters are also unknown. They must therefore be estimated, together with the transformation function D[.] that maps the ideal projection point m on the sensor to the actually imaged coordinates breve m, from pairs of coordinates of observation points in space and of the corresponding image points on the sensor.

Conventionally, the following method has been used to estimate the camera parameters, including such distortion parameters (see, for example, Non-Patent Document 1).

FIG. 1 shows the relationship between observation points on a plane in space and their image points on the camera sensor: 101 is a plane B1 in space; 102 is an observation point p_B1,1 on the plane; 111 is a camera C; 112 is a lens; 113 is the lens center; 114 is an image sensor; 115 is the sensor origin; and 116 is the image point p_C,B1,1 of the observation point p_B1,1.

The camera parameters are estimated, for example, by the following steps.
Step 1. Assuming no lens distortion, roughly estimate the camera internal/external parameters.
Step 1-1. Roughly estimate the homography matrix between the plane B1 in space and the camera image sensor.
Step 1-2. Roughly estimate the camera internal parameters.
Step 1-3. Roughly estimate the camera external parameters.
Step 2. Roughly estimate the lens distortion coefficients.
Step 3. Refine the camera internal/external parameters and the lens distortion coefficients with an optimization method.

These steps are described in detail below.

[Step 1. Assuming no lens distortion, roughly estimate the camera internal/external parameters]
[Step 1-1. Roughly estimate the homography matrix between the plane B1 in space and the camera image sensor]
The coordinates of an observation point p_B1,1 on the plane B1 are expressed in the world coordinate system, in Cartesian form X_pB1,1 = [X_pB1,1, Y_pB1,1, Z_pB1,1]^T. The coordinates of its image point p_C,B1,1 on the sensor of camera C are expressed in the sensor's digital image coordinate system as m_pC,B1,1 = [u_pC,B1,1, v_pC,B1,1]^T.

Take the plane B1 (101) to be the plane Z = 0 of the world coordinate system [X, Y, Z]^T, and write its points as two-dimensional coordinates X′_pB1,1 = [X_pB1,1, Y_pB1,1]^T. The coordinates of the observation point p_B1,1 on B1 and of the image point p_C,B1,1 are then related through a single homography matrix H_C,B1 = [h_C,B1,1 h_C,B1,2 h_C,B1,3], which links the plane B1 to the sensor plane, as follows, where h_C,B1,1, h_C,B1,2, and h_C,B1,3 are 3x1 column vectors.

s m̃_pC,B1,1 = H_C,B1 X̃′_pB1,1    (3.1)

Here, "~" above a vector denotes homogeneous coordinates: tilde m_pC,B1,1 = [u_pC,B1,1, v_pC,B1,1, 1]^T and tilde X′_pB1,1 = [X_pB1,1, Y_pB1,1, 1]^T, and s is a scalar. (A symbol such as m with "~" above it, as in the image of Equation 3, is written "tilde m" in the text.)

Substituting the coordinates of points on the plane B1 and of the corresponding image points on the sensor into Equation (3.1) yields a system of equations from which the homography matrix H_C,B1 = [h_C,B1,1 h_C,B1,2 h_C,B1,3] is computed. One way to do this is the following. First, using the n1 pairs of observation-point and image-point coordinates, form the matrix L_C,B1 below.

L_C,B1 = [ X̃′_pB1,1^T    0^T            −u_pC,B1,1 X̃′_pB1,1^T
           0^T            X̃′_pB1,1^T    −v_pC,B1,1 X̃′_pB1,1^T
           ⋮              ⋮              ⋮
           X̃′_pB1,n1^T   0^T            −u_pC,B1,n1 X̃′_pB1,n1^T
           0^T            X̃′_pB1,n1^T   −v_pC,B1,n1 X̃′_pB1,n1^T ]    (3.2)

Here, 0 = [0, 0, 0]^T.

Next, compute the eigenvectors of L_C,B1^T L_C,B1 and let x_C,B1 = [h_C,B1,1^T h_C,B1,2^T h_C,B1,3^T]^T be the eigenvector with the smallest eigenvalue. Finally, rearrange the elements of x_C,B1 to obtain the homography matrix H_C,B1 = [h_C,B1,1 h_C,B1,2 h_C,B1,3].
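The Step 1-1 procedure (stack two rows per point into L, take the eigenvector of L^T L with the smallest eigenvalue, reshape it into H) can be sketched as follows. The helper name is ours, and the sketch assumes the scale element H[2,2] is nonzero:

```python
import numpy as np

def estimate_homography(X, m):
    """DLT estimate of the homography mapping plane points X (n x 2, Z = 0
    plane coordinates) to image points m (n x 2).

    Builds the 2n x 9 matrix L of Step 1-1 and takes the eigenvector of
    L^T L with the smallest eigenvalue."""
    n = X.shape[0]
    L = np.zeros((2 * n, 9))
    for i in range(n):
        Xt = np.array([X[i, 0], X[i, 1], 1.0])   # homogeneous plane point
        u, v = m[i]
        L[2 * i, 0:3] = Xt
        L[2 * i, 6:9] = -u * Xt
        L[2 * i + 1, 3:6] = Xt
        L[2 * i + 1, 6:9] = -v * Xt
    w, V = np.linalg.eigh(L.T @ L)   # eigh returns eigenvalues in ascending order
    H = V[:, 0].reshape(3, 3)        # eigenvector with the smallest eigenvalue
    return H / H[2, 2]               # fix the arbitrary scale and sign
```

With noise-free correspondences the smallest eigenvalue is zero and the true homography is recovered exactly up to scale.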

[Step 1-2. Roughly estimate the camera internal parameters]
Next, write the elements of the homography matrix H_C,B1 = [h_C,B1,1 h_C,B1,2 h_C,B1,3] as follows.

h_C,B1,i = [h_C,B1,i,1, h_C,B1,i,2, h_C,B1,i,3]^T

where i is an integer with 1 ≤ i ≤ 3.

Using this notation, form the following vector.

V_C,B1,i,j = [ h_C,B1,i,1 h_C,B1,j,1,
               h_C,B1,i,1 h_C,B1,j,2 + h_C,B1,i,2 h_C,B1,j,1,
               h_C,B1,i,2 h_C,B1,j,2,
               h_C,B1,i,3 h_C,B1,j,1 + h_C,B1,i,1 h_C,B1,j,3,
               h_C,B1,i,3 h_C,B1,j,2 + h_C,B1,i,2 h_C,B1,j,3,
               h_C,B1,i,3 h_C,B1,j,3 ]^T

where j is an integer with 1 ≤ j ≤ 3.

Changing the position of the plane n2 times gives n2 homography matrices H_C,B1, …, H_C,Bn2, and from them n2 vectors V_C,B1,i,j through V_C,Bn2,i,j. Using these, form the following matrix V_C.

V_C = [ V_C,B1,1,2^T
        (V_C,B1,1,1 − V_C,B1,2,2)^T
        ⋮
        V_C,Bn2,1,2^T
        (V_C,Bn2,1,1 − V_C,Bn2,2,2)^T ]

Next, compute the eigenvectors of V_C^T V_C and let b_C = [B_C,1,1, B_C,1,2, B_C,2,2, B_C,1,3, B_C,2,3, B_C,3,3]^T be the eigenvector with the smallest eigenvalue.

Using this b_C, the camera internal matrix A_C is roughly estimated as follows.

A_C = [ α_C  γ_C  u_C,0
        0    β_C  v_C,0
        0    0    1    ]

where

v_C,0 = (B_C,1,2 B_C,1,3 − B_C,1,1 B_C,2,3) / (B_C,1,1 B_C,2,2 − B_C,1,2^2)
λ = B_C,3,3 − [B_C,1,3^2 + v_C,0 (B_C,1,2 B_C,1,3 − B_C,1,1 B_C,2,3)] / B_C,1,1
α_C = √(λ / B_C,1,1)
β_C = √(λ B_C,1,1 / (B_C,1,1 B_C,2,2 − B_C,1,2^2))
γ_C = −B_C,1,2 α_C^2 β_C / λ
u_C,0 = γ_C v_C,0 / β_C − B_C,1,3 α_C^2 / λ

Here, α_C is the reciprocal of the product of the sensor's horizontal-scan pixel pitch and the lens focal length; β_C is the reciprocal of the product of the sensor's vertical-scan pixel pitch and the lens focal length, divided by the sine (sin) of the angle between the horizontal and vertical scanning directions; γ_C is the reciprocal of the product of the sensor's horizontal-scan pixel pitch and the lens focal length, divided by the tangent (tan) of that angle; and u_C,0 and v_C,0 are the horizontal- and vertical-scan coordinates of the intersection of the optical axis with the sensor.
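The recovery of A_C from b_C can be sketched with the standard closed-form solution from the Zhang report cited as Non-Patent Document 1 (our transcription; the patent shows the corresponding formulas only as images, so the exact grouping here is an assumption). b_C encodes B = λ A_C^(−T) A_C^(−1) up to scale:

```python
import numpy as np

def internal_from_b(b):
    """Recover the internal matrix A_C from
    b_C = [B11, B12, B22, B13, B23, B33] (Zhang's closed form)."""
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
    gamma = -B12 * alpha ** 2 * beta / lam
    u0 = gamma * v0 / beta - B13 * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0],
                     [0.0,   beta,  v0],
                     [0.0,   0.0,  1.0]])
```

Building B directly from a known A_C (with an arbitrary positive scale) and feeding its six independent elements back in recovers that A_C, which is a convenient self-check for the formulas.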

[Step 1-3. Roughly estimate the camera external parameters]
Meanwhile, define the camera external matrix G_C,B1 = [R_C,B1 t_C,B1] from the camera rotation matrix R_C,B1 = [r_C,B1,1 r_C,B1,2 r_C,B1,3] and the camera translation vector t_C,B1, whose elements are the camera external parameters. Using the results obtained so far, G_C,B1 is roughly estimated as

r_C,B1,1 = κ A_C^(−1) h_C,B1,1,    r_C,B1,2 = κ A_C^(−1) h_C,B1,2,
r_C,B1,3 = r_C,B1,1 × r_C,B1,2,    t_C,B1 = κ A_C^(−1) h_C,B1,3,
where κ = 1 / ‖A_C^(−1) h_C,B1,1‖.

This gives the rough estimate of the external parameters.
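A sketch of the Step 1-3 recovery of [R | t] from the internal matrix and the homography, under the standard relation H ∝ A [r1 r2 t] used in the cited Zhang method (the function name is ours):

```python
import numpy as np

def external_from_homography(A, H):
    """Rough external parameters from intrinsics A and homography H:
    r1 = k A^-1 h1, r2 = k A^-1 h2, r3 = r1 x r2, t = k A^-1 h3,
    with k = 1 / ||A^-1 h1||."""
    Ainv = np.linalg.inv(A)
    k = 1.0 / np.linalg.norm(Ainv @ H[:, 0])
    r1 = k * (Ainv @ H[:, 0])
    r2 = k * (Ainv @ H[:, 1])
    r3 = np.cross(r1, r2)
    t = k * (Ainv @ H[:, 2])
    return np.column_stack([r1, r2, r3]), t
```

Because noise makes r1 and r2 only approximately orthonormal in practice, implementations usually re-orthogonalize the resulting rotation; the rough estimate above is then refined in Step 3.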

[Step 2. Roughly estimate the lens distortion coefficients]
Next, the lens distortion coefficients, which capture the lens distortion, are computed. In the conventional method, the coordinates breve x_pC,B1,1 = [breve x_pC,B1,1, breve y_pC,B1,1]^T at which a point is imaged under lens distortion were related to the coordinates x_pC,B1,1 = [x_pC,B1,1, y_pC,B1,1]^T at which it would be imaged without lens distortion, through the lens distortion coefficients k_C,1 and k_C,2, as follows.

x̆_pC,B1,1 = x_pC,B1,1 [1 + k_C,1 (x_pC,B1,1^2 + y_pC,B1,1^2) + k_C,2 (x_pC,B1,1^2 + y_pC,B1,1^2)^2]
y̆_pC,B1,1 = y_pC,B1,1 [1 + k_C,1 (x_pC,B1,1^2 + y_pC,B1,1^2) + k_C,2 (x_pC,B1,1^2 + y_pC,B1,1^2)^2]    (3.10)

Note that this expression is written in the camera coordinate system of camera C. (The x and y with breves on the left-hand side of Equation (3.10), shown as image data, are written "breve x" and "breve y" in the text.)

Introduce a lens distortion function D[k_C,1, k_C,2, x_pC,B1,1] that takes as input the coordinates x_pC,B1,1 = [x_pC,B1,1, y_pC,B1,1]^T imaged without lens distortion and outputs the coordinates breve x_pC,B1,1 = [breve x_pC,B1,1, breve y_pC,B1,1]^T imaged under lens distortion, and write this transformation as breve x_pC,B1,1 = D[k_C,1, k_C,2, x_pC,B1,1].
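In normalized camera coordinates, the conventional two-term radial distortion function D of Equation (3.10) is a one-liner (illustrative sketch; the function name is ours):

```python
def lens_distort(k1, k2, x, y):
    """Conventional lens distortion function D[k1, k2, x]: the two-term
    radial model of Eq. (3.10) in normalized camera coordinates."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

# A point at radius 1 with k1 = 0.1, k2 = 0.01 moves outward by 11 percent:
print(lens_distort(0.1, 0.01, 1.0, 0.0))
```

Note the model is purely radial: the scale factor f depends only on the distance from the optical-axis center, not on the direction, which is exactly the limitation the invention addresses later.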

Using the relationship between the camera coordinate system x_C = [x_C, y_C]^T and the digital coordinate system m_C = A_C x_C, the above expression yields the lens distortion coefficients k_C,1 and k_C,2 as follows. Since γ_C, one of the elements of A_C, is usually very small, it is set to 0 as a trade-off between computational cost and accuracy.

[k_C,1, k_C,2]^T = (D_C^T D_C)^(−1) D_C^T d_C

where

for each observation point i on each plane Bj, with r_pC,Bj,i^2 = x_pC,Bj,i^2 + y_pC,Bj,i^2, the matrix D_C stacks the row pairs
[(u_pC,Bj,i − u_C,0) r_pC,Bj,i^2    (u_pC,Bj,i − u_C,0) r_pC,Bj,i^4]
[(v_pC,Bj,i − v_C,0) r_pC,Bj,i^2    (v_pC,Bj,i − v_C,0) r_pC,Bj,i^4]
and the vector d_C stacks the corresponding residual pairs [ŭ_pC,Bj,i − u_pC,Bj,i] and [v̆_pC,Bj,i − v_pC,Bj,i].

Note that in the above expressions there are n1 pairs of observation points on one plane and image points on the sensor per plane, and all pairs of observation and image points obtained by imaging the plane at n2 different positions are used.
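The linear solve for (k_C,1, k_C,2) can be sketched as follows. This is our reading of the conventional Step 2: two rows per point in D, pixel-domain residuals in d, then k = (D^T D)^(−1) D^T d. Names and test values are illustrative:

```python
import numpy as np

def estimate_k(uv_ideal, uv_obs, xy_norm, u0, v0):
    """Least-squares estimate of (k1, k2) from ideal pixel coordinates,
    observed (distorted) pixel coordinates, and normalized camera
    coordinates, for the two-term radial model of Eq. (3.10)."""
    rows, rhs = [], []
    for (u, v), (ub, vb), (x, y) in zip(uv_ideal, uv_obs, xy_norm):
        r2 = x * x + y * y
        rows.append([(u - u0) * r2, (u - u0) * r2 * r2]); rhs.append(ub - u)
        rows.append([(v - v0) * r2, (v - v0) * r2 * r2]); rhs.append(vb - v)
    D = np.asarray(rows); d = np.asarray(rhs)
    return np.linalg.solve(D.T @ D, D.T @ d)   # (D^T D)^-1 D^T d
```

A quick synthetic check: generate observed points from known coefficients with the same model, then verify they are recovered.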

[Step 3. Refine the camera internal/external parameters and lens distortion coefficients with an optimization method]
Next, the camera internal matrix A_C, the camera rotation matrices R_C,Bj, the camera translation vectors t_C,Bj, and the lens distortion coefficients k_C,1, k_C,2 are optimized so that the following expression is minimized, refining the camera internal/external parameters and the lens distortion coefficients.

Σ_{j=1..n2} Σ_{i=1..n1} ‖ m̆_pC,Bj,i − m[A_C, k_C,1, k_C,2, R_C,Bj, t_C,Bj, X′_pBj,i] ‖^2    (3.12)

Here, n1 is the number of pairs of observation points on one plane and image points on the sensor; n2 is the number of times the plane was imaged at different positions; X′_pBj,i are the coordinates of the observation points; breve m_pC,Bj,i are the measured coordinates of the image points; and m[.] is the projection function that computes the coordinates of the image point obtained by projecting the observation point X′_pBj,i onto the camera sensor using A_C, k_C,1, k_C,2, R_C,Bj, and t_C,Bj. Using the expressions obtained so far, it is written as

m̃[A_C, k_C,1, k_C,2, R_C,Bj, t_C,Bj, X′_pBj,i] = A_C [ D[k_C,1, k_C,2, x_pC,Bj,i]^T, 1 ]^T

Here, tilde x_pC,Bj,i is computed by

s x̃_pC,Bj,i = [r_C,Bj,1  r_C,Bj,2  t_C,Bj] X̃′_pBj,i
and x_pC,Bj,i is its inhomogeneous part (division by the third component).

The optimization of Equation (3.12) is performed, for example, by the Levenberg-Marquardt method.
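The Step 3 refinement can be carried out with any Levenberg-Marquardt implementation; a minimal numpy-only sketch of the damped Gauss-Newton iteration with a numeric Jacobian (illustrative, not the patent's code) is:

```python
import numpy as np

def levenberg_marquardt(residual, p0, iters=30, lam=1e-3, h=1e-7):
    """Minimize ||residual(p)||^2 by damped Gauss-Newton steps.

    residual maps a parameter vector to a residual vector; in Eq. (3.12)
    the residuals are the reprojection errors over all observed points."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = residual(p)
        J = np.zeros((r.size, p.size))
        for j in range(p.size):                # forward-difference Jacobian
            dp = np.zeros_like(p); dp[j] = h
            J[:, j] = (residual(p + dp) - r) / h
        JTJ = J.T @ J
        step = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), -(J.T @ r))
        if np.linalg.norm(residual(p + step)) < np.linalg.norm(r):
            p, lam = p + step, lam * 0.5       # accept step, relax damping
        else:
            lam *= 10.0                        # reject step, increase damping
    return p

# Toy use: fit y = a x^2 + b x to exact data; the iteration converges to a = 2, b = 3.
x = np.linspace(-1.0, 1.0, 9)
y = 2.0 * x ** 2 + 3.0 * x
res = lambda p: p[0] * x ** 2 + p[1] * x - y
p = levenberg_marquardt(res, [0.0, 0.0])
```

In a real calibration the parameter vector would concatenate the elements of A_C, the k coefficients, and a parameterization of each R_C,Bj and t_C,Bj.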

The above procedure yields the camera internal/external parameters and the distortion coefficients, which are then used for camera calibration in various image-processing tasks such as distortion correction.

Z. Zhang, "A Flexible New Technique for Camera Calibration," Technical Report MSR-TR-98-71, Microsoft Research, Dec. 1998. Available together with the software at http://research.microsoft.com/~zhang/Calib/.

However, when image processing is actually performed using the camera internal/external parameters and distortion coefficients obtained by the conventional method, lens distortion sometimes remains. This is particularly noticeable with cameras that use inexpensive optics, such as mobile-phone cameras, which have come into wide use in recent years.

The conventional lens distortion correction described above handles radial lens distortion only for the case where the lens is not decentered; the distortion that arises when the lens is decentered from the optical axis is not considered, and no guideline had been established for what camera parameters should be introduced to compute the decentering-dependent distortion. Consequently, the conventional method has the drawback that distortion is not sufficiently corrected when the camera's lens is decentered, as is common with ordinary lenses.

This point is explained with reference to FIG. 2. Lens decentering means that the center of curvature of a lens surface deviates from the reference axis of the optical system because the lens is translated or tilted with respect to that axis. FIG. 2(a) illustrates this: the upper diagram expresses the deviation as a translation distance, and the lower diagram expresses it as an angle. When the axis deviation is slight, E = lε, so the translation E and the angle ε are proportional; there is thus no essential difference in deriving the distortion coefficients, and either representation may be used.

FIG. 2(b) represents the amount of decentering by E and explains the resulting distortion. In addition to the distortion-free imaging position l, the distortion consists of a radial term that does not depend on decentering, (V_3 ‖l‖^2 + V_5 ‖l‖^4) l, and a decentering-dependent radial term, (V_3E + V_5E ‖l‖^2)(E·l) l. Conventionally only the former was considered; since the latter also exists, the conventional method cannot sufficiently correct it when the lens is decentered. The reason is that the latter contains the factor (E·l), so its magnitude varies with the direction from the reference axis, whereas the former, which was considered conventionally, does not vary with that direction; considering only the former therefore cannot adequately correct the latter.
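The two components can be sketched together as follows. The coefficient names follow the text, but the exact grouping of terms is our reading of FIG. 2(b):

```python
import numpy as np

def distorted_position(l, E, V3, V5, V3E, V5E):
    """Imaging position under both distortion components of FIG. 2(b):
    the decentering-independent radial term (V3 ||l||^2 + V5 ||l||^4) l
    and the decentering-dependent term (V3E + V5E ||l||^2) (E . l) l.
    l is the distortion-free position, E the decentering vector."""
    l = np.asarray(l, float); E = np.asarray(E, float)
    r2 = l @ l
    radial = (V3 * r2 + V5 * r2 * r2) * l        # same at every direction
    decenter = (V3E + V5E * r2) * (E @ l) * l    # varies with direction of l vs E
    return l + radial + decenter

# With E = 0 the model reduces to conventional radial-only distortion;
# the result below equals [1.11, 0.0]:
print(distorted_position([1.0, 0.0], [0.0, 0.0], 0.1, 0.01, 0.2, 0.05))
```

The factor (E · l) makes the added term vanish for points perpendicular to the decentering direction and largest for points along it, reproducing the direction-dependent stretching of FIG. 2(c).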

The fact that the amount of distortion differs with the direction from the reference axis can be explained using the shape of the formed image. FIG. 2(c) shows the difference between the distorted images with and without lens decentering. The dotted line is the undistorted image, the dash-dotted line is the distorted image without lens decentering, and the solid line is the distorted image with lens decentering. Without decentering, the same distortion appears at every angle around the reference axis of the optical system, yielding an image like the dash-dotted line. With decentering, the image stretches and shrinks in the decentering direction indicated by the arrow. Conventionally, only the direction-independent distortion without decentering, shown by the dash-dotted line, was targeted for correction; in the general case where the lens is decentered, the decentering produces the direction-dependent distortion shown by the solid line, which the conventional method cannot sufficiently correct.

The present invention has been made in view of the above, and its object is to provide a camera calibration method that accurately corrects distortion even for a camera having a decentered lens.

Of the inventions disclosed in the present application, representative ones are outlined briefly as follows.

The first invention is a camera calibration method in a camera calibration apparatus that corrects distortion of the camera optics from images taken by the camera, characterized in that the lens distortion function used in calibration has decentering-dependent lens distortion as a parameter and is expressed as an equation containing the sum of a lens distortion proportional to the cube of the distance from the lens's optical-axis center on the imaging surface and to the amount of decentering along each axis of the imaging surface, and a lens distortion proportional to the fifth power of that distance and to the amount of decentering along each axis of the imaging surface.

The second invention is a camera calibration method in a camera calibration apparatus that corrects distortion of the camera optics from images taken by the camera, in which the lens distortion function used in calibration has decentering-dependent lens distortion as a parameter, the method comprising: a first step of roughly estimating the camera internal/external parameters, assuming no lens distortion, from pairs of coordinates of observation points on one plane and of image points on the imaging surface; a second step of roughly estimating the distortion coefficients by the least-squares method that minimizes the square of the difference between the observed position on the camera imaging surface of a point in three-dimensional space and the position on the camera imaging surface computed with a lens distortion function that adds together a distortion independent of the decentering of the optics, whose magnitude varies only with the distance from the lens's optical-axis center on the imaging surface and not with the direction from it, and a distortion dependent on the decentering of the optics, whose magnitude varies with both the direction and the distance from the lens's optical-axis center on the imaging surface; and a third step of refining the camera internal/external parameters and the lens distortion coefficients with an optimization method that minimizes the difference between the observed position on the camera imaging surface of a point in three-dimensional space and the position on the camera imaging surface computed with the lens distortion function, using the distortion function and the epipolar-geometry constraints defined by the camera positions and the three-dimensional observation points.

The third invention is a program for causing a computer to execute the camera calibration method of the first or second invention.

The fourth invention is a storage medium storing a program for causing a computer to execute the camera calibration method of the first or second invention.

The fifth invention is a camera calibration apparatus that executes the camera calibration method of the first or second invention.

To aid understanding of the present invention, a more concrete description follows, but the invention is not limited thereby.

According to the present invention, lens distortion is corrected including not only the conventional distortion independent of lens decentering but also the distortion dependent on lens decentering, so highly accurate lens distortion correction becomes possible.

Embodiments of the present invention are described below with reference to the drawings.

FIG. 1 shows the relationship between observation points on a plane in space and their image points on the camera sensor: 101 is a plane B1 in space; 102 is an observation point p_B1,1 on the plane; 111 is a camera C; 112 is a lens; 113 is the lens center; 114 is an image sensor; 115 is the sensor origin; and 116 is the image point p_C,B1,1 of the observation point p_B1,1.

FIG. 3 shows the processing flow of the camera calibration method of this embodiment. In this specification, the method of computing parameters such as the camera internal/external parameters and the lens distortion coefficients is called the camera calibration method.

The camera internal/external parameters and lens distortion coefficients used for camera calibration are obtained as follows. The processing below can be executed by a camera calibration apparatus (not shown). The camera calibration apparatus can be realized as a program stored in a storage device together with a computer, and part or all of that program can instead be implemented in hardware.
Step 0. Input the pairs of observation-point and image-point coordinates.
Step 1. Assuming no lens distortion, roughly estimate the camera internal/external parameters.
Step 1-1. Roughly estimate the homography matrix between the plane B1 in space and the camera image sensor.
Step 1-2. Roughly estimate the camera internal parameters.
Step 1-3. Roughly estimate the camera external parameters.
Step 2. Roughly estimate the lens distortion coefficients.
Step 3. Refine the camera internal/external parameters and the lens distortion coefficients with an optimization method.

The processing flow is the same as the conventional one, but components dependent on lens decentering are newly introduced into the lens distortion coefficients obtained from Step 2 onward.

To obtain the camera internal/external parameters and the lens distortion coefficients (the coefficients of the distortion aberration and of the lens-eccentricity distortion), the input is the set of pairs of the coordinates of observation points in space and the coordinates of their image points on the camera image sensor, obtained by photographing n1 observation points on a single plane (n1 is an integer of 8 or more) n2 times (n2 is an integer of 3 or more) while changing the relative position of the camera and the plane.
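As an illustration of the shape of this input only, the following sketch (hypothetical variable names; NumPy is our assumption, not prescribed by this document) prepares containers for n1 planar observation points and their image points over n2 placements:

```python
import numpy as np

# n1 observation points on the Z = 0 plane (n1 must be >= 8): here a 3x3 grid
gx, gy = np.meshgrid(np.arange(3.0), np.arange(3.0))
X_plane = np.stack([gx.ravel(), gy.ravel()], axis=1)   # (n1, 2) world XY coordinates
n1 = len(X_plane)

n2 = 3   # number of plane placements photographed (n2 must be >= 3)

# one (n1, 2) array of measured pixel coordinates per placement
m_observed = np.zeros((n2, n1, 2))

assert n1 >= 8 and n2 >= 3
```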

The way the camera internal/external parameters and the lens distortion coefficients are obtained is described in detail below.
[Step 0. Input of pairs of observation-point coordinates and image-point coordinates]
Pairs of the coordinates of an observation point on the plane B1 and the coordinates of the point at which it forms an image on the image sensor are input. Here, the coordinates of the observation point on the plane B1 are expressed in the world coordinate system, and the coordinates of the image point on the image sensor are expressed in the camera coordinate axes.

[Step 1. Estimate the camera internal/external parameters assuming no lens distortion]
[Step 1-1. Estimate the homography matrix between the plane B1 in space and the camera image sensor]
The coordinates of the observation point p_B1,1 on the plane B1 are expressed in the world coordinate system and written in Cartesian coordinates as X_pB1,1 = [X_pB1,1, Y_pB1,1, Z_pB1,1]^T. The coordinates of the image point p_C,B1,1 of the observation point p_B1,1 on the image sensor of camera C are expressed in the digital image coordinate system of the image sensor and written as m_pC,B1,1 = [u_pC,B1,1, v_pC,B1,1]^T.

Let the plane B1 (101) be the Z = 0 plane of the world coordinate system [X, Y, Z]^T, and write its points in two-dimensional coordinates as X'_pB1,1 = [X_pB1,1, Y_pB1,1]^T. Then the coordinates of the observation point p_B1,1 on the plane B1 and of the image point p_C,B1,1 are related through a single homography matrix H_C,B1 = [h_C,B1,1 h_C,B1,2 h_C,B1,3], which links the plane B1 and the plane of the image sensor, as follows. Here h_C,B1,1, h_C,B1,2, and h_C,B1,3 are 3-row, 1-column column vectors.

s · m̃_pC,B1,1 = H_C,B1 · X̃'_pB1,1    (8.1)

Here, the tilde over a vector indicates homogeneous coordinates: m̃_pC,B1,1 = [u_pC,B1,1, v_pC,B1,1, 1]^T and X̃'_pB1,1 = [X_pB1,1, Y_pB1,1, 1]^T, and s is a scalar value.

Substituting the coordinates of points on the plane B1 and the coordinates of the corresponding image points on the image sensor into equation (8.1) yields equations from which the homography matrix H_C,B1 = [h_C,B1,1 h_C,B1,2 h_C,B1,3] is calculated. One way to do this is as follows. First, using n1 pairs (where n1 is an integer of 8 or more) of observation-point and image-point coordinates, form the following matrix L_C,B1.

Figure 0004970118

Here, 0 = [0, 0, 0]^T.

Next, the eigenvectors of L_C,B1^T L_C,B1 are computed, and the eigenvector with the smallest eigenvalue is taken as x_C,B1 = [h_C,B1,1^T h_C,B1,2^T h_C,B1,3^T]^T. Finally, rearranging the elements of this vector x_C,B1 gives the homography matrix H_C,B1 = [h_C,B1,1 h_C,B1,2 h_C,B1,3].
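Step 1-1 can be sketched as follows. This is a non-authoritative reading: NumPy, a row-major vectorization of H, and noise-free point pairs are our assumptions; the eigenvector of L^T L with the smallest eigenvalue plays exactly the role described above.

```python
import numpy as np

def estimate_homography(X, m):
    """DLT sketch of Step 1-1. X is (n1, 2) plane coordinates (Z = 0),
    m is (n1, 2) image coordinates; returns the 3x3 H with s*m~ = H*X~'."""
    rows = []
    for (Xp, Yp), (u, v) in zip(X, m):
        Xt = [Xp, Yp, 1.0]
        rows.append(Xt + [0.0, 0.0, 0.0] + [-u * c for c in Xt])
        rows.append([0.0, 0.0, 0.0] + Xt + [-v * c for c in Xt])
    L = np.asarray(rows)                    # (2*n1, 9) coefficient matrix
    w, V = np.linalg.eigh(L.T @ L)          # eigen-decomposition of L^T L
    x = V[:, np.argmin(w)]                  # eigenvector of smallest eigenvalue
    H = x.reshape(3, 3)                     # rearrange elements into H
    return H / H[2, 2]                      # fix the arbitrary scale
```

With exact correspondences the smallest eigenvalue is zero and the recovered H matches the true homography up to scale.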

[Step 1-2. Estimate the camera internal parameters]
Next, the elements of the homography matrix H_C,B1 = [h_C,B1,1 h_C,B1,2 h_C,B1,3] are written as follows.

Figure 0004970118

where i is an integer with 1 ≤ i ≤ 3.

Using this notation, the following vectors are formed.

Figure 0004970118

where j is an integer with 1 ≤ j ≤ 3.

The position of the plane is changed n2 times (where n2 is an integer of 3 or more) and a homography matrix H_C,Bj = [h_C,Bj,1 h_C,Bj,2 h_C,Bj,3] is formed for each placement; the n2 vectors v_C,B1,i,j through v_C,Bn2,i,j obtained from them are used to build the following matrix V_C.

Figure 0004970118

Next, the eigenvectors of V_C^T V_C are computed, and the eigenvector with the smallest eigenvalue is taken as b_C = [B_C,1,1 B_C,1,2 B_C,2,2 B_C,1,3 B_C,2,3 B_C,3,3]^T.

Using this b_C, the camera internal matrix A_C is estimated as follows.

A_C = [ α_C  γ_C  u_C,0 ; 0  β_C  v_C,0 ; 0  0  1 ]

where

Figure 0004970118

Here, α_C is the reciprocal of the product of the pixel pitch of the camera image sensor in the horizontal scanning direction and the focal length of the lens; β_C is the reciprocal of the product of the pixel pitch in the vertical scanning direction and the focal length of the lens, divided by the sine (sin) of the angle between the horizontal and vertical scanning directions; γ_C is the reciprocal of the product of the pixel pitch in the horizontal scanning direction and the focal length of the lens, divided by the tangent (tan) of the angle between the horizontal and vertical scanning directions; u_C,0 is the coordinate, in the horizontal scanning direction of the image sensor, of the intersection of the optical axis of the optical system with the image sensor; and v_C,0 is the coordinate of that intersection in the vertical scanning direction.
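The extraction of A_C from b_C can be sketched with the closed-form expressions used in Zhang-style calibration, which this step parallels. The exact formulas in the patent's equation image are not reproduced here, so the formulas below are the standard ones (with B proportional to A^-T A^-1), offered as an assumption rather than as the patent's own expressions:

```python
import numpy as np

def intrinsics_from_b(b):
    """Closed-form A_C from b_C = [B11, B12, B22, B13, B23, B33]
    (Zhang-style sketch; B is proportional to A^-T A^-1)."""
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
    gamma = -B12 * alpha ** 2 * beta / lam
    u0 = gamma * v0 / beta - B13 * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0],
                     [0.0,   beta,  v0],
                     [0.0,   0.0,  1.0]])
```

The unknown overall scale of b_C cancels out, which is why the eigenvector can be used directly.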

[Step 1-3. Estimate the camera external parameters]
Next, define the camera external matrix G_C,B1 = [R_C,B1 t_C,B1] from the camera rotation matrix R_C,B1, whose elements are the camera external parameters, and the camera translation vector t_C,B1. Using the results obtained so far, the camera external matrix G_C,B1 is estimated as

Figure 0004970118

where

Figure 0004970118

This completes the estimation of the camera external parameters.
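Since the equation images above are not reproduced here, the following sketch shows the standard recovery of the external parameters from a homography and the internal matrix, H = A [r1 r2 t] up to scale (our assumption about the form of the omitted equations):

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Sketch of Step 1-3: recover [R | t] from H = A [r1 r2 t] up to scale."""
    Ainv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(Ainv @ h1)   # scale factor
    r1 = lam * (Ainv @ h1)
    r2 = lam * (Ainv @ h2)
    r3 = np.cross(r1, r2)                   # third column completes the rotation
    t = lam * (Ainv @ h3)
    return np.column_stack([r1, r2, r3]), t
```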

[Step 2. Estimate the lens distortion coefficients]
Next, lens distortion coefficients for reflecting the lens distortion are calculated. The distortions considered here are those that are generally large among lens distortions: radial distortion (distortion aberration), and the distortion arising when the center of curvature of the lens is displaced from the lens reference axis, that is, when the lens is eccentric.

FIG. 2 shows the relationship between lens eccentricity and distortion. As shown in FIG. 2(a), there are two ways to express the eccentricity: one, shown in the upper part of FIG. 2(a), uses the translation E, and the other, shown in the lower part, uses the angle ε. When the displacement is small, E = Rε with radius of curvature R, so the translation E and the angle ε are proportional; either can be used to obtain the distortion coefficients with essentially no difference. In general, the distortion caused by lens eccentricity is small enough that E and ε may be treated as proportional.

FIG. 2(b) shows the distortion in terms of E. Relative to the undistorted image position l, there is a distortion component independent of the eccentricity, (V3‖l‖² + V5‖l‖⁴)l, and in addition a component dependent on the eccentricity, (V3E + V5E‖l‖²)(E·l)l. Adding these gives

l + (V3‖l‖² + V5‖l‖⁴) l + (V3E + V5E‖l‖²)(E·l) l

as the total distortion.

Writing these out in the elements of the camera coordinate system of camera C gives equation (8.10) below. However, the coefficients V3, V5, V3E, V5E, and E = [E_x E_y] are not used as they are; so that the matrix computation of equation (8.11) described later is easy, the coefficients multiplying each term are combined appropriately and replaced by the coefficients k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, and q_C,4.

Figure 0004970118

where

Figure 0004970118


Here, x̆_pC,B1,1 = [x̆_pC,B1,1, y̆_pC,B1,1]^T are the coordinates at which the image is formed when distorted by the lens, and x_pC,B1,1 = [x_pC,B1,1, y_pC,B1,1]^T are the coordinates at which the image would be formed without lens distortion. k_C,1 is a radial lens distortion coefficient independent of the lens eccentricity, for distortion proportional to the cube of the distance from the lens center; k_C,2 is a radial lens distortion coefficient independent of the lens eccentricity, for distortion proportional to the fifth power of the distance from the lens center; q_C,1 is a radial lens distortion coefficient dependent on the lens eccentricity, for distortion depending on the cube of the distance from the lens center and on the amount of eccentricity in the horizontal scanning direction of the image sensor; q_C,2 is a radial lens distortion coefficient dependent on the lens eccentricity, for distortion depending on the fifth power of the distance from the lens center and on the amount of eccentricity in the horizontal scanning direction of the image sensor; q_C,3 is a radial lens distortion coefficient dependent on the lens eccentricity, for distortion depending on the cube of the distance from the lens center and on the amount of eccentricity in the vertical scanning direction of the image sensor; and q_C,4 is a radial lens distortion coefficient dependent on the lens eccentricity, for distortion depending on the fifth power of the distance from the lens center and on the amount of eccentricity in the vertical scanning direction of the image sensor.

In the upper two equations of (8.10), the second term of each represents the amount of distortion that appears regardless of the eccentricity, and the third and fourth terms represent the amounts of distortion that depend on the eccentricity.
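Because equation (8.10) itself appears only as an image, the following is one plausible component form reconstructed from the coefficient descriptions and the total-distortion expression (a hedged sketch, not the patent's verbatim equation):

```python
def distort(x, y, k1, k2, q1, q2, q3, q4):
    """One plausible reading of eq. (8.10): a radial part independent of
    eccentricity (k1 r^2 + k2 r^4) plus an eccentricity-dependent part
    built from the (E . l)-style term, both scaling the position vector."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 * r2                      # k1 r^2 + k2 r^4
    decentering = (q1 + q2 * r2) * x + (q3 + q4 * r2) * y
    scale = 1.0 + radial + decentering
    return x * scale, y * scale
```

With all coefficients zero the mapping is the identity, matching the undistorted case.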

Using the above equations together with the relation between the camera coordinate system x_C = [x_C, y_C]^T and the digital coordinate system, m_C = A_C x_C, the lens distortion coefficients k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, and q_C,4 are estimated by the least squares method as follows. Since γ_C, one of the elements of A_C, is usually very small, it is set to 0 in this calculation as a trade-off between computational cost and accuracy.

Figure 0004970118

where

Figure 0004970118


In the above equations, all pairs of observation points and image points are used: there are n1 pairs of observation points on one plane and image points on the image sensor per plane, and the plane is imaged by the camera n2 times at different positions.

[Step 3. Refine the camera internal/external parameters and the lens distortion coefficients by an optimization method]
Next, the camera internal matrix A_C, the lens distortion coefficients k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, q_C,4, the camera rotation matrices R_C,Bj, and the camera translation vectors t_C,Bj are optimized so that the following expression becomes smallest, for example by the Levenberg-Marquardt method.

min over {A_C, k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, q_C,4, R_C,Bj, t_C,Bj} of Σ_{j=1..n2} Σ_{i=1..n1} ‖ m̆_pC,Bj,i − m[A_C, k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, q_C,4, R_C,Bj, t_C,Bj, X'_pBj,i] ‖²    (8.12)

Here, n1 is the number of pairs of observation points on one plane and image points on the image sensor, n2 is the number of times the plane is imaged at different positions, X'_pBj,i are the coordinates of an observation point, m̆_pC,Bj,i are the measured coordinates of an image point, and m[.] is the projection function that computes the coordinates of the image point formed on the camera image sensor from the observation point X'_pBj,i using A_C, k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, q_C,4, R_C,Bj, and t_C,Bj. The last argument X'_pBj,i of this projection function m[.] is a point in three-dimensional space whose coordinates, chosen by the user in the user-defined world coordinate system, are known. The arguments A_C, k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, q_C,4, R_C,Bj, and t_C,Bj are the parameters to be obtained. In this embodiment the parameters are obtained using the projection function m[.]; stated generally, the parameters are obtained using the epipolar-geometry constraints defined by the camera position and the three-dimensional observation points.

The projection function m[.] is expressed, using the lens distortion function D[k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, q_C,4, x_pC,Bj,i] that takes as input the coordinates x_pC,B1,1 = [x_pC,B1,1, y_pC,B1,1]^T formed without lens distortion and outputs the coordinates x̆_pC,B1,1 = [x̆_pC,B1,1, y̆_pC,B1,1]^T formed when distorted by the lens, as

Figure 0004970118

Here, x̆_pC,Bj,i is computed by

Figure 0004970118

and the lens distortion function D[k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, q_C,4, x_pC,Bj,i] is computed according to equation (8.10).

In the embodiment above, the optimization of equation (8.12) is performed by the Levenberg-Marquardt method, but a gradient method, Newton's method, or the like may also be used.
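Step 3 can be sketched with SciPy's Levenberg-Marquardt solver. Everything below is an assumption for illustration: the parameter packing, the Rodrigues rotation parameterization, γ_C fixed to 0, a single plane placement, and the reconstructed distortion form are ours, not the patent's:

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, X_plane):
    """Hypothetical projection m[.]: plane points (Z = 0) -> pixel coordinates.
    params = [alpha, beta, u0, v0, k1, k2, q1, q2, q3, q4, rx, ry, rz, tx, ty, tz]."""
    alpha, beta, u0, v0, k1, k2, q1, q2, q3, q4 = params[:10]
    rvec, t = params[10:13], params[13:16]
    th = np.linalg.norm(rvec)
    if th < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / th                        # Rodrigues formula for R
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)
    Xc = (R[:, :2] @ X_plane.T).T + t        # camera-frame points of the Z=0 plane
    x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]
    r2 = x * x + y * y
    d = k1 * r2 + k2 * r2 ** 2 + (q1 + q2 * r2) * x + (q3 + q4 * r2) * y
    xd, yd = x * (1 + d), y * (1 + d)        # reconstructed distortion model
    return np.column_stack([alpha * xd + u0, beta * yd + v0])

def refine(params0, X_plane, m_meas):
    """Minimize eq. (8.12)-style reprojection error with Levenberg-Marquardt."""
    residual = lambda p: (project(p, X_plane) - m_meas).ravel()
    return least_squares(residual, params0, method='lm').x
```

In practice one parameter set per plane placement (n2 rotation/translation blocks) would be stacked into the vector; a single placement is shown to keep the sketch short.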

In the embodiment, n1 is an integer of 8 or more and n2 is an integer of 3 or more. More precisely, n1 must be a number of observation-point pairs sufficient to determine the homography matrix H_C,B1 used in Step 1-1 uniquely, that is, such that the rank of L_C,B1^T L_C,B1 is 8 or more (L_C,B1^T L_C,B1 is a 9 × 9 matrix); nine pairs are desirable.

Similarly, n2 is needed to obtain the five independent variables of the camera internal matrix A_C in Step 1-2, and must be a number of shots such that the rank of V_C^T V_C is 5 or more (V_C^T V_C is a 6 × 6 matrix).

Using the distortion function D[.] whose coefficients are the optimized distortion coefficients k_C,1, k_C,2, q_C,1, q_C,2, q_C,3, q_C,4 obtained as above, the projection function m[.] whose coefficients are the optimized camera parameter matrix A_C and those optimized distortion coefficients, and their inverse functions, camera calibration, including image processing such as correcting image distortion and correcting distortion during three-dimensional reconstruction from images, can be carried out even when the optical system contains eccentricity-dependent distortion.
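The inverse functions mentioned above are not spelled out in this document; a common numerical approach (our suggestion, not the patent's) is fixed-point iteration on the forward model:

```python
def undistort(xd, yd, k1, k2, q1, q2, q3, q4, iters=20):
    """Invert the (reconstructed) distortion model numerically: iterate
    x <- x_distorted / (1 + d(x)), starting from the distorted point.
    Converges quickly for the mild distortions typical of real lenses."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        d = k1 * r2 + k2 * r2 ** 2 + (q1 + q2 * r2) * x + (q3 + q4 * r2) * y
        x, y = xd / (1.0 + d), yd / (1.0 + d)
    return x, y
```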

The effect of this embodiment is that the lens distortion correction includes not only the conventional distortion that does not depend on the lens eccentricity but also the distortion that does depend on it, so highly accurate correction of lens distortion is possible.

Brief description of the drawings: a diagram showing the relationship between observation points on a plane in space and image points on the camera image sensor; a diagram showing the relationship between lens eccentricity and lens distortion (the two ways of expressing the amount of eccentricity); a diagram showing the relationship between lens eccentricity and lens distortion (the relationship between the amount of eccentricity and the distortion); a diagram showing the relationship between lens eccentricity and lens distortion (the difference in the distorted image with and without lens eccentricity); and a diagram showing the processing flow.

Explanation of symbols

101 is the plane B1 in space, 102 the observation point p_B1,1 on the plane B1, 111 the camera C, 112 the lens, 113 the lens center, 114 the image sensor, 115 the image sensor origin, and 116 the image point p_C,B1,1 of the observation point p_B1,1.

Claims (5)

A camera calibration method in a camera calibration device that corrects distortion of a camera optical system from images taken by a camera, wherein the lens distortion function used in the calibration has eccentricity-dependent lens distortion as a parameter and is expressed as an expression including the sum of a lens distortion proportional to the cube of the distance from the optical-axis center of the lens on the imaging surface and to the amount of eccentricity in each axial direction of the imaging surface, and a lens distortion proportional to the fifth power of the distance from the optical-axis center of the lens on the imaging surface and to the amount of eccentricity in each axial direction of the imaging surface.
A camera calibration method in a camera calibration device that corrects distortion of a camera optical system from images taken by a camera, wherein the lens distortion function used in the calibration has eccentricity-dependent lens distortion as a parameter, the method comprising: a first step of estimating camera internal/external parameters from pairs of coordinates of observation points on one plane and coordinates of image points on the imaging surface, assuming no lens distortion; a second step of estimating distortion coefficients by the least squares method that minimizes the square of the difference between the observed position, on the camera imaging surface, of a point in three-dimensional space and the position on the camera imaging surface calculated using a lens distortion function that adds together both a distortion independent of the eccentricity of the optical system, whose amount varies only with the distance from the optical-axis center of the lens on the imaging surface regardless of the direction from that center, and a distortion dependent on the eccentricity, whose amount varies with both the direction and the distance from the optical-axis center of the lens on the imaging surface; and a third step of refining the camera internal/external parameters and the lens distortion coefficients using an optimization method that minimizes the difference between the observed position, on the camera imaging surface, of a point in three-dimensional space and the position on the camera imaging surface calculated using the lens distortion function, under the epipolar-geometry constraints defined by the camera position and the three-dimensional observation points.
A program for causing a computer to execute the camera calibration method according to claim 1 or 2.
A storage medium storing a program for causing a computer to execute the camera calibration method according to claim 1 or 2.
A camera calibration device that executes the camera calibration method according to claim 1 or 2.
JP2007102346A 2007-04-10 2007-04-10 Camera calibration method, program thereof, recording medium, and apparatus Expired - Fee Related JP4970118B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007102346A JP4970118B2 (en) 2007-04-10 2007-04-10 Camera calibration method, program thereof, recording medium, and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2007102346A JP4970118B2 (en) 2007-04-10 2007-04-10 Camera calibration method, program thereof, recording medium, and apparatus

Publications (2)

Publication Number Publication Date
JP2008262255A JP2008262255A (en) 2008-10-30
JP4970118B2 true JP4970118B2 (en) 2012-07-04

Family

ID=39984706

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007102346A Expired - Fee Related JP4970118B2 (en) 2007-04-10 2007-04-10 Camera calibration method, program thereof, recording medium, and apparatus

Country Status (1)

Country Link
JP (1) JP4970118B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103458181A (en) * 2013-06-29 2013-12-18 华为技术有限公司 Lens distortion parameter adjustment method and device and camera shooting device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5523017B2 (en) 2009-08-20 2014-06-18 キヤノン株式会社 Image processing apparatus and image processing method
CN108447100B (en) * 2018-04-26 2020-02-11 王涛 Method for calibrating eccentricity vector and visual axis eccentricity angle of airborne three-linear array CCD camera
CN111798521B (en) * 2019-04-09 2023-10-31 Oppo广东移动通信有限公司 Calibration method and device, storage medium and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6101288A (en) * 1997-07-28 2000-08-08 Digital Equipment Corporation Method for recovering radial distortion parameters from a single camera image
ATE543093T1 (en) * 1998-09-10 2012-02-15 Wallac Oy ANALYZER FOR A LARGE-AREA IMAGE
EP1252480A1 (en) * 1999-11-12 2002-10-30 Go Sensors, L.L.C. Image metrology methods and apparatus
FR2808326B1 (en) * 2000-04-27 2002-07-12 Commissariat Energie Atomique METHOD FOR MEASURING A THREE-DIMENSIONAL OBJECT, OR A SET OF OBJECTS

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103458181A (en) * 2013-06-29 2013-12-18 华为技术有限公司 Lens distortion parameter adjustment method and device and camera shooting device
CN103458181B (en) * 2013-06-29 2016-12-28 华为技术有限公司 Lens distortion parameter adjusting method, device and picture pick-up device

Also Published As

Publication number Publication date
JP2008262255A (en) 2008-10-30


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20090713

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120214

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120313

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120403

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120404

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150413

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

LAPS Cancellation because of no payment of annual fees