JP2007189312A - Imaging apparatus, imaging method, and camera - Google Patents


Info

Publication number
JP2007189312A
Authority
JP
Japan
Prior art keywords
pixel
imaging
information
exit pupil
optical system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2006003606A
Other languages
Japanese (ja)
Other versions
JP4946059B2 (en)
Inventor
Yosuke Kusaka
洋介 日下
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nikon Corp
Original Assignee
Nikon Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corp
Priority to JP2006003606A
Publication of JP2007189312A
Application granted
Publication of JP4946059B2
Expired - Fee Related
Anticipated expiration

Landscapes

  • Focusing (AREA)
  • Automatic Focus Adjustment (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Studio Devices (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a technology for efficiently and reliably correcting variations in the outputs of individual pixels using a small amount of correction data.

SOLUTION: An imaging apparatus images a subject using an image sensor 212 composed of pixels, each having a microlens arranged in front of its photoelectric conversion unit, arrayed in the vicinity of the planned image-forming plane. The apparatus is provided with a pixel information memory 218 that stores optical variation information for each pixel.

COPYRIGHT: (C)2007,JPO&INPIT

Description

The present invention relates to an imaging apparatus, an imaging method, and a camera.

In an imaging apparatus that performs focus detection by a microlens pupil-division method, vignetting by the optical system can occur in the focus detection light beams that the photoelectric conversion units should receive, making the pair of image outputs non-uniform. To correct this, a known imaging apparatus stores, as correction data, the outputs of each pair of pixels obtained when a uniform-luminance surface is imaged through the optical system; at the time of focus detection, the pair of pixel outputs is corrected according to this correction data, and focus detection is performed based on the corrected pair of image outputs (see, for example, Patent Document 1).

Prior art documents related to the invention of this application include the following:
JP 2002-131623 A

However, since the above-described conventional imaging apparatus corrects pixel outputs using pixel outputs for a uniform-luminance surface as correction data, an enormous amount of correction data must be measured and stored for every combination of optical-system conditions, such as the type of optical system, the focusing state, the zooming state, and the aperture state. This is impractical, and moreover the approach cannot cope with individual differences between the optical system used to acquire the correction data and the optical system actually used for imaging.

(1) The invention of claim 1 is an imaging apparatus that captures a subject image using an image sensor in which pixels, each having a microlens arranged in front of its photoelectric conversion unit, are arrayed in the vicinity of the planned image-forming plane of an optical system; the apparatus comprises pixel information storage means for storing optical variation information for each pixel.
(2) The imaging apparatus of claim 2 comprises output correction means for correcting the output of each pixel of the image sensor based on the per-pixel variation information stored in the pixel information storage means.
(3) The imaging apparatus of claim 3 comprises aperture information generating means for generating aperture information on the exit pupil of the optical system; the output correction means corrects the output of each pixel of the image sensor based on the per-pixel variation information in the pixel information storage means and the aperture information from the aperture information generating means.
(4) In the imaging apparatus of claim 4, the photoelectric conversion unit of each pixel consists of a pair of photoelectric conversion units, and the apparatus comprises focus detection means for detecting the focus adjustment state of the optical system based on the pairs of output data output from a plurality of the pixels.
(5) In the imaging apparatus of claim 5, the per-pixel variation information is the projection characteristics of the microlens resulting from the optical variation of each pixel.
(6) In the imaging apparatus of claim 6, the per-pixel variation information includes the projection direction of the microlens.
(7) In the imaging apparatus of claim 7, the per-pixel variation information includes the projection magnification of the microlens.
(8) In the imaging apparatus of claim 8, the aperture information is information on the diameter and position of the exit pupil of the optical system.
(9) In the imaging apparatus of claim 9, the aperture information generating means generates aperture information corresponding to the aperture, the focusing lens position, and the zoom lens position of the optical system.
(10) In the imaging apparatus of claim 10, the output correction means calculates the amount of light received by the photoelectric conversion unit of each pixel based on the per-pixel variation information and the aperture information, and corrects the output of each pixel according to this amount of light.
(11) In the imaging apparatus of claim 11, the image sensor and the optical system are incorporated into separate, mutually detachable structures.

According to the present invention, variations in the output of each pixel can be corrected efficiently and reliably by storing only a relatively small amount of correction data, regardless of the type of optical system or the conditions of its use.

An embodiment in which the imaging apparatus of the present invention is applied to a digital still camera will now be described. FIG. 1 shows the configuration of a digital still camera according to the embodiment. In the digital still camera 201, an interchangeable lens 202 is attached to the camera body 203 via a mount unit 204.

The interchangeable lens 202 contains a lens 209, a zooming lens 208, a focusing lens 210, an aperture 211, a lens drive control circuit 206, and so on. The lens drive control circuit 206 includes a microcomputer and exchanges various information with the body drive control circuit 214 via the electrical contacts 213 of the mount unit 204. The information sent from the lens drive control circuit 206 to the body drive control circuit 214 includes aperture information (described in detail later) on the interchangeable lens 202. The lens drive control circuit 206 also controls the driving of the focusing lens 210 and the aperture 211, and detects the states of the zooming lens 208, the focusing lens 210, and the aperture 211.

The camera body 203 contains an image sensor 212, the body drive control circuit 214, a liquid crystal display element drive circuit 215, a liquid crystal display element 216, an eyepiece lens 217, a pixel information memory 218, an image storage memory card 219, and so on. The image sensor 212 is disposed on the planned image-forming plane of the interchangeable lens 202; microlens-type imaging pixels are arrayed two-dimensionally on it, and microlens-type focus detection pixel rows are incorporated at a plurality of portions corresponding to a plurality of focus detection positions. The image sensor 212 is described in detail later.

The body drive control circuit 214 includes a microcomputer and controls the operation of the entire digital still camera 201. It also communicates with the lens drive control circuit 206 to receive the aperture information of the interchangeable lens 202 and to transmit the defocus amount, and reads the image signal from the image sensor 212. Furthermore, the body drive control circuit 214 corrects the image signal based on the aperture information and the pixel information, and detects the focus adjustment state (defocus amount) of the interchangeable lens 202.

The pixel information memory 218 consists of an electrically rewritable nonvolatile memory such as an EEPROM, and stores pixel information for correcting the per-pixel variation of the image sensor 212. The image storage memory card 219 stores images corrected by the body drive control circuit 214. The liquid crystal display element 216 of the liquid crystal viewfinder (EVF: electronic viewfinder) displays the subject image and various information under control of the liquid crystal display element drive circuit 215, and the photographer can view them through the eyepiece lens 217.

The subject image formed on the image sensor 212 through the interchangeable lens 202 is photoelectrically converted by the image sensor 212, and the image output is sent to the body drive control circuit 214. The body drive control circuit 214 communicates with the lens drive control circuit 206 to read out the aperture information of the interchangeable lens 202, corrects the output of each pixel based on this aperture information and the pixel information stored in the pixel information memory 218, calculates defocus amounts for the plurality of focus detection positions based on the corrected image signal, and sends the defocus amounts to the lens drive control circuit 206. The body drive control circuit 214 also stores the corrected image signal in the image storage memory card 219, and sends it to the liquid crystal display element drive circuit 215 for display on the liquid crystal display element 216.
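The flow just described, reading the aperture information, correcting each pixel output from the stored pixel information, then computing per-position defocus amounts, can be outlined as below. All names are hypothetical, and the per-pixel correction is reduced to a simple gain for brevity.

```python
# Outline of the body-side processing flow. The correction factor for each
# pixel would in practice be derived from the pixel information memory and
# the aperture information; here it is supplied directly as a gain.
def correct_pixel(raw: float, gain: float) -> float:
    return raw * gain

def process_frame(image, gains, focus_positions, compute_defocus):
    """Correct every pixel, then compute one defocus amount per focus
    detection position (positions 301-305 of Fig. 2 in the embodiment)."""
    corrected = [correct_pixel(v, g) for v, g in zip(image, gains)]
    defocus = {p: compute_defocus(corrected, p) for p in focus_positions}
    return corrected, defocus
```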

The lens drive control circuit 206 changes the aperture information according to the focusing state, zooming state, aperture setting state, and so on of the interchangeable lens 202. Specifically, the lens drive control circuit 206 monitors the positions of the zooming lens 208 and the focusing lens 210 and the set position of the aperture 211, and either computes the aperture information from this monitored information or selects the aperture information corresponding to the monitored information from a lookup table prepared in advance.
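The lookup-table alternative mentioned above can be sketched as follows; the table keys, step quantization, and values are purely illustrative assumptions, not taken from the patent.

```python
# Hypothetical lookup table indexed by the monitored lens state: for each
# (zoom step, focus step, aperture step) the lens stores the exit-pupil
# distance and diameter it reports as aperture information.
APERTURE_LUT = {
    # (zoom_step, focus_step, aperture_step): (pupil_distance_mm, pupil_diameter_mm)
    (0, 0, 0): (100.0, 35.7),
    (0, 0, 1): (100.0, 17.9),
    (1, 0, 0): (90.0, 32.1),
    (1, 0, 1): (90.0, 16.1),
}

def select_aperture_info(zoom_step: int, focus_step: int, aperture_step: int):
    """Return (exit-pupil distance, diameter) for the monitored lens state."""
    return APERTURE_LUT[(zoom_step, focus_step, aperture_step)]
```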

The body drive control circuit 214 corrects the image signals of the pixels included in each focus detection position according to the aperture information and the pixel information, then applies well-known focus detection processing to calculate the image shift amount between the pair of images at each focus detection position, multiplies each image shift amount by a predetermined conversion coefficient to calculate the defocus amount at each focus detection position, and transmits the defocus amounts to the lens drive control circuit 206. The lens drive control circuit 206 calculates a lens drive amount based on the defocus amount and drives the focusing lens 210 to the in-focus position accordingly.
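The defocus computation described above (image shift times a predetermined conversion coefficient) and the subsequent lens drive can be expressed as below; the linear drive model and all numeric values are assumptions for illustration only.

```python
def defocus_from_image_shift(image_shift: float, conversion_coeff: float) -> float:
    """Defocus amount = image shift x predetermined conversion coefficient.
    The coefficient depends on the distance-measuring pupil geometry and is
    not derived in this passage."""
    return image_shift * conversion_coeff

def lens_drive_amount(defocus: float, focus_sensitivity: float) -> float:
    """Hypothetical linear model for the lens side: the drive amount is the
    defocus amount scaled by the focusing sensitivity of the lens."""
    return defocus / focus_sensitivity
```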

FIG. 2 shows the focus detection positions of the digital still camera 201 of the embodiment (the focus detection positions are not limited to those of this embodiment). In this embodiment, a focus detection position 301 is placed at the center of the photographing frame 300, focus detection positions 302 and 303 at its upper and lower periphery, and focus detection positions 304 and 305 at its left and right periphery.

FIG. 3 is a front view showing the detailed configuration of the image sensor 212. In the image sensor 212, imaging pixels 310 are arrayed two-dimensionally, and focus detection pixels 311 are arrayed at the portions corresponding to the five focus detection positions 301 to 305 shown in FIG. 2. Each imaging pixel 310 comprises a microlens 10 and a photoelectric conversion unit 11 for imaging, and each focus detection pixel 311 comprises a microlens 10 and a pair of photoelectric conversion units 12 and 13 for focus detection.

FIG. 4 is a cross-sectional view of the imaging pixel 310. In the imaging pixel 310, the microlens 10 is disposed in front of the imaging photoelectric conversion unit 11, so that the microlens 10 projects the photoelectric conversion unit 11 forward.

FIG. 5 is a cross-sectional view of the focus detection pixel 311. In the focus detection pixel 311, the microlens 10 is disposed in front of the focus detection photoelectric conversion units 12 and 13, so that the microlens 10 projects the photoelectric conversion units 12 and 13 forward.

FIG. 6 illustrates the projection state of the photoelectric conversion units. Although the figure shows a focus detection pixel 311, the same applies to the imaging pixels 310. The direction 14 in which the center 16 of the photoelectric conversion units 12 and 13 is projected by the microlens 10 is the direction of the straight line connecting the center 16 and the principal point 18 of the microlens. The distance to which the microlens 10 projects the photoelectric conversion units 12 and 13 is determined by the focal length of the microlens 10, the distance d1 between the microlens 10 and the photoelectric conversion units 12 and 13, and the refractive index of the medium between them.
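The geometry just described can be sketched numerically. The coordinates are hypothetical: the microlens principal point is taken as the origin, with the photodiode centre a distance d1 behind it and laterally offset by (ox, oy); the projection direction is then the line through these two points, and the magnification sketch uses a thin-lens similar-triangles approximation.

```python
import math

def projection_direction(ox: float, oy: float, d1: float):
    """Unit vector from the photodiode centre through the microlens
    principal point (the projection direction 14 of Fig. 6)."""
    norm = math.sqrt(ox * ox + oy * oy + d1 * d1)
    return (-ox / norm, -oy / norm, d1 / norm)

def projection_magnification(d_proj: float, d1: float) -> float:
    """Lateral magnification of the projected photodiode image, taken as
    projection distance over photodiode-to-lens distance (similar triangles
    through the principal point; an approximation, not the patent's model)."""
    return d_proj / d1
```

A centred photodiode projects straight ahead along the pixel's axis, while any lateral offset tilts the projection direction away from it.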

Likewise, the magnification at which the microlens 10 projects the photoelectric conversion units 12 and 13 is determined by the projection distance, the distance d1 between the microlens 10 and the photoelectric conversion units 12 and 13, and the refractive index of the medium between them. The light beam 15 received by the photoelectric conversion units 12 and 13 through the microlens 10 is determined by the size and position of the projected image of the photoelectric conversion units 12 and 13, projected in the projection direction 14 at the projection magnification to the projection distance: the photoelectric conversion units 12 and 13 receive the light beam 15 that passes through the range of this projected image and heads toward the microlens.

FIG. 7 illustrates the relationship between the imaging pixels and the exit pupil. The figure schematically shows a pixel consisting of a microlens 50 and a photoelectric conversion unit 51 on the optical axis 91, and a pixel consisting of a microlens 60 and a photoelectric conversion unit 61 off the optical axis. In the figure, 90 is a virtual exit pupil, 91 is the optical axis of the optical system, 50 and 60 are microlenses, 51 and 61 are the photoelectric conversion units of the imaging pixels, 57 is the projection direction of the microlens 50, 67 is the projection direction of the microlens 60, 71 and 81 are imaging light beams, and 94 is the region onto which the photoelectric conversion units 51 and 61 are projected by the microlenses 50 and 60.

The microlenses 50 and 60 are disposed in the vicinity of the planned image-forming plane of the optical system. The shape of the photoelectric conversion unit 51 disposed behind the on-axis microlens 50 is projected by that microlens, in the projection direction 57, onto the virtual exit pupil 90 separated from the microlens 50 by the projection distance d4, and the projected shape forms the region 94. Similarly, the shape of the photoelectric conversion unit 61 disposed behind the off-axis microlens 60 is projected, in the projection direction 67, onto the virtual exit pupil 90 separated from the microlens 60 by the projection distance d4, and its projected shape also forms the region 94. That is, the projection direction of each pixel is determined so that the projected shapes (region 94) of the photoelectric conversion units of all pixels coincide on the virtual exit pupil 90 at the projection distance d4.
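The condition that every pixel's projected region coincides on the virtual exit pupil fixes the projection direction for each image height. A similar-triangles sketch under hypothetical coordinates (pixel at image height h, photodiode a distance d1 behind its microlens, pupil plane at distance d4 in front):

```python
def required_photodiode_offset(h: float, d1: float, d4: float) -> float:
    """Lateral offset of the photodiode centre relative to its microlens
    that aims the projection at the centre of the virtual exit pupil.

    The line through the photodiode centre and the lens principal point must
    reach the optical axis (lateral -h relative to the pixel) at distance d4
    in front of the image plane; similar triangles give offset = h * d1 / d4.
    """
    return h * d1 / d4
```

An on-axis pixel needs no offset, and the required offset grows linearly with image height, which is why the projection directions 57 and 67 in Fig. 7 differ.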

The photoelectric conversion unit 51 outputs a signal corresponding to the intensity of the image formed on the microlens 50 by the imaging light beam 71 that passes through the region 94 toward the microlens 50. Similarly, the photoelectric conversion unit 61 outputs a signal corresponding to the intensity of the image formed on the microlens 60 by the imaging light beam 81 that passes through the region 94 toward the microlens 60.

FIG. 8 illustrates the relationship between the focus detection pixels and the exit pupil. In the figure, 90 is the virtual exit pupil, 91 is the optical axis of the optical system, 50 and 60 are microlenses, (52, 53) and (62, 63) are the pairs of photoelectric conversion units of the focus detection pixels, 57 is the projection direction of the microlens 50, 67 is the projection direction of the microlens 60, 72, 73, 82, and 83 are focus detection light beams, 92 is the region (distance-measuring pupil) onto which the photoelectric conversion units 52 and 62 are projected by the microlenses 50 and 60, and 93 is the region (distance-measuring pupil) onto which the photoelectric conversion units 53 and 63 are projected by the microlenses 50 and 60.

FIG. 8 schematically shows a pixel consisting of the microlens 50 on the optical axis 91 and a pair of photoelectric conversion units 52 and 53, and a pixel consisting of the off-axis microlens 60 and a pair of photoelectric conversion units 62 and 63. The microlenses 50 and 60 are disposed in the vicinity of the planned image-forming plane of the optical system. The shapes of the pair of photoelectric conversion units 52 and 53 disposed behind the on-axis microlens 50 are projected, in the projection direction 57, onto the virtual exit pupil 90 separated from the microlens 50 by the projection distance d4, and the projected shapes form the distance-measuring pupils 92 and 93.

Likewise, the shapes of the pair of photoelectric conversion units 62 and 63 disposed behind the off-axis microlens 60 are projected, in the projection direction 67, onto the virtual exit pupil 90 separated from the microlens 60 by the projection distance d4, and the projected shapes form the same distance-measuring pupils 92 and 93. That is, the projection direction of each pixel is determined so that the projected shapes (distance-measuring pupils 92 and 93) of the photoelectric conversion units of all pixels coincide on the virtual exit pupil 90 at the projection distance d4.

The photoelectric conversion unit 52 outputs a signal corresponding to the intensity of the image formed on the microlens 50 by the focus detection light beam 72 that passes through the distance-measuring pupil 92 toward the microlens 50, and the photoelectric conversion unit 53 outputs a signal corresponding to the intensity of the image formed on the microlens 50 by the focus detection light beam 73 that passes through the distance-measuring pupil 93 toward the microlens 50. Similarly, the photoelectric conversion unit 62 outputs a signal corresponding to the intensity of the image formed on the microlens 60 by the focus detection light beam 82 that passes through the distance-measuring pupil 92 toward the microlens 60, and the photoelectric conversion unit 63 outputs a signal corresponding to the intensity of the image formed on the microlens 60 by the focus detection light beam 83 that passes through the distance-measuring pupil 93 toward the microlens 60.

By arranging a large number of such focus detection pixels in an array and grouping the outputs of the pairs of photoelectric conversion units behind them, information is obtained on the intensity distributions of the pair of images that the focus detection light beams passing through the distance-measuring pupil 92 and the distance-measuring pupil 93 form on the pixel row. Applying well-known image shift detection processing (correlation processing, phase difference detection processing) to this information detects the image shift amount between the pair of images by the so-called pupil-division focus detection method. Multiplying the image shift amount by a predetermined conversion coefficient then yields the deviation (defocus amount) of the current image-forming plane (the image-forming plane at the focus detection position corresponding to the position of the microlens array on the planned image-forming plane) from the planned image-forming plane.
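The image shift detection step can be illustrated with a minimal integer-shift correlation: slide one image against the other and keep the shift that best matches. This is not the patent's specific algorithm; real implementations also interpolate around the minimum to obtain sub-pixel shifts.

```python
def image_shift(a, b, max_shift):
    """Return the integer shift of image b relative to image a that
    minimizes the mean sum of absolute differences (SAD correlation)."""
    best_shift, best_sad = 0, float("inf")
    n = len(a)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(a[i], b[i + s]) for i in range(n) if 0 <= i + s < n]
        sad = sum(abs(x - y) for x, y in pairs) / len(pairs)
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift
```

Multiplying the detected shift by the conversion coefficient, as described above, gives the defocus amount.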

FIG. 9 illustrates vignetting of the imaging light beams. In the figure, 5 is the planned image-forming plane (the plane on which the image sensor is disposed), 3 and 4 are imaging light beams, 45 is the intersection of the planned image-forming plane 5 and the optical axis 91, 46 is a point on the planned image-forming plane 5 separated from the optical axis 91, 40 is the virtual exit pupil plane, 41 and 44 are exit pupils of the optical system's aperture at positions other than the virtual exit pupil plane 40, and 68 is the projection direction of the pixel at the point 46.

When an imaging pixel lies on the optical axis, the region 94 is formed on the virtual exit pupil plane 40 set at the distance d4 from the planned image-forming plane 5. If the exit pupil of the optical system coincides with this virtual exit pupil plane 40 and its diameter (shape) is large enough to contain the region 94, no vignetting of the imaging light beam 4 occurs. If the exit pupil of the optical system lies at the position of the virtual exit pupil plane 40 but its diameter becomes too small to contain the region 94, part of the imaging light beam 4 is blocked, producing the phenomenon known as vignetting.

In general, the region 94 is set symmetrically about the optical axis 91, and the shape of the exit pupil is also symmetrical about the optical axis 91, so even when vignetting occurs, the imaging light beam 4 passing through the region 94 is vignetted symmetrically. Accordingly, when the exit pupil of the optical system lies on the virtual exit pupil plane 40, the output of the imaging pixel on the optical axis 91 is proportional to the area over which the exit pupil of the optical system and the region 94 overlap. Even when the (smaller) exit pupils 41 and 44 of the optical system lie at positions other than the virtual exit pupil plane 40 (at distances d5 (<d4) and d6 (>d4) from the planned image-forming plane 5), the imaging light beam 4 is always blocked symmetrically about the optical axis, so the output of the on-axis imaging pixel is proportional to the area over which the exit pupil of the optical system and the region through which the imaging light beam 4 passes overlap.
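The proportionality to the overlap area can be made concrete by modelling both the exit pupil and the region 94 as circles on the pupil plane (a simplification; the actual shapes need not be circular). The overlap is then the standard circle-circle intersection area.

```python
import math

def circle_overlap_area(r1: float, r2: float, d: float) -> float:
    """Intersection area of two circles of radii r1, r2 whose centres are a
    distance d apart (exit pupil vs. projected region 94, both idealized)."""
    if d >= r1 + r2:
        return 0.0                  # disjoint: the beam is fully vignetted
    if d <= abs(r1 - r2):
        r = min(r1, r2)             # one circle entirely inside the other
        return math.pi * r * r
    # partial overlap: sum of the two circular-segment areas
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                         * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3
```

A deviation of the projection direction shifts the centre of the region 94, changing the centre distance d and hence the overlap area; this is precisely the per-pixel variation that the stored pixel information allows the camera to correct.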

However, if variations in the construction of an on-axis imaging pixel (the curvature of the microlens, a shift of its optical axis, the distance between the microlens and the light-receiving portion, the refractive index of the microlens, and so on) shift the projection direction (nominally coincident with the optical axis 91) from the design value, the region 94 is set at a position asymmetrical with respect to the optical axis. When the exit pupil diameter of the optical system is small, the overlap area between the exit pupil and the region through which the imaging light beam 4 passes then differs, regardless of the exit pupil position, from the area obtained when the projection direction has its design value, so the output level obtained when imaging a uniform-luminance surface can differ from the design case.

The shape of region 94 also changes with deviations in the projection magnification and the projection distance. That is, when the imaging pixel is on the optical axis, variations in the projection direction, projection magnification, projection distance, and so on cause the area over which the exit pupil of the optical system and the region through which the imaging light beam 4 passes overlap to differ from the area obtained at the design values, so that the output of the photoelectric conversion section when a surface of uniform brightness is imaged varies relative to the output obtained when the projection direction, projection magnification, and projection distance are at their design values.

When the imaging pixel is at a position 46 away from the optical axis, region 94 is formed on the virtual exit pupil plane 40 set at a distance d4 from the planned imaging plane 5. If the exit pupil of the optical system coincides in position with this virtual exit pupil plane 40 and, in addition, the exit pupil diameter is of a size (shape) that encompasses region 94, no vignetting of the imaging light beam 3 occurs. If the exit pupil of the optical system lies at a position coinciding with the virtual exit pupil plane 40 but its diameter becomes too small to encompass region 94, part of the imaging light beam 3 is blocked and the phenomenon known as vignetting occurs.

In general, region 94 is set symmetrically about the optical axis 91, and the shape of the exit pupil is also symmetrical about the optical axis 91, so even when vignetting occurs, the vignetting of the imaging light beam 3 passing through region 94 is likewise symmetrical. Consequently, when the exit pupil of the optical system lies on the virtual exit pupil plane 40, the output of the imaging pixel at position 46 is proportional to the area over which the exit pupil of the optical system and the region through which the imaging light beam 3 passes overlap. However, when the exit pupil 41 or 44 of the optical system (with a small exit pupil diameter) lies at a position other than the virtual exit pupil plane 40 (at distance d5 (<d4) or distance d6 (>d4) from the planned imaging plane 5), the imaging light beam 3 is blocked asymmetrically, so the area over which the exit pupil of the optical system and the region through which the imaging light beam 3 passes overlap differs from the area obtained when the exit pupil lies on the virtual exit pupil plane 40, and cases arise in which the output level produced when a surface of uniform brightness is imaged differs depending on the exit pupil position.

Furthermore, if the projection direction 68 of region 94 deviates from the design value due to variations in the pixel construction of the imaging pixel, region 94 is set at a position that is asymmetrical with respect to the optical axis. Therefore, when the exit pupil diameter is small, regardless of the position of the exit pupil, the vignetting of the imaging light beam 3 becomes asymmetrical, and the output of the imaging pixel at position 46 when a surface of uniform brightness is imaged no longer matches, for the same exit pupil, the output that would be obtained if the pixel construction of the imaging pixel were at its design values. The shape of region 94 also changes with deviations in the projection magnification and the projection distance. That is, when the imaging pixel is away from the optical axis, variations in the projection direction, projection magnification, projection distance, and so on cause the output of the photoelectric conversion section when a surface of uniform brightness is imaged to vary relative to the output obtained when the projection direction, projection magnification, projection distance, and so on are at their design values.

Note that when the imaging pixel is at a position away from the optical axis and the exit pupil of the optical system does not coincide with the virtual exit pupil plane, the imaging light beam moves away from the optical axis and is more easily vignetted than when the imaging pixel is on the optical axis. Accordingly, the output produced when a surface of uniform brightness is imaged drops below the output of the imaging pixel on the optical axis, and the magnitude of the drop increases in proportion to the distance from the optical axis. This is the phenomenon generally referred to as shading.

FIG. 10 is a diagram for explaining vignetting of the focus detection light beams. In the figure, 5 is the planned imaging plane; 6, 7, 8, and 9 are focus detection light beams; 45 is the intersection of the planned imaging plane 5 and the optical axis 91; 46 is a point on the planned imaging plane 5 separated from the optical axis 91; 40 is the virtual exit pupil plane; 41 and 44 are exit pupils of the aperture of the optical system at positions other than the virtual exit pupil plane 40; and 68 is the projection direction of the pixel at point 46.

When the focus detection pixel is positioned on the optical axis, the ranging pupils 92 and 93 are formed on the virtual exit pupil plane 40 set at a distance d4 from the planned imaging plane 5. If the exit pupil of the optical system coincides in position with this virtual exit pupil plane 40 and, in addition, the exit pupil diameter is of a size (shape) that encompasses the ranging pupils 92 and 93, no vignetting of the focus detection light beams 8 and 9 occurs. However, if the exit pupil of the optical system lies at a position coinciding with the virtual exit pupil plane 40 but its diameter becomes too small to encompass the ranging pupils 92 and 93, part of the focus detection light beams 8 and 9 is blocked and the phenomenon known as vignetting occurs.

In general, the ranging pupils 92 and 93 are set symmetrically about the optical axis 91, and the shape of the exit pupil is also symmetrical about the optical axis 91, so even when vignetting occurs, the vignetting of the focus detection light beams 8 and 9 passing through the ranging pupils 92 and 93 is likewise symmetrical. Consequently, when the exit pupil of the optical system lies on the virtual exit pupil plane 40, the levels of the pair of outputs of the focus detection pixel on the optical axis 91 may change, but their ratio does not change even if vignetting occurs. Furthermore, even when the exit pupil 41 or 44 of the optical system (with a small exit pupil diameter) lies at a position other than the virtual exit pupil plane 40 (at distance d5 (<d4) or distance d6 (>d4) from the planned imaging plane 5), the focus detection light beams 8 and 9 are always blocked symmetrically about the optical axis, so although the levels of the pair of outputs of the focus detection pixel on the optical axis 91 may change, their ratio does not change even if vignetting occurs.

However, even for a focus detection pixel on the optical axis, if the projection direction of the ranging pupils (which should coincide with the optical axis 91) deviates from the design value due to variations in the pixel construction, the ranging pupils 92 and 93 are set at positions that are asymmetrical with respect to the optical axis. Therefore, when the exit pupil diameter of the optical system is small, regardless of the position of the exit pupil, the vignetting of the focus detection light beams 8 and 9 becomes asymmetrical, and the pair of outputs of the focus detection pixel on the optical axis 91 no longer match when a surface of uniform brightness is imaged.

The degree of mismatch between the pair of outputs also changes with the exit pupil diameter of the optical system and the shape of the ranging pupils. The shape of the ranging pupils also changes with deviations in the projection magnification and the projection distance. That is, when the focus detection pixel is on the optical axis, variations in the projection direction, projection magnification, projection distance, and so on cause the output levels and the output ratio of the pair of photoelectric conversion sections to vary when a surface of uniform brightness is imaged.

When the focus detection pixel is at a position 46 away from the optical axis, the ranging pupils 92 and 93 are formed on the virtual exit pupil plane 40 set at a distance d4 from the planned imaging plane 5. If the exit pupil of the optical system coincides in position with this virtual exit pupil plane 40 and, in addition, the exit pupil diameter is of a size (shape) that encompasses the ranging pupils 92 and 93, no vignetting of the focus detection light beams 6 and 7 occurs. However, if the exit pupil of the optical system lies at a position coinciding with the virtual exit pupil plane 40 but its diameter becomes too small to encompass the ranging pupils 92 and 93, part of the focus detection light beams 6 and 7 is blocked and the phenomenon known as vignetting occurs.

In general, the ranging pupils 92 and 93 are set symmetrically about the optical axis 91, and the shape of the exit pupil is also symmetrical about the optical axis 91, so even when vignetting occurs, the vignetting of the focus detection light beams 6 and 7 passing through the ranging pupils 92 and 93 is likewise symmetrical. Consequently, when the exit pupil of the optical system lies on the virtual exit pupil plane 40, the levels of the pair of outputs of the focus detection pixel at position 46 may change, but their ratio does not change even if vignetting occurs. However, when the exit pupil 41 or 44 of the optical system (with a small exit pupil diameter) lies at a position other than the virtual exit pupil plane 40 (at distance d5 (<d4) or distance d6 (>d4) from the planned imaging plane 5), the focus detection light beams 6 and 7 are blocked asymmetrically, so the ratio of the pair of outputs of the focus detection pixel at position 46 changes as a result of the vignetting.

Furthermore, if the projection direction 68 of the ranging pupils deviates from the design value due to variations in the pixel construction of the focus detection pixel, the ranging pupils 92 and 93 are no longer set at positions symmetrical with respect to the optical axis. Therefore, when the exit pupil diameter is small, regardless of the position of the exit pupil, the vignetting of the focus detection light beams 6 and 7 becomes asymmetrical, and the pair of outputs of the focus detection pixel at position 46 no longer match when a surface of uniform brightness is imaged.

The degree of mismatch between the pair of outputs also changes with the exit pupil diameter of the optical system and the shape of the ranging pupils. The shape of the ranging pupils also changes with deviations in the projection magnification and the projection distance. That is, when the on-screen position of the focus detection pixel is away from the optical axis, variations in the projection direction, projection magnification, projection distance, and so on cause the outputs and the output ratio of the pair of photoelectric conversion sections to vary when a surface of uniform brightness is imaged.

FIG. 11 is a diagram for explaining variation in the projection direction, and shows the relationship on the virtual exit pupil plane between the exit pupil of the optical system and the projection region of the photoelectric conversion section of an imaging pixel. Here, the exit pupil 42 of the optical system is assumed to coincide with the virtual exit pupil plane. If the projection direction is as designed, the center of region 30 (solid line), obtained by projecting the photoelectric conversion section of the imaging pixel on the optical axis onto the virtual exit pupil plane through the microlens, coincides with the center of the exit pupil 42. If the projection direction deviates, region 31 (broken line), the projection of the photoelectric conversion section onto the virtual exit pupil plane through the microlens, is displaced from region 30. When the outline of region 30 lies near the outline of the exit pupil 42, the area over which region 30 and the exit pupil 42 overlap differs from the area over which region 31 and the exit pupil 42 overlap. Consequently, when the same brightness surface is imaged, the output level of the imaging pixel differs between the case where the projection direction is as designed and the case where the projection direction is in error.

Based on the error information for the projection direction, the size information of regions 30 and 31 (calculated from the size of the photoelectric conversion section, the projection magnification, the projection distance, and so on), and the information (size and position) on the exit pupil 42 of the optical system, the ratio of the area over which region 30 and the exit pupil 42 overlap to the area over which region 31 and the exit pupil 42 overlap is calculated. By correcting, with this ratio, the output level of the imaging pixel affected by the projection direction error, the output level can be corrected to the level that would be obtained if there were no error in the projection direction.
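As a numerical illustration, the area-ratio correction described in this paragraph can be sketched by modeling both the exit pupil 42 and the projected regions 30 and 31 as circles on the virtual exit pupil plane. This is a simplified sketch, not the implementation disclosed here: the circular shapes, the normalized radii, and the shift value are all hypothetical.

```python
import math

def circle_overlap_area(r1, r2, d):
    """Area of the intersection of two circles with radii r1 and r2
    whose centers are a distance d apart (circular-segment formula)."""
    if d >= r1 + r2:          # circles are disjoint
        return 0.0
    if d <= abs(r1 - r2):     # one circle contains the other
        return math.pi * min(r1, r2) ** 2
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                         * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

def direction_error_correction(pupil_r, region_r, shift):
    """Ratio of the design overlap (region 30, centered on the pupil)
    to the shifted overlap (region 31, decentred by `shift`)."""
    design = circle_overlap_area(pupil_r, region_r, 0.0)
    actual = circle_overlap_area(pupil_r, region_r, shift)
    return design / actual

# A pixel whose projection is decentred collects less light, so the
# correction factor exceeds 1; multiplying the measured output by it
# restores the design output level.
measured = 1000.0
factor = direction_error_correction(pupil_r=1.0, region_r=1.0, shift=0.3)
corrected = measured * factor
```

The same overlap function applies whenever both shapes are treated as circles; only the separation `d` encodes the projection-direction error.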

FIG. 12 is a diagram for explaining variation in the projection magnification, and shows the relationship on the virtual exit pupil plane between the exit pupil of the optical system and the projection region of the photoelectric conversion section of an imaging pixel. Here, the exit pupil 42 of the optical system is assumed to coincide with the virtual exit pupil plane. If the projection direction is as designed, the center of region 30 (solid line), obtained by projecting the photoelectric conversion section of the imaging pixel on the optical axis onto the virtual exit pupil plane through the microlens, coincides with the center of the exit pupil 42. If the projection magnification deviates (in the figure it becomes smaller), the size of region 32 (broken line), the projection of the photoelectric conversion section onto the virtual exit pupil plane through the microlens, changes from the size of region 30, and the area over which region 30 and the exit pupil 42 overlap differs from the area over which region 32 and the exit pupil 42 overlap. Consequently, when the same brightness surface is imaged, the output level of the imaging pixel differs between the case where the projection magnification is as designed and the case where the projection magnification is in error.

Based on the error information for the projection magnification, the size information of regions 30 and 32 (calculated from the size of the photoelectric conversion section, the projection magnification, the projection distance, and so on), and the information (size and position) on the exit pupil 42 of the optical system, the ratio of the area over which region 30 and the exit pupil 42 overlap to the area over which region 32 and the exit pupil 42 overlap is calculated. By correcting, with this ratio, the output level of the imaging pixel affected by the projection magnification error, the output level can be corrected to the level that would be obtained if there were no error in the projection magnification.
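A correspondingly minimal sketch for the magnification case: if region and exit pupil are modeled as concentric circles, a magnification error simply rescales the region radius, and the correction factor is the design overlap divided by the actual overlap. All radii below are hypothetical normalized values, not values from this disclosure.

```python
import math

def concentric_overlap(pupil_r, region_r):
    # For concentric circles the overlap is simply the smaller disc.
    return math.pi * min(pupil_r, region_r) ** 2

def magnification_error_correction(pupil_r, design_r, actual_r):
    """Ratio of the design overlap (region 30) to the overlap of
    region 32, whose radius is rescaled by the magnification error."""
    return concentric_overlap(pupil_r, design_r) / concentric_overlap(pupil_r, actual_r)

# The region is designed slightly larger than the pupil; the magnification
# error shrinks it below the pupil, so the pixel under-collects light and
# the correction factor exceeds 1.
factor = magnification_error_correction(pupil_r=1.0, design_r=1.1, actual_r=0.9)
```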

FIG. 13 is a diagram for explaining variation in the projection direction, and shows the relationship on the virtual exit pupil plane between the exit pupil of the optical system and the projection regions (ranging pupils) of the pair of photoelectric conversion sections of a focus detection pixel. Here, the exit pupil 42 of the optical system is assumed to coincide with the virtual exit pupil plane. If the projection direction is as designed, the ranging pupils 20 and 21 (solid lines), obtained by projecting the pair of photoelectric conversion sections of the focus detection pixel on the optical axis onto the virtual exit pupil plane through the microlens, are line-symmetrical about a straight line passing through the center of the virtual exit pupil plane (the Y axis in the figure). If the projection direction deviates (in the X-axis and Y-axis directions in the figure), the ranging pupils 22 and 23 (broken lines), the projections of the pair of photoelectric conversion sections onto the virtual exit pupil plane through the microlens, are displaced from the ranging pupils 20 and 21 in the X-axis and Y-axis directions. When the outlines of the ranging pupils 20 and 21 lie near the outline of the exit pupil 42, the area over which the ranging pupils 20 and 21 and the exit pupil 42 overlap differs from the area over which the ranging pupils 22 and 23 and the exit pupil 42 overlap. Consequently, when light from the same brightness surface is received by the focus detection pixel, the ratio of the pair of outputs of the focus detection pixel differs between the case where the projection direction is as designed and the case where the projection direction is in error.

Based on the error information for the projection direction (the amounts of deviation in the X-axis and Y-axis directions with respect to the center of the virtual exit pupil plane), the size information of the ranging pupils 20, 21 and the ranging pupils 22, 23 (calculated from the size of the photoelectric conversion sections, the projection magnification, the projection distance, and so on), and the information (size and position) on the exit pupil 42 of the optical system, the ratio of the area over which the ranging pupils 20 and 21 and the exit pupil 42 overlap to the area over which the ranging pupils 22 and 23 and the exit pupil 42 overlap is calculated. By correcting, with this ratio, the output levels of the focus detection pixel affected by the projection direction error, the output levels can be corrected to those that would be obtained if there were no error in the projection direction (that is, so that the ratio of the pair of outputs becomes 1).
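For the pair of ranging pupils the overlap areas are easier to evaluate numerically than in closed form. The sketch below grid-samples the overlap of each ranging pupil, modeled purely for illustration as the left and right halves of a unit disc, with a circular exit pupil, before and after a hypothetical projection-direction shift; the shapes, radii, and shift are assumptions, not values from this disclosure.

```python
import math

def sampled_overlap(pupil_r, region_pred, shift=(0.0, 0.0), n=300, extent=2.0):
    """Grid-sample the area where a circular exit pupil of radius pupil_r
    (centered at the origin) overlaps the region described by the
    predicate region_pred, after displacing the region by `shift`."""
    step = 2.0 * extent / n
    cell = step * step
    area = 0.0
    for i in range(n):
        x = -extent + (i + 0.5) * step
        for j in range(n):
            y = -extent + (j + 0.5) * step
            if x * x + y * y > pupil_r * pupil_r:
                continue                      # outside the exit pupil
            if region_pred(x - shift[0], y - shift[1]):
                area += cell
    return area

# Ranging pupils 20 and 21 modeled as the left/right halves of a unit disc.
left  = lambda x, y: x <= 0.0 and x * x + y * y <= 1.0
right = lambda x, y: x >= 0.0 and x * x + y * y <= 1.0

pupil_r = 0.8
shift = (0.2, 0.1)            # hypothetical projection-direction error
a_left  = sampled_overlap(pupil_r, left,  shift)
a_right = sampled_overlap(pupil_r, right, shift)
# Without the shift the two overlaps are equal (output ratio 1); the shift
# makes them differ, and dividing each output by its overlap area restores
# the ratio to 1 for a surface of uniform brightness.
```

Grid sampling trades accuracy for generality: any ranging-pupil outline expressible as a predicate can be handled without a closed-form intersection formula.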

FIG. 14 is a diagram for explaining the influence of the exit pupil size on variation in the projection direction. When the outline of the exit pupil 43 becomes smaller, the same deviation in the projection direction of the ranging pupils produces, as shown in FIG. 14, a larger difference between the area over which the ranging pupils 20 and 21 and the exit pupil 43 overlap and the area over which the ranging pupils 22 and 23 and the exit pupil 43 overlap than the difference shown in FIG. 13. Consequently, when light from the same brightness surface is received by the focus detection pixel, a larger difference arises in the ratio of the pair of outputs of the focus detection pixel when the exit pupil diameter of the optical system is small.

FIG. 15 is a diagram for explaining variation in the projection magnification, and shows the relationship on the virtual exit pupil plane between the exit pupil of the optical system and the projection regions (ranging pupils) of the pair of photoelectric conversion sections of a focus detection pixel. Here, the exit pupil 42 of the optical system is assumed to coincide with the virtual exit pupil plane. Projecting the pair of photoelectric conversion sections of the focus detection pixel on the optical axis onto the virtual exit pupil plane through the microlens at the design projection magnification yields the ranging pupils 20 and 21 (solid lines). If the projection magnification deviates (in the figure it becomes smaller), the ranging pupils 24 and 25 (broken lines), the projections of the pair of photoelectric conversion sections onto the virtual exit pupil plane through the microlens, change in size from the ranging pupils 20 and 21, and the area over which the ranging pupils 20 and 21 and the exit pupil 42 overlap differs from the area over which the ranging pupils 24 and 25 and the exit pupil 42 overlap. Consequently, when light from the same brightness surface is received by the focus detection pixel, a difference arises in the pair of output levels of the focus detection pixel between the case where the projection magnification is as designed and the case where the projection magnification is in error.

Based on the error information for the projection magnification, the size information of the ranging pupils 20 and 21 (calculated from the size of the photoelectric conversion sections, the projection magnification, the projection distance, and so on), and the information (size and position) on the exit pupil 42 of the optical system, the ratio of the area over which the ranging pupils 20 and 21 and the exit pupil 42 overlap to the area over which the ranging pupils 24 and 25 and the exit pupil 42 overlap is calculated. By correcting, with this ratio, the output levels of the focus detection pixel affected by the projection magnification error, the output levels can be corrected to those that would be obtained if there were no error in the projection magnification.

FIG. 16 is a diagram for explaining simultaneous variation in the projection direction and the projection magnification, and shows the relationship on the virtual exit pupil plane between the exit pupil of the optical system and the projection regions (ranging pupils) of the pair of photoelectric conversion sections of a focus detection pixel. Here, the exit pupil 43 of the optical system is assumed to coincide with the virtual exit pupil plane. The ranging pupils 20 and 21 (solid lines) show the ranging pupil regions obtained when the pair of photoelectric conversion sections of the focus detection pixel on the optical axis are projected onto the virtual exit pupil plane through the microlens at the design projection direction and projection magnification. If both the projection direction and the projection magnification deviate at the same time, the ranging pupils 26 and 27 (broken lines), the projections of the pair of photoelectric conversion sections onto the virtual exit pupil plane through the microlens, are displaced from the ranging pupils 20 and 21 in the X-axis and Y-axis directions and also change in size, so the area over which the ranging pupils 20 and 21 and the exit pupil 43 overlap differs from the area over which the ranging pupils 26 and 27 and the exit pupil 43 overlap. Consequently, when light from the same brightness surface is received by the focus detection pixel, a difference arises in the ratio of the pair of outputs of the focus detection pixel between the case where the projection direction and projection magnification are as designed and the case where they are in error.

Based on the error information for the projection direction and projection magnification, the size information of the ranging pupils 20 and 21 (calculated from the size of the photoelectric conversion sections, the projection direction, the projection magnification, the projection distance, and so on), and the information (size and position) on the exit pupil 43 of the optical system, the ratio of the area over which the ranging pupils 20 and 21 and the exit pupil 43 overlap to the area over which the ranging pupils 26 and 27 and the exit pupil 43 overlap is calculated. By correcting, with this ratio, the output levels of the focus detection pixel affected by the projection direction and projection magnification errors, the output levels can be corrected to those that would be obtained if there were no error in the projection direction and projection magnification.

FIG. 17 is a diagram for explaining the influence of the pixel position and of the ray distribution within the light beam, and shows the relationship on the virtual exit pupil plane between the exit pupil of the optical system and the projection region of the photoelectric conversion section of an imaging pixel. Here, the exit pupil 47 of the optical system is assumed to coincide with the virtual exit pupil plane. When the exit pupil 47 of the optical system is viewed from an imaging pixel at a position away from the optical axis, the exit pupil 47 is deformed (from a circle into an ellipse) according to the angle relative to the optical axis. In addition, because of diffraction and aberrations of the microlens, region 33, the projection of the photoelectric conversion section onto the virtual exit pupil plane through the microlens, does not have a uniform distribution of ray amount across the plane it passes through (in the figure this is indicated by the shading inside region 33, the ray amount being greatest at the center of region 33).

When correcting the pixel output according to an error in the projection direction or the projection magnification, it is necessary to take into account the deformation of the exit pupil according to the pixel position described above and the distribution of ray amount within the region. Based on the error information for the projection direction and projection magnification, the information on region 33 (calculated from the size of the photoelectric conversion section, the projection direction, the projection magnification, the projection distance, aberrations, diffraction, and so on), the information (size and position) on the exit pupil 47 of the optical system, and the pixel position information, the ratio of the area over which the error-free region and the exit pupil 47 would overlap to the area over which region 33 and the exit pupil 47 overlap is calculated. By correcting, with this ratio, the output level of the imaging pixel affected by the projection direction and projection magnification errors, the output level can be corrected to the level that would be obtained if there were no error in the projection direction and projection magnification.
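Where the ray-amount distribution inside the projected region is non-uniform and the off-axis exit pupil is elliptical, the overlap integral can again be approximated numerically. The sketch below assumes, purely for illustration, a Gaussian center-weighted ray distribution and hypothetical semi-axes, decenter, and width values; none of these come from this disclosure.

```python
import math

def weighted_overlap(pupil_a, pupil_b, center, sigma, n=300, extent=2.0):
    """Integrate a center-weighted (Gaussian, width sigma) ray-amount
    distribution over an elliptical exit pupil with semi-axes pupil_a
    and pupil_b; `center` is where the projected region is aimed."""
    step = 2.0 * extent / n
    cell = step * step
    total = 0.0
    for i in range(n):
        x = -extent + (i + 0.5) * step
        for j in range(n):
            y = -extent + (j + 0.5) * step
            if (x / pupil_a) ** 2 + (y / pupil_b) ** 2 > 1.0:
                continue                      # outside the elliptical pupil
            dx, dy = x - center[0], y - center[1]
            total += cell * math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
    return total

# The decentred projection (an assumed 0.25 shift) collects less of the
# center-weighted light than the design projection, so the design/actual
# ratio used to correct the pixel output exceeds 1.
design = weighted_overlap(1.0, 0.7, center=(0.0, 0.0), sigma=0.5)
actual = weighted_overlap(1.0, 0.7, center=(0.25, 0.0), sigma=0.5)
correction = design / actual
```

Replacing the Gaussian with any measured or modeled distribution, and the ellipse with a composite pupil outline, keeps the same integration scheme.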

FIG. 18 is a diagram for explaining the influence of pixel position and of the ray distribution within the light beam, and shows the relationship on the virtual exit pupil plane between the exit pupil of the optical system and the projection regions of the photoelectric conversion units of a focus detection pixel. When a focus detection pixel is located away from the optical axis (at the periphery of the image frame), exit pupils corresponding to lens apertures other than the aperture stop of the optical system can restrict (vignette) the focus detection light beam. When the virtual exit pupil plane is viewed from a focus detection pixel located away from the optical axis, depending on the viewing angle with respect to the optical axis, an exit pupil 48 corresponding to lens apertures other than the aperture stop acts as the exit pupil of the optical system, and the exit pupil 48 takes a shape that is the composite of a plurality of exit pupil shapes. In addition, the distance-measuring pupils 28 and 29, obtained by projecting the pair of photoelectric conversion units onto the virtual exit pupil plane with an erroneous projection direction and projection magnification, do not have a uniform light-amount distribution across their planes, owing to diffraction and aberration of the microlens.

When correcting the pixel output for errors in the projection direction or projection magnification, the deformation of the exit pupil with pixel position and the light-amount distribution within the distance-measuring pupils must be taken into account. From the error information on the projection direction and projection magnification, the position, size, and distribution information of the distance-measuring pupils 28 and 29 (calculated from the size of the photoelectric conversion units, projection direction, projection magnification, projection distance, aberration, diffraction, and so on), the information on the exit pupil 48 of the optical system (sizes and optical-axis positions of the aperture stop and the other lens apertures), and the pixel position information, the distance-measuring pupil information (position, size, distribution) and the exit pupil information corresponding to the pixel position are obtained. The ratio is then calculated between the area where the error-free distance-measuring pupils (no projection-magnification or projection-distance error) overlap the exit pupil 48 and the area where the distance-measuring pupils 28 and 29 (with such errors) overlap the exit pupil 48. By correcting the output level of the focus detection pixel with this ratio, the output level that the pixel would have produced without projection-direction and projection-magnification errors can be recovered.

To summarize the above, the following information is required to correct the output of an imaging pixel or a focus detection pixel. First, aperture information (exit pupil information) is required. The aperture information relates to the configuration on the optical-system side: it describes the exit pupils corresponding to the aperture stop and the other lens apertures (openings) of the optical system and their positions along the optical axis. The exit pupil information may instead be calculated from the direct configuration information of the optical system (the aperture stop and the other lens apertures and their optical-axis positions) together with optical-characteristic information (power arrangement and so on). When the aperture information changes with the state of the optical system (focusing state, zooming state, aperture setting), the state of the optical system is detected and the aperture information is switched accordingly. The aperture information is read out, according to the detected state of the optical system (focusing state, zooming state, aperture setting), from a lookup table that stores design values or measured values.
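The state-keyed lookup described above can be sketched as follows, assuming a coarse discretization of the lens state; the table keys, values, and function name are illustrative, not from the patent.

```python
# Hypothetical lookup table: aperture (exit pupil) information indexed by
# a discretized lens state (focusing zone, zoom zone, f-number). Each value
# is (exit pupil diameter in mm, exit pupil distance from the image plane
# in mm); the numbers are made up for illustration.
APERTURE_TABLE = {
    ("near", "wide", 2.8): (35.7, 100.0),
    ("near", "tele", 2.8): (35.7, 100.0),
    ("far",  "wide", 5.6): (17.9, 100.0),
    ("far",  "tele", 5.6): (17.9, 100.0),
}

def aperture_info(focus_state, zoom_state, f_number):
    """Read exit pupil size/position for the detected optical-system state."""
    try:
        return APERTURE_TABLE[(focus_state, zoom_state, f_number)]
    except KeyError:
        raise ValueError(f"no calibration for state {(focus_state, zoom_state, f_number)}")

diameter, distance = aperture_info("near", "wide", 2.8)
```

In an interchangeable-lens system such a table would live on the lens side and be transmitted to the body, which is consistent with step 110 of the flowchart later in the description.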

Second, pixel information is required. The pixel information relates to the configuration on the image-sensor side (imaging pixels and focus detection pixels): it describes the region onto which the photoelectric conversion unit is projected by the microlens (projection magnification, projection direction, projection distance and their deviations, projection aberration, degree of diffraction, and the size of the photoelectric conversion unit). Instead of the projection-region information itself, the direct configuration information of the pixel may be held (the microlens curvature and its error, the relative positional relationship between the microlens and the photoelectric conversion unit along the optical axis and its error, their relative positional relationship in the plane orthogonal to the optical axis and its error, and the size of the photoelectric conversion unit and its error), with the projection magnification and projection direction then obtained by calculation. The pixel information may further include the distance from the optical axis to the pixel and its error (the position within the image frame and its error).

Next, methods for measuring the per-pixel information such as the projection direction, projection magnification, and distribution information will be described. One method measures the information by processing the output of each pixel while a bright point is scanned two-dimensionally over the virtual exit pupil plane. Another processes the output of each pixel when an aperture of a predetermined shape on the virtual exit pupil plane is illuminated with uniform brightness. For example, a thin slit aperture symmetric about the X axis or the Y axis on the virtual exit pupil plane is inserted, and the positional-deviation data of the pair of distance-measuring pupils is calculated from the output ratio of the pair of photoelectric conversion units at that time. The projection direction is calculated from the ratio of the pixel outputs obtained when one side of the X axis or the Y axis on the virtual exit pupil plane is opened and the other side is shielded. The projection magnification is calculated from the degree of change in the pixel output as the aperture diameter on the virtual exit pupil plane is varied. As yet another method, the dimensions of the constituent elements of each pixel may be measured directly with a laser probe or the like, and the resulting data processed.
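The half-aperture measurement above (one side of the X or Y axis open, the other shielded) can be sketched under the simplifying assumption of a one-dimensional Gaussian projected spot of known width; the model and all names are illustrative only, not the patent's procedure.

```python
from statistics import NormalDist

def projection_offset_from_half_apertures(out_left, out_right, sigma):
    """Estimate the X offset of a pixel's projected spot on the virtual exit
    pupil plane from its outputs with the left (x < 0) and right (x > 0)
    halves of the plane illuminated in turn. Assumes a 1-D Gaussian spot of
    known width sigma (an illustrative model)."""
    frac_left = out_left / (out_left + out_right)
    # frac_left = Phi(-x0 / sigma)  =>  x0 = -sigma * Phi^-1(frac_left)
    return -sigma * NormalDist().inv_cdf(frac_left)

# A centred spot illuminates both halves equally, giving zero offset.
x0 = projection_offset_from_half_apertures(0.5, 0.5, sigma=1.0)
```

When the right-half output dominates, the estimated offset is positive, i.e. the projection direction is tilted toward +X, which is the sign convention this sketch assumes.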

As for storing the pixel information, one method stores the projection-magnification and projection-direction data for each pixel and obtains the two-dimensional information, such as the light-amount distribution within the region, by calculation for each pixel. Alternatively, the projection-magnification and projection-direction data are stored for each pixel, while the two-dimensional information, which would require a large amount of data if stored per pixel, is calculated or measured in advance and stored as data common to a block of pixels or to all pixels.
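The second storage method can be sketched as follows: the small per-pixel scalars are stored individually, while the bulky two-dimensional distribution is held once and shared. The layout, class, and names are hypothetical.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class PixelInfo:
    """Per-pixel projection data, cheap enough to store individually."""
    magnification: float          # microlens projection magnification
    direction: tuple              # (dx, dy) projection-direction deviation

# The 2-D distribution data (e.g. light amount inside the projected region)
# would be large per pixel, so a single profile is stored and shared by all
# pixels (or one per pixel block). Illustrative 1-D stand-in:
SHARED_DISTRIBUTION = np.exp(-np.linspace(-2, 2, 64)**2 / 2)

pixel_table = {
    (0, 0): PixelInfo(magnification=1.00, direction=(0.00, 0.00)),
    (0, 1): PixelInfo(magnification=1.02, direction=(0.01, -0.02)),
}

def pixel_record(ix, iy):
    """Combine the per-pixel scalars with the shared distribution on demand."""
    return pixel_table[(ix, iy)], SHARED_DISTRIBUTION
```

The trade-off matches the text: per-pixel scalars capture the manufacturing variation, while the shared distribution keeps total memory small.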

By calculating correction information from the combination of the above aperture information (which varies with the type and state of the optical system) and pixel information (fixed in memory from actual measurement), and correcting the pixel outputs under the actual conditions of use, corrections covering many conditions (aperture information) can be performed with only a small amount of stored data (pixel information).

FIG. 19 is a diagram for explaining the correction of the imaging pixel output. In the figure, the horizontal axis represents the pixel position and the vertical axis represents the pixel output. When a surface of uniform luminance is imaged through a certain optical system, the pixel output 101 fluctuates as shown in the figure because of variations in projection magnification, projection direction, and so on. From the pixel information measured in advance (the variations in projection direction and projection magnification) and the aperture information of the optical system, light-amount correction information is calculated (the reciprocal of each pixel's output referenced to the output of an error-free on-axis pixel). Multiplying each pixel output by this correction information yields the pixel output 102 for the uniform luminance, with the variation reduced.
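The correction of FIG. 19 amounts to a flat-field style gain: the reciprocal of each pixel's relative response to uniform luminance, referenced to the error-free on-axis pixel. A minimal numeric sketch with illustrative values:

```python
import numpy as np

# Response of each pixel to a uniform-luminance target; the spread stands in
# for projection-direction/magnification variation. Values are illustrative.
reference = 100.0                                          # error-free on-axis output
uniform_response = np.array([100.0, 92.0, 105.0, 88.0])
gain = reference / uniform_response                        # light-amount correction info

raw = np.array([50.0, 46.0, 52.5, 44.0])                   # a captured signal
corrected = raw * gain                                     # variation removed
```

Here the raw signal is exactly half the flat-field response of every pixel, so after correction all four pixels report the same level.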

FIG. 20 is a diagram for explaining the correction of the focus detection pixel output. In the figure, the horizontal axis represents the pixel position and the vertical axis represents the pixel output. When a surface of uniform luminance is imaged through a certain optical system, the pair of pixel outputs 103 and 104 fluctuate as shown in the figure because of variations in projection magnification, projection direction, and so on, and do not coincide with each other. From the pixel information measured in advance (the variations in projection direction and projection magnification) and the aperture information of the optical system, light-amount correction information is calculated (the reciprocals of each pixel's pair of outputs referenced to the pair of outputs of an error-free on-axis pixel). Multiplying the pair of outputs of each pixel by this correction information yields the pair of pixel outputs 105 for the uniform luminance (shown coinciding in the figure), with the variation reduced.
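The same reciprocal-gain idea applied to the pair of focus detection signals can be sketched as follows; the per-pixel pair gains bring the two signals back into coincidence before image-shift detection. All numbers are illustrative.

```python
import numpy as np

# Pair responses to a uniform-luminance target (illustrative spread).
ref_a, ref_b = 100.0, 100.0                # error-free on-axis pair outputs
flat_a = np.array([100.0, 95.0, 104.0])
flat_b = np.array([100.0, 103.0, 97.0])
gain_a, gain_b = ref_a / flat_a, ref_b / flat_b   # pair correction gains

sig_a = np.array([60.0, 57.0, 62.4])       # a captured pair of image signals
sig_b = np.array([60.0, 61.8, 58.2])
corr_a, corr_b = sig_a * gain_a, sig_b * gain_b   # now mutually coincident
```

After correction the two signals coincide, as the pair 105 does in FIG. 20, so the subsequent image-shift detection sees matched waveforms.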

FIG. 21 is a flowchart showing the operation of the digital still camera (imaging apparatus) of the embodiment. The microcomputer of the body drive control circuit 214 executes this operation repeatedly while the camera power is on. When the power is turned on in step 100, the flow proceeds to step 110, and aperture information is received from the lens drive control circuit 206. In step 120, light-amount correction information is calculated for every pixel (imaging pixels and focus detection pixels) from the aperture information and the pixel information (pixel-variation information and focus-detection-position information). In step 130, a pair of image signals is read out from the focus detection pixels and corrected with the light-amount correction information.

Next, in step 140, a known image-shift detection calculation is performed on the pair of image signals corrected for each focus detection pixel to obtain the image shift amount. In the subsequent step 150, the image shift amount is multiplied by a conversion coefficient to convert it into a defocus amount. In step 160, whether the optical system is in focus is determined from the defocus amount. If it is determined not to be in focus, the flow proceeds to step 170, where the defocus amount is transmitted to the lens drive control circuit 206 to drive the focusing lens 210 of the optical system to the in-focus position, and the flow then returns to step 110 to repeat the above operation.

If, on the other hand, the in-focus state is determined, the flow proceeds to step 180, where it is determined whether a shutter release has been performed. If it is determined that no shutter release has been performed, the flow returns to step 110 and the above operation is repeated. If a shutter release has been performed, the flow proceeds to step 190, where the image signal is read out from the imaging pixels and corrected with the light-amount correction information. After the corrected image signal is stored on the image-storage memory card 219 in step 200, the flow returns to step 110 and the above operation is repeated.
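The control flow of FIG. 21 (steps 110–200) can be condensed into a loop sketch; the callables are hypothetical stand-ins for the lens/body communication and signal processing described above, not an actual firmware interface.

```python
def af_loop(read_defocus, drive_lens, shutter_released, capture,
            tolerance=0.01, max_iter=50):
    """Condensed sketch of the FIG. 21 control flow (steps 110-200).
    The callables are illustrative stand-ins for hardware operations."""
    for _ in range(max_iter):
        defocus = read_defocus()        # steps 110-150 folded together:
                                        # receive aperture info, compute gains,
                                        # read/correct the pair, shift -> defocus
        if abs(defocus) > tolerance:    # step 160: in focus?
            drive_lens(defocus)         # step 170: drive the focusing lens
            continue
        if shutter_released():          # step 180: shutter released?
            return capture()            # steps 190-200: read, correct, store
    return None

# Toy simulation: a perfect drive zeroes the defocus in one pass.
state = {"defocus": 0.3}
result = af_loop(
    read_defocus=lambda: state["defocus"],
    drive_lens=lambda d: state.__setitem__("defocus", state["defocus"] - d),
    shutter_released=lambda: True,
    capture=lambda: "image",
)
```

Note how the loop always returns to the top after a lens drive, matching the flowchart's return to step 110 rather than re-using stale correction information.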

Thus, according to the embodiment, in an imaging apparatus that captures a subject image using an image sensor in which pixels, each having a microlens disposed in front of its photoelectric conversion unit, are arrayed near the planned image-forming plane of the optical system, the optical variation information of each pixel is stored, and the output of each pixel of the image sensor is corrected on the basis of the stored variation information. Variations in the output of each pixel can therefore be corrected efficiently and reliably while only a comparatively small amount of correction data is stored.

Furthermore, according to the embodiment, the aperture information of the exit pupil of the optical system is obtained, and the output of each pixel of the image sensor is corrected on the basis of the variation information of each pixel and the aperture information. Regardless of the type of the optical system or the conditions of use, variations in the output of each pixel can therefore be corrected efficiently and reliably while only a comparatively small amount of correction data is stored.

Moreover, according to the embodiment, the photoelectric conversion unit of a pixel is configured as a pair of photoelectric conversion units, and the focus adjustment state of the optical system is detected from the pairs of output data produced by a plurality of such pixels. Regardless of the type of the optical system or the conditions of use, variations in the output of each pixel can be corrected efficiently and reliably with only a comparatively small amount of stored correction data, and the focus detection accuracy can be improved.

In the embodiment described above, the imaging apparatus of the present invention was applied, by way of example, to the digital still camera 201 composed of the interchangeable lens 202 and the camera body 203. The imaging apparatus of the present invention is, however, not limited to digital still cameras, and can be applied to all kinds of devices, such as lens-integrated digital still cameras, video cameras, and small camera modules built into mobile phones and the like.

FIG. 1 is a diagram showing the configuration of a digital still camera of an embodiment.
FIG. 2 is a diagram showing the focus detection positions of the digital still camera of the embodiment.
FIG. 3 is a front view showing the detailed structure of the image sensor.
FIG. 4 is a sectional view of an imaging pixel.
FIG. 5 is a sectional view of a focus detection pixel.
FIG. 6 is a diagram for explaining the projection state of the photoelectric conversion unit.
FIG. 7 is a diagram for explaining the relationship between an imaging pixel and the exit pupil.
FIG. 8 is a diagram for explaining the relationship between a focus detection pixel and the exit pupil.
FIG. 9 is a diagram for explaining vignetting of the imaging light beam.
FIG. 10 is a diagram for explaining vignetting of the focus detection light beam.
FIG. 11 is a diagram for explaining variation in the projection direction.
FIG. 12 is a diagram for explaining variation in the projection magnification.
FIG. 13 is a diagram for explaining variation in the projection direction.
FIG. 14 is a diagram for explaining the influence of the exit pupil size on variation in the projection direction.
FIG. 15 is a diagram for explaining variation in the projection magnification.
FIG. 16 is a diagram for explaining variation in the projection direction and the projection magnification.
FIG. 17 is a diagram for explaining the influence of pixel position and of the ray distribution within the light beam.
FIG. 18 is a diagram for explaining the influence of pixel position and of the ray distribution within the light beam.
FIG. 19 is a diagram for explaining correction of the imaging pixel output.
FIG. 20 is a diagram for explaining correction of the focus detection pixel output.
FIG. 21 is a flowchart showing the operation of the digital still camera (imaging apparatus) of the embodiment.

Explanation of symbols

10 Microlens
11, 12, 13 Photoelectric conversion units
201 Digital still camera
202 Interchangeable lens
203 Camera body
206 Lens drive control circuit
208 Zooming lens
210 Focusing lens
211 Aperture stop
212 Image sensor
214 Body drive control circuit
218 Pixel information memory
310 Imaging pixel
311 Focus detection pixel

Claims (14)

1. An imaging apparatus that captures a subject image using an image sensor in which pixels, each having a microlens disposed in front of a photoelectric conversion unit, are arrayed near a planned image-forming plane of an optical system, the imaging apparatus comprising:
pixel information storage means for storing optical variation information of each of the pixels.
2. The imaging apparatus according to claim 1, further comprising:
output correction means for correcting the output of each pixel of the image sensor on the basis of the variation information of each pixel stored in the pixel information storage means.
3. The imaging apparatus according to claim 2, further comprising:
aperture information generating means for generating aperture information of an exit pupil of the optical system,
wherein the output correction means corrects the output of each pixel of the image sensor on the basis of the variation information of each pixel in the pixel information storage means and the aperture information from the aperture information generating means.
4. The imaging apparatus according to any one of claims 1 to 3,
wherein the photoelectric conversion unit of each pixel is composed of a pair of photoelectric conversion units, and
the imaging apparatus further comprises focus detection means for detecting a focus adjustment state of the optical system on the basis of pairs of output data produced by a plurality of the pixels.
5. The imaging apparatus according to any one of claims 1 to 4,
wherein the variation information of each pixel is a projection characteristic of the microlens resulting from the optical variation of the pixel.
6. The imaging apparatus according to claim 5,
wherein the variation information of each pixel includes the projection direction of the microlens.
7. The imaging apparatus according to claim 5,
wherein the variation information of each pixel includes the projection magnification of the microlens.
8. The imaging apparatus according to any one of claims 3 to 6,
wherein the aperture information is information on the diameter and position of the exit pupil of the optical system.
9. The imaging apparatus according to any one of claims 3 to 6,
wherein the aperture information generating means generates aperture information corresponding to the aperture stop, the focusing-lens position, and the zoom-lens position of the optical system.
10. The imaging apparatus according to any one of claims 3 to 9,
wherein the output correction means calculates the amount of light received by the photoelectric conversion unit of each pixel on the basis of the variation information of each pixel and the aperture information, and corrects the output of each pixel according to this amount of light.
11. The imaging apparatus according to any one of claims 1 to 10,
wherein the image sensor and the optical system are incorporated in separate, mutually detachable structures.
12. A camera comprising the imaging apparatus according to any one of claims 1 to 11.
13. An imaging method for capturing a subject image using an image sensor in which pixels, each having a microlens disposed in front of a photoelectric conversion unit, are arrayed near a planned image-forming plane of an optical system, the method comprising:
storing optical variation information of each of the pixels in advance; and
correcting the output of each pixel of the image sensor on the basis of the variation information of each pixel.
14. The imaging method according to claim 13, further comprising:
obtaining aperture information of an exit pupil of the optical system; and
correcting the output of each pixel of the image sensor on the basis of the variation information of each pixel and the aperture information.
JP2006003606A 2006-01-11 2006-01-11 Imaging device Expired - Fee Related JP4946059B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006003606A JP4946059B2 (en) 2006-01-11 2006-01-11 Imaging device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006003606A JP4946059B2 (en) 2006-01-11 2006-01-11 Imaging device

Publications (2)

Publication Number Publication Date
JP2007189312A true JP2007189312A (en) 2007-07-26
JP4946059B2 JP4946059B2 (en) 2012-06-06

Family

ID=38344202

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006003606A Expired - Fee Related JP4946059B2 (en) 2006-01-11 2006-01-11 Imaging device

Country Status (1)

Country Link
JP (1) JP4946059B2 (en)

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009044534A (en) * 2007-08-09 2009-02-26 Nikon Corp Electronic camera
JP2009063952A (en) * 2007-09-10 2009-03-26 Nikon Corp Imaging device, focus detecting device and imaging apparatus
WO2009044776A1 (en) * 2007-10-02 2009-04-09 Nikon Corporation Light receiving device, focal point detecting device and imaging device
JP2009081522A (en) * 2007-09-25 2009-04-16 Nikon Corp Imaging apparatus
JP2009145527A (en) * 2007-12-13 2009-07-02 Nikon Corp Imaging element, focus detecting device, and imaging device
JP2009162846A (en) * 2007-12-28 2009-07-23 Nikon Corp Element inspection light source, and device and method for inspecting element
WO2009107705A1 (en) * 2008-02-28 2009-09-03 ソニー株式会社 Imaging device, and imaging element
WO2009113701A1 (en) * 2008-03-11 2009-09-17 Canon Kabushiki Kaisha Image capturing apparatus and image processing method
JP2010049209A (en) * 2008-08-25 2010-03-04 Canon Inc Imaging sensing apparatus, image sensing system, and focus detection method
JP2011022386A (en) * 2009-07-16 2011-02-03 Canon Inc Imaging apparatus and control method therefor
EP2340454A1 (en) * 2008-10-30 2011-07-06 Canon Kabushiki Kaisha Image capturing apparatus
JP2013021615A (en) * 2011-07-13 2013-01-31 Olympus Imaging Corp Image pickup apparatus
JP2013037295A (en) * 2011-08-10 2013-02-21 Olympus Imaging Corp Image pickup apparatus and image pickup device
JP2013037296A (en) * 2011-08-10 2013-02-21 Olympus Imaging Corp Image pickup apparatus and image pickup device
JP2013083999A (en) * 2012-12-17 2013-05-09 Canon Inc Imaging apparatus, imaging system, and focus detection method
JP2013148917A (en) * 2008-03-11 2013-08-01 Canon Inc Imaging apparatus and method for processing image
JP2013186201A (en) * 2012-03-06 2013-09-19 Canon Inc Imaging apparatus
JP2013211770A (en) * 2012-03-30 2013-10-10 Canon Inc Imaging device and signal processing method
US8675121B2 (en) 2008-10-30 2014-03-18 Canon Kabushiki Kaisha Camera and camera system
WO2014097784A1 (en) * 2012-12-20 2014-06-26 オリンパスイメージング株式会社 Imaging apparatus, method for calculating information for focus control, and camera system
WO2014106917A1 (en) * 2013-01-04 2014-07-10 富士フイルム株式会社 Image processing device, imaging device, image processing method, and image processing program
US20140204231A1 (en) * 2011-07-22 2014-07-24 Nikon Corporation Focus adjustment device and imaging apparatus
JP2014186338A (en) * 2014-05-15 2014-10-02 Canon Inc Imaging apparatus, imaging system, and method for controlling imaging apparatus
JP2014215405A (en) * 2013-04-24 2014-11-17 オリンパス株式会社 Imaging element and microscope device
JP2014222291A (en) * 2013-05-13 2014-11-27 キヤノン株式会社 Imaging apparatus and control method thereof
WO2015046246A1 (en) * 2013-09-30 2015-04-02 オリンパス株式会社 Camera system and focal point detection pixel correction method
JP2015096965A (en) * 2014-12-26 2015-05-21 株式会社ニコン Imaging device
CN105390513A (en) * 2014-08-21 2016-03-09 三星电子株式会社 Unit pixels, image sensors including the same, and image processing systems including the same
WO2016035566A1 (en) * 2014-09-01 2016-03-10 ソニー株式会社 Solid-state imaging element, signal processing method therefor, and electronic device
WO2016067648A1 (en) * 2014-10-30 2016-05-06 オリンパス株式会社 Focal point adjustment device, camera system, and focal point adjustment method
JP2017021052A (en) * 2016-10-06 2017-01-26 キヤノン株式会社 Image data processing device, distance calculation device, imaging device, and image data processing method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06178198A (en) * 1992-12-04 1994-06-24 Nec Corp Solid state image pickup device
JPH08220584A (en) * 1995-02-14 1996-08-30 Nikon Corp Image pick-up
JPH1155558A (en) * 1997-08-06 1999-02-26 Minolta Co Ltd Digital camera
JPH11122525A (en) * 1997-08-06 1999-04-30 Minolta Co Ltd Digital camera
JP2000236480A (en) * 1999-02-16 2000-08-29 Minolta Co Ltd Image pickup device and shading correction method
JP2000324505A (en) * 1999-05-11 2000-11-24 Nikon Corp Image input device and lens for exchange
JP2002131623A (en) * 2000-10-24 2002-05-09 Canon Inc Imaging apparatus and system
JP2002218298A (en) * 2001-01-17 2002-08-02 Canon Inc Image pickup device, shading correcting method and storage medium
JP2003241075A (en) * 2002-02-22 2003-08-27 Canon Inc Camera system, camera and photographic lens device


Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009044534A (en) * 2007-08-09 2009-02-26 Nikon Corp Electronic camera
JP2009063952A (en) * 2007-09-10 2009-03-26 Nikon Corp Imaging device, focus detecting device and imaging apparatus
JP2009081522A (en) * 2007-09-25 2009-04-16 Nikon Corp Imaging apparatus
JP2009175680A (en) * 2007-10-02 2009-08-06 Nikon Corp Light receiving device, focal point detecting device and imaging device
WO2009044776A1 (en) * 2007-10-02 2009-04-09 Nikon Corporation Light receiving device, focal point detecting device and imaging device
US8680451B2 (en) 2007-10-02 2014-03-25 Nikon Corporation Light receiving device, focus detection device and imaging device
JP2009145527A (en) * 2007-12-13 2009-07-02 Nikon Corp Imaging element, focus detecting device, and imaging device
JP2009162846A (en) * 2007-12-28 2009-07-23 Nikon Corp Element inspection light source, and device and method for inspecting element
WO2009107705A1 (en) * 2008-02-28 2009-09-03 Sony Corporation Imaging device, and imaging element
US8319882B2 (en) 2008-02-28 2012-11-27 Sony Corporation Image pickup device and image pickup element including plurality of types of pixel pairs
WO2009113701A1 (en) * 2008-03-11 2009-09-17 Canon Kabushiki Kaisha Image capturing apparatus and image processing method
JP2009244858A (en) * 2008-03-11 2009-10-22 Canon Inc Image capturing apparatus and image processing method
US20110019028A1 (en) * 2008-03-11 2011-01-27 Canon Kabushiki Kaisha Image capturing apparatus and image processing method
US8531583B2 (en) * 2008-03-11 2013-09-10 Canon Kabushiki Kaisha Image capturing apparatus and image processing method
JP2013148917A (en) * 2008-03-11 2013-08-01 Canon Inc Imaging apparatus and method for processing image
US8890972B2 (en) 2008-03-11 2014-11-18 Canon Kabushiki Kaisha Image capturing apparatus and image processing method
JP2010049209A (en) * 2008-08-25 2010-03-04 Canon Inc Image sensing apparatus, image sensing system, and focus detection method
US9560258B2 (en) 2008-08-25 2017-01-31 Canon Kabushiki Kaisha Image sensing apparatus, image sensing system and focus detection method of detecting a focus position of a lens using image signal pair
US8405760B2 (en) 2008-08-25 2013-03-26 Canon Kabushiki Kaisha Image sensing apparatus, image sensing system and focus detection method
EP2340454A4 (en) * 2008-10-30 2013-03-06 Canon Kk Image capturing apparatus
US8477233B2 (en) 2008-10-30 2013-07-02 Canon Kabushiki Kaisha Image capturing apparatus
EP2340454A1 (en) * 2008-10-30 2011-07-06 Canon Kabushiki Kaisha Image capturing apparatus
US8675121B2 (en) 2008-10-30 2014-03-18 Canon Kabushiki Kaisha Camera and camera system
JP2011022386A (en) * 2009-07-16 2011-02-03 Canon Inc Imaging apparatus and control method therefor
JP2013021615A (en) * 2011-07-13 2013-01-31 Olympus Imaging Corp Image pickup apparatus
US9967450B2 (en) 2011-07-22 2018-05-08 Nikon Corporation Focus adjustment device and imaging apparatus
US20140204231A1 (en) * 2011-07-22 2014-07-24 Nikon Corporation Focus adjustment device and imaging apparatus
JP2013037295A (en) * 2011-08-10 2013-02-21 Olympus Imaging Corp Image pickup apparatus and image pickup device
JP2013037296A (en) * 2011-08-10 2013-02-21 Olympus Imaging Corp Image pickup apparatus and image pickup device
JP2013186201A (en) * 2012-03-06 2013-09-19 Canon Inc Imaging apparatus
JP2013211770A (en) * 2012-03-30 2013-10-10 Canon Inc Imaging device and signal processing method
JP2013083999A (en) * 2012-12-17 2013-05-09 Canon Inc Imaging apparatus, imaging system, and focus detection method
WO2014097784A1 (en) * 2012-12-20 2014-06-26 Olympus Imaging Corp Imaging apparatus, method for calculating information for focus control, and camera system
US9473693B2 (en) 2012-12-20 2016-10-18 Olympus Corporation Photographic apparatus, camera system and methods for calculating focus control information based on a distance between centers of gravity distributions of light receiving sensitivities
WO2014106917A1 (en) * 2013-01-04 2014-07-10 Fujifilm Corporation Image processing device, imaging device, image processing method, and image processing program
JP5889441B2 (en) * 2013-01-04 2016-03-22 Fujifilm Corporation Image processing apparatus, imaging apparatus, image processing method, and image processing program
CN104885440B (en) * 2013-01-04 2017-12-08 富士胶片株式会社 Image processing apparatus, camera device and image processing method
CN104885440A (en) * 2013-01-04 2015-09-02 富士胶片株式会社 Image processing device, imaging device, image processing method, and image processing program
JP2014215405A (en) * 2013-04-24 2014-11-17 Olympus Corporation Imaging element and microscope device
JP2014222291A (en) * 2013-05-13 2014-11-27 キヤノン株式会社 Imaging apparatus and control method thereof
US9509899B2 (en) 2013-09-30 2016-11-29 Olympus Corporation Camera system and method for correcting focus detection pixel
WO2015046246A1 (en) * 2013-09-30 2015-04-02 Olympus Corporation Camera system and focal point detection pixel correction method
JP2015069180A (en) * 2013-09-30 2015-04-13 オリンパス株式会社 Camera system and correction method of focus detection pixels
JP2014186338A (en) * 2014-05-15 2014-10-02 Canon Inc Imaging apparatus, imaging system, and method for controlling imaging apparatus
CN105390513A (en) * 2014-08-21 2016-03-09 三星电子株式会社 Unit pixels, image sensors including the same, and image processing systems including the same
CN111430387A (en) * 2014-09-01 2020-07-17 索尼公司 Solid-state imaging device, signal processing method for solid-state imaging device, and electronic apparatus
US10542229B2 (en) 2014-09-01 2020-01-21 Sony Corporation Solid-state imaging device, signal processing method therefor, and electronic apparatus for enabling sensitivity correction
JP2016052041A (en) * 2014-09-01 2016-04-11 ソニー株式会社 Solid-state imaging device, signal processing method therefor, and electronic apparatus
US11770639B2 (en) 2014-09-01 2023-09-26 Sony Group Corporation Solid-state imaging device, signal processing method therefor, and electronic apparatus for enabling sensitivity correction
CN111430387B (en) * 2014-09-01 2023-09-19 索尼公司 Solid-state imaging device, signal processing method for solid-state imaging device, and electronic apparatus
WO2016035566A1 (en) * 2014-09-01 2016-03-10 Sony Corporation Solid-state imaging element, signal processing method therefor, and electronic device
CN111464763B (en) * 2014-09-01 2022-03-18 索尼公司 Solid-state imaging device, signal processing method for solid-state imaging device, and electronic apparatus
CN105580352B (en) * 2014-09-01 2020-01-14 索尼公司 Solid-state imaging device, signal processing method for solid-state imaging device, and electronic apparatus
CN105580352A (en) * 2014-09-01 2016-05-11 索尼公司 Solid-state imaging element, signal processing method therefor, and electronic device
CN111464763A (en) * 2014-09-01 2020-07-28 索尼公司 Solid-state imaging device, signal processing method for solid-state imaging device, and electronic apparatus
US9918031B2 (en) 2014-09-01 2018-03-13 Sony Corporation Solid-state imaging device and electronic apparatus having a pixel unit with one microlens and a correction circuit
CN107111102A (en) * 2014-10-30 2017-08-29 奥林巴斯株式会社 Focus-regulating device, camera arrangement and focus adjusting method
JP2016090649A (en) * 2014-10-30 2016-05-23 オリンパス株式会社 Focus adjustment device, camera system and focus adjustment method
WO2016067648A1 (en) * 2014-10-30 2016-05-06 Olympus Corporation Focal point adjustment device, camera system, and focal point adjustment method
US10171724B2 (en) 2014-10-30 2019-01-01 Olympus Corporation Focal point adjustment device and focal point adjustment method
JP2015096965A (en) * 2014-12-26 2015-05-21 株式会社ニコン Imaging device
JP2017021052A (en) * 2016-10-06 2017-01-26 Canon Inc Image data processing device, distance calculation device, imaging device, and image data processing method

Also Published As

Publication number Publication date
JP4946059B2 (en) 2012-06-06

Similar Documents

Publication Publication Date Title
JP4946059B2 (en) Imaging device
JP4984491B2 (en) Focus detection apparatus and optical system
JP5169499B2 (en) Imaging device and imaging apparatus
JP5219865B2 (en) Imaging apparatus and focus control method
JP4720508B2 (en) Imaging device and imaging apparatus
JP2011039499A (en) Automatic focus detection device
US20200252568A1 (en) Image sensor and image capture apparatus
JP6014452B2 (en) FOCUS DETECTION DEVICE, LENS DEVICE HAVING THE SAME, AND IMAGING DEVICE
JP5784395B2 (en) Imaging device
JP2009075407A (en) Imaging apparatus
JP5157377B2 (en) Focus detection apparatus and imaging apparatus
JP6854619B2 (en) Focus detection device and method, imaging device, lens unit and imaging system
JP2011114553A (en) Imaging device
JP5061858B2 (en) Focus detection apparatus and imaging apparatus
JP2017207695A (en) Optical device
JP5610005B2 (en) Imaging device
JP2017219791A (en) Control device, imaging device, control method, program, and storage medium
JP2017223879A (en) Focus detector, focus control device, imaging apparatus, focus detection method, and focus detection program
JP2020113948A (en) Imaging element, imaging apparatus, control method, and program
JP2009162845A (en) Imaging device, focus detecting device and imaging apparatus
JP2017009640A (en) Imaging device and imaging device control method
JP2019184956A (en) Focus detection device, camera body and camera system
JP2012226088A (en) Imaging apparatus
JP5846245B2 (en) Automatic focus detection device
JP6136566B2 (en) Imaging device and intermediate adapter

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20081204

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110405

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110412

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110526

RD02 Notification of acceptance of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7422

Effective date: 20110526

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110712

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110907

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20111018

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20111114

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120207

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120220

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150316

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Ref document number: 4946059

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150


R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees