WO2016111175A1 - Image processing device, image processing method, and program - Google Patents

Image processing device, image processing method, and program

Info

Publication number
WO2016111175A1
Authority
WO
WIPO (PCT)
Prior art keywords
phase difference
pixel
wavelength band
image
photoelectric conversion
Prior art date
Application number
PCT/JP2015/085942
Other languages
French (fr)
Japanese (ja)
Inventor
斉藤 正剛
Original Assignee
ソニー株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニー株式会社 filed Critical ソニー株式会社
Publication of WO2016111175A1 publication Critical patent/WO2016111175A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/672Focus control based on electronic image sensor signals based on the phase difference signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/611Correction of chromatic aberration

Definitions

  • the present technology relates to an image processing device, an image processing method, and a program, and more particularly, to an image processing device, an image processing method, and a program suitable for use in AF (Auto Focus).
  • there is a phase difference detection method as a kind of AF (Auto Focus) method.
  • among phase difference detection AF methods, there is an image plane phase difference method in which pixels used for phase difference detection (hereinafter referred to as phase difference detection pixels) are arranged on the imaging surface of an image sensor.
  • a phase difference detection pixel has lower sensitivity than a normal imaging pixel because the amount of incident light is limited by pupil division.
  • it has therefore been proposed to detect a phase difference using a synthesized signal obtained by synthesizing the signals of phase difference detection pixels in a predetermined region (see, for example, Patent Document 1).
  • the present technology is intended to improve focusing accuracy.
  • An image processing apparatus according to one aspect of the present technology includes an integration processing unit that adaptively weights and integrates, in at least one of a time direction, a spatial direction, and a wavelength direction, phase difference pixel values, each of which is a pixel value based on incident light on a partial region at a biased position of the light receiving surface of a pixel, and a phase difference detection unit that detects, based on the integrated phase difference pixel values, a phase difference between an image composed of a plurality of phase difference pixel values corresponding to a plurality of first partial regions biased in the same direction and an image composed of a plurality of phase difference pixel values corresponding to a plurality of second partial regions biased in the direction opposite to the first partial regions.
  • A focus position detection unit can be further provided that detects an in-focus position based on the phase difference and, when the wavelength band used to detect the phase difference differs from the wavelength band to be focused, corrects the in-focus position based on the axial chromatic aberration between the wavelength bands.
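  • As a rough illustration of this correction (not taken from the publication; the aberration table, its units, and the function below are hypothetical), the in-focus position detected in one wavelength band can simply be shifted by the stored axial chromatic aberration between the detection band and the band to be focused:

```python
# Hedged sketch: shifting an in-focus position detected in one wavelength band so
# that a different wavelength band comes into focus. The aberration values and
# their units are illustrative assumptions only.
AXIAL_ABERRATION_UM = {
    ("G", "R"): +3.0,    # focus shift in micrometers when moving from band A to band B
    ("G", "B"): -4.5,
    ("IR", "G"): -12.0,
}

def corrected_focus_position(focus_pos_um: float,
                             detected_band: str,
                             target_band: str) -> float:
    """Shift the detected in-focus position by the stored axial chromatic aberration."""
    if detected_band == target_band:
        return focus_pos_um
    offset = AXIAL_ABERRATION_UM.get((detected_band, target_band))
    if offset is None:
        # the pair may be stored in the reverse direction
        offset = -AXIAL_ABERRATION_UM[(target_band, detected_band)]
    return focus_pos_um + offset

# e.g. phase difference detected in the G band, but the R band should be in focus
print(corrected_focus_position(1520.0, "G", "R"))
```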
  • a focus control unit that controls the position of the focus based on the in-focus position can be further provided.
  • the phase difference detection unit can select, based on a predetermined condition, at least one of the direction, the wavelength band, and the position used for detecting the phase difference.
  • the integration processing unit can perform the integration of the phase difference pixel values in the time direction using a weight based on the movement of a subject between frames in an image composed of pixel values based on incident light on the entire light receiving surface.
  • the integration processing unit can individually calculate a weight based on the horizontal distance between pixels and a weight based on the vertical distance, and perform the integration processing in the spatial direction using the two calculated weights.
  • when a parameter of a first image composed of the phase difference pixel values satisfies a predetermined condition, the integration processing unit can integrate the phase difference pixel values in the spatial direction using a weight based on the similarity between pixels of the first image; when the parameter does not satisfy the condition, the integration processing unit can integrate the phase difference pixel values in the spatial direction using, for the direction parallel to the direction of the partial region corresponding to the phase difference pixel values, a weight based on the similarity between pixels of the first image, and, for the direction orthogonal to the direction of that partial region, a weight based on the similarity between pixels of a second image composed of pixel values based on incident light on the entire light receiving surface.
  • the integration processing unit can perform the integration of the phase difference pixel values in the wavelength direction by adding the phase difference pixel values of different wavelength bands of the same pixel using weights based on the magnification chromatic aberration between the wavelength bands.
  • the integration processing unit may further perform integration of the phase difference pixel value in the wavelength direction using a weight based on the difference between the phase difference pixel values of different wavelength bands of the same pixel.
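  • The publication does not spell out the wavelength-direction integration in this excerpt, but a minimal sketch of the idea, assuming Gaussian difference weights and placeholder per-band aberration weights, could look like this:

```python
import numpy as np

def integrate_wavelength(q, ref=1, sigma_diff=8.0, aberration_weight=(0.8, 1.0, 0.8)):
    """Hedged sketch of wavelength-direction integration of phase difference pixel values.

    q                : array of shape (3, H, W) holding the R, G, B phase difference
                       planes of one pupil-division direction (after demosaicing).
    ref              : index of the reference band (G here).
    sigma_diff       : controls the weight based on the difference between bands.
    aberration_weight: per-band weights standing in for weights derived from
                       magnification chromatic aberration; hypothetical values.
    """
    q = np.asarray(q, dtype=np.float64)
    num = np.zeros_like(q[ref])
    den = np.zeros_like(q[ref])
    for band in range(q.shape[0]):
        diff = q[band] - q[ref]
        # weight shrinks as the difference from the reference band grows
        w = aberration_weight[band] * np.exp(-(diff ** 2) / (2.0 * sigma_diff ** 2))
        num += w * q[band]
        den += w
    return num / np.maximum(den, 1e-12)
```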
  • the integration processing unit can perform two or more of the integration processes in the time direction, the spatial direction, and the wavelength direction in a predetermined order, and can stop the subsequent integration processes when a predetermined parameter representing the image quality of the image composed of the phase difference pixel values satisfies a predetermined condition.
  • an imaging element can be further provided in which a plurality of first phase difference pixels are arranged, each being a pixel including at least one of a first photoelectric conversion unit located at a position biased in a first direction of the light receiving surface and a second photoelectric conversion unit located at a position biased in a second direction opposite to the first direction of the light receiving surface.
  • in the imaging element, a plurality of second phase difference pixels can also be arranged, each being a pixel including at least one of a third photoelectric conversion unit located at a position biased in a third direction orthogonal to the first and second directions of the light receiving surface and a fourth photoelectric conversion unit located at a position biased in a fourth direction opposite to the third direction.
  • the first phase difference pixel can be provided with both the first photoelectric conversion unit and the second photoelectric conversion unit, the second phase difference pixel can be provided with both the third photoelectric conversion unit and the fourth photoelectric conversion unit, and the first phase difference pixels and the second phase difference pixels can be arranged in a predetermined pattern on the imaging element.
  • the first phase difference pixel may include a first pixel that receives invisible light in a first wavelength band.
  • the first phase difference pixel may further include a second pixel that receives invisible light in the first wavelength band and visible light in the second wavelength band.
  • the first phase difference pixels can include a first pixel that receives visible light in a first wavelength band, a second pixel that receives visible light in a second wavelength band, and a third pixel that receives visible light in a third wavelength band including the first wavelength band and the second wavelength band and invisible light in a fourth wavelength band.
  • the first phase difference pixels can include a first pixel that receives visible light in a first wavelength band and invisible light in a second wavelength band, a second pixel that receives visible light in a third wavelength band and invisible light in the second wavelength band, and a third pixel that receives visible light in a fourth wavelength band including the first wavelength band and the third wavelength band and invisible light in the second wavelength band.
  • an imaging element can be further provided in which a plurality of phase difference pixels are arranged, each being a pixel including at least one of a first photoelectric conversion layer composed of a plurality of stacked photoelectric conversion units that are located at positions biased in a first direction of the light receiving surface and receive light of different wavelength bands, and a second photoelectric conversion layer composed of a plurality of stacked photoelectric conversion units that are located at positions biased in a second direction opposite to the first direction and receive light of different wavelength bands.
  • an image processing method according to one aspect of the present technology includes an integration processing step of adaptively weighting and integrating, in at least one of a time direction, a spatial direction, and a wavelength direction, phase difference pixel values, each of which is a pixel value based on incident light on a partial region at a biased position of the light receiving surface of a pixel, and a phase difference detection step of detecting, based on the integrated phase difference pixel values, a phase difference between an image composed of a plurality of phase difference pixel values corresponding to a plurality of first partial regions biased in the same direction and an image composed of a plurality of phase difference pixel values corresponding to a plurality of second partial regions biased in the direction opposite to the first partial regions.
  • a program according to one aspect of the present technology causes a computer to execute processing including an integration processing step of adaptively weighting and integrating, in at least one of a time direction, a spatial direction, and a wavelength direction, phase difference pixel values, each of which is a pixel value based on incident light on a partial region at a biased position of the light receiving surface of a pixel, and a phase difference detection step of detecting, based on the integrated phase difference pixel values, a phase difference between an image composed of a plurality of phase difference pixel values corresponding to a plurality of first partial regions biased in the same direction and an image composed of a plurality of phase difference pixel values corresponding to a plurality of second partial regions biased in the direction opposite to the first partial regions.
  • in one aspect of the present technology, phase difference pixel values, each of which is a pixel value based on incident light on a partial region at a biased position of the light receiving surface of a pixel, are adaptively weighted and integrated in at least one of the time direction, the spatial direction, and the wavelength direction, and a phase difference is detected, based on the integrated phase difference pixel values, between the plurality of phase difference pixel values corresponding to the plurality of first partial regions biased in the same direction and the plurality of phase difference pixel values corresponding to the plurality of second partial regions biased in the opposite direction.
  • FIG. 1 is a block diagram illustrating a configuration example of a camera 101a which is an embodiment of a digital camera to which the present technology is applied.
  • the camera 101a includes a lens 111, an image sensor 112a, an AGC (Automatic Gain Control) unit 113, an ADC (Analog Digital Converter) unit 114, a pixel interpolation unit 115, a captured image signal processing unit 116, a display system driving unit 117, an output display monitor 118, a captured image compression unit 119, a captured image storage unit 120, an AF (Auto Focus) control unit 121, and a lens driving unit 122.
  • the AF control unit 121 includes a focus detection image acquisition unit 131, an integration processing unit 132, a phase difference detection unit 133, a focus position detection unit 134, a focus control unit 135, a focus detection image storage unit 136, and a chromatic aberration data storage unit 137.
  • the lens 111 forms an image of light (incident light) from the subject on the imaging surface of the imaging element 112a.
  • the lens 111 can move in the optical axis direction under the control of the lens driving unit 122. As the lens 111 moves in the optical axis direction, the focal position of the lens 111 moves in the optical axis direction.
  • the lens 111 can be configured by a lens system in which a plurality of lenses are combined.
  • the image sensor 112a is composed of an image plane phase difference image sensor having a phase difference detection function.
  • the image sensor 112a receives light transmitted through the lens 111, captures an image of the received light, and outputs an obtained image signal (hereinafter also simply referred to as an image).
  • the image sensor 112a can output an image used for phase difference detection AF (hereinafter referred to as a phase difference image) in addition to a normal image (hereinafter referred to as a captured image).
  • the AGC unit 113 controls the gain used to amplify the captured image and the phase difference image according to the SN ratio of the captured image and the phase difference image supplied from the image sensor 112a and the brightness of the subject environment.
  • the AGC unit 113 can control the gain independently for each of the captured image and the phase difference image.
  • the ADC unit 114 converts the captured image and the phase difference image, which are analog image signals, into digital image signals.
  • the ADC unit 114 supplies the captured image and the phase difference image converted to digital to the pixel interpolation unit 115.
  • the pixel interpolation unit 115 performs demosaic processing on the captured image and phase difference image that have been subjected to white balance processing by a white balance processing unit (not shown). That is, at each pixel position of the captured image and the phase difference image, the pixel interpolation unit 115 interpolates the pixel values of the wavelength bands not detected at that position, among the red (R), green (G), and blue (B) wavelength bands, using the pixel values of surrounding pixels.
  • the pixel interpolation unit 115 supplies the captured image after the demosaic process to the captured image signal processing unit 116 and the focus detection image acquisition unit 131 of the AF control unit 121.
  • the pixel interpolation unit 115 supplies the demosaiced phase difference image to the focus detection image acquisition unit 131 of the AF control unit 121.
  • the captured image signal processing unit 116 performs various signal processing after the demosaic process in the development process on the captured image.
  • the captured image signal processing unit 116 supplies the captured image after performing various signal processings to the display system driving unit 117 and the captured image compression unit 119.
  • the display system drive unit 117 displays the captured image and various GUIs (Graphical User Interface) on the output display monitor 118.
  • the output display monitor 118 is configured by a display such as an LCD (Liquid Crystal Display) or an organic EL display.
  • the photographed image compression unit 119 compresses the photographed image in a format that can be stored in the photographed image storage unit 120.
  • the captured image compression unit 119 stores the compressed captured image in the captured image storage unit 120.
  • the photographed image storage unit 120 includes, for example, a hard disk or a nonvolatile memory.
  • the captured image storage unit 120 includes a removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and a drive that drives the removable medium.
  • the AF control unit 121 detects the in-focus position of the lens 111 based on the captured image and the phase difference image. Then, the AF control unit 121 adjusts the position of the lens 111 in the optical axis direction by controlling the lens driving unit 122 so that the lens 111 is focused on the subject based on the detection result.
  • the focus detection image acquisition unit 131 stores the captured image and phase difference image supplied from the pixel interpolation unit 115 in the focus detection image storage unit 136 or supplies them to the integration processing unit 132.
  • the focus detection image acquisition unit 131 detects the SN ratios of the captured image and the phase difference image. For example, the focus detection image acquisition unit 131 estimates the SN ratios of the captured image and the phase difference image based on the settings, such as the exposure time and the AGC gain, made by the captured image AE unit 152 and the phase difference image AE unit 154 and on the signal levels of the obtained images, and uses the estimated SN ratios as the detection values.
  • the focus detection image acquisition unit 131 sets the exposure time of the image sensor 112a and the gain of the AGC unit 113 based on the detection result.
  • the focus detection image acquisition unit 131 supplies an exposure time control signal indicating the set exposure time to the image sensor 112a, and supplies an AGC gain control signal indicating the set gain to the AGC unit 113.
  • the integration processing unit 132 performs integration processing of pixel values of each pixel of the phase difference image that is adaptively weighted in the time direction, the spatial direction, and the wavelength direction, as will be described later.
  • the integration processing unit 132 supplies the phase difference image after the integration processing to the phase difference detection unit 133.
  • the phase difference detection unit 133 detects a phase shift amount indicating the phase difference of the image caused by the focus shift of the lens 111 based on the phase difference image after the integration process.
  • the phase difference detection unit 133 supplies the detection result of the phase shift amount to the focus position detection unit 134.
  • the focus position detection unit 134 detects the focus position of the lens 111 based on the detection result of the phase shift amount.
  • the focus position detection unit 134 supplies the focus position detection result to the focus control unit 135.
  • the focus control unit 135 controls the lens driving unit 122 to control the focus position of the lens 111.
  • the focus control unit 135 controls the lens driving unit 122 based on the detection result of the focus position, moves the lens 111 in the optical axis direction, and focuses the lens 111.
  • the focus detection image storage unit 136 temporarily stores a plurality of frames of captured images and phase difference images necessary for detecting the focus position.
  • the chromatic aberration data storage unit 137 stores chromatic aberration data of each lens or lens unit that can be used as the lens 111.
  • the lens driving unit 122 includes, for example, an actuator or the like, and moves the lens 111 in the optical axis direction.
  • FIG. 2 is a block diagram illustrating a configuration example of the focus detection image acquisition unit 131 and the integration processing unit 132 of the AF control unit 121 of the camera 101a.
  • the focus detection image acquisition unit 131 includes a captured image acquisition unit 151, a captured image AE (Automatic Exposure) unit 152, a phase difference image acquisition unit 153, and a phase difference image AE (Automatic Exposure) unit 154.
  • the integration processing unit 132 includes a time direction weight adjustment unit 161, a time direction integration processing unit 162, a spatial direction weight adjustment unit 163, a spatial direction integration processing unit 164, a wavelength direction weight adjustment unit 165, and a wavelength direction integration processing unit 166.
  • the captured image acquisition unit 151 acquires a captured image after demosaic processing from the pixel interpolation unit 115.
  • the captured image acquisition unit 151 stores the acquired captured image in the focus detection image storage unit 136.
  • the captured image acquisition unit 151 supplies the captured image to the captured image AE unit 152 and the time direction weight adjustment unit 161 as necessary.
  • the photographed image AE unit 152 sets the exposure time and gain when the photographed image is photographed next, according to the SN ratio of the photographed image.
  • the photographed image AE unit 152 adjusts the exposure time of the image sensor 112a by supplying an exposure time control signal to the image sensor 112a.
  • the captured image AE unit 152 adjusts the gain of the AGC unit 113 by supplying an AGC gain control signal to the AGC unit 113.
  • the phase difference image acquisition unit 153 acquires a phase difference image after demosaic processing from the pixel interpolation unit 115.
  • the phase difference image acquisition unit 153 stores the acquired phase difference image in the focus detection image storage unit 136. Further, the phase difference image acquisition unit 153 supplies the phase difference image to the phase difference image AE unit 154 and the time direction weight adjustment unit 161 as necessary.
  • the phase difference image AE unit 154 sets an exposure time and a gain when the phase difference image is next photographed according to the SN ratio of the phase difference image.
  • the phase difference image AE unit 154 adjusts the exposure time of the image sensor 112a by supplying an exposure time control signal to the image sensor 112a. Further, the phase difference image AE unit 154 adjusts the gain of the AGC unit 113 by supplying an AGC gain control signal to the AGC unit 113. Further, the phase difference image AE unit 154 detects the SN ratio of the phase difference image, and notifies the phase difference image acquisition unit 153 of the detection result.
  • the time direction weight adjustment unit 161 adjusts weights used for integration processing in the time direction of the captured image and the phase difference image based on the captured image and the phase difference image, as will be described later.
  • the time direction weight adjustment unit 161 supplies the captured image, the phase difference image, and information indicating the set weight to the time direction integration processing unit 162.
  • the time direction integration processing unit 162 performs integration processing in the time direction of the captured image and the phase difference image using the weight set by the time direction weight adjustment unit 161, as will be described later.
  • the time direction integration processing unit 162 supplies the captured image and phase difference image after the integration processing to the space direction weight adjustment unit 163 and the space direction integration processing unit 164. Further, the time direction integration processing unit 162 supplies the phase difference image to the phase difference detection unit 133.
  • the spatial direction weight adjustment unit 163 adjusts the weight used for integration processing of the captured image and the phase difference image in the spatial direction based on the captured image and the phase difference image.
  • the spatial direction weight adjustment unit 163 supplies information indicating the set weight to the spatial direction integration processing unit 164.
  • the spatial direction integration processing unit 164 performs integration processing in the spatial direction of the captured image and the phase difference image using the weight set by the spatial direction weight adjustment unit 163.
  • the spatial direction integration processing unit 164 supplies the captured image and the phase difference image after the integration processing to the wavelength direction weight adjustment unit 165 and the wavelength direction integration processing unit 166. Further, the spatial direction integration processing unit 164 supplies the phase difference image to the phase difference detection unit 133.
  • the wavelength direction weight adjustment unit 165 adjusts the weight used for the integration processing in the wavelength direction of the captured image and the phase difference image based on the captured image and the phase difference image, as will be described later.
  • the wavelength direction weight adjustment unit 165 supplies information indicating the set weight to the wavelength direction integration processing unit 166.
  • the wavelength direction integration processing unit 166 performs integration processing in the wavelength direction of the phase difference image using the weight set by the wavelength direction weight adjustment unit 165 as described later.
  • the wavelength direction integration processing unit 166 supplies the phase difference image to the phase difference detection unit 133.
  • the upper diagram of FIG. 3 schematically shows an exploded view of the pixel 201a viewed from the lateral direction, and the lower diagram schematically shows a plan view of the pixel 201a viewed from above.
  • in the pixel 201a, an on-chip microlens 211, a wavelength selection filter 212, a light shielding unit 213a, and photoelectric conversion units 214L and 214R are stacked in this order.
  • the light incident on the on-chip microlens 211 is condensed toward the center of the light receiving surface of the pixel 201a, which is the optical axis center of the on-chip microlens 211. Then, a component of a predetermined wavelength band of incident light is transmitted by the wavelength selection filter 212 and is incident on the light receiving regions of the photoelectric conversion units 214L and 214R that are not shielded by the light shielding unit 213a.
  • the light shielding unit 213a has the effects of preventing color mixture with adjacent pixels and of pupil-dividing the pixel 201a.
  • the photoelectric conversion units 214L and 214R are each composed of a photoelectric conversion element such as a photodiode, for example. Note that the photoelectric conversion units 214L and 214R may employ any structure such as a thin film as well as a semiconductor substrate such as silicon.
  • the photoelectric conversion unit 214L and the photoelectric conversion unit 214R are arranged in a horizontal direction (row direction, left-right direction) with a predetermined interval. That is, the photoelectric conversion unit 214L is disposed at a position that is biased to the left of the light receiving surface of the pixel 201a. The light receiving region of the photoelectric conversion unit 214L is eccentric to the left with respect to the on-chip microlens 211. Opposite to the photoelectric conversion unit 214L, the photoelectric conversion unit 214R is disposed at a position biased to the right of the light receiving surface of the pixel 201a. The light receiving region of the photoelectric conversion unit 214R is eccentric to the right with respect to the on-chip microlens 211.
  • the photoelectric conversion unit 214L receives light incident on substantially the left half of the light receiving surface of the pixel 201a, and outputs a pixel signal corresponding to the amount of light received.
  • the photoelectric conversion unit 214R receives light incident on substantially the right half of the light receiving surface of the pixel 201a, and outputs a pixel signal corresponding to the amount of received light.
  • the incident light is pupil-divided in the horizontal direction (row direction, left-right direction) within the imaging surface.
  • the pixel 201a can individually output the pixel signals of the photoelectric conversion units 214L and 214R, or can add and output the pixel signals. Pixel signals individually output from the photoelectric conversion units 214L and 214R are used as signals for phase difference detection.
  • the signal obtained by adding the two pixel signals is a pixel signal based on light incident on the entire light receiving surface of the pixel 201a, and is used as a signal for normal photographing. A photographed image is generated by the normal photographing signal.
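  • Conceptually, the two readout modes of such a pupil-divided pixel can be modeled as follows (a sketch with synthetic data, not the sensor's actual readout interface):

```python
import numpy as np

# Hedged model of a horizontally pupil-divided pixel: qL and qR stand for the
# pixel signals of the left and right photoelectric conversion units 214L and 214R.
rng = np.random.default_rng(0)
qL = rng.uniform(0, 512, size=(4, 4))   # left phase difference pixel values
qR = rng.uniform(0, 512, size=(4, 4))   # right phase difference pixel values

phase_difference_signals = (qL, qR)     # output individually: used for phase difference detection
normal_pixel_value = qL + qR            # added together: pixel value based on the whole light receiving surface
```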
  • FIG. 3 shows an example in which the pupil is divided in the horizontal direction, in the image sensor 112a, there are pixels that divide the pupil in the vertical direction (column direction, vertical direction) in the imaging surface.
  • in a pixel that is pupil-divided in the vertical direction within the imaging plane, the photoelectric conversion unit arranged on the upper side is referred to as a photoelectric conversion unit 214U, and the photoelectric conversion unit arranged on the lower side is referred to as a photoelectric conversion unit 214D.
  • the photoelectric conversion unit 214 when it is not necessary to distinguish the photoelectric conversion units 214L to 214D from each other, they are simply referred to as the photoelectric conversion unit 214.
  • the pixel values indicated by the pixel signals of the photoelectric conversion units 214L, 214R, 214U, and 214D (hereinafter also referred to as phase difference pixel values) are referred to as pixel values qL, qR, qU, and qD, respectively.
  • phase difference pixel value in the direction d of the coordinates (x, y) of the phase difference image is represented by qd (x, y).
  • the direction d is one of L indicating the left direction, R indicating the right direction, U indicating the upward direction, and D indicating the downward direction.
  • the wavelength band λ is one of λr indicating a red wavelength band (hereinafter referred to as the R wavelength band), λg indicating a green wavelength band (hereinafter referred to as the G wavelength band), and λb indicating a blue wavelength band (hereinafter referred to as the B wavelength band).
  • the pixel value of the captured image (hereinafter also referred to as a captured pixel value) is represented by p(x, y), p(x, y, λ), or p(t, x, y, λ).
  • the captured pixel value is a pixel value obtained by adding the phase difference pixel values qL and qR of the same pixel, or by adding the phase difference pixel values qU and qD of the same pixel.
  • a phase difference pixel corresponding to the photoelectric conversion unit 214L is referred to as a left phase difference pixel, one corresponding to the photoelectric conversion unit 214R as a right phase difference pixel, one corresponding to the photoelectric conversion unit 214U as an upward phase difference pixel, and one corresponding to the photoelectric conversion unit 214D as a downward phase difference pixel.
  • FIG. 4 shows an arrangement example of pixels of the image sensor 112a.
  • the pixels are arranged according to a Bayer array having 2 × 2 pixels as one unit, as shown in FIG. That is, G pixels (a Gr pixel and a Gb pixel) that detect light in the G wavelength band are arranged in one oblique direction, and an R pixel that detects light in the R wavelength band and a B pixel that detects light in the B wavelength band are arranged in the opposite oblique direction.
  • the Gr pixel is a G pixel arranged in the same row as the R pixel
  • the Gb pixel is a G pixel arranged in the same row as the B pixel.
  • for the wavelength selection filter 212 of the B pixel, for example, a B filter that transmits light in a wavelength band of about 405 to 500 nm is used.
  • the B filter is realized, for example, by stacking a B + IRb filter and an infrared cut filter.
  • for the wavelength selection filter 212 of the G pixel, for example, a G filter that transmits light in a wavelength band of about 475 to 600 nm is used.
  • the G filter is realized, for example, by stacking a G + IRg filter and an infrared cut filter.
  • for the wavelength selection filter 212 of the R pixel, for example, an R filter that transmits light in a wavelength band of about 580 to 650 nm is used.
  • the R filter is realized, for example, by stacking an R + IRb filter and an infrared cut filter.
  • FIG. 5 is a graph showing examples of spectral characteristics of the B + IRb filter, the G + IRg filter, and the R + IRr filter.
  • the horizontal axis of the graph indicates the wavelength, and the vertical axis indicates the spectral transmittance.
  • Waveform 251 shows an example of the spectral characteristics of the B + IRb filter.
  • the B + IRb filter transmits not only the B wavelength band (about 405 to 500 nm) but also a part of the near infrared wavelength band (hereinafter referred to as the IR wavelength band) (about 800 to 900 nm).
  • Waveform 252 shows an example of spectral characteristics of the G + IRg filter.
  • the G + IRg filter transmits not only the G wavelength band (about 475 to 600 nm) but also a part of the IR wavelength band (about 800 to 900 nm).
  • Waveform 253 shows an example of spectral characteristics of the R + IRr filter.
  • the R + IRr filter transmits not only the R wavelength band (about 580 to 650 nm) but also a part of the IR wavelength band (about 800 to 900 nm).
  • the infrared cut filter blocks light of 700 nm or more, for example.
  • FIGS. 6 and 7 show examples of the pupil division direction of each pixel. As shown in FIGS. 6 and 7, pixels that are pupil-divided in the horizontal direction and pixels that are pupil-divided in the vertical direction are regularly arranged on the imaging surface of the imaging element 112a.
  • the Gr pixels are pupil-divided in the horizontal direction, and the Gb pixels are pupil-divided in the vertical direction.
  • R pixels exist every two rows or every two columns, and R pixels that are pupil-divided in the horizontal direction and R pixels that are pupil-divided in the vertical direction are arranged alternately in the horizontal and vertical directions.
  • for example, the R pixel in the 1st column of the 0th row is pupil-divided in the horizontal direction, the R pixel in the 3rd column of the 0th row is pupil-divided in the vertical direction, and the R pixel in the 5th column of the 0th row is pupil-divided in the horizontal direction.
  • similarly, looking down the 1st column, the R pixel in the 0th row is pupil-divided in the horizontal direction, the R pixel in the 2nd row is pupil-divided in the vertical direction, and the R pixel in the 4th row is pupil-divided in the horizontal direction.
  • B pixels also exist every two rows or every two columns, and are arranged so that B pixels pupil-divided in the horizontal direction and B pixels pupil-divided in the vertical direction alternate in the horizontal and vertical directions.
  • in step S1, the camera 101a performs AE control for the captured image. Specifically, the image sensor 112a sets the exposure time based on the exposure time control signal supplied from the captured image AE unit 152, and the AGC unit 113 sets the gain based on the AGC gain control signal supplied from the captured image AE unit 152.
  • the exposure time is set according to the illuminance of the subject. If the exposure time is set to the longest time in the frame and the sensitivity is insufficient, the gain is increased.
  • the gain and exposure time for the captured image are set based on the detection result of the S / N ratio of the captured image in step S3.
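  • A hedged sketch of the AE policy described above (lengthen the exposure first, and raise the gain only once the exposure has reached the per-frame maximum; all thresholds and step sizes are illustrative assumptions):

```python
def set_exposure_and_gain(snr_db, exposure_s, gain_db,
                          max_exposure_s=1 / 30, target_snr_db=30.0,
                          exposure_step=1.25, gain_step_db=3.0):
    """Sketch of the AE policy: prefer a longer exposure, and increase the gain
    only when the exposure already uses the longest time available in the frame."""
    if snr_db >= target_snr_db:
        return exposure_s, gain_db            # sensitivity is sufficient
    if exposure_s < max_exposure_s:
        return min(exposure_s * exposure_step, max_exposure_s), gain_db
    return exposure_s, gain_db + gain_step_db  # exposure maxed out, raise gain
```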
  • in step S2, the camera 101a acquires a captured image. Specifically, the image sensor 112a supplies a captured image obtained as a result of imaging to the AGC unit 113. That is, the image sensor 112a adds and outputs the pixel signals of the photoelectric conversion units 214 of each pixel, and supplies the resulting captured image to the AGC unit 113.
  • the AGC unit 113 amplifies the signal level of the captured image with the gain set in the process of step S1 and supplies the amplified captured image to the ADC unit 114.
  • the ADC unit 114 AD-converts the captured image and supplies the AD-converted captured image to the pixel interpolation unit 115.
  • the pixel interpolation unit 115 performs white balance processing on the captured image and then performs demosaic processing. That is, the pixel interpolation unit 115 interpolates, in each pixel of the captured image, the pixel values of the wavelength bands not detected at that pixel, among the R, G, and B wavelength bands, using the pixel values of surrounding pixels.
  • the pixel interpolation unit 115 supplies the captured image after the demosaic process to the captured image signal processing unit 116 and the captured image acquisition unit 151.
  • the captured image acquisition unit 151 supplies the acquired captured image to the captured image AE unit 152 and stores it in the focus detection image storage unit 136.
  • in step S3, the captured image AE unit 152 detects the SN ratio of the captured image using an arbitrary method.
  • in step S4, the camera 101a performs AE control for the phase difference image.
  • the image sensor 112a sets the exposure time based on the exposure time control signal supplied from the phase difference image AE unit 154.
  • the AGC unit 113 sets a gain based on the AGC gain control signal supplied from the phase difference image AE unit 154.
  • the exposure time is set according to the illuminance of the subject. If the exposure time is set to the longest time in the frame and the sensitivity is insufficient, the gain is increased.
  • the phase difference image gain and exposure time are set based on the detection result of the S / N ratio of the phase difference image in step S6.
  • in step S5, the camera 101a acquires a phase difference image. Specifically, the image sensor 112a supplies a phase difference image obtained as a result of imaging to the AGC unit 113. That is, the image sensor 112a individually outputs the pixel signal of each photoelectric conversion unit 214 of each pixel, and supplies the resulting phase difference image to the AGC unit 113.
  • the amount of incident light contributing to a phase difference pixel value of the phase difference image is limited to about half of the amount contributing to a captured pixel value of the captured image. Therefore, the imaging sensitivity of the phase difference image is lower than that of the captured image, and as a result, the SN ratio of the phase difference image is lower than the SN ratio of the captured image.
  • the AGC unit 113 amplifies the signal level of the phase difference image with the gain set in the process of step S4 and supplies the amplified signal level to the ADC unit 114.
  • the ADC unit 114 performs AD conversion on the phase difference image and supplies the AD converted phase difference image to the pixel interpolation unit 115.
  • the pixel interpolation unit 115 then performs demosaic processing. That is, the pixel interpolation unit 115 interpolates, in each phase difference pixel of the phase difference image, the pixel values of the wavelength bands not detected at that pixel, among the R, G, and B wavelength bands, using the pixel values of surrounding phase difference pixels. At this time, the pixel interpolation unit 115 performs the interpolation using the pixel values of phase difference pixels of the same direction as the phase difference pixel to be interpolated. For example, when the interpolation target is a left phase difference pixel, the pixel interpolation unit 115 performs the interpolation using the pixel values of the surrounding left phase difference pixels. When white balance is necessary for the integration in the wavelength direction, the pixel interpolation unit 115 may, for example, apply the white balance gain of the captured image to the phase difference image.
  • the pixel interpolation unit 115 supplies the phase difference image after the demosaic process to the phase difference image acquisition unit 153.
  • the phase difference image acquisition unit 153 supplies the acquired phase difference image to the phase difference image AE unit 154 and stores it in the focus detection image storage unit 136.
  • in step S6, the phase difference image AE unit 154 detects the SN ratio of the phase difference image using an arbitrary method.
  • the phase difference image AE unit 154 notifies the phase difference image acquisition unit 153 of the detection result of the SN ratio of the phase difference image.
  • in step S7, the phase difference image acquisition unit 153 determines whether or not the SN ratio of the phase difference image is less than a predetermined threshold value. If it is determined that the SN ratio of the phase difference image is less than the predetermined threshold, the process proceeds to step S8.
  • the phase difference image has lower photographing sensitivity than the photographed image.
  • under a situation where the illuminance of the subject is high, the camera 101a can compensate for the lack of sensitivity of the phase difference image by performing AE control separately when capturing the captured image and the phase difference image. For example, the camera 101a can set a longer exposure time when capturing the phase difference image to compensate for insufficient sensitivity, and a shorter exposure time when capturing the captured image to prevent pixel value saturation.
  • on the other hand, when the SN ratio of the phase difference image is less than the predetermined threshold value, at least one of the integration processes in the time direction, the spatial direction, and the wavelength direction is performed on the phase difference image in the processing from step S8 onward.
  • the signal component of the phase difference image is amplified, the noise component is relatively reduced, and the SN ratio of the phase difference image is improved.
  • in step S8, the camera 101a performs the time direction integration processing. For example, the camera 101a performs weighted addition of the pixel values of the phase difference pixels of the same direction and the same wavelength band at the same position between the phase difference image of the latest frame and the phase difference images of past frames.
  • the amount of movement of the subject greatly affects the correlation between the frames of pixels at the same position. That is, the correlation between frames of pixels at the same position increases as the amount of motion between frames of the subject decreases, and decreases as the amount of motion between frames of the subject increases.
  • the detection accuracy of the amount of motion between the frames of the subject is higher when using a captured image having a higher S / N ratio than the phase difference image.
  • the photographic pixel value of the photographic image and the phase difference pixel value of the phase difference image have a high correlation in the time direction. Therefore, the time direction weight adjustment unit 161 sets the weight used for the time direction integration processing based on the amount of movement of the subject between frames in the captured image.
  • the captured image acquisition unit 151 reads the latest frame and the past captured image from the focus detection image storage unit 136 and supplies them to the time direction weight adjustment unit 161.
  • the phase difference image acquisition unit 153 reads the latest frame and the past phase difference image from the focus detection image storage unit 136 and supplies them to the time direction weight adjustment unit 161.
  • the time direction weight adjustment unit 161 spatially smoothes the captured images of the latest frame and the previous frame using a low-pass filter such as a Gaussian filter or a moving average filter.
  • the time direction weight adjustment unit 161 calculates a difference between two smoothed captured images, and calculates a motion coefficient of a subject in each pixel based on the result.
  • the time direction weight adjustment unit 161 sets a weight for each pixel of the captured image based on the calculated motion coefficient.
  • the time direction weight adjustment unit 161 decreases the weight for a pixel with a larger motion coefficient (larger subject motion) and increases the weight for a pixel with a smaller motion coefficient (smaller subject motion).
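  • A minimal sketch of this weight setting, assuming a Gaussian low-pass for the spatial smoothing and an exponential mapping from the frame difference to the weight (the publication only states that a larger motion coefficient gives a smaller weight):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def time_direction_weights(captured_now, captured_prev, sigma_space=2.0, sigma_motion=16.0):
    """Hedged sketch: per-pixel weight (feedback rate) from the motion between frames.
    The Gaussian low-pass and the exponential mapping are illustrative assumptions."""
    smooth_now = gaussian_filter(np.asarray(captured_now, dtype=np.float64), sigma_space)
    smooth_prev = gaussian_filter(np.asarray(captured_prev, dtype=np.float64), sigma_space)
    motion = np.abs(smooth_now - smooth_prev)                 # motion coefficient per pixel
    return np.exp(-(motion ** 2) / (2.0 * sigma_motion ** 2))  # large motion -> small weight
```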
  • the time direction weight adjustment unit 161 supplies information indicating the weight for each set pixel to the time direction integration processing unit 162. In addition, the time direction weight adjustment unit 161 supplies the latest captured image and phase difference image of the frame, and the captured image and phase difference image of the previous frame to the time direction integration processing unit 162.
  • the time direction integration processing unit 162 performs weighted addition in the time direction of each phase difference pixel value of the phase difference image using the weight set by the time direction weight adjustment unit 161.
  • the time direction integration processing unit 162 performs the weighted addition in the time direction of each phase difference pixel value of the phase difference image using, for example, an IIR (Infinite Impulse Response) filter as shown in FIG. Specifically, the time direction integration processing unit 162 performs the weighted integration processing in the time direction using the IIR filter on the phase difference image of the latest frame according to the following equation (1).
  • the time direction integration process according to the equation (1) is a blend process based on the feedback rate k between the phase difference pixel value of the current frame and the phase difference pixel value of the past frame.
  • This time direction integration process is performed between phase difference pixel values in the same wavelength band. That is, by performing demosaic processing, phase difference pixel values in the R wavelength band, the G wavelength band, and the B wavelength band are obtained for the same phase difference pixel. Therefore, the time direction integration processing unit 162 weights and adds the phase difference pixel values of the same wavelength band of each phase difference pixel in the time direction according to Expression (1).
  • the weight set for each pixel of the captured image by the time direction weight adjustment unit 161 is used for the feedback rate k. That is, the weight set for the pixels at the same position in the captured image is used as the feedback rate k for each phase difference pixel of the phase difference image. Therefore, the same feedback factor k is used for the two phase difference pixels in the same pixel of the captured image.
  • for a pixel in which the motion of the subject is large, the feedback rate k is small, and the weight given to the phase difference pixel value of the past frame is small.
  • conversely, for a pixel in which the motion of the subject is small, the feedback rate k is large, and the weight given to the phase difference pixel value of the past frame is large.
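  • The exact form of equation (1) is not reproduced in this excerpt, but the blend it describes amounts to the following per-pixel IIR update (a sketch; the variable names are illustrative):

```python
import numpy as np

def iir_time_integration(q_current, q_past_integrated, k):
    """Hedged sketch of the time-direction IIR blend described for equation (1):
    mix the current frame's phase difference pixel values with the integrated
    values of past frames using the per-pixel feedback rate k (0 <= k <= 1).
    A small k (large subject motion) keeps mostly the current frame; a large k
    (small motion) keeps more of the past frames."""
    q_current = np.asarray(q_current, dtype=np.float64)
    q_past_integrated = np.asarray(q_past_integrated, dtype=np.float64)
    return (1.0 - k) * q_current + k * q_past_integrated
```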
  • the time direction integration processing unit 162 performs weighted integration processing in the time direction using the IIR filter on the captured image of the latest frame according to the following equation (2).
  • the time direction integration process according to the equation (2) is a blend process based on the feedback rate k between the captured pixel value of the current frame and the captured pixel value of the past frame.
  • this time direction integration processing is performed between imaged pixel values in the same wavelength band. That is, by performing demosaic processing, imaging pixel values in the R wavelength band, the G wavelength band, and the B wavelength band are obtained for the same imaging pixel. Therefore, the time direction integration processing unit 162 weights and adds the photographic pixel values in the same wavelength band of the respective photographic pixels by Expression (2).
  • the time direction integration processing unit 162 supplies the captured image and the phase difference image after the time direction integration processing to the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164.
  • time direction integration processing is not limited to the method described above.
  • an FIR (Finite Impulse Response) filter may be used instead of the IIR filter.
  • the feedback rate k used for integration of the phase difference image may be set based on the difference between frames of the phase difference image. That is, the feedback rate k for each phase difference pixel of the phase difference image may be set based on the amount of movement of the subject in the phase difference image.
  • the threshold value here is set to a value lower than the threshold value used in the determination process in step S7, for example.
  • in step S9, the spatial direction integration processing unit 164 determines whether or not the SN ratio of the phase difference image is less than a predetermined threshold value. Specifically, the spatial direction integration processing unit 164 detects the SN ratio of the phase difference image after the time direction integration processing using an arbitrary method. If the spatial direction integration processing unit 164 determines that the SN ratio of the phase difference image is less than the predetermined threshold, the process proceeds to step S10. The threshold value of this determination process is set, for example, to the same value as the threshold value used in the determination process of step S7.
  • in step S10, the camera 101a performs the spatial direction integration processing.
  • specifically, for each phase difference pixel of the phase difference image after the time direction integration processing, the camera 101a performs weighted addition of the phase difference pixel values of the same direction and the same wavelength band in the neighborhood within the same frame.
  • the spatial direction integration processing unit 164 performs weighted integration processing in the spatial direction using an FIR filter having a finite number of taps in the horizontal direction and the vertical direction as shown in FIG.
  • typical smoothing filters that improve the S / N ratio of an image include a Gaussian filter and a moving average filter.
  • FIG. 11 shows an example of a Gaussian filter.
  • the Gaussian filters 261 to 263 are examples of one-dimensional Gaussian filters.
  • the Gaussian filter 264 is an example of a two-dimensional 5 ⁇ 5 pixel Gaussian filter.
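  • For reference, fixed coefficient kernels of the kind shown in FIG. 11 can be generated as follows (the 5-tap size and standard deviation are illustrative):

```python
import numpy as np

def gaussian_kernel_1d(radius=2, sigma=1.0):
    """1-D Gaussian smoothing kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

g1 = gaussian_kernel_1d(radius=2, sigma=1.0)   # one-dimensional, 5 taps
g2 = np.outer(g1, g1)                          # two-dimensional 5 x 5 kernel (separable)
```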
  • if such a fixed coefficient filter is used, however, the image may be blurred as a whole, and the edge components necessary for detecting the amount of phase shift may be weakened.
  • therefore, the spatial direction integration processing unit 164 uses a variable coefficient filter that adaptively adjusts the weights according to the pattern or design of the subject.
  • examples of such a filter include edge-preserving smoothing filters such as a bilateral filter, an epsilon filter, and a non-local means filter.
  • an edge-preserving smoothing filter requires more computation for the weight calculation than a fixed coefficient filter, but it can improve the SN ratio more strongly while keeping the phase difference image sharp.
  • in the following, a case where a bilateral filter is mainly used will be described as an example.
  • the spatial direction weight adjustment unit 163 increases the weight as the spatial direction correlation between the pixels is higher, and decreases the weight as the spatial direction correlation between the pixels is lower.
  • the similarity between the horizontal direction and the vertical direction of the subject greatly contributes to the correlation between pixels in the same frame. That is, the correlation between pixels in the same frame increases as the subject similarity between the pixels increases, and decreases as the subject similarity between the pixels decreases. Accordingly, the spatial direction weight adjustment unit 163 increases the weight as the subject similarity between the pixels is higher, and decreases the weight as the subject similarity between the pixels is lower.
  • in the captured image, high-frequency components contained in the subject pattern and the like appear with blur due to defocusing.
  • the phase difference image is composed of components obtained by pupil-dividing the light beam in the horizontal or vertical direction, but in the direction orthogonal to the pupil division direction within the imaging plane it is not affected by the eccentricity due to pupil division. For this reason, the phase difference image has a high correlation with the captured image in the direction orthogonal to the pupil division direction.
  • the phase difference image has a lower sensitivity than the captured image because the incident direction of the light beam is limited in the pupil division direction.
  • the phase difference image includes more high-frequency components of the subject than the photographed image, and blur is reduced. Therefore, the phase difference image has a low correlation with the captured image in the pupil division direction.
  • the phase difference image has a lower S / N ratio than the photographed image, but contains more high-frequency components than the photographed image.
  • the phase difference image has a high correlation with the captured image in the direction orthogonal to the pupil division direction, and has a low correlation with the captured image in the pupil division direction.
  • the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 perform the spatial direction integration processing while properly using the captured image and the phase difference image according to the SN ratio of the phase difference image. Specifically, the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 perform processing separately for a case where the SN ratio of the phase difference image is greater than or equal to a predetermined threshold and a case where it is less than the threshold. Note that the threshold value here is set to a value lower than the threshold value of the determination process in step S9, for example.
  • when the SN ratio of the phase difference image is equal to or greater than the predetermined threshold, the spatial direction weight adjustment unit 163 performs weighting based on the horizontal and vertical similarity of the phase difference image. Then, the spatial direction integration processing unit 164 performs a two-dimensional convolution integration process using these vertical and horizontal weights.
  • On the other hand, when the SN ratio of the phase difference image is less than the threshold, the spatial direction weight adjustment unit 163 performs weighting in the direction orthogonal to the pupil division direction (the direction of the photoelectric conversion units 214) based on the similarity of the captured image, which has a higher SN ratio than the phase difference image and a high correlation with the phase difference image in that direction. For the direction parallel to the pupil division direction (the direction of the photoelectric conversion units 214), the spatial direction weight adjustment unit 163 performs weighting based on the similarity of the phase difference image. Then, the spatial direction integration processing unit 164 performs a two-dimensional convolution integration process using these vertical and horizontal weights.
  • Specifically, when the SN ratios of the phase difference pixel values in both the vertical and horizontal directions of the phase difference image after the time direction integration processing are equal to or greater than the predetermined threshold, the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 calculate the phase difference pixel value qd″(x, y, λ) in the direction d of the wavelength band λ at each coordinate (x, y) of the phase difference image by the following equations (3) and (4).
  • the direction d is any one of the upward direction, the downward direction, the left direction, and the right direction
  • The wavelength band λ is any of the R wavelength band, the G wavelength band, and the B wavelength band.
  • qd′(x, y, λ) is the phase difference pixel value in the direction d of the wavelength band λ at the coordinate (x, y) after the time direction integration processing.
  • f(n, σh) is a weight based on the horizontal distance n between the coordinates (x, y) and the coordinates (x+n, y+m).
  • σh is the standard deviation of the Gaussian filter with respect to the distance in the horizontal direction within the imaging surface. This weight becomes larger as the distance n is closer, and becomes smaller as the distance n is farther, and mainly has the effect of smoothing the image in the horizontal direction.
  • f(m, σv) is a weight based on the vertical distance m between the coordinates (x, y) and the coordinates (x+n, y+m).
  • σv is the standard deviation of the Gaussian filter with respect to the distance in the vertical direction within the imaging surface. This weight becomes larger as the distance m is closer, and becomes smaller as the distance m is farther, and mainly has the effect of smoothing the image in the vertical direction.
  • the weight component due to the distance is calculated separately in the horizontal direction and the vertical direction.
  • Thereby, the dispersion (standard deviation σh and standard deviation σv) can be controlled independently in the horizontal direction and the vertical direction.
  • Wqd_hv(x, y, n, m, λ) is a weight based on the difference between the phase difference pixel values in the direction d of the wavelength band λ at the coordinates (x, y) and the coordinates (x+n, y+m) after the time direction integration processing.
  • σSq is the standard deviation of the Gaussian filter with respect to the difference in the phase difference pixel values within the imaging surface. This weight increases as the difference between the phase difference pixel values decreases, and decreases as the difference between the phase difference pixel values increases.
  • In a general bilateral filter, a luminance difference is often used instead of a per-wavelength-band pixel value difference.
  • With a luminance difference, however, even if the spectra (colors) of the phase difference pixel at the coordinates (x, y) and the phase difference pixel at the coordinates (x+n, y+m) are completely different, the weight does not become small as long as there is no luminance difference; the component of the phase difference pixel at the coordinates (x+n, y+m) therefore remains and causes erroneous detection.
  • By using the per-wavelength-band pixel value difference instead, the phase difference pixel component at the coordinates (x+n, y+m) of a wavelength band is convolved only into the coordinates (x, y) of the same wavelength band.
  • That is, when the SN ratio of the phase difference image is equal to or greater than the threshold, the spatial direction integration processing is performed using only the phase difference image (a minimal sketch of this weighting is given below).
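  • The following is a minimal sketch, in Python, of the separable edge-preserving weighting just described (the patent's equations (3) and (4) are not reproduced here); the array layout, σ values, kernel radius, and function name are illustrative assumptions, not values from the specification.

```python
import numpy as np

def spatial_bilateral_integration(qd, sigma_h=2.0, sigma_v=2.0, sigma_sq=10.0, radius=3):
    """Weighted spatial integration of one phase difference plane qd'(x, y)
    for a single direction d and wavelength band.

    The weights follow the structure described above:
      f(n, sigma_h)      : Gaussian of the horizontal distance n
      f(m, sigma_v)      : Gaussian of the vertical distance m
      Wqd_hv(x, y, n, m) : Gaussian of the phase-difference-value difference
    The sigma values and the radius are illustrative, not values from the patent.
    """
    h, w = qd.shape
    out = np.zeros_like(qd, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for m in range(-radius, radius + 1):
                for n in range(-radius, radius + 1):
                    yy, xx = y + m, x + n
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    # Distance weights, controlled independently per axis.
                    w_dist = np.exp(-n**2 / (2 * sigma_h**2)) * np.exp(-m**2 / (2 * sigma_v**2))
                    # Value-difference weight within the same wavelength band.
                    w_val = np.exp(-(qd[yy, xx] - qd[y, x])**2 / (2 * sigma_sq**2))
                    wgt = w_dist * w_val
                    acc += wgt * qd[yy, xx]
                    norm += wgt
            out[y, x] = acc / norm
    return out
```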
  • On the other hand, when the SN ratio of the phase difference pixel values in the vertical direction of the phase difference image after the time direction integration processing is less than the predetermined threshold, the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 calculate the phase difference pixel value qd″(x, y, λ) in the direction d of the wavelength band λ at each coordinate (x, y) by the following equations (5) to (7).
  • the direction d is the upward direction or the downward direction.
  • The wavelength band λ is any one of the R wavelength band, the G wavelength band, and the B wavelength band.
  • Wp_h(x, y, n, m, λ) is a weight based on the difference between the captured pixel values at the coordinates (x, y+m) and the coordinates (x+n, y+m) after the time direction integration processing.
  • σSp is the standard deviation of the Gaussian filter with respect to the difference in the captured pixel values within the imaging surface. This weight increases as the difference between the captured pixel values decreases, and decreases as the difference between the captured pixel values increases. This weight mainly brings about the effect of edge preservation in the horizontal direction.
  • Wqd_v(x, y, n, m, λ) is a weight based on the difference between the phase difference pixel values in the direction d of the wavelength band λ at the coordinates (x+n, y) and the coordinates (x+n, y+m) after the time direction integration processing. This weight increases as the difference between the phase difference pixel values decreases, and decreases as the difference increases. This weight mainly brings about the effect of edge preservation in the vertical direction.
  • In this way, for the phase difference pixel values in the vertical (up/down) direction, the spatial direction integration process is performed using both the captured image and the phase difference image. That is, the horizontal direction, which is orthogonal to the pupil division direction, is weighted based on the similarity of the captured image, while the vertical direction, which is parallel to the pupil division direction, is weighted based on the similarity of the phase difference image.
  • Similarly, when the SN ratio of the phase difference pixel values in the horizontal direction of the phase difference image after the time direction integration processing is less than the predetermined threshold, the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 calculate the phase difference pixel value qd″(x, y, λ) in the direction d of the wavelength band λ at each coordinate (x, y) by the following equations (8) to (10).
  • the direction d is the left direction or the right direction.
  • The wavelength band λ is any one of the R wavelength band, the G wavelength band, and the B wavelength band.
  • Wqd_h(x, y, n, m, λ) is a weight based on the difference between the phase difference pixel values in the direction d of the wavelength band λ at the coordinates (x, y+m) and the coordinates (x+n, y+m) after the time direction integration processing. This weight increases as the difference between the phase difference pixel values decreases, and decreases as the difference increases. This weight mainly brings about the effect of edge preservation in the horizontal direction.
  • Wp_v(x, y, n, m, λ) is a weight based on the difference between the captured pixel values at the coordinates (x+n, y) and the coordinates (x+n, y+m) after the time direction integration processing. This weight increases as the difference between the captured pixel values decreases, and decreases as the difference increases. This weight mainly brings about the effect of edge preservation in the vertical direction.
  • In this way, for the phase difference pixel values in the horizontal (left/right) direction as well, the spatial direction integration process is performed using both the captured image and the phase difference image. That is, the vertical direction, which is orthogonal to the pupil division direction, is weighted based on the similarity of the captured image, while the horizontal direction, which is parallel to the pupil division direction, is weighted based on the similarity of the phase difference image (a sketch of this mixed weighting is given below).
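  • Below is a hedged sketch of the case just described, in which the value-difference weight for the axis orthogonal to the pupil division direction is taken from the captured image and the weight along the pupil division axis is taken from the phase difference image; the distance weights of the previous sketch are omitted for brevity, and all names and parameter values are assumptions rather than the patent's equations (5) to (10).

```python
import numpy as np

def mixed_weight_integration(qd, p, pupil_axis="vertical",
                             sigma_sp=10.0, sigma_sq=10.0, radius=3):
    """Spatial integration of a pupil-divided phase difference plane qd,
    borrowing edge information from the captured image p along the axis
    orthogonal to the pupil division direction.

    pupil_axis="vertical"   -> up/down phase differences: horizontal weight from p,
                               vertical weight from qd.
    pupil_axis="horizontal" -> left/right phase differences: vertical weight from p,
                               horizontal weight from qd.
    All parameter values here are illustrative assumptions.
    """
    h, w = qd.shape
    out = np.zeros_like(qd, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for m in range(-radius, radius + 1):
                for n in range(-radius, radius + 1):
                    yy, xx = y + m, x + n
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    if pupil_axis == "vertical":
                        # Horizontal step: edge preservation from the captured image.
                        w_h = np.exp(-(p[yy, xx] - p[yy, x])**2 / (2 * sigma_sp**2))
                        # Vertical step: edge preservation from the phase difference image.
                        w_v = np.exp(-(qd[yy, xx] - qd[y, xx])**2 / (2 * sigma_sq**2))
                    else:
                        w_h = np.exp(-(qd[yy, xx] - qd[yy, x])**2 / (2 * sigma_sq**2))
                        w_v = np.exp(-(p[yy, xx] - p[y, xx])**2 / (2 * sigma_sp**2))
                    wgt = w_h * w_v
                    acc += wgt * qd[yy, xx]
                    norm += wgt
            out[y, x] = acc / norm
    return out
```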
  • the spatial direction integration processing unit 164 performs weighted integration processing in the spatial direction using the FIR filter on the photographed image after the time direction integration processing according to the following equations (11) and (12).
  • Wp_hv(x, y, n, m, λ) is a weight based on the difference between the captured pixel values at the coordinates (x, y) and the coordinates (x+n, y+m) after the time direction integration processing. This weight increases as the difference between the captured pixel values decreases, and decreases as the difference increases. This weight mainly brings about the effect of edge preservation.
  • the spatial direction integration processing unit 164 supplies the captured image and the phase difference image after the spatial direction integration processing to the wavelength direction weight adjustment unit 165 and the wavelength direction integration processing unit 166.
  • step S11 the wavelength direction integration processing unit 166 determines whether or not the SN ratio of the phase difference image is less than a predetermined threshold value. Specifically, the wavelength direction integration processing unit 166 detects the SN ratio of the phase difference image after the spatial direction integration processing using an arbitrary method. If the wavelength direction integration processing unit 166 determines that the SN ratio of the phase difference image is less than the predetermined threshold value, the process proceeds to step S12. In addition, the threshold value of this determination process is set to the same value as the threshold value used in the determination process of steps S7 and S9, for example.
  • In step S12, the camera 101a performs wavelength direction integration processing. For example, the camera 101a performs weighted addition of the pixel values of the other wavelength bands to the pixel value of the target wavelength band in each phase difference pixel of the phase difference image after the spatial direction integration processing. For example, by weighting and adding the phase difference pixel values of the B wavelength band and the G wavelength band of the same phase difference pixel to the phase difference pixel value of the R wavelength band of a certain phase difference pixel, the integration processing in the wavelength direction of the phase difference pixel value of the R wavelength band is performed.
  • the axial chromatic aberration and the lateral chromatic aberration of the lens 111 greatly affect the correlation between pixel values of different wavelength bands of the same phase difference pixel.
  • the refractive index of light at the lens 111 varies depending on the wavelength.
  • light having a short wavelength such as ultraviolet rays is easily refracted, and light having a long wavelength such as infrared rays is not easily refracted.
  • the short wavelength light is focused at a position close to the lens 111, and the long wavelength light is focused at a position far from the lens 111, thereby causing axial chromatic aberration.
  • axial chromatic aberration occurs in a direction along the optical axis of the lens 111.
  • the correction for the longitudinal chromatic aberration is performed in the process of step S16 described later.
  • the light incident obliquely on the lens 111 is focused at different positions on the image pickup surface of the image pickup device 112a.
  • As a result, a difference arises, centered on the optical axis c, between the magnification of the image Lk formed by the light of the wavelength λk and the magnification of the image Li formed by the light of the wavelength λi, and chromatic aberration of magnification (lateral chromatic aberration) occurs.
  • As other factors that affect the correlation between the wavelength bands, the light quantity distribution for each wavelength band in the subject environment and the reflectance for each wavelength band of the subject can be considered.
  • the reflectance for each wavelength band is unique to the subject, and the correlation factor with the position of the subject is dominant.
  • a white subject has substantially the same reflectivity with respect to light in the B wavelength band, the G wavelength band, and the R wavelength band.
  • a blue subject has only a high reflectance with respect to light in the B wavelength band.
  • the yellow subject has a high reflectance with respect to light in the G wavelength band and the R wavelength band.
  • A luminance value is merely a representative value in which the B wavelength band, the G wavelength band, and the R wavelength band are mixed at a fixed ratio regardless of the reflectance of the subject in each of those bands. For this reason, textures whose colors differ greatly but whose luminance values are substantially the same cannot be distinguished by luminance (a small worked example follows).
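  • As a small worked example of this point (using BT.601 luma coefficients purely as an assumed luminance definition, since the text does not specify one), two pixels of clearly different color can have almost identical luminance:

```python
# Two pixels with clearly different colors (R, G, B values are arbitrary examples).
pixel_a = (200, 100, 50)    # reddish
pixel_b = (143, 100, 200)   # bluish

def luminance(rgb):
    # BT.601 luma coefficients, assumed here purely for illustration.
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luminance(pixel_a))  # 124.2
print(luminance(pixel_b))  # ~124.26 -> nearly the same luminance despite very different color
```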
  • the wavelength direction integration processing unit 166 uses a variable filter that adaptively adjusts the weight for each wavelength band and for each pixel in order to cope with the chromatic aberration of magnification and the color pattern of the subject. For example, the wavelength direction integration processing unit 166 performs weighted integration processing in the wavelength direction using an FIR filter having a finite number of taps in the wavelength direction as shown in FIG.
  • Good examples of filters that adaptively adjust the weight for each wavelength band and for each pixel with respect to the color pattern of the subject are edge-preserving smoothing filters such as a bilateral filter, an epsilon filter, and a non-local means (non-local average) filter.
  • Hereinafter, a case where a bilateral filter is mainly used will be described as an example.
  • the wavelength direction weight adjustment unit 165 reads a magnification chromatic aberration amount table indicating a magnification chromatic aberration amount of the lens 111 attached to the camera 101 a from the chromatic aberration data storage unit 137.
  • FIG. 14 shows an example of a magnification chromatic aberration amount table.
  • the lateral chromatic aberration amount table shows the lateral chromatic aberration amount of the wavelength indicated on the horizontal axis with respect to the wavelength indicated on the vertical axis.
  • c(λk, λi) indicates the amount of chromatic aberration of magnification of the wavelength λi with respect to the wavelength λk.
  • The wavelength λk and the wavelength λi are each any of the R wavelength band, the G wavelength band, and the B wavelength band.
  • axial chromatic aberration and lateral chromatic aberration vary depending on the lens system used.
  • a magnification chromatic aberration amount table for the lens 111 is stored in the chromatic aberration data storage unit 137.
  • a magnification chromatic aberration amount table of all usable lens systems is stored in the chromatic aberration data storage unit 137 in advance. Further, for example, identification information for specifying optical characteristics such as the amount of chromatic aberration for each wavelength band is recorded in each lens system in a format readable by the camera 101a.
  • the AF control unit 121 reads the identification information of the lens system. Based on the identification information, the wavelength direction weight adjustment unit 165 reads a magnification chromatic aberration amount table of the lens system (lens 111) attached to the camera 101a from the chromatic aberration data storage unit 137.
  • Then, the wavelength direction weight adjustment unit 165 and the wavelength direction integration processing unit 166 calculate, for example by the following equation (13), the phase difference pixel value qd′′′(λk) of the wavelength band λk of each phase difference pixel of the phase difference image.
  • The wavelength band λk is any one of the R wavelength band, the G wavelength band, and the B wavelength band.
  • q″(λi) is the phase difference pixel value of the wavelength band λi after the spatial direction integration processing.
  • The first weight factor is based on the amount of chromatic aberration of magnification c(λk, λi) between the wavelength band λk and the wavelength band λi.
  • σc is the standard deviation of the Gaussian filter with respect to the chromatic aberration of magnification c(λk, λi). This weight decreases as the amount of chromatic aberration of magnification c(λk, λi) increases, and increases as it decreases, mainly bringing about a smoothing effect.
  • The second weight factor is based on the difference between the phase difference pixel values of the wavelength band λi and the wavelength band λk after the spatial direction integration processing.
  • ⁇ ⁇ q is the standard deviation of the Gaussian filter with respect to the difference between the phase difference pixel values. This weight increases as the difference in phase difference pixel value decreases, and decreases as the difference in phase difference pixel value increases. This weight mainly brings about the effect of edge preservation.
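  • A minimal sketch of the wavelength-direction weighting described for equation (13); the table values, σ values, normalization, and names are illustrative assumptions and do not reproduce the patent's exact formula.

```python
import numpy as np

# Illustrative lateral chromatic aberration amounts c(lam_k, lam_i) in pixels
# (placeholder values, not taken from the patent).
C_TABLE = {
    ("R", "R"): 0.0, ("R", "G"): 0.8, ("R", "B"): 1.5,
    ("G", "R"): 0.8, ("G", "G"): 0.0, ("G", "B"): 0.9,
    ("B", "R"): 1.5, ("B", "G"): 0.9, ("B", "B"): 0.0,
}

def wavelength_integration(qd, lam_k, sigma_c=1.0, sigma_dq=10.0):
    """Weighted addition of the other wavelength bands into band lam_k for one
    phase difference pixel. qd maps band name -> phase difference pixel value
    after the spatial direction integration processing."""
    acc, norm = 0.0, 0.0
    for lam_i, q_i in qd.items():
        w_c = np.exp(-C_TABLE[(lam_k, lam_i)]**2 / (2 * sigma_c**2))  # aberration weight
        w_q = np.exp(-(q_i - qd[lam_k])**2 / (2 * sigma_dq**2))       # value-difference weight
        acc += w_c * w_q * q_i
        norm += w_c * w_q
    return acc / norm

# Example: integrate the G and B components into the R band value of one pixel.
# The result stays close to the R value; the dissimilar B value is almost ignored.
print(wavelength_integration({"R": 120.0, "G": 118.0, "B": 60.0}, "R"))
```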
  • By integrating in the wavelength direction in this way, when the correlation between the components of the R wavelength band, the G wavelength band, and the B wavelength band is high, the SN ratio can be improved to a level equivalent to the use of a W pixel using a W filter (transparent filter).
  • the wavelength direction integration processing unit 166 supplies the phase difference image after the wavelength direction integration processing to the phase difference detection unit 133.
  • Note that when the SN ratio of the phase difference image is less than another predetermined threshold, the wavelength direction weight adjustment unit 165 and the wavelength direction integration processing unit 166 may calculate the phase difference pixel value qd′′′(λk) by the following equation (14), for example.
  • the threshold value here is set to a value lower than the threshold value used in the determination processing in step S11, for example.
  • The weight factor involving Δp is based on the difference between the captured pixel values of the wavelength band λi and the wavelength band λk after the spatial direction integration processing.
  • σΔp is the standard deviation of the Gaussian filter with respect to the difference between the captured pixel values. This weight increases as the difference between the captured pixel values decreases, and decreases as the difference between the captured pixel values increases. This weight mainly brings about the effect of edge preservation.
  • the weight may be set using the captured pixel value of the captured image instead of the phase difference pixel value. In this case, since the correlation of the color separation pattern is lowered, the S / N ratio and the correlation are traded off.
  • On the other hand, if it is determined in step S11 that the SN ratio of the phase difference image is equal to or greater than the predetermined threshold, the process of step S12 is skipped, and the process proceeds to step S13. That is, in this case it is determined that, by the time the spatial direction integration processing is completed, the SN ratio of the phase difference image has reached a level sufficient to detect a reliable phase difference, and the wavelength direction integration processing is skipped. At this time, the wavelength direction integration processing unit 166 supplies the phase difference image after the spatial direction integration processing to the phase difference detection unit 133.
  • Similarly, if it is determined in step S9 that the SN ratio of the phase difference image is equal to or greater than the predetermined threshold, the processes in steps S10 to S12 are skipped, and the process proceeds to step S13. That is, in this case it is determined that, by the time the time direction integration processing is completed, the SN ratio of the phase difference image has reached a level sufficient to detect a reliable phase difference, and both the spatial direction integration processing and the wavelength direction integration processing are skipped. At this time, the spatial direction integration processing unit 164 supplies the phase difference image after the time direction integration processing to the phase difference detection unit 133.
  • Furthermore, if it is determined in step S7 that the SN ratio of the phase difference image is equal to or greater than the predetermined threshold, the processes in steps S8 to S12 are skipped, and the process proceeds to step S13. That is, in this case it is determined that, even without performing any integration processing, the SN ratio of the phase difference image has reached a level sufficient to detect a reliable phase difference, and all integration processing is skipped.
  • In this case, the phase difference image acquisition unit 153 reads the phase difference image of the latest frame from the focus detection image storage unit 136 and supplies it to the phase difference detection unit 133 via the time direction weight adjustment unit 161 and the time direction integration processing unit 162.
  • In step S13, the phase difference detection unit 133 selects the phase difference sequences used for detection of the phase shift amount.
  • the phase difference sequence and the phase shift amount will be described.
  • A phase difference sequence is a sequence in which the phase difference pixel values of phase difference pixels of the same wavelength band and the same direction, arranged in a row or column parallel to the pupil division direction, are lined up in the order of their pixel positions.
  • The Gr pixels are arranged in even-numbered rows, and all of them are pupil-divided in the horizontal direction. Accordingly, the phase difference sequence QL(2m, λg) and the phase difference sequence QR(2m, λg) are composed of the phase difference components of the Gr pixels. That is, the phase difference sequence QL(2m, λg) is a phase difference sequence composed of the left-direction phase difference pixel values qL of the Gr pixels in row 2m.
  • Similarly, the phase difference sequence QR(2m, λg) is a phase difference sequence composed of the right-direction phase difference pixel values qR of the Gr pixels in row 2m.
  • The Gb pixels are arranged in odd-numbered columns, and all of them are pupil-divided in the vertical direction. Therefore, the phase difference sequence QU(2n+1, λg) and the phase difference sequence QD(2n+1, λg) are composed of the phase difference components of the Gb pixels. That is, the phase difference sequence QU(2n+1, λg) is a phase difference sequence composed of the upward phase difference pixel values qU of the Gb pixels in column 2n+1.
  • Likewise, the phase difference sequence QD(2n+1, λg) is a phase difference sequence composed of the downward phase difference pixel values qD of the Gb pixels in column 2n+1.
  • As described above, the phase difference sequences are divided into 12 types according to the combination of wavelength band and direction: the left phase difference sequence QL(y, λr), right phase difference sequence QR(y, λr), upward phase difference sequence QU(x, λr), and downward phase difference sequence QD(x, λr) of the R wavelength band; the left phase difference sequence QL(y, λg), right phase difference sequence QR(y, λg), upward phase difference sequence QU(x, λg), and downward phase difference sequence QD(x, λg) of the G wavelength band; and the left phase difference sequence QL(y, λb), right phase difference sequence QR(y, λb), upward phase difference sequence QU(x, λb), and downward phase difference sequence QD(x, λb) of the B wavelength band (a sketch of assembling such sequences is given below).
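  • The following is a hedged sketch of how such sequences might be assembled from per-direction phase difference planes, assuming the arrangement described above (Gr pixels in even rows pupil-divided horizontally, Gb pixels in odd columns pupil-divided vertically); the array layout and names are assumptions.

```python
import numpy as np

def build_g_phase_sequences(qL, qR, qU, qD):
    """Collect the G-band phase difference sequences described above.

    qL, qR, qU, qD are 2-D float arrays indexed as [row, column] holding the
    left/right/up/down phase difference pixel values of the G band, with NaN
    where a pixel has no value for that direction (an assumed layout).
    Returns dicts keyed by row (horizontal sequences) or column (vertical ones).
    """
    h, w = qL.shape
    seqs = {"QL": {}, "QR": {}, "QU": {}, "QD": {}}
    for y in range(0, h, 2):   # Gr pixels: even rows, horizontal pupil division
        seqs["QL"][y] = qL[y, ~np.isnan(qL[y])]
        seqs["QR"][y] = qR[y, ~np.isnan(qR[y])]
    for x in range(1, w, 2):   # Gb pixels: odd columns, vertical pupil division
        seqs["QU"][x] = qU[~np.isnan(qU[:, x]), x]
        seqs["QD"][x] = qD[~np.isnan(qD[:, x]), x]
    return seqs
```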
  • The shift amount (phase difference) between the image formed by the phase difference sequence QL(y, λr) of the R wavelength band of a row and the image formed by the phase difference sequence QR(y, λr) of the same row is detected as a horizontal phase shift amount.
  • Similarly, the shift amount (phase difference) between the image formed by the phase difference sequence QL(y, λg) and the image formed by the phase difference sequence QR(y, λg) of the G wavelength band of the same row is detected as a horizontal phase shift amount.
  • The shift amount (phase difference) between the image formed by the phase difference sequence QL(y, λb) and the image formed by the phase difference sequence QR(y, λb) of the B wavelength band of the same row is detected as a horizontal phase shift amount.
  • Likewise, the shift amount (phase difference) between the image formed by the phase difference sequence QU(x, λr) of the R wavelength band of a column and the image formed by the phase difference sequence QD(x, λr) of the same column is detected as a vertical phase shift amount.
  • The shift amount (phase difference) between the image formed by the phase difference sequence QU(x, λg) and the image formed by the phase difference sequence QD(x, λg) of the G wavelength band of the same column is detected as a vertical phase shift amount.
  • The shift amount (phase difference) between the image formed by the phase difference sequence QU(x, λb) and the image formed by the phase difference sequence QD(x, λb) of the B wavelength band of the same column is detected as a vertical phase shift amount.
  • FIGS. 15 to 17 are schematic views of a state in which a light flux in a certain wavelength band λ is incident on the lens 111, as seen from above.
  • In these diagrams, the light beam passing through the left side (lower side in the figure) of the lens 111 is indicated by a coarse dotted line, and the light beam passing through the right side (upper side in the figure) of the lens 111 is indicated by a fine dotted line.
  • FIG. 15 shows the front pin state in which the lens 111 is focused in front of the subject. More specifically, it shows a state in which the image sensor 112a is displaced in the direction closer to the lens 111 than the in-focus position with respect to the light flux in the wavelength band λ.
  • FIG. 16 shows the in-focus state in which the lens 111 is focused on the subject. More specifically, it shows a state in which the image sensor 112a is at the in-focus position with respect to the light flux in the wavelength band λ.
  • FIG. 17 shows the rear pin state in which the lens 111 is focused behind the subject. More specifically, it shows a state in which the image sensor 112a is displaced in the direction away from the lens 111 relative to the in-focus position with respect to the light flux in the wavelength band λ.
  • The right-side diagrams of FIGS. 15 to 17 are graphs showing the relationship between the left phase difference sequence QL and the right phase difference sequence QR of the light flux in the wavelength band λ when the light flux is in the state shown in the corresponding left-side diagram.
  • the horizontal axis of the graph indicates the pixel position in a certain row of the phase difference image, and the vertical axis indicates the phase difference pixel value.
  • the waveform of the phase difference sequence QL in the left direction is indicated by a rough dotted line
  • the waveform of the phase difference sequence QR in the right direction is indicated by a fine dotted line.
  • The waveform of the phase difference sequence QL indicates the position of the image of the light flux in the wavelength band λ detected by the left phase difference pixels (photoelectric conversion units 214L), and the waveform of the phase difference sequence QR indicates the position of the image of the light flux in the wavelength band λ detected by the right phase difference pixels (photoelectric conversion units 214R).
  • the waveforms of the phase difference sequence QL and the phase difference sequence QR are waveforms that are close to a symmetric normal distribution, but the actual waveform varies depending on the pattern of the subject.
  • In the in-focus state, the waveform of the phase difference sequence QL and the waveform of the phase difference sequence QR almost coincide, and the high-frequency components are at their largest.
  • In the front pin state, compared with the in-focus state, the waveform of the phase difference sequence QL is shifted in the left direction and the waveform of the phase difference sequence QR is shifted in the right direction.
  • In the rear pin state, compared with the in-focus state, the waveform of the phase difference sequence QL is shifted in the right direction and the waveform of the phase difference sequence QR is shifted in the left direction.
  • the amount of deviation between the waveform of the phase difference row QL and the waveform of the phase difference row QR is the amount of phase deviation in the horizontal direction.
  • Accordingly, the horizontal phase shift amount of the G wavelength band is detected with a resolution of two-column intervals, and the horizontal phase shift amount of the B wavelength band and the R wavelength band is detected with a resolution of four-column intervals.
  • Therefore, using the phase difference component of the Gr pixels, it is possible to detect vertical lines with a resolution of two columns, and using the phase difference component of the B pixels or the R pixels, it is possible to detect vertical lines with a resolution of four columns.
  • the horizontal phase difference sequence (phase difference sequence QL and phase difference sequence QR) is suitable for detecting the phase shift amount of a subject having a large contrast difference in the horizontal direction such as a vertical line.
  • the horizontal phase difference sequence is suitable for detecting a horizontal contrast difference of a subject such as a vertical line.
  • the phase difference sequence in the horizontal direction is not very suitable for detecting the phase shift amount of a subject having a large contrast difference in the vertical direction such as a horizontal line.
  • the positional relationship between the waveform of the phase difference sequence QL and the waveform of the phase difference sequence QR is reversed between the front pin state and the rear pin state. Therefore, whether the lens 111 is in a focused state, a front pin state, or a rear pin state is detected based on the positional relationship between the two.
  • the shift amount between the waveform of the phase difference sequence QU and the waveform of the phase difference sequence QD is detected as the phase shift amount in the vertical direction.
  • Likewise, the vertical phase shift amount of the G wavelength band is detected with a resolution of two-row intervals, and the vertical phase shift amount of the B wavelength band and the R wavelength band is detected with a resolution of four-row intervals.
  • Accordingly, using the phase difference component of the Gb pixels, it is possible to detect horizontal lines with a resolution of two rows, and using the phase difference component of the B pixels or the R pixels, it is possible to detect horizontal lines with a resolution of four rows.
  • the vertical phase difference sequence (phase difference sequence QU and phase difference sequence QD) is suitable for detecting the phase shift amount of a subject having a large contrast difference in the vertical direction such as a horizontal line.
  • the vertical phase difference sequence is suitable for detecting a vertical contrast difference of a subject such as a horizontal line.
  • the phase difference sequence in the vertical direction is not very suitable for detecting the phase shift amount of a subject having a large contrast difference in the horizontal direction such as a vertical line.
  • the positional relationship between the waveform of the phase difference sequence QU and the waveform of the phase difference sequence QD is reversed between the front pin state and the rear pin state. Therefore, whether the camera 101a is in the focused state, the front pin state, or the rear pin state is detected based on the positional relationship between the two.
  • an in-focus position is obtained based on the detected amount of phase shift, and focus control is performed.
  • As described above, the horizontal phase difference sequences are suitable for detecting a horizontal contrast difference of the subject, such as a vertical line, whereas the vertical phase difference sequences are suitable for detecting a vertical contrast difference of the subject, such as a horizontal line.
  • the direction in which the contrast of the subject changes is usually not uniform and biased. Therefore, a direction in which a contrast difference is easily detected and a direction in which it is difficult to detect are generated for each subject.
  • Similarly, the phase difference sequences of the B wavelength band are suitable for detecting the contrast of the subject in the B wavelength band, the phase difference sequences of the G wavelength band are suitable for detecting the contrast of the subject in the G wavelength band, and the phase difference sequences of the R wavelength band are suitable for detecting the contrast of the subject in the R wavelength band.
  • phase difference columns of the same type exist for each row or each column.
  • the contrast of the subject varies depending on the location of the subject. Therefore, there are places where it is easy to detect the contrast difference and places where it is difficult to detect for each subject.
  • By using a phase difference sequence of a direction, wavelength band, and position in which the contrast difference of the subject is easily detected, the variation of the phase difference pixel values constituting the phase difference sequence becomes larger, and the detection accuracy of the phase shift amount is improved.
  • the detection accuracy of the focus position of the lens 111 is improved, and the focus accuracy of the camera 101a is improved.
  • Therefore, the phase difference detection unit 133 selects, from among the plurality of phase difference sequences extracted from the phase difference image whose SN ratio has been improved by the integration processing, phase difference sequences of a direction, wavelength band, and position from which a highly reliable phase difference (that is, phase shift amount) can be detected. More specifically, the phase difference detection unit 133 selects one or more combinations from which a highly reliable phase difference can be detected, from among the combinations of the phase difference sequence QL and the phase difference sequence QR of the same wavelength band and the same row, and the combinations of the phase difference sequence QU and the phase difference sequence QD of the same wavelength band and the same column. Thereby, a direction, a wavelength band, and a position suitable for detecting the phase difference are selected in the phase difference image.
  • the phase difference detection unit 133 selects a phase difference sequence corresponding to the position, direction, and wavelength band including the steep edge component of the subject.
  • the phase difference detection unit 133 selects a phase difference sequence corresponding to a position, a direction, and a wavelength band with a high SN ratio.
  • Alternatively, the phase difference detection unit 133 selects, from among the phase difference sequences of the wavelength band to be finally focused, a phase difference sequence corresponding to a direction and position from which a highly reliable phase difference can be detected.
  • the phase difference detection unit 133 selects a phase difference sequence corresponding to a direction and a wavelength band in which a highly reliable phase difference can be detected from among the phase difference sequences at the position to be focused.
  • the position to be focused is, for example, a position designated by the user, a position where a person's face is shown, or the like.
  • Note that, when there is no particular superiority or priority setting among the phase difference sequences of the respective wavelength bands, the phase difference detection unit 133 may, for example, preferentially select the phase difference sequence of the G wavelength band. This is because the human eye is most sensitive to light in the G wavelength band. One possible scoring heuristic for this selection is sketched below.
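  • One possible scoring heuristic, sketched here purely as an assumption (the text does not prescribe a specific score), is to rank candidate sequence pairs by the steepness of their strongest edge and pick the direction, wavelength band, and position with the best score:

```python
import numpy as np

def edge_score(seq):
    """Steepness of the strongest edge in a phase difference sequence:
    the maximum absolute difference between neighbouring values.
    This is an illustrative criterion, not the patent's."""
    seq = np.asarray(seq, dtype=np.float64)
    return np.max(np.abs(np.diff(seq))) if seq.size > 1 else 0.0

def select_best_pair(candidates):
    """candidates: list of (label, seq_a, seq_b) tuples such as
    ('G row 42', QL_row, QR_row). Returns the label of the pair whose
    weaker member still has the steepest edge."""
    return max(candidates, key=lambda c: min(edge_score(c[1]), edge_score(c[2])))[0]
```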
  • In step S14, the phase difference detection unit 133 detects the phase shift amount. Specifically, the phase difference detection unit 133 detects the phase shift amount by performing a correlation calculation between the phase difference sequences belonging to the same group selected in the process of step S13.
  • For example, the phase difference detection unit 133 shifts one of the two phase difference sequences belonging to the same group little by little in the pupil division direction. Then, the phase difference detection unit 133 detects, for example, the position where the degree of coincidence between the waveforms of the two phase difference sequences becomes the highest. Alternatively, for example, the phase difference detection unit 133 detects the position where the waveform peak patterns at the edge portions of the two phase difference sequences overlap the most. Then, the phase difference detection unit 133 detects the distance between the position before the phase difference sequence is shifted and the detected position as the phase shift amount.
  • the method of detecting the amount of phase shift is not limited to the method described above, and any method can be employed.
  • step S13 when two or more sets of phase difference sequences are selected in the process of step S13, the amount of phase shift is detected for each set.
  • the phase difference detection unit 133 supplies the detection result of the phase shift amount to the focus position detection unit 134.
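  • A minimal sketch of the shift search described for step S14, using the mean absolute difference as the matching measure; the measure, search range, and sign convention are assumptions, since the text leaves the concrete correlation calculation open.

```python
import numpy as np

def detect_phase_shift(seq_a, seq_b, max_shift=16):
    """Shift seq_b against seq_a by -max_shift..max_shift samples and return
    the shift whose overlapping parts match best (minimum mean absolute
    difference). Sign convention and search range are illustrative."""
    seq_a = np.asarray(seq_a, dtype=np.float64)
    seq_b = np.asarray(seq_b, dtype=np.float64)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = seq_a[s:], seq_b[:len(seq_b) - s]
        else:
            a, b = seq_a[:s], seq_b[-s:]
        if len(a) == 0:
            continue
        err = np.mean(np.abs(a - b))
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift
```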
  • In step S15, the focus position detection unit 134 converts the phase shift amount into a defocus amount.
  • The phase shift amount between the waveform of the phase difference sequence QL and the waveform of the phase difference sequence QR corresponds to the length of the base, on the imaging surface of the image sensor 112a, of the isosceles triangle indicated by hatching in the figures.
  • the height of the isosceles triangle corresponds to the amount of movement of the lens 111 in the optical axis direction necessary for focusing, that is, the defocus amount. That is, the defocus amount is proportional to the phase shift amount. Therefore, the focus position detection unit 134 calculates the height of the isosceles triangle based on the phase shift amount that is the base of the isosceles triangle to obtain the defocus amount.
  • When a plurality of phase shift amounts have been detected, a defocus amount is obtained based on each phase shift amount.
  • the final defocus amount is calculated from the defocus amount obtained from the plurality of phase shift amounts in consideration of reliability and the like. For example, the final defocus amount is calculated by weighting and adding a plurality of defocus amounts using weights based on reliability or the like.
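  • A hedged sketch of the proportional conversion described above; the conversion factor k (which depends on the optics and the separation of the divided pupils) and the reliability weights are placeholder assumptions.

```python
def defocus_from_shifts(shifts, reliabilities, k=1.0):
    """Convert detected phase shift amounts (in pixels) to one defocus amount.

    Each defocus amount is taken as proportional to its phase shift
    (defocus = k * shift, with k depending on the lens and pupil geometry),
    and the final value is a reliability-weighted average. All constants
    here are placeholders."""
    defocus = [k * s for s in shifts]
    total_w = sum(reliabilities)
    return sum(d * w for d, w in zip(defocus, reliabilities)) / total_w

# Example: two detected shifts with differing reliability.
print(defocus_from_shifts([3.2, 2.8], [0.9, 0.4]))  # ~3.08
```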
  • In step S16, the focus position detection unit 134 obtains the in-focus position based on the defocus amount obtained in the process of step S15.
  • the in-focus position differs for each wavelength band due to the influence of axial chromatic aberration.
  • The left-side diagrams of FIGS. 18 to 20 are, like the left-side diagrams of FIGS. 15 to 17, schematic views as seen from above of states in which light fluxes of the B wavelength band, the G wavelength band, and the R wavelength band are incident on the lens 111.
  • the diagrams on the left side of FIGS. 18 to 20 respectively show the state of the light beam when the light beam in the G wavelength band is in focus.
  • The right-side diagrams of FIGS. 18 to 20 show, similarly to the right-side diagrams of FIGS. 15 to 17, the relationship between the left phase difference sequence QL and the right phase difference sequence QR of the light flux of each wavelength band when the light fluxes are in the states shown in the corresponding left-side diagrams.
  • As shown in these figures, when the light flux of the G wavelength band is in focus, the light flux of the B wavelength band, which has a shorter wavelength than the G wavelength band and a higher refractive index, is in a rear pin state, while the light flux of the R wavelength band, which has a longer wavelength than the G wavelength band and a lower refractive index, is in a front pin state.
  • Therefore, when the wavelength band used for detecting the phase shift amount (hereinafter referred to as the phase difference detection wavelength band) differs from the wavelength band used for focusing (hereinafter referred to as the focus wavelength band), the focus position detection unit 134 corrects the in-focus position obtained based on the defocus amount.
  • the axial chromatic aberration varies depending on the lens system used. Therefore, for example, the axial chromatic aberration amount tables of all lens systems that can be used in the camera 101a are stored in advance in the chromatic aberration data storage unit 137 in the same manner as the magnification chromatic aberration amount table.
  • FIG. 21 shows an example of the axial chromatic aberration amount table.
  • This axial chromatic aberration amount table has the same configuration as the magnification chromatic aberration amount table of FIG. 14. That is, the axial chromatic aberration amount table shows the axial chromatic aberration amount of the wavelength indicated on the horizontal axis with respect to the wavelength indicated on the vertical axis.
  • a(λk, λi) indicates the amount of axial chromatic aberration of the wavelength λi with respect to the wavelength λk.
  • The wavelength λk and the wavelength λi are each any of the R wavelength band, the G wavelength band, and the B wavelength band.
  • Specifically, the focus position detection unit 134 reads the axial chromatic aberration amount table of the lens 111 attached to the camera 101a from the chromatic aberration data storage unit 137 based on the identification information of the lens 111. Next, the focus position detection unit 134 obtains the axial chromatic aberration amount of the focus wavelength band with respect to the phase difference detection wavelength band based on the axial chromatic aberration amount table. Then, the focus position detection unit 134 corrects the in-focus position for the phase difference detection wavelength band to the in-focus position for the focus wavelength band based on the obtained amount of axial chromatic aberration.
  • the focus position detection unit 134 detects the phase shift amount in the R wavelength band, and obtains the defocus amount in the R wavelength band based on the detected phase shift amount. Further, the focus position detection unit 134 obtains the focus position of the R wavelength band based on the defocus amount of the R wavelength band. Then, the focus position detection unit 134 corrects the focus position of the R wavelength band to the focus position of the G wavelength band based on the axial chromatic aberration amount of the G wavelength band with respect to the R wavelength band.
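  • A minimal sketch of the correction described above: the in-focus position obtained for the phase difference detection wavelength band is shifted by the axial chromatic aberration amount a(λk, λi) between that band and the focus wavelength band; the table values and sign convention are illustrative assumptions.

```python
# Illustrative axial chromatic aberration amounts a(detection_band, focus_band),
# in the same units as the focus position (placeholder values).
A_TABLE = {
    ("R", "R"): 0.0, ("R", "G"): -0.03, ("R", "B"): -0.05,
    ("G", "R"): 0.03, ("G", "G"): 0.0, ("G", "B"): -0.02,
    ("B", "R"): 0.05, ("B", "G"): 0.02, ("B", "B"): 0.0,
}

def correct_focus_position(focus_pos, detect_band, focus_band):
    """Shift the in-focus position found for the phase difference detection
    wavelength band to the wavelength band actually used for focusing."""
    return focus_pos + A_TABLE[(detect_band, focus_band)]

# Example: phase shift detected in the R band, focusing performed in the G band.
print(correct_focus_position(12.40, "R", "G"))  # 12.37
```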
  • the focus position detection unit 134 notifies the focus control unit 135 of the obtained focus position.
  • In step S17, the camera 101a performs focusing.
  • the focus control unit 135 controls the lens driving unit 122 to move the lens 111 in the optical axis direction to the focus position notified from the focus position detection unit 134.
  • Thereafter, the process returns to step S1, and the processes after step S1 are executed. By repeatedly executing the processing after step S1, the focus is adjusted to an appropriate position following changes of the subject and lens driving.
  • timing for executing the AF control process and the repetition interval can be arbitrarily set.
  • In addition, since the weights are appropriately adjusted according to the correlations in the time direction, the spatial direction, and the wavelength direction, it is possible to prevent noise from being mixed into the phase difference image by the integration processing.
  • a phase difference sequence of a wavelength band, a direction, and a position suitable for detecting the phase shift amount is selected.
  • the focus position is corrected based on the longitudinal chromatic aberration. As a result, focusing accuracy is improved. Even when the illuminance of the subject is low, the subject can be accurately focused.
  • the sensitivity of the phase difference pixel decreases because the area of the light receiving portion is smaller than that of the photographic pixel.
  • Conventionally, the sensitivity of the phase difference pixel has been improved by using a W filter having a high transmittance instead of the color filter used in the imaging pixel.
  • In contrast, with the present technology, the SN ratio of the phase difference image can be improved without using a W filter for the phase difference pixels. Moreover, by not using a W filter, it is possible to prevent the influence of chromatic aberration caused by receiving light of different wavelength bands.
  • all the pixels can be used for both imaging and phase difference detection. Accordingly, there is no deterioration in image quality due to defective pixels that occurs when a pixel dedicated to phase difference detection is used. In addition, the compatibility at the time of focusing and shooting is increased, and control is facilitated. In addition, there is no need to provide a dedicated line sensor or the like for detecting the phase difference, and the cost is not increased. In addition, the focus position can be detected without driving the lens and the focus speed is increased.
  • Furthermore, when the SN ratio of the phase difference image is sufficiently high, the subsequent integration processing is omitted, so that the focusing speed can be further increased.
  • The weight adjustment method in each integration process described above is an example, and the weights may be adjusted by other methods.
  • the integration process may be performed using a fixed coefficient filter with a fixed weight.
  • the amount of movement of the subject differs depending on the combination of the pixel position and the light receiving wavelength band. Therefore, in the integration process in the time direction, convolution integration not only in the time direction but also in the spatial direction and the wavelength direction may be performed.
  • the similarity of the subject in the horizontal and vertical directions varies depending on the combination of the shooting time and the light receiving wavelength band. Therefore, in the integration process in the spatial direction, not only the spatial direction but also the convolution integration with the time direction and the wavelength direction may be performed.
  • the chromatic aberration of magnification changes according to the image height from the center of the optical axis toward the periphery. Therefore, in the integration process in the wavelength direction, the chromatic aberration of magnification c ( ⁇ k , ⁇ i ) in the above-described equations (13) and (14) may be corrected according to the distance from the optical axis center. .
  • the type of image used for calculating the weight of each integration process can be arbitrarily selected.
  • both the captured image and the phase difference image may be used, only the captured image may be used, or only the phase difference image may be used.
  • Further, the phase difference pixel value of another wavelength band at a given phase difference pixel may be obtained by performing a simple demosaic process using the phase difference pixel values of nearby phase difference pixels of that wavelength band.
  • phase difference pixel value obtained by demosaic processing may be used for the phase difference sequence.
  • the phase shift amount used for detecting the in-focus position may be selected.
  • FIG. 22 shows a configuration example of the pixel 201b divided into four pupils.
  • the upper diagram in FIG. 22 schematically illustrates an exploded view of the pixel 201b as viewed from the lateral direction
  • the lower diagram schematically illustrates a plan view of the pixel 201b as viewed from the upper direction.
  • In FIG. 22, portions corresponding to those in FIG. 3 are denoted by the same reference numerals.
  • the pixel 201b is different from the pixel 201a in FIG. 3 in that a light shielding unit 213b and photoelectric conversion units 214LU to 214RD are provided instead of the light shielding unit 213a and the photoelectric conversion units 214L and 214R. .
  • Hereinafter, when it is not necessary to individually distinguish the photoelectric conversion units 214LU to 214RD, they are simply referred to as the photoelectric conversion units 214.
  • the light incident on the on-chip microlens 211 is collected toward the center of the light receiving surface of the pixel 201b, which is the optical axis center of the on-chip microlens 211. Then, a component of a predetermined wavelength band of incident light is transmitted by the wavelength selection filter 212 and is incident on the light receiving regions of the photoelectric conversion units 214LU to 214RD that are not shielded by the light shielding unit 213b.
  • the light shielding unit 213b has the effect of preventing color mixing with adjacent pixels and pupil division of the pixel 201b.
  • the photoelectric conversion units 214LU to 214RD are each composed of a photoelectric conversion element such as a photodiode, for example.
  • the photoelectric conversion units 214LU to 214RD are arranged so that the light receiving surface of the pixel 201b is divided into four parts in the vertical direction (column direction) and the horizontal direction (row direction). That is, the photoelectric conversion unit 214LU is disposed at a position offset in the upper left direction of the light receiving surface of the pixel 201b, and the light receiving region of the photoelectric conversion unit 214LU is eccentric in the upper left direction with respect to the on-chip microlens 211.
  • the photoelectric conversion unit 214LD is disposed at a position deviated in the lower left direction of the light receiving surface of the pixel 201b, and the light receiving region of the photoelectric conversion unit 214LD is decentered in the lower left direction with respect to the on-chip microlens 211.
  • the photoelectric conversion unit 214RU is disposed at a position offset in the upper right direction of the light receiving surface of the pixel 201b, and the light receiving region of the photoelectric conversion unit 214RU is eccentric in the upper right direction with respect to the on-chip microlens 211.
  • the photoelectric conversion unit 214RD is arranged at a position offset in the lower right direction of the light receiving surface of the pixel 201b, and the light receiving region of the photoelectric conversion unit 214RD is eccentric in the lower right direction with respect to the on-chip microlens 211.
  • The photoelectric conversion unit 214LU receives the light incident on approximately the upper left quarter of the light receiving surface of the pixel 201b, and outputs a pixel signal corresponding to the amount of light received.
  • The photoelectric conversion unit 214LD receives the light incident on approximately the lower left quarter of the light receiving surface of the pixel 201b, and outputs a pixel signal corresponding to the amount of light received.
  • The photoelectric conversion unit 214RU receives the light incident on approximately the upper right quarter of the light receiving surface of the pixel 201b, and outputs a pixel signal corresponding to the amount of light received.
  • The photoelectric conversion unit 214RD receives the light incident on approximately the lower right quarter of the light receiving surface of the pixel 201b, and outputs a pixel signal corresponding to the amount of light received.
  • the pixel 201b can individually output the pixel signals of the photoelectric conversion units 214LU to 214RD, or can add and output two or more pixel signals in any combination.
  • pixel signals individually output from the photoelectric conversion units 214LU to 214RD are used as phase difference detection signals.
  • a signal obtained by selecting a plurality of photoelectric conversion units 214 so that the position of the center of gravity is decentered from the center of the on-chip microlens 211 and adding the pixel signals of the selected photoelectric conversion units 214 is used as a phase difference detection signal. Used. Further, a signal obtained by adding all four pixel signals is used as a signal for photographing.
  • Thereby, phase difference pixel values in the left direction, the right direction, the upward direction, and the downward direction can all be detected with the single type of pixel 201b (a sketch of the signal combination is given below).
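  • A hedged sketch of how the four sub-pixel signals of the pixel 201b could be combined into the left/right/up/down phase difference signals and the imaging signal, consistent with the description above; the function name is an assumption.

```python
def combine_quad_pixel(lu, ld, ru, rd):
    """Combine the four photoelectric conversion unit outputs of a pixel 201b.

    Summing the two sub-pixels whose combined centre of gravity is decentred in
    a given direction yields the phase difference signal for that direction;
    summing all four yields the imaging signal."""
    return {
        "left":  lu + ld,
        "right": ru + rd,
        "up":    lu + ru,
        "down":  ld + rd,
        "image": lu + ld + ru + rd,
    }

print(combine_quad_pixel(10, 12, 9, 11))
```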
  • an imaging phase-difference pixel that combines an imaging pixel and a phase-difference detection pixel has been described.
  • However, the present technology can also be applied to the case where imaging-dedicated pixels and phase-difference-detection-dedicated pixels are provided separately.
  • FIG. 23 shows a configuration example of a pixel 201c that is a dedicated pixel for imaging.
  • the upper diagram in FIG. 23 schematically illustrates an exploded view of the pixel 201c as viewed from the lateral direction, and the lower diagram schematically illustrates a plan view of the pixel 201c as viewed from the upper direction.
  • In FIG. 23, portions corresponding to those in FIG. 3 are denoted by the same reference numerals.
  • the pixel 201c is different from the pixel 201a in FIG. 3 in that a light shielding unit 213c and a photoelectric conversion unit 214M are provided instead of the light shielding unit 213a and the photoelectric conversion units 214L and 214R.
  • the light incident on the on-chip microlens 211 is collected toward the center of the pixel, which is the optical axis center of the on-chip microlens 211. Then, a component of a predetermined wavelength band of incident light is transmitted by the wavelength selection filter 212 and is incident on the light receiving region of the photoelectric conversion unit 214M that is not shielded by the light shielding unit 213c.
  • the light shielding part 213c has an effect of preventing color mixing with adjacent pixels.
  • the photoelectric conversion unit 214M includes, for example, photoelectric conversion elements such as photodiodes.
  • the photoelectric conversion unit 214M is disposed in the approximate center of the light receiving surface of the pixel 201c, and the light receiving region of the photoelectric conversion unit 214M is not decentered with respect to the on-chip microlens 211.
  • The photoelectric conversion unit 214M receives light incident on almost the entire light receiving surface of the pixel 201c, and outputs a pixel signal corresponding to the amount of light received.
  • the pixel 201c in which the wavelength selection filter 212 is an R filter is referred to as a pixel 201Rc.
  • the pixel 201c in which the wavelength selection filter 212 is a G filter is referred to as a pixel 201Gc.
  • the pixel 201c in which the wavelength selection filter 212 is a B filter is referred to as a pixel 201Bc.
  • FIG. 24 shows a configuration example of a pixel 201dL that is a phase difference detection dedicated pixel.
  • the upper diagram of FIG. 24 schematically shows an exploded view of the pixel 201dL viewed from the lateral direction, and the lower diagram schematically shows a plan view of the pixel 201dL viewed from the upper direction.
  • In FIG. 24, portions corresponding to those in FIG. 3 are denoted by the same reference numerals.
  • the pixel 201dL is different from the pixel 201a in FIG. 3 in that a light shielding unit 213dL is provided instead of the light shielding unit 213a and the photoelectric conversion unit 214R is not provided.
  • the light incident on the on-chip microlens 211 is condensed toward the center of the light receiving surface of the pixel 201 dL, which is the optical axis center of the on-chip microlens 211.
  • In the pixel 201dL, the wavelength selection filter 212 is made of, for example, a W filter; it transmits most of the wavelength components of the incident light, which then enters the light receiving region of the photoelectric conversion unit 214dL that is not shielded by the light shielding unit 213dL.
  • the light shielding unit 213dL has effects of preventing color mixing with adjacent pixels and pupil division of the pixel 201dL.
  • The photoelectric conversion unit 214dL receives light incident on substantially the left half of the light receiving surface of the pixel 201dL, and outputs a pixel signal corresponding to the amount of light received. Thereby, in the pixel 201dL, the incident light is pupil-divided to substantially the left half.
  • FIG. 25 shows a configuration example of the pixel 201dR which is a phase difference detection dedicated pixel.
  • the upper diagram in FIG. 25 schematically shows an exploded view of the pixel 201dR as seen from the lateral direction, and the lower diagram schematically shows a plan view of the pixel 201dR as seen from the upper direction.
  • in the figure, the same parts as those in FIG. 3 are denoted by the same reference numerals.
  • the pixel 201dR is different from the pixel 201a in FIG. 3 in that a light shielding unit 213dR is provided instead of the light shielding unit 213a and the photoelectric conversion unit 214L is not provided.
  • the light incident on the on-chip microlens 211 is condensed toward the center of the light receiving surface of the pixel 201dR, which is the optical axis center of the on-chip microlens 211.
  • the wavelength selection filter 212 is made of, for example, a W filter, and transmits most of the wavelength component of incident light and enters the light receiving region of the photoelectric conversion unit 214dR that is not shielded by the light shielding unit 213dR.
  • the light shielding unit 213dR has effects of preventing color mixing with adjacent pixels and pupil division of the pixel 201dR.
  • the photoelectric conversion unit 214dR receives light incident on substantially the right half of the light receiving surface of the pixel 201dR, and outputs a pixel signal corresponding to the amount of light received. Thereby, in the pixel 201dR, the incident light is pupil-divided into substantially the right half.
  • a pixel including the photoelectric conversion unit 214U arranged at a position biased upward in the light receiving surface is referred to as a pixel 201dU.
  • a pixel including the photoelectric conversion unit 214D disposed at a position biased downward in the light receiving surface is referred to as a pixel 201dD.
  • hereinafter, when it is not necessary to individually distinguish the pixels 201dL to 201dD, they are simply referred to as a pixel 201d.
  • FIGS. 26 to 29 show examples of the pixel arrangement of the image sensor 112b using the pixel 201c and the pixels 201dL to 201dD.
  • FIG. 26 shows an example of the pixel arrangement of the entire image sensor 112b.
  • FIGS. 27 and 28 show examples of pixel arrangement in units of 2 × 2 pixels.
  • FIG. 29 shows an example of pixel arrangement in units of 8 ⁇ 8 pixels. This pixel arrangement is based on the disclosure of JP 2010-152161 A, and other pixel arrangements can be adopted.
  • in pattern 1, the pixels 201Rc to 201Bc dedicated to imaging are arranged according to the Bayer array.
  • in patterns 2A to 2D, the pixels 201dL to 201dD dedicated to phase difference detection are arranged in place of the pixels 201Rc and 201Bc of pattern 1.
  • in pattern 2A, a pixel 201dR is arranged instead of the pixel 201Rc, and a pixel 201dL is arranged instead of the pixel 201Bc.
  • in pattern 2B, a pixel 201dL is arranged instead of the pixel 201Rc, and a pixel 201dR is arranged instead of the pixel 201Bc. That is, in patterns 2A and 2B, the pixels 201dL and 201dR that are pupil-divided in the horizontal direction are arranged so as to be adjacent in the oblique direction, instead of the R pixel and the B pixel.
  • in pattern 2C, a pixel 201dD is arranged instead of the pixel 201Rc, and a pixel 201dU is arranged instead of the pixel 201Bc.
  • in pattern 2D, a pixel 201dU is arranged instead of the pixel 201Rc, and a pixel 201dD is arranged instead of the pixel 201Bc. That is, in patterns 2C and 2D, the pixels 201dU and 201dD that are pupil-divided in the vertical direction are arranged so as to be adjacent in the oblique direction, instead of the R pixel and the B pixel.
  • in this pixel arrangement, the pixels dedicated to phase difference detection are sparsely arranged at the following coordinates, where n and m are non-negative integers (an illustrative sketch that enumerates these coordinates follows the list).
  • pixels 201dL are arranged at coordinates (8n, 16m + 14) and coordinates (8n + 1, 16m + 7).
  • pixels 201dR are arranged at coordinates (8n, 16m + 6) and coordinates (8n + 1, 16m + 15).
  • pixels 201dU are arranged at coordinates (16n + 5, 8m + 3) and coordinates (16n + 12, 8m + 2).
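  • A minimal sketch of how such a sparse arrangement can be enumerated from the coordinate formulas above, for a hypothetical small tile. The tile size and the function name are assumptions for illustration only, and coordinates for the pixel 201dD are not listed in the text above, so they are not generated here.

```python
# Illustrative sketch: enumerate the phase difference detection pixel
# coordinates listed above for a small (assumed) 32x32 tile.
def phase_pixel_coordinates(width=32, height=32):
    coords = {"201dL": [], "201dR": [], "201dU": []}
    for n in range(width):
        for m in range(height):
            candidates = {
                "201dL": [(8 * n, 16 * m + 14), (8 * n + 1, 16 * m + 7)],
                "201dR": [(8 * n, 16 * m + 6), (8 * n + 1, 16 * m + 15)],
                "201dU": [(16 * n + 5, 8 * m + 3), (16 * n + 12, 8 * m + 2)],
            }
            for kind, points in candidates.items():
                for (x, y) in points:
                    if x < width and y < height:
                        coords[kind].append((x, y))
    return coords

if __name__ == "__main__":
    for kind, points in phase_pixel_coordinates().items():
        print(kind, sorted(points))
```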
  • integration processing is performed on the phase difference pixel values of the pixels 201dL to 201dD by the same method as described above.
  • since all the wavelength selection filters 212 of the pixels 201dL to 201dD are W filters, integration processing in the wavelength direction is not performed, and only integration processing in the time direction and the spatial direction is performed.
  • even when the pixels 201dL to 201dD, which are dedicated phase difference detection pixels, are used, the SN ratio of the phase difference sequences composed of the phase difference pixel values of the pixels 201dL to 201dD can thus be appropriately increased. As a result, focusing accuracy is improved.
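  • To make the kind of adaptive weighted integration described here concrete, the sketch below averages a phase difference pixel value over neighboring frames (time direction) and neighboring phase difference pixels of the same pupil side (spatial direction), with weights that shrink as the values differ from the target value, so poorly correlated pixels contribute little. The weight functions and parameters are illustrative assumptions; the weighting actually used (for example, based on subject motion between frames and pixel similarity) is the one described in the specification.

```python
import numpy as np

def integrate_phase_values(frames, t, y, x, sigma_t=10.0, sigma_s=10.0, radius=2):
    """Illustrative time + spatial integration of one phase difference pixel value.

    frames: list of 2D numpy arrays of phase difference pixel values (same pupil
    side), one per frame; t, y, x: target frame index and pixel position.
    """
    target = frames[t][y, x]
    acc, wsum = 0.0, 0.0
    # Time direction: same position in neighboring frames, weighted by similarity.
    for ti in range(max(0, t - 1), min(len(frames), t + 2)):
        v = frames[ti][y, x]
        w = np.exp(-((v - target) ** 2) / (2.0 * sigma_t ** 2))
        acc += w * v
        wsum += w
    # Spatial direction: neighboring phase difference pixels in the same frame.
    h, w_img = frames[t].shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if (dy, dx) != (0, 0) and 0 <= yy < h and 0 <= xx < w_img:
                v = frames[t][yy, xx]
                w = np.exp(-((v - target) ** 2) / (2.0 * sigma_s ** 2))
                acc += w * v
                wsum += w
    return acc / wsum
```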
  • <Configuration Example of Image Sensor 112c> FIGS. 30 to 32 show a configuration example of the image sensor 112c used in the camera 101b (not shown) in the second embodiment.
  • the camera 101b is obtained by replacing the image sensor 112a of the camera 101a with an image sensor 112c.
  • the image sensor 112c differs from the image sensor 112a in that IR pixels, each provided with an IR filter that transmits near-infrared light of about 700 to 1100 nm, are arranged instead of the Gr pixels.
  • the IR filter used for the IR pixel is realized, for example, by stacking the B + IRb filter having the spectral characteristics of the waveform 251 in FIG. 5 and the R + IRr filter having the spectral characteristics of the waveform 253 described above.
  • the IR pixels are arranged in odd rows and odd columns. Further, IR pixels that are pupil-divided in the horizontal direction and IR pixels that are pupil-divided in the vertical direction are alternately arranged in the row direction and the column direction.
  • in steps S1 to S12, processing similar to that described above is executed. However, in steps S8, S10, and S12, integration processing of the phase difference pixel values in the IR wavelength band is also executed.
  • the integration process of the phase difference pixel values in the IR wavelength band is performed by the same method as that for the phase difference pixel values in the other wavelength bands.
  • in step S13, the phase difference detection unit 133 selects a phase difference sequence to be used for detection of the phase shift amount.
  • for example, the phase difference detection unit 133 selects the phase difference sequence to be used for detecting the phase shift amount from the horizontal or vertical phase difference sequences of the R wavelength band, the G wavelength band, or the B wavelength band, as in the first embodiment described above.
  • however, when the illuminance of visible light is not sufficient, the S/N ratio is low in all of the phase difference sequences in the R wavelength band, the G wavelength band, and the B wavelength band, and the detection accuracy of the phase shift amount decreases.
  • FIGS. 33 to 35 are diagrams comparing phase shift amounts of the G wavelength band and the IR wavelength band when the illuminance of visible light is not sufficient.
  • the right-side diagrams of FIGS. 33 to 35 are schematic views of a state in which a light beam in the G wavelength band and a light beam in the IR wavelength band are incident on the lens 111 from above.
  • a light beam in the G wavelength band is indicated by a solid line
  • a light beam in the IR wavelength band is indicated by a dotted line.
  • the left diagrams of FIGS. 33 to 35 show examples of the amount of phase shift between the G wavelength band and the IR wavelength band when the light beams in the G wavelength band and the IR wavelength band are in the state shown in the right diagrams.
  • FIG. 33 shows a case where the light beam in the IR wavelength band is in a back-focus state, FIG. 34 shows a case where the light beam in the IR wavelength band is in a front-focus state, and FIG. 35 shows a case where the light beam in the G wavelength band is in focus.
  • as shown in FIGS. 33 to 35, when the illuminance of visible light is low, the phase difference pixel values of the phase difference sequence in the G wavelength band become extremely small, and the detection accuracy of the phase shift amount decreases.
  • FIGS. 33 and 34 show the maximum value (upper arrow) and the minimum value (lower arrow) of the phase shift amount that may be detected in this case; the detected phase shift amount may fall anywhere between the two.
  • on the other hand, since the phase difference image AE unit 154 performs the same AE control for all the wavelength bands, the phase difference pixel values of the phase difference sequence in the IR wavelength band become relatively large, while the amount of noise they contain is about the same.
  • therefore, the phase difference sequence in the IR wavelength band has larger phase difference pixel values and a higher S/N ratio than the phase difference sequences in the visible light wavelength bands.
  • accordingly, in this case, the phase difference detection unit 133 selects the phase difference sequence to be used for detecting the phase shift amount from the horizontal or vertical phase difference sequences of the IR wavelength band.
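  • A minimal sketch of such a selection rule, assuming the S/N ratio of each candidate phase difference sequence can be estimated (here crudely, from its mean signal level and an assumed noise level). The threshold-free "pick the best" rule, the SNR estimate, and the names are illustrative assumptions, not the patent's concrete criterion.

```python
import numpy as np

def select_sequence(sequences, noise_sigma=1.0):
    """Pick the phase difference sequence with the highest estimated S/N ratio.

    sequences: dict mapping a (band, direction) key, e.g. ("IR", "horizontal"),
    to a 1D numpy array of integrated phase difference pixel values.
    """
    def snr(values):
        # Crude illustrative estimate: mean signal level over an assumed noise level.
        return float(np.mean(np.abs(values))) / noise_sigma
    return max(sequences.items(), key=lambda kv: snr(kv[1]))[0]

# Example: under low visible-light illuminance the IR sequence tends to win.
seqs = {
    ("G", "horizontal"): np.array([2.0, 3.0, 2.5, 3.5]),
    ("IR", "horizontal"): np.array([40.0, 55.0, 48.0, 60.0]),
}
print(select_sequence(seqs))  # -> ("IR", "horizontal")
```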
  • on the other hand, as shown in FIG. 35, when the light beam in the G wavelength band is in focus, the phase difference pixel values of the phase difference sequence in the G wavelength band become large, and the phase difference pixel values of the phase difference sequence in the IR wavelength band may be smaller than in FIGS. 33 and 34.
  • in such a case, for example, the phase difference detection unit 133 selects a phase difference sequence in the G wavelength band at the next focusing time.
  • alternatively, the phase difference detection unit 133 may select the phase difference sequence to be used for detecting the phase shift amount from the horizontal or vertical phase difference sequences of the IR wavelength band, without using the phase difference sequences of the R, G, and B wavelength bands.
  • in step S14, the amount of phase shift is detected by the same processing as in the first embodiment.
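  • The specification refers back to the first embodiment for how the phase shift amount itself is detected. As a generic, hedged illustration (not necessarily the patent's exact procedure), the shift between the sequence from the left-biased partial regions and the sequence from the right-biased partial regions can be estimated by minimizing a sum of absolute differences over candidate shifts; the function name and parameters below are assumptions, and sub-pixel refinement is omitted.

```python
import numpy as np

def detect_phase_shift(left_seq, right_seq, max_shift=8):
    """Return the integer shift (in pixels) that best aligns the two sequences.

    left_seq / right_seq: 1D numpy arrays of integrated phase difference pixel
    values along one direction, from the left- and right-biased partial regions.
    """
    best_shift, best_cost = 0, np.inf
    n = len(left_seq)
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)
        if hi - lo < 2:
            continue
        # Sum of absolute differences between the overlapping parts at shift s.
        cost = np.mean(np.abs(left_seq[lo:hi] - right_seq[lo - s:hi - s]))
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift
```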
  • in step S15, the phase shift amount is converted into the defocus amount by the same processing as in the first embodiment.
  • in step S16, the in-focus position is obtained by the same processing as in the first embodiment. That is, the focus position detection unit 134 obtains the in-focus position based on the defocus amount obtained in the process of step S15. Furthermore, when the wavelength band used for focusing differs from the phase difference detection wavelength band used for detecting the phase shift amount, the in-focus position detection unit 134 corrects the in-focus position using the table of chromatic aberration amounts shown in FIG. 21.
  • the focusing position is corrected based on the axial chromatic aberration of the G wavelength band with respect to the IR wavelength band. Since the axial chromatic aberration is large between light in the IR wavelength band and visible light, this in-focus position correction is particularly effective.
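  • A minimal sketch of steps S15 and S16 under stated assumptions: the phase shift is converted to a defocus amount with an optics-dependent conversion factor assumed to be linear, and when the detection wavelength band differs from the focusing wavelength band the in-focus position is shifted by a value looked up from an axial chromatic aberration table. The dict below is a hypothetical stand-in for the table of FIG. 21; its values, the function names, and the sign conventions are placeholders, not from the patent.

```python
# Hypothetical axial chromatic aberration table: correction (in lens position
# units) applied when focus is detected in `detect_band` but the image should
# be focused for `focus_band`. Values are illustrative placeholders.
AXIAL_CA_TABLE = {
    ("IR", "G"): -0.12,
    ("G", "IR"): +0.12,
}

def focus_position(phase_shift, k_defocus, current_lens_pos,
                   detect_band="IR", focus_band="G"):
    """Convert a phase shift to a corrected in-focus lens position (illustrative)."""
    defocus = k_defocus * phase_shift      # step S15: assumed linear conversion
    target = current_lens_pos + defocus    # step S16: uncorrected in-focus position
    if detect_band != focus_band:          # correct for axial chromatic aberration
        target += AXIAL_CA_TABLE.get((detect_band, focus_band), 0.0)
    return target
```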
  • in step S17, the focus is adjusted by the same processing as in the first embodiment.
  • thereafter, the process returns to step S1, and the processes from step S1 onward are executed.
  • a waveform 301 in FIG. 37 is a graph showing an example of spectral characteristics of a bandpass filter (hereinafter referred to as a W + IR filter) provided in a W + IR pixel.
  • the horizontal axis of the graph indicates the wavelength, and the vertical axis indicates the spectral transmittance.
  • the W + IR filter transmits visible light in a wavelength band including the B wavelength band, G wavelength band, and R wavelength band, and light in the IR wavelength band (only about 800 to 900 nm). Therefore, the sensitivity of the W + IR pixel using the W + IR filter is higher than that of the other pixels.
  • in this case, as shown in the following equation (15), the pixel value IR of the IR component can be obtained by subtracting the pixel values of the surrounding R pixel, G pixel, and B pixel from the pixel value of the W + IR pixel.
  • here, W + IR, B, G, and R indicate the pixel values of the W + IR pixel, the B pixel, the G pixel, and the R pixel, respectively.
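  • The body of equation (15) is not reproduced in the text above; consistent with the surrounding description, it amounts to subtracting the surrounding visible-band pixel values from the W+IR pixel value. The sketch below is a hedged reconstruction under that reading, with interpolation of the R, G, and B values to the W+IR pixel position assumed to have been done beforehand.

```python
def ir_component(w_ir, r, g, b):
    """Equation (15) as implied by the text: IR = (W+IR) - R - G - B.

    w_ir: pixel value of the W+IR pixel; r, g, b: pixel values of the
    surrounding R, G, and B pixels (interpolated to the same position).
    """
    return w_ir - (r + g + b)
```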
  • an infrared cut filter is laminated in the wavelength selection filters of the R pixel, the G pixel, and the B pixel, and the light in the IR wavelength band is blocked.
  • on the other hand, in the IR pixel, an infrared cut filter is not stacked in the wavelength selection filter, and light in the IR wavelength band is received.
  • a step corresponding to the film thickness of the infrared cut filter is generated at the boundary between the IR pixel and a pixel in the visible light wavelength band other than the IR pixel (R pixel, G pixel, B pixel). Then, the incident light in the oblique direction crossing the step causes color mixing of the IR wavelength band component in the visible wavelength band pixel. The influence of this color mixture becomes larger as the pixel interval becomes smaller due to the miniaturization of the image sensor due to the increase in the number of pixels.
  • the third embodiment is intended to prevent the occurrence of this color mixture.
  • FIG. 38 shows an example of pixel arrangement in an image sensor 112d (not shown) used for the camera 101c (not shown) in the third embodiment.
  • the camera 101c is obtained by replacing the image sensor 112a of the camera 101a with an image sensor 112d.
  • the image sensor 112d differs in that B + IR pixels, G + IR pixels, and R + IR pixels are arranged instead of the B pixels, G pixels, and R pixels.
  • B + IR pixels are provided with a B + IR filter that transmits the B wavelength band (about 405 to 500 nm) and a part of the IR wavelength band (about 800 to 900 nm).
  • the B + IR filter is realized, for example, by stacking the B + IRb filter having the spectral characteristics of the waveform 251 in FIG. 5 and the W + IR filter having the spectral characteristics of the waveform 301 in FIG. 37 described above.
  • the G + IR pixel is provided with a G + IR filter that transmits the G wavelength band (about 475 to 600 nm) and a part of the IR wavelength band (about 800 to 900 nm).
  • the G + IR filter is realized, for example, by stacking the G + IRg filter having the spectral characteristics of the waveform 252 in FIG. 5 and the W + IR filter.
  • the R + IR pixel is provided with an R + IR filter that transmits the R wavelength band (about 580 to 650 nm) and part of the IR wavelength band (about 800 to 900 nm).
  • the R + IR filter is realized, for example, by stacking the R + IRr filter having the spectral characteristics of the waveform 253 in FIG. 5 and the W + IR filter.
  • the IR pixel is provided with an IR filter that transmits only a part of the IR wavelength band (around 800 to 900 nm).
  • the IR filter is realized, for example, by stacking a B + IRb filter, an R + IRr filter, and a W + IR filter.
  • FIG. 39 is a graph showing the spectral characteristics of B + IR pixels, G + IR pixels, and R + IR pixels.
  • the horizontal axis of the graph indicates the wavelength, and the vertical axis indicates the spectral transmittance.
  • a waveform 401 indicates the spectral characteristic of the B + IR pixel
  • a waveform 402 indicates the spectral characteristic of the G + IR pixel
  • a waveform 403 indicates the spectral characteristic of the R + IR pixel.
  • the spectral characteristics of the IR wavelength band transmitted through the filters of the B + IR pixel, the G + IR pixel, the R + IR pixel, and the IR pixel are normalized by the W + IR filter and become almost the same (around 800 to 900 nm). Therefore, as shown in the following equations (16) to (18), by subtracting the pixel value IR of the neighboring IR pixel from the pixel value B + IR of the B + IR pixel, the pixel value G + IR of the G + IR pixel, and the pixel value R + IR of the R + IR pixel, the pixel value B, the pixel value G, and the pixel value R in the visible light region are obtained.
  • this prevents color mixing in which the IR wavelength band component is mixed into the pixel values in the visible light wavelength bands.
  • in addition, the IR component of each pixel can be used.
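  • A hedged sketch of equations (16) to (18) as described above: the pixel value of the neighboring IR pixel is subtracted from each of the B+IR, G+IR, and R+IR pixel values to recover the visible-light components. Interpolation of the IR value to each pixel position and the function name are assumptions for illustration.

```python
def visible_components(b_ir, g_ir, r_ir, ir):
    """Equations (16)-(18): B = (B+IR) - IR, G = (G+IR) - IR, R = (R+IR) - IR."""
    return b_ir - ir, g_ir - ir, r_ir - ir
```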
  • the W + IR filter can be formed on-chip on the image sensor 112d, for example, or can be installed in a part of the optical system of the camera 101c.
  • the light receiving range in the IR wavelength band is narrower than that in the second embodiment.
  • the accuracy of the in-focus position correction using the longitudinal chromatic aberration is increased.
  • the SN ratio of the phase difference pixel value in the IR wavelength band is lowered.
  • the SN ratio of the phase difference pixel value in the IR wavelength band is improved by performing integration processing in the time direction, the spatial direction, and the wavelength direction of the phase difference pixel value in the IR wavelength band. As a result, focusing accuracy is improved.
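  • As an illustration of the wavelength-direction part of that integration, the sketch below adds the phase difference pixel values of different wavelength bands at the same pixel, down-weighting bands whose values differ strongly from the target band. The alignment for lateral chromatic aberration mentioned elsewhere in the specification is omitted here for brevity, and the weight function and parameters are assumptions.

```python
import numpy as np

def integrate_wavelengths(values, target_band="IR", sigma=10.0):
    """Illustrative wavelength-direction integration at one pixel position.

    values: dict mapping band name ("R", "G", "B", "IR") to the phase difference
    pixel value of that band at the same pixel.
    """
    target = values[target_band]
    acc, wsum = 0.0, 0.0
    for band, v in values.items():
        # Bands whose values deviate strongly from the target band get small weight.
        w = 1.0 if band == target_band else np.exp(-((v - target) ** 2) / (2.0 * sigma ** 2))
        acc += w * v
        wsum += w
    return acc / wsum
```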
  • W + IR pixels may be arranged instead of the IR pixels in the pixel arrangement of FIG. 40.
  • the W + IR pixel is provided with, for example, a W + IR filter having the spectral characteristics of the waveform 301 in FIG. 37 described above.
  • in this case, as in the following equation (19), the pixel value IR of the IR component is calculated from the pixel value W + IR of the W + IR pixel and the pixel values B + IR, G + IR, and R + IR of the surrounding pixels.
  • in the fourth embodiment, the configuration of the image sensor is different from those of the first to third embodiments described above. Specifically, in the fourth embodiment, a stacked (multilayer) spectroscopic image sensor is used.
  • FIG. 41 shows a configuration example of the pixel 501a of the image sensor 112e (not shown) used for the camera 101d (not shown) in the fourth embodiment.
  • the camera 101d is obtained by replacing the image sensor 112a of the camera 101a with an image sensor 112e.
  • the upper diagram in FIG. 41 schematically shows an exploded view of the pixel 501a seen from the lateral direction
  • the lower diagram schematically shows a plan view of the pixel 501a seen from the upper direction.
  • an on-chip microlens 511, a light shielding portion 512a, a photoelectric conversion layer 513L, and a photoelectric conversion layer 513R are sequentially stacked from the top.
  • in the photoelectric conversion layer 513L, the photoelectric conversion units 513BL to 513IRL are stacked so as to substantially overlap in order from the top.
  • in the photoelectric conversion layer 513R, the photoelectric conversion units 513BR to 513IRR are stacked so as to substantially overlap in order from the top.
  • the light incident on the on-chip microlens 511 is condensed toward the center of the light receiving surface of the pixel 501a, which is the optical axis center of the on-chip microlens 511.
  • the incident light is incident on the light receiving regions of the photoelectric conversion units 513BL and 513BR that are not shielded by the light shielding unit 512a.
  • the light shielding unit 512a has effects of preventing color mixture with adjacent pixels and pupil division of the pixel 501a.
  • the photoelectric conversion units 513BL to 513IRL and the photoelectric conversion units 513BR to 513IRR are each composed of a photoelectric conversion element such as a photodiode, for example.
  • the photoelectric conversion unit 513BL and the photoelectric conversion unit 513BR are arranged in a horizontal direction (row direction, left-right direction) with a predetermined interval.
  • the photoelectric conversion unit 513BL is disposed at a position offset to the left of the light receiving surface of the pixel 501a, and the light receiving region of the photoelectric conversion unit 513BL is eccentric to the left with respect to the on-chip microlens 511.
  • the photoelectric conversion unit 513BR is disposed at a position offset in the right direction of the light receiving surface of the pixel 501a, and the light receiving region of the photoelectric conversion unit 513BR is eccentric in the right direction with respect to the on-chip microlens 511.
  • similarly, the photoelectric conversion units 513GL to 513IRL are arranged at positions biased to the left of the light receiving surface of the pixel 501a, and their light receiving regions are eccentric to the left with respect to the on-chip microlens 511.
  • the photoelectric conversion units 513GR to 513IRR are arranged at positions biased to the right of the light receiving surface of the pixel 501a, and their light receiving regions are eccentric to the right with respect to the on-chip microlens 511.
  • the photoelectric conversion units 513BL and 513BR have sensitivity to light in the B wavelength band. That is, the photoelectric conversion units 513BL and 513BR absorb light in the B wavelength band and transmit light in other wavelength bands.
  • the photoelectric conversion units 513BL and 513BR each output a pixel signal corresponding to the amount of light received in the B wavelength band.
  • the photoelectric conversion units 513GL and 513GR are sensitive to light in the G wavelength band. That is, the photoelectric conversion units 513GL and 513GR absorb light in the G wavelength band and transmit light in other wavelength bands.
  • the photoelectric conversion units 513GL and 513GR each output a pixel signal corresponding to the amount of light received in the G wavelength band.
  • the photoelectric conversion units 513RL and 513RR have sensitivity to light in the R wavelength band. That is, the photoelectric conversion units 513RL and 513RR absorb light in the R wavelength band and transmit light in other wavelength bands.
  • the photoelectric conversion units 513RL and 513RR each output a pixel signal corresponding to the amount of light received in the R wavelength band.
  • the photoelectric conversion units 513IRL and 513IRR have sensitivity to light in the IR wavelength band. That is, the photoelectric conversion units 513IRL and 513IRR absorb light in the IR wavelength band and transmit light in other wavelength bands.
  • the photoelectric conversion units 513IRL and 513IRR each output a pixel signal corresponding to the amount of light received in the IR wavelength band.
  • as described above, in the pixel 501a, incident light is pupil-divided in the horizontal direction (row direction, left-right direction).
  • in addition, the pixel 501a spectrally splits incident light in the stacking direction perpendicular to the light receiving surface, and can simultaneously and independently output pixel signals for light in the B wavelength band, the G wavelength band, the R wavelength band, and the IR wavelength band.
  • the pixel 501a can individually output the pixel signals of the photoelectric conversion units 513BL and 513BR, or can add and output them. Pixel signals individually output from the photoelectric conversion units 513BL and 513BR are used as phase difference detection signals, and a signal obtained by adding the two pixel signals is used as a normal photographing signal.
  • the pixel 501a can individually output the pixel signals of the photoelectric conversion units 513GL and 513GR, or can add and output them. Pixel signals individually output from the photoelectric conversion units 513GL and 513GR are used as signals for phase difference detection, and a signal obtained by adding two pixel signals is used as a signal for normal photographing.
  • the pixel 501a can individually output the pixel signals of the photoelectric conversion units 513RL and 513RR, or can add and output them. Pixel signals individually output from the photoelectric conversion units 513RL and 513RR are used as phase difference detection signals, and a signal obtained by adding two pixel signals is used as a normal photographing signal.
  • the pixel 501a can individually output the pixel signals of the photoelectric conversion units 513IRL and 513IRR, or can add and output them. Pixel signals individually output from the photoelectric conversion units 513IRL and 513IRR are used as phase difference detection signals, and a signal obtained by adding the two pixel signals is used as a normal photographing signal.
  • the image sensor 112e can receive light of different wavelength bands independently in each pixel and output a photographing pixel value and a phase difference pixel value corresponding to the amount of received light for each wavelength band. Therefore, in the image sensor 112e, unlike the image sensors 112a to 112d of the first to third embodiments, it is not necessary to arrange the wavelength selection filters having different transmission wavelength bands in a mosaic pattern on the imaging surface. In addition, by using the image sensor 112e, it is not necessary to interpolate pixel values in a wavelength band different from the wavelength band for receiving light by demosaic processing in each pixel. Therefore, by using the image sensor 112e, the spatial resolution of light in each wavelength band is improved. Thereby, the detection accuracy of the in-focus position is improved regardless of the phase difference sequence of any wavelength band.
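  • To make the readout modes described above concrete, here is a small sketch (with assumed dictionary keys and function name) showing how the left and right signals of one band are used individually as phase difference detection signals and summed as the normal imaging signal.

```python
def readout(pixel_signals, band="G", mode="imaging"):
    """Illustrative readout of one band of a stacked spectroscopic pixel.

    pixel_signals: dict like {"GL": ..., "GR": ..., "BL": ..., ...} holding the
    signals of the left- and right-biased photoelectric conversion units per band.
    """
    left, right = pixel_signals[band + "L"], pixel_signals[band + "R"]
    if mode == "phase":
        return left, right   # used separately as phase difference detection signals
    return left + right      # summed as the normal imaging signal

signals = {"GL": 120, "GR": 135}
print(readout(signals, "G", "phase"))    # (120, 135)
print(readout(signals, "G", "imaging"))  # 255
```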
  • FIG. 41 illustrates an example in which the pupil is divided in the horizontal direction
  • in the image sensor 112e, there are also pixels that perform pupil division in the vertical direction (column direction, up-down direction).
  • the S / N ratio of the phase difference image can be improved by performing the integration process of the phase difference pixel value of each phase difference pixel as described above. As a result, focusing accuracy is improved.
  • FIG. 42 shows a configuration example of a stacked spectroscopic pixel 501b divided into four pupils.
  • the upper diagram of FIG. 42 schematically shows an exploded view of the pixel 501b viewed from the lateral direction, and the lower diagram schematically shows a plan view of the pixel 501b viewed from the upper direction.
  • the same parts as those in FIG. 41 are denoted by the same reference numerals.
  • the pixel 501b differs from the pixel 501a in FIG. 41 in that a light shielding unit 512b, a photoelectric conversion layer 513LU (not shown), a photoelectric conversion layer 513LD, a photoelectric conversion layer 513RU (not shown), and a photoelectric conversion layer 513RD are provided instead of the light shielding unit 512a and the photoelectric conversion layers 513L and 513R.
  • in the photoelectric conversion layer 513LU, the photoelectric conversion unit 513BLU, the photoelectric conversion unit 513GLU (not shown), the photoelectric conversion unit 513RLU (not shown), and the photoelectric conversion unit 513IRLU (not shown) are stacked so as to substantially overlap in order from the top.
  • in the photoelectric conversion layer 513LD, the photoelectric conversion unit 513BLD, the photoelectric conversion unit 513GLD, the photoelectric conversion unit 513RLD, and the photoelectric conversion unit 513IRLD are stacked so as to substantially overlap in order from the top.
  • in the photoelectric conversion layer 513RU, the photoelectric conversion unit 513BRU, the photoelectric conversion unit 513GRU (not shown), the photoelectric conversion unit 513RRU (not shown), and the photoelectric conversion unit 513IRRU (not shown) are stacked so as to substantially overlap in order from the top.
  • in the photoelectric conversion layer 513RD, the photoelectric conversion unit 513BRD, the photoelectric conversion unit 513GRD, the photoelectric conversion unit 513RRD, and the photoelectric conversion unit 513IRRD are stacked so as to substantially overlap in order from the top.
  • the light shielding part 512b has substantially the same shape as the light shielding part 213b of the pixel 201b in FIG.
  • the photoelectric conversion unit 513BLU, the photoelectric conversion unit 513BLD, the photoelectric conversion unit 513BRU, and the photoelectric conversion unit 513BRD are arranged on the light receiving surface of the pixel 501b at substantially the same positions as the corresponding photoelectric conversion units (including the photoelectric conversion units 214RU and 214RD) on the light receiving surface of the pixel 201b.
  • the photoelectric conversion units 513BLU to 513IRLU are arranged at positions offset in the upper left direction of the light receiving surface of the pixel 501b.
  • the photoelectric conversion units 513BLD to 513IRLD are arranged at positions deviated in the lower left direction of the light receiving surface of the pixel 501b.
  • the photoelectric conversion units 513BRU to 513IRRU are arranged at positions offset in the upper right direction of the light receiving surface of the pixel 501b.
  • the photoelectric conversion units 513BRD to 513IRRD are arranged at positions offset in the lower right direction of the light receiving surface of the pixel 501b.
  • the photoelectric conversion units 513BLU to 513BRD have sensitivity to light in the B wavelength band. That is, the photoelectric conversion units 513BLU to 513BRD absorb light in the B wavelength band and transmit light in other wavelength bands.
  • the photoelectric conversion units 513BLU to 513BRD each output a pixel signal corresponding to the amount of light received in the B wavelength band.
  • the photoelectric conversion units 513GLU to 513GRD are sensitive to light in the G wavelength band. That is, the photoelectric conversion units 513GLU to 513GRD absorb light in the G wavelength band and transmit light in other wavelength bands.
  • the photoelectric conversion units 513GLU to 513GRD output pixel signals corresponding to the amount of light received in the G wavelength band.
  • the photoelectric conversion units 513RLU to 513RRD are sensitive to light in the R wavelength band. That is, the photoelectric conversion units 513RLU to 513RRD absorb light in the R wavelength band and transmit light in other wavelength bands.
  • the photoelectric conversion units 513RLU to 513RRD each output a pixel signal corresponding to the amount of light received in the R wavelength band.
  • the photoelectric conversion units 513IRLU to 513IRRD have sensitivity to light in the IR wavelength band. That is, the photoelectric conversion units 513IRLU to 513IRRD absorb light in the IR wavelength band and transmit light in other wavelength bands.
  • the photoelectric conversion units 513IRLU to 513IRRD output pixel signals corresponding to the amount of received light in the IR wavelength band.
  • as described above, in the pixel 501b, incident light is pupil-divided in both the horizontal direction and the vertical direction.
  • in addition, the pixel 501b spectrally splits incident light in the stacking direction perpendicular to the light receiving surface, and can simultaneously and independently output pixel signals for light in the B wavelength band, the G wavelength band, the R wavelength band, and the IR wavelength band.
  • the pixel 501b can individually output the pixel signals of the photoelectric conversion units 513BLU to 513BRD, or can add and output them in any combination.
  • the pixel 501b can individually output pixel signals of the photoelectric conversion units 513GLU to 513GRD, or add and output them in any combination.
  • the pixel 501b can individually output the pixel signals of the photoelectric conversion units 513RLU to 513RRD, or can add and output them in any combination.
  • the pixel 501b can individually output the pixel signals of the photoelectric conversion units 513IRLU to 513IRRD, or can add and output them in any combination.
  • stacked spectroscopic pixels can also be used as imaging dedicated pixels and as phase difference detection dedicated pixels.
  • FIG. 43 shows a configuration example of a pixel 501c that is a dedicated pixel for imaging.
  • the upper diagram in FIG. 43 schematically shows an exploded view of the pixel 501c as seen from the lateral direction
  • the lower diagram schematically shows a plan view of the pixel 501c as seen from the upper direction.
  • the same parts as those in FIG. 41 are denoted by the same reference numerals.
  • the pixel 501c is different from the pixel 501a in FIG. 41 in that a light shielding portion 512c and a photoelectric conversion layer 513M are provided instead of the light shielding portion 512a and the photoelectric conversion layers 513L and 513R.
  • the photoelectric conversion units 513BM to 513IRM are stacked so as to substantially overlap in order from the top.
  • the light shielding portion 512c has substantially the same shape as the light shielding portion 213c of the pixel 201c in FIG.
  • the photoelectric conversion unit 513BM is disposed on the light receiving surface of the pixel 501c at substantially the same position as the photoelectric conversion unit 214M on the light receiving surface of the pixel 201c in FIG. Therefore, the light receiving areas of the photoelectric conversion units 513BM to 513IRM are not decentered with respect to the on-chip microlens 511.
  • the photoelectric conversion unit 513BM is sensitive to light in the B wavelength band. That is, the photoelectric conversion unit 513BM absorbs light in the B wavelength band and transmits light in other wavelength bands.
  • the photoelectric conversion unit 513BM outputs a pixel signal corresponding to the amount of light received in the B wavelength band.
  • the photoelectric conversion unit 513GM is sensitive to light in the G wavelength band. That is, the photoelectric conversion unit 513GM absorbs light in the G wavelength band and transmits light in other wavelength bands.
  • the photoelectric conversion unit 513GM outputs a pixel signal corresponding to the amount of light received in the G wavelength band.
  • the photoelectric conversion unit 513RM is sensitive to light in the R wavelength band. That is, the photoelectric conversion unit 513RM absorbs light in the R wavelength band and transmits light in other wavelength bands.
  • the photoelectric conversion unit 513RM outputs a pixel signal corresponding to the amount of light received in the R wavelength band.
  • the photoelectric conversion unit 513IRM has sensitivity to light in the IR wavelength band. That is, the photoelectric conversion unit 513IRM absorbs light in the IR wavelength band and transmits light in other wavelength bands. Then, the photoelectric conversion unit 513IRM outputs a pixel signal corresponding to the amount of light received in the IR wavelength band.
  • FIG. 44 shows a configuration example of a pixel 501dL that is a phase difference detection dedicated pixel.
  • the upper diagram of FIG. 44 schematically shows an exploded view of the pixel 501dL viewed from the lateral direction, and the lower diagram schematically shows a plan view of the pixel 501dL viewed from the upper direction.
  • the same parts as those in FIG. 41 are denoted by the same reference numerals.
  • the pixel 501dL is different from the pixel 501a in FIG. 41 in that a light shielding part 512dL is provided instead of the light shielding part 512a, and the photoelectric conversion layer 513R is not provided.
  • the light shielding part 512dL has substantially the same shape as the light shielding part 213dL of the pixel 201dL in FIG.
  • the photoelectric conversion unit 513BL receives light in the B wavelength band that is incident on substantially the left half of the light receiving surface of the pixel 501dL, and outputs a pixel signal corresponding to the amount of light received.
  • the photoelectric conversion unit 513GL receives light in the G wavelength band that is incident on substantially the left half of the light receiving surface of the pixel 501dL, and outputs a pixel signal corresponding to the amount of light received.
  • the photoelectric conversion unit 513RL receives light in the R wavelength band that is incident on substantially the left half of the light receiving surface of the pixel 501dL, and outputs a pixel signal corresponding to the amount of received light.
  • the photoelectric conversion unit 513IRL receives light in the IR wavelength band that is incident on substantially the left half of the light receiving surface of the pixel 501dL, and outputs a pixel signal corresponding to the amount of light received. Thereby, in the pixel 501dL, the incident light is divided into pupils in almost the left half.
  • FIG. 45 shows a configuration example of a pixel 501dR that is a phase difference detection dedicated pixel.
  • the upper diagram in FIG. 45 schematically shows an exploded view of the pixel 501dR viewed from the lateral direction
  • the lower diagram schematically shows a plan view of the pixel 501dR viewed from the upper direction.
  • the same parts as those in FIG. 41 are denoted by the same reference numerals.
  • the pixel 501dR is different from the pixel 501a in FIG. 41 in that a light shielding part 512dR is provided instead of the light shielding part 512a and the photoelectric conversion layer 513L is not provided.
  • the light shielding part 512dR has substantially the same shape as the light shielding part 213dR of the pixel 201dR in FIG.
  • the photoelectric conversion unit 513BR receives light in the B wavelength band that is incident on substantially the right half of the light receiving surface of the pixel 501dR, and outputs a pixel signal corresponding to the amount of light received.
  • the photoelectric conversion unit 513GR receives light in the G wavelength band that is incident on substantially the right half of the light receiving surface of the pixel 501dR, and outputs a pixel signal corresponding to the amount of light received.
  • the photoelectric conversion unit 513RR receives light in the R wavelength band that is incident on substantially the right half of the light receiving surface of the pixel 501dR, and outputs a pixel signal corresponding to the amount of light received.
  • the photoelectric conversion unit 513IRR receives light in the IR wavelength band that is incident on substantially the right half of the light receiving surface of the pixel 501dR, and outputs a pixel signal corresponding to the amount of light received. As a result, in the pixel 501dR, the incident light is divided into pupils substantially in the right half.
  • the wavelength bands of visible light and invisible light described above are merely examples, and the type and number of wavelength bands to be detected can be changed arbitrarily.
  • for example, invisible light in other wavelength bands, such as an ultraviolet wavelength band, may be detected, and the number and combination of invisible light wavelength bands to be detected can be changed.
  • similarly, the number and combination of visible light and invisible light wavelength bands to be detected can be changed; for example, a Ye pixel for detecting a yellow wavelength band (hereinafter referred to as a Ye wavelength band) and a W pixel may be provided.
  • the present technology can be applied to various devices and parts that perform AF.
  • the present technology can be applied not only to a single camera but also to various devices including a camera (for example, a smartphone, a mobile phone, etc.).
  • the present technology can also be applied to devices and parts that drive a lens from the outside.
  • the present technology can also be applied to devices and parts that perform AF control from the outside of a camera provided with a lens and an image sensor.
  • the present technology can be applied when shooting either a still image or a moving image.
  • the series of processes described above can be executed by hardware or can be executed by software.
  • when the series of processes is executed by software, a program constituting the software is installed in a computer.
  • here, the computer includes, for example, a computer incorporated in dedicated hardware, and a general-purpose personal computer capable of executing various functions by installing various programs.
  • FIG. 46 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
  • in the computer, a CPU (Central Processing Unit) 701, a ROM (Read Only Memory) 702, and a RAM (Random Access Memory) 703 are connected to one another via a bus 704.
  • An input / output interface 705 is further connected to the bus 704.
  • An input unit 706, an output unit 707, a storage unit 708, a communication unit 709, and a drive 710 are connected to the input / output interface 705.
  • the input unit 706 includes a keyboard, a mouse, a microphone, and the like.
  • the output unit 707 includes a display, a speaker, and the like.
  • the storage unit 708 includes a hard disk, a nonvolatile memory, and the like.
  • the communication unit 709 includes a network interface.
  • the drive 710 drives a removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • in the computer configured as described above, the CPU 701 loads the program stored in the storage unit 708 into the RAM 703 via the input/output interface 705 and the bus 704 and executes it, whereby the above-described series of processing is performed.
  • the program executed by the computer (CPU 701) can be provided by being recorded in, for example, a removable medium 711 as a package medium or the like.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the storage unit 708 via the input / output interface 705 by attaching the removable medium 711 to the drive 710. Further, the program can be received by the communication unit 709 via a wired or wireless transmission medium and installed in the storage unit 708. In addition, the program can be installed in advance in the ROM 702 or the storage unit 708.
  • the program executed by the computer may be a program that is processed in time series in the order described in this specification, or in parallel or at a necessary timing such as when a call is made. It may be a program for processing.
  • in this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), regardless of whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • the present technology can take a cloud computing configuration in which one function is shared by a plurality of devices via a network and is jointly processed.
  • each step described in the above flowchart can be executed by one device or can be shared by a plurality of devices.
  • the plurality of processes included in the one step can be executed by being shared by a plurality of apparatuses in addition to being executed by one apparatus.
  • the present technology can take the following configurations.
  • a phase difference pixel value which is a pixel value based on light incident on a partial area at a biased position on the light receiving surface of the pixel, is adaptively weighted in at least one of the time direction, the spatial direction, and the wavelength direction.
  • An integration processing unit for integrating; Based on the phase difference pixel values after integration, an image composed of a plurality of the phase difference pixel values corresponding to the plurality of first partial regions that are biased in the same direction, and the first partial region
  • An image processing apparatus comprising: a phase difference detection unit configured to detect a phase difference between a plurality of phase difference pixel values corresponding to a plurality of second partial regions located in positions that are biased in the opposite direction.
  • the image processing apparatus further including an in-focus position detection unit that detects an in-focus position based on the phase difference and, when the wavelength band used for detecting the phase difference differs from the wavelength band to be focused, corrects the in-focus position based on axial chromatic aberration between the wavelength bands.
  • the image processing apparatus further including a focus control unit that controls a focus position based on the in-focus position.
  • the phase difference detection unit selects at least one of a direction, a wavelength band, and a position for detecting the phase difference based on a predetermined condition.
  • the image processing apparatus according to any one of (1) to (4), in which the integration processing unit performs integration of the phase difference pixel value in the time direction using a weight based on the movement of the subject between frames in an image composed of pixel values based on incident light on the entire light receiving surface.
  • the image processing apparatus according to any one of (1) to (5), in which the integration processing unit separately calculates a weight based on the horizontal distance between pixels and a weight based on the vertical distance, and performs integration processing in the spatial direction using the two calculated weights.
  • the integration processing unit uses a weight based on the similarity between the pixels of the first image,
  • the phase difference pixel value is integrated in the spatial direction and the parameter of the first image does not satisfy the condition, the direction parallel to the direction of the partial region corresponding to the phase difference pixel value, A pixel value based on incident light on the entire light receiving surface with respect to a direction orthogonal to the direction of the partial region corresponding to the phase difference pixel value using a weight based on the similarity between pixels of the first image.
  • the image processing apparatus according to any one of (1) to (6), wherein a spatial direction integration of the phase difference pixel value is performed using a weight based on a similarity between pixels of the second image.
  • the image processing apparatus according to any one of (1) to (7), in which the integration processing unit performs integration of the phase difference pixel value in the wavelength direction by adding the phase difference pixel values of different wavelength bands of the same pixel using weights based on lateral chromatic aberration between the wavelength bands.
  • the integration processing unit further performs integration in the wavelength direction of the phase difference pixel value using a weight based on the difference between the phase difference pixel values of different wavelength bands of the same pixel. .
  • the integration processing unit performs two or more integration processes among integration processes in a time direction, a spatial direction, and a wavelength direction in a predetermined order, and a predetermined parameter representing the image quality of the phase difference pixel value is
  • the image processing apparatus according to any one of (1) to (9), wherein when the predetermined condition is satisfied, subsequent integration processing is not performed.
  • (11) A first photoelectric conversion unit located in a position biased in the first direction of the light receiving surface, and a second photoelectric conversion located in a position biased in a second direction opposite to the first direction of the light receiving surface.
  • the image processing apparatus according to any one of (1) to (10), further including an imaging element in which a plurality of first phase difference pixels that are pixels including at least one of the units are arranged.
  • the image sensor includes a third photoelectric conversion unit located in a position biased in a third direction orthogonal to the first direction and the second direction of the light receiving surface, and the third photoelectric conversion unit.
  • the plurality of second phase difference pixels which are pixels each including at least one of the fourth photoelectric conversion units located in a position deviated in a fourth direction opposite to the first direction, are arranged.
  • the first phase difference pixel includes the first photoelectric conversion unit and the second photoelectric conversion unit
  • the second phase difference pixel includes the third photoelectric conversion unit and the fourth photoelectric conversion unit,
  • the image processing device according to (12) wherein the first phase difference pixel and the second phase difference pixel are arranged in a predetermined pattern on the imaging element.
  • the image processing device according to any one of (11) to (13), wherein the first phase difference pixel includes a first pixel that receives invisible light in a first wavelength band.
  • the first phase difference pixel further includes a second pixel that receives invisible light in the first wavelength band and visible light in the second wavelength band.
  • the image processing apparatus according to any one of (11) to (13), in which the first phase difference pixels include a first pixel that receives visible light in a first wavelength band, a second pixel that receives visible light in a second wavelength band, and a third pixel that receives visible light in a third wavelength band including the first wavelength band and the second wavelength band, and invisible light in a fourth wavelength band.
  • the first phase difference pixel includes a first pixel that receives visible light in a first wavelength band and invisible light in a second wavelength band, visible light in a third wavelength band, and the second wavelength band.
  • the second pixel that receives invisible light, and the visible light in the fourth wavelength band including the first wavelength band and the third wavelength band, and the invisible light in the second wavelength band are received.
  • the image processing apparatus according to any one of (11) to (13), including a third pixel.
  • a first photoelectric conversion layer composed of a plurality of photoelectric conversion units that are stacked at positions shifted in the first direction of the light receiving surface and receive light of different wavelength bands, and the first direction of the light receiving surface.
  • a phase difference pixel that is a pixel including at least one of the second photoelectric conversion layers that are stacked at positions deviated in the second direction opposite to that of the plurality of photoelectric conversion units that receive light of different wavelength bands.
  • the image processing apparatus according to any one of (1) to (10), further including a plurality of image pickup elements.
  • a phase difference pixel value which is a pixel value based on light incident on a partial area at a biased position on the light receiving surface of the pixel, is adaptively weighted in at least one of the time direction, the spatial direction, and the wavelength direction.
  • a phase difference pixel value which is a pixel value based on light incident on a partial area at a biased position on the light receiving surface of the pixel, is adaptively weighted in at least one of the time direction, the spatial direction, and the wavelength direction.
  • An integration step to integrate Based on the phase difference pixel values after integration, an image composed of a plurality of the phase difference pixel values corresponding to the plurality of first partial regions that are biased in the same direction, and the first partial region

Abstract

The present technology pertains to an image processing device, an image processing method, and a program, which enable improvement of focusing accuracy. A camera is provided with: an integration processing unit which integrates a phase-difference pixel value that is a pixel value based on incident light on a partial area located at a decentered position on the light receiving surface of a pixel by using adaptive weighting in at least one of a time direction, a spatial direction, and a wavelength direction; and a phase-difference detecting unit that detects, on the basis of the integrated phase-difference pixel value, a phase difference between an image formed by a plurality of phase-difference pixel values corresponding to a plurality of first partial areas at positions decentered in the same direction and an image formed by a plurality of phase-difference pixel values corresponding to a plurality of second partial areas located at positions decentered in the direction opposite to the first partial areas. The present technology is applicable to, for example, digital cameras.

Description

Image processing apparatus, image processing method, and program
JP 2009-3122 A
 前記位相差に基づいて合焦位置を検出し、前記位相差の検出に用いた波長帯と焦点を合わせる波長帯とが異なる場合、波長帯間の軸上色収差に基づいて合焦位置を補正する合焦位置検出部をさらに設けることができる。 An in-focus position is detected based on the phase difference, and when the wavelength band used to detect the phase difference is different from the wavelength band to be focused, the in-focus position is corrected based on axial chromatic aberration between the wavelength bands. A focus position detection unit can be further provided.
 前記合焦位置に基づいて、焦点の位置を制御する焦点制御部をさらに設けることができる。 A focus control unit that controls the position of the focus based on the in-focus position can be further provided.
 前記位相差検出部には、所定の条件に基づいて、前記位相差を検出する方向、波長帯、及び、位置のうち少なくとも1つ以上を選択させることができる。 The phase difference detection unit can select at least one or more of a direction, a wavelength band, and a position for detecting the phase difference based on a predetermined condition.
 前記積分処理部には、前記受光面全体への入射光に基づく画素値からなる画像におけるフレーム間の被写体の動きに基づく重みを用いて、前記位相差画素値の時間方向の積分を行わせることができる。 The integration processing unit is configured to perform integration in the time direction of the phase difference pixel value using a weight based on movement of a subject between frames in an image including pixel values based on incident light on the entire light receiving surface. Can do.
 前記積分処理部には、画素間の水平方向の距離に基づく重みと垂直方向の距離に基づく重みを個別に算出し、算出した2つの重みを用いて空間方向の積分処理を行わせることができる。 The integration processing unit can individually calculate a weight based on a horizontal distance between pixels and a weight based on a vertical distance, and perform integration processing in the spatial direction using the two calculated weights. .
 When a predetermined parameter representing the image quality of a first image formed of the phase difference pixel values satisfies a predetermined condition, the integration processing unit may integrate the phase difference pixel values in the spatial direction using weights based on the similarity between pixels of the first image. When the parameter of the first image does not satisfy the condition, the integration processing unit may integrate the phase difference pixel values in the spatial direction using weights based on the similarity between pixels of the first image for the direction parallel to the direction of the partial region corresponding to the phase difference pixel values, and weights based on the similarity between pixels of a second image formed of pixel values based on light incident on the entire light receiving surface for the direction orthogonal to the direction of that partial region.
 The integration processing unit may integrate the phase difference pixel values in the wavelength direction by adding the phase difference pixel values of different wavelength bands of the same pixel using weights based on the magnification chromatic aberration between the wavelength bands.
 前記積分処理部には、さらに同じ画素の異なる波長帯の前記位相差画素値の差に基づく重みを用いて、前記位相差画素値の波長方向の積分を行わせることができる。 The integration processing unit may further perform integration of the phase difference pixel value in the wavelength direction using a weight based on the difference between the phase difference pixel values of different wavelength bands of the same pixel.
 The integration processing unit may perform two or more of the time-direction, spatial-direction, and wavelength-direction integration processes in a predetermined order, and may omit the later integration processes when a predetermined parameter representing the image quality of the image formed of the phase difference pixel values satisfies a predetermined condition.
 An image sensor may further be provided in which a plurality of first phase difference pixels are arranged, each first phase difference pixel being a pixel including at least one of a first photoelectric conversion unit located at a position offset in a first direction of the light receiving surface and a second photoelectric conversion unit located at a position offset in a second direction opposite to the first direction of the light receiving surface.
 In the image sensor, a plurality of second phase difference pixels may further be arranged, each second phase difference pixel being a pixel including at least one of a third photoelectric conversion unit located at a position offset in a third direction orthogonal to the first direction and the second direction of the light receiving surface and a fourth photoelectric conversion unit located at a position offset in a fourth direction opposite to the third direction of the light receiving surface.
 The first phase difference pixel may be provided with the first photoelectric conversion unit and the second photoelectric conversion unit, the second phase difference pixel may be provided with the third photoelectric conversion unit and the fourth photoelectric conversion unit, and the first phase difference pixels and the second phase difference pixels may be arranged in a predetermined pattern in the image sensor.
 前記第1の位相差画素には、第1の波長帯の不可視光を受光する第1の画素を含ませることができる。 The first phase difference pixel may include a first pixel that receives invisible light in a first wavelength band.
 前記第1の位相差画素には、前記第1の波長帯の不可視光及び第2の波長帯の可視光を受光する第2の画素をさらに含ませることができる。 The first phase difference pixel may further include a second pixel that receives invisible light in the first wavelength band and visible light in the second wavelength band.
 The first phase difference pixels may include a first pixel that receives visible light in a first wavelength band, a second pixel that receives visible light in a second wavelength band, and a third pixel that receives visible light in a third wavelength band including the first wavelength band and the second wavelength band and invisible light in a fourth wavelength band.
 The first phase difference pixels may include a first pixel that receives visible light in a first wavelength band and invisible light in a second wavelength band, a second pixel that receives visible light in a third wavelength band and invisible light in the second wavelength band, and a third pixel that receives visible light in a fourth wavelength band including the first wavelength band and the third wavelength band and invisible light in the second wavelength band.
 An image sensor may further be provided in which a plurality of phase difference pixels are arranged, each phase difference pixel being a pixel including at least one of a first photoelectric conversion layer formed of a plurality of stacked photoelectric conversion units that are located at positions offset in a first direction of the light receiving surface and receive light of mutually different wavelength bands, and a second photoelectric conversion layer formed of a plurality of stacked photoelectric conversion units that are located at positions offset in a second direction opposite to the first direction of the light receiving surface and receive light of mutually different wavelength bands.
 An image processing method according to one aspect of the present technology includes: an integration processing step of integrating phase difference pixel values, which are pixel values based on light incident on a partial region at a position offset within the light receiving surface of a pixel, with adaptive weighting in at least one of the time direction, the spatial direction, and the wavelength direction; and a phase difference detection step of detecting, based on the integrated phase difference pixel values, a phase difference between an image formed of a plurality of the phase difference pixel values corresponding to a plurality of first partial regions offset in the same direction and an image formed of a plurality of the phase difference pixel values corresponding to a plurality of second partial regions offset in the direction opposite to the first partial regions.
 A program according to one aspect of the present technology causes a computer to execute processing including: an integration processing step of integrating phase difference pixel values, which are pixel values based on light incident on a partial region at a position offset within the light receiving surface of a pixel, with adaptive weighting in at least one of the time direction, the spatial direction, and the wavelength direction; and a phase difference detection step of detecting, based on the integrated phase difference pixel values, a phase difference between an image formed of a plurality of the phase difference pixel values corresponding to a plurality of first partial regions offset in the same direction and an image formed of a plurality of the phase difference pixel values corresponding to a plurality of second partial regions offset in the direction opposite to the first partial regions.
 In one aspect of the present technology, phase difference pixel values, which are pixel values based on light incident on a partial region at a position offset within the light receiving surface of a pixel, are integrated with adaptive weighting in at least one of the time direction, the spatial direction, and the wavelength direction, and, based on the integrated phase difference pixel values, a phase difference is detected between an image formed of a plurality of the phase difference pixel values corresponding to a plurality of first partial regions offset in the same direction and an image formed of a plurality of the phase difference pixel values corresponding to a plurality of second partial regions offset in the direction opposite to the first partial regions.
 本技術の一側面によれば、位相差画素値からなる像の位相差の検出精度を向上させることができる。その結果、合焦精度を向上させることができる。 According to one aspect of the present technology, it is possible to improve the detection accuracy of the phase difference of an image made up of phase difference pixel values. As a result, focusing accuracy can be improved.
 FIG. 1 is a block diagram showing an embodiment of a camera to which the present technology is applied.
 FIG. 2 is a block diagram showing a configuration example of the AF control unit of the camera.
 FIG. 3 is an exploded view and a plan view schematically showing a configuration example of a pixel of the image sensor.
 FIG. 4 is a diagram showing an example of pixel arrangement.
 FIG. 5 is a graph showing examples of the spectral characteristics of the B+IRb, G+IRg, and R+IRr filters.
 FIGS. 6 and 7 are diagrams showing examples of pixel arrangement and pupil division directions.
 FIG. 8 is a flowchart for explaining AF control processing.
 FIG. 9 is a circuit diagram showing an example of an IIR filter that performs time-direction integration processing.
 FIG. 10 is a circuit diagram showing an example of an FIR filter that performs spatial-direction integration processing.
 FIG. 11 is a diagram showing an example of a Gaussian filter.
 FIG. 12 is a diagram for explaining magnification chromatic aberration.
 FIG. 13 is a circuit diagram showing an example of an FIR filter that performs wavelength-direction integration processing.
 FIG. 14 is a diagram showing an example of an axial chromatic aberration amount table.
 FIG. 15 is a diagram showing an example of the phase shift amount in a front-focus state.
 FIG. 16 is a diagram showing an example of the phase shift amount in an in-focus state.
 FIG. 17 is a diagram showing an example of the phase shift amount in a back-focus state.
 FIGS. 18 to 20 are diagrams for explaining axial chromatic aberration.
 FIG. 21 is a diagram showing an example of a magnification chromatic aberration amount table.
 FIGS. 22 to 25 are exploded views and plan views schematically showing first to fourth modifications of the pixel of the image sensor.
 FIG. 26 is a diagram showing an example of pixel arrangement and pupil division directions.
 FIGS. 27 and 28 are diagrams showing pattern examples of pixel arrangement.
 FIGS. 29 and 30 are diagrams showing examples of pixel arrangement and pupil division directions.
 FIG. 31 is a diagram showing an example of pixel arrangement.
 FIG. 32 is a diagram showing an example of pixel arrangement and pupil division directions.
 FIG. 33 is a diagram comparing the phase shift amounts of the G wavelength band and the IR wavelength band when the IR wavelength band is in a front-focus state.
 FIG. 34 is a diagram comparing the phase shift amounts of the G wavelength band and the IR wavelength band when the IR wavelength band is in a back-focus state.
 FIG. 35 is a diagram comparing the phase shift amounts of the G wavelength band and the IR wavelength band when the G wavelength band is in focus.
 FIG. 36 is a diagram showing an example of pixel arrangement.
 FIG. 37 is a graph showing an example of the spectral characteristics of a W+IR filter.
 FIG. 38 is a diagram showing an example of pixel arrangement.
 FIG. 39 is a diagram showing examples of the spectral characteristics of a B+IR pixel, a G+IR pixel, and an R+IR pixel.
 FIG. 40 is a diagram showing an example of pixel arrangement.
 FIGS. 41 to 45 are exploded views and plan views schematically showing first to fifth examples of a stacked spectral type pixel.
 FIG. 46 is a block diagram showing a configuration example of a computer.
 Hereinafter, modes for carrying out the present technology (hereinafter referred to as embodiments) will be described. The description will be given in the following order.
1. First embodiment (case of using only visible light)
2. Modification of the first embodiment
3. Second embodiment (first case of using IR light)
4. Modification of the second embodiment
5. Third embodiment (second case of using IR light)
6. Modification of the third embodiment
7. Fourth embodiment (case of using a stacked spectral type image sensor)
8. Modification of the fourth embodiment
9. Other modifications
<1.第1の実施の形態>
{カメラ101aの構成例}
 図1は、本技術を適用したデジタルカメラの一実施の形態であるカメラ101aの構成例を示すブロック図である。
<1. First Embodiment>
{Configuration example of camera 101a}
FIG. 1 is a block diagram illustrating a configuration example of a camera 101a which is an embodiment of a digital camera to which the present technology is applied.
 The camera 101a includes a lens 111, an image sensor 112a, an AGC (Automatic Gain Control) unit 113, an ADC (Analog Digital Converter) unit 114, a pixel interpolation unit 115, a captured image signal processing unit 116, a display system driving unit 117, an output display monitor 118, a captured image compression unit 119, a captured image storage unit 120, an AF (Auto Focus) control unit 121, and a lens driving unit 122. The AF control unit 121 includes a focus detection image acquisition unit 131, an integration processing unit 132, a phase difference detection unit 133, a focus position detection unit 134, a focus control unit 135, a focus detection image storage unit 136, and a chromatic aberration data storage unit 137.
 レンズ111は、被写体からの光(入射光)を撮像素子112aの撮像面において結像させる。レンズ111は、レンズ駆動部122の制御の下に光軸方向に移動することができる。レンズ111が光軸方向に移動することにより、レンズ111の焦点の位置が光軸方向に移動する。 The lens 111 forms an image of light (incident light) from the subject on the imaging surface of the imaging element 112a. The lens 111 can move in the optical axis direction under the control of the lens driving unit 122. As the lens 111 moves in the optical axis direction, the focal position of the lens 111 moves in the optical axis direction.
 なお、図内では、レンズ111が1枚のみ示されているが、複数のレンズを組み合わせたレンズシステムにより、レンズ111を構成することも可能である。 In the drawing, only one lens 111 is shown, but the lens 111 can be configured by a lens system in which a plurality of lenses are combined.
 The image sensor 112a is an image-plane phase difference image sensor having a phase difference detection function. The image sensor 112a receives the light transmitted through the lens 111, captures an image from the received light, and outputs the obtained image signal (hereinafter also simply referred to as an image). As described later, the image sensor 112a can output, in addition to a normal image (hereinafter referred to as a captured image), an image used for phase difference detection AF (Auto Focus) (hereinafter referred to as a phase difference image).
 AGC部113は、撮像素子112aから供給される撮影画像及び位相差画像のSN比や被写体環境の明るさに応じて、撮影画像及び位相差画像の増幅に用いるゲインを制御する。AGC部113は、撮影画像及び位相差画像のそれぞれに対して、独立してゲインを制御することが可能である。 The AGC unit 113 controls the gain used to amplify the captured image and the phase difference image according to the SN ratio of the captured image and the phase difference image supplied from the image sensor 112a and the brightness of the subject environment. The AGC unit 113 can control the gain independently for each of the captured image and the phase difference image.
 ADC部114は、アナログの画像信号である撮影画像及び位相差画像をデジタルの画像信号に変換する。ADC部114は、デジタルに変換した撮影画像及び位相差画像を画素補間部115に供給する。 The ADC unit 114 converts the captured image and the phase difference image, which are analog image signals, into digital image signals. The ADC unit 114 supplies the captured image and the phase difference image converted to digital to the pixel interpolation unit 115.
 The pixel interpolation unit 115 performs demosaic processing on the captured image and the phase difference image that have been subjected to white balance processing by a white balance processing unit (not shown). That is, at each pixel position of the captured image and the phase difference image, the pixel interpolation unit 115 interpolates, among the pixel values of the red (R), green (G), and blue (B) wavelength bands, the pixel value of each wavelength band that is not detected at that pixel position, using the pixel values of surrounding pixels. The pixel interpolation unit 115 supplies the captured image after the demosaic processing to the captured image signal processing unit 116 and to the focus detection image acquisition unit 131 of the AF control unit 121, and supplies the phase difference image after the demosaic processing to the focus detection image acquisition unit 131 of the AF control unit 121.
 撮影画像信号処理部116は、撮影画像に対して、現像処理におけるデモザイク処理以降の各種の信号処理を行う。撮影画像信号処理部116は、各種の信号処理を行った後の撮影画像を表示系駆動部117及び撮影画像圧縮部119に供給する。 The captured image signal processing unit 116 performs various signal processing after the demosaic process in the development process on the captured image. The captured image signal processing unit 116 supplies the captured image after performing various signal processings to the display system driving unit 117 and the captured image compression unit 119.
 表示系駆動部117は、撮影画像及び各種のGUI(Graphical User Interface)を出力表示モニタ118に表示させる。 The display system drive unit 117 displays the captured image and various GUIs (Graphical User Interface) on the output display monitor 118.
 出力表示モニタ118は、例えば、LCD(Liquid Crystal Display)、有機ELディスプレイ等のディスプレイにより構成される。 The output display monitor 118 is configured by a display such as an LCD (Liquid Crystal Display) or an organic EL display.
 撮影画像圧縮部119は、撮影画像記憶部120が記憶可能なフォーマットで撮影画像を圧縮する。撮影画像圧縮部119は、圧縮した撮影画像を撮影画像記憶部120に記憶させる。 The photographed image compression unit 119 compresses the photographed image in a format that can be stored in the photographed image storage unit 120. The captured image compression unit 119 stores the compressed captured image in the captured image storage unit 120.
 撮影画像記憶部120は、例えば、ハードディスクや不揮発性のメモリなどよりなる。或いは、撮影画像記憶部120は、例えば、磁気ディスク、光ディスク、光磁気ディスク、又は半導体メモリなどのリムーバブルメディア、及び、リムーバブルメディアを駆動するドライブにより構成される。 The photographed image storage unit 120 includes, for example, a hard disk or a nonvolatile memory. Alternatively, the captured image storage unit 120 includes a removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and a drive that drives the removable medium.
 AF制御部121は、撮影画像及び位相差画像に基づいて、レンズ111の合焦位置を検出する。そして、AF制御部121は、検出結果に基づいて、レンズ111の焦点が被写体に合うように、レンズ駆動部122を制御して、レンズ111の光軸方向の位置を調整する。 The AF control unit 121 detects the in-focus position of the lens 111 based on the captured image and the phase difference image. Then, the AF control unit 121 adjusts the position of the lens 111 in the optical axis direction by controlling the lens driving unit 122 so that the lens 111 is focused on the subject based on the detection result.
 The focus detection image acquisition unit 131 stores the captured image and the phase difference image supplied from the pixel interpolation unit 115 in the focus detection image storage unit 136 and supplies them to the integration processing unit 132. The focus detection image acquisition unit 131 also detects the SN ratios of the captured image and the phase difference image. For example, the focus detection image acquisition unit 131 estimates the SN ratios of the captured image and the phase difference image from setting values such as the exposure time and the AGC gain determined by the captured image AE unit 152 and the phase difference image AE unit 154, together with the signal level of the obtained images, and uses the estimated SN ratios as detection results. Further, based on the detection results, the focus detection image acquisition unit 131 sets the exposure time of the image sensor 112a and the gain of the AGC unit 113, supplying an exposure time control signal indicating the set exposure time to the image sensor 112a and an AGC gain control signal indicating the set gain to the AGC unit 113.
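 Purely as an illustration of this kind of estimation (the document does not give a formula; the function name, noise model, and constants below are assumptions), an SN ratio can be roughly derived in Python from the mean signal level together with the gain:

import numpy as np

def estimate_snr_db(image, analog_gain, full_well_e=10000.0, max_code=1023.0, read_noise_e=3.0):
    # Hypothetical shot-noise-limited model: convert the mean digital value back to
    # electrons using the gain, then compare the signal with shot noise plus read noise.
    mean_code = float(np.mean(image))
    signal_e = mean_code / max(analog_gain, 1e-6) * (full_well_e / max_code)
    noise_e = np.sqrt(signal_e + read_noise_e ** 2)
    return 20.0 * np.log10(max(signal_e, 1e-6) / noise_e)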
 積分処理部132は、後述するように、時間方向、空間方向、及び、波長方向について、適応的に重みを付けた位相差画像の各画素の画素値の積分処理を行う。積分処理部132は、積分処理後の位相差画像を位相差検出部133に供給する。 The integration processing unit 132 performs integration processing of pixel values of each pixel of the phase difference image that is adaptively weighted in the time direction, the spatial direction, and the wavelength direction, as will be described later. The integration processing unit 132 supplies the phase difference image after the integration processing to the phase difference detection unit 133.
 位相差検出部133は、積分処理後の位相差画像に基づいて、レンズ111の焦点のずれにより生じる像の位相差を示す位相ずれ量を検出する。位相差検出部133は、位相ずれ量の検出結果を合焦位置検出部134に供給する。 The phase difference detection unit 133 detects a phase shift amount indicating the phase difference of the image caused by the focus shift of the lens 111 based on the phase difference image after the integration process. The phase difference detection unit 133 supplies the detection result of the phase shift amount to the focus position detection unit 134.
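 The matching criterion is not specified at this point in the document, but a common way to obtain such a phase shift amount is to slide the line image of one pupil-division direction against the line image of the opposite direction and pick the offset with the smallest difference. The following Python sketch, with assumed function and parameter names, illustrates that idea on a single line of phase difference pixel values:

import numpy as np

def detect_phase_shift(left_line, right_line, max_shift=16):
    # Returns the integer shift (in pixels) that minimizes the mean absolute
    # difference between the left-direction and right-direction line images.
    # Illustrative only; sub-pixel refinement and the actual cost function
    # used by the device may differ.
    left = np.asarray(left_line, dtype=np.float64)
    right = np.asarray(right_line, dtype=np.float64)
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = left[s:] if s >= 0 else left[:s]
        b = right[:len(right) - s] if s >= 0 else right[-s:]
        if a.size == 0:
            continue
        cost = float(np.mean(np.abs(a - b)))
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift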
 合焦位置検出部134は、位相ずれ量の検出結果に基づいて、レンズ111の合焦位置を検出する。合焦位置検出部134は、合焦位置の検出結果を焦点制御部135に供給する。 The focus position detection unit 134 detects the focus position of the lens 111 based on the detection result of the phase shift amount. The focus position detection unit 134 supplies the focus position detection result to the focus control unit 135.
 焦点制御部135は、レンズ駆動部122を制御して、レンズ111の焦点の位置を制御する。例えば、焦点制御部135は、合焦位置の検出結果に基づいて、レンズ駆動部122を制御して、レンズ111を光軸方向に移動させ、レンズ111の焦点を合わせる。 The focus control unit 135 controls the lens driving unit 122 to control the focus position of the lens 111. For example, the focus control unit 135 controls the lens driving unit 122 based on the detection result of the focus position, moves the lens 111 in the optical axis direction, and focuses the lens 111.
 焦点検出用画像記憶部136は、合焦位置の検出に必要な複数フレーム分の撮影画像及び位相差画像を一時的に記憶する。 The focus detection image storage unit 136 temporarily stores a plurality of frames of captured images and phase difference images necessary for detecting the focus position.
 色収差データ記憶部137は、レンズ111として使用可能な各レンズ又はレンズユニットの色収差のデータを記憶する。 The chromatic aberration data storage unit 137 stores chromatic aberration data of each lens or lens unit that can be used as the lens 111.
 レンズ駆動部122は、例えば、アクチュエータ等からなり、レンズ111を光軸方向に移動させる。 The lens driving unit 122 includes, for example, an actuator or the like, and moves the lens 111 in the optical axis direction.
{AF制御部121の構成例}
 図2は、カメラ101aのAF制御部121の焦点検出用画像取得部131及び積分処理部132の構成例を示すブロック図である。
{Configuration example of AF control unit 121}
FIG. 2 is a block diagram illustrating a configuration example of the focus detection image acquisition unit 131 and the integration processing unit 132 of the AF control unit 121 of the camera 101a.
 The focus detection image acquisition unit 131 includes a captured image acquisition unit 151, a captured image AE (Automatic Exposure) unit 152, a phase difference image acquisition unit 153, and a phase difference image AE (Automatic Exposure) unit 154. The integration processing unit 132 includes a time direction weight adjustment unit 161, a time direction integration processing unit 162, a spatial direction weight adjustment unit 163, a spatial direction integration processing unit 164, a wavelength direction weight adjustment unit 165, and a wavelength direction integration processing unit 166.
 撮影画像取得部151は、画像補間部115からデモザイク処理後の撮影画像を取得する。撮影画像取得部151は、取得した撮影画像を焦点検出用画像記憶部136に記憶させる。また、撮影画像取得部151は、必要に応じて、撮影画像AE部152及び時間方向重み調整部161に撮影画像を供給する。 The captured image acquisition unit 151 acquires a captured image after demosaic processing from the image interpolation unit 115. The captured image acquisition unit 151 stores the acquired captured image in the focus detection image storage unit 136. The captured image acquisition unit 151 supplies the captured image to the captured image AE unit 152 and the time direction weight adjustment unit 161 as necessary.
 撮影画像AE部152は、撮影画像のSN比に応じて、次に撮影画像を撮影する際の露光時間及びゲインを設定する。撮影画像AE部152は、露光時間制御信号を撮像素子112aに供給することにより、撮像素子112aの露光時間を調整する。また、撮影画像AE部152は、AGCゲイン制御信号をAGC部113に供給することにより、AGC部113のゲインを調整する。 The photographed image AE unit 152 sets the exposure time and gain when the photographed image is photographed next, according to the SN ratio of the photographed image. The photographed image AE unit 152 adjusts the exposure time of the image sensor 112a by supplying an exposure time control signal to the image sensor 112a. The captured image AE unit 152 adjusts the gain of the AGC unit 113 by supplying an AGC gain control signal to the AGC unit 113.
 位相差画像取得部153は、画像補間部115からデモザイク処理後の位相差画像を取得する。位相差画像取得部153は、取得した位相差画像を焦点検出用画像記憶部136に記憶させる。また、位相差画像取得部153は、必要に応じて、位相差画像AE部154及び時間方向重み調整部161に位相差画像を供給する。 The phase difference image acquisition unit 153 acquires a phase difference image after demosaic processing from the image interpolation unit 115. The phase difference image acquisition unit 153 stores the acquired phase difference image in the focus detection image storage unit 136. Further, the phase difference image acquisition unit 153 supplies the phase difference image to the phase difference image AE unit 154 and the time direction weight adjustment unit 161 as necessary.
 位相差画像AE部154は、位相差画像のSN比に応じて、次に位相差画像を撮影する際の露光時間及びゲインを設定する。位相差画像AE部154は、露光時間制御信号を撮像素子112aに供給することにより、撮像素子112aの露光時間を調整する。また、位相差画像AE部154は、AGCゲイン制御信号をAGC部113に供給することにより、AGC部113のゲインを調整する。さらに、位相差画像AE部154は、位相差画像のSN比を検出し、検出結果を位相差画像取得部153に通知する。 The phase difference image AE unit 154 sets an exposure time and a gain when the phase difference image is next photographed according to the SN ratio of the phase difference image. The phase difference image AE unit 154 adjusts the exposure time of the image sensor 112a by supplying an exposure time control signal to the image sensor 112a. Further, the phase difference image AE unit 154 adjusts the gain of the AGC unit 113 by supplying an AGC gain control signal to the AGC unit 113. Further, the phase difference image AE unit 154 detects the SN ratio of the phase difference image, and notifies the phase difference image acquisition unit 153 of the detection result.
 時間方向重み調整部161は、後述するように、撮影画像及び位相差画像に基づいて、撮影画像及び位相差画像の時間方向の積分処理に用いる重みを調整する。時間方向重み調整部161は、撮影画像、位相差画像、並びに、設定した重みを示す情報を時間方向積分処理部162に供給する。 The time direction weight adjustment unit 161 adjusts weights used for integration processing in the time direction of the captured image and the phase difference image based on the captured image and the phase difference image, as will be described later. The time direction weight adjustment unit 161 supplies the captured image, the phase difference image, and information indicating the set weight to the time direction integration processing unit 162.
 時間方向積分処理部162は、後述するように、時間方向重み調整部161により設定された重みを用いて、撮影画像及び位相差画像の時間方向の積分処理を行う。時間方向積分処理部162は、積分処理後の撮影画像及び位相差画像を、空間方向重み調整部163、及び、空間方向積分処理部164に供給する。また、時間方向積分処理部162は、位相差画像を位相差検出部133に供給する。 The time direction integration processing unit 162 performs integration processing in the time direction of the captured image and the phase difference image using the weight set by the time direction weight adjustment unit 161, as will be described later. The time direction integration processing unit 162 supplies the captured image and phase difference image after the integration processing to the space direction weight adjustment unit 163 and the space direction integration processing unit 164. Further, the time direction integration processing unit 162 supplies the phase difference image to the phase difference detection unit 133.
 空間方向重み調整部163は、後述するように、撮影画像及び位相差画像に基づいて、撮影画像及び位相差画像の空間方向の積分処理に用いる重みを調整する。空間方向重み調整部163は、設定した重みを示す情報を空間方向積分処理部164に供給する。 As will be described later, the spatial direction weight adjustment unit 163 adjusts the weight used for integration processing of the captured image and the phase difference image in the spatial direction based on the captured image and the phase difference image. The spatial direction weight adjustment unit 163 supplies information indicating the set weight to the spatial direction integration processing unit 164.
 空間方向積分処理部164は、後述するように、空間方向重み調整部163により設定された重みを用いて、撮影画像及び位相差画像の空間方向の積分処理を行う。空間方向積分処理部164は、積分処理後の撮影画像及び位相差画像を、波長方向重み調整部165、及び、波長方向積分処理部166に供給する。また、空間方向積分処理部164は、位相差画像を位相差検出部133に供給する。 As will be described later, the spatial direction integration processing unit 164 performs integration processing in the spatial direction of the captured image and the phase difference image using the weight set by the spatial direction weight adjustment unit 163. The spatial direction integration processing unit 164 supplies the captured image and the phase difference image after the integration processing to the wavelength direction weight adjustment unit 165 and the wavelength direction integration processing unit 166. Further, the spatial direction integration processing unit 164 supplies the phase difference image to the phase difference detection unit 133.
 波長方向重み調整部165は、後述するように、撮影画像及び位相差画像に基づいて、撮影画像及び位相差画像の波長方向の積分処理に用いる重みを調整する。波長方向重み調整部165は、設定した重みを示す情報を波長方向積分処理部166に供給する。 The wavelength direction weight adjustment unit 165 adjusts the weight used for the integration processing in the wavelength direction of the captured image and the phase difference image based on the captured image and the phase difference image, as will be described later. The wavelength direction weight adjustment unit 165 supplies information indicating the set weight to the wavelength direction integration processing unit 166.
 波長方向積分処理部166は、後述するように、波長方向重み調整部165により設定された重みを用いて、位相差画像の波長方向の積分処理を行う。波長方向積分処理部166は、位相差画像を位相差検出部133に供給する。 The wavelength direction integration processing unit 166 performs integration processing in the wavelength direction of the phase difference image using the weight set by the wavelength direction weight adjustment unit 165 as described later. The wavelength direction integration processing unit 166 supplies the phase difference image to the phase difference detection unit 133.
{撮像素子112aの画素201aの構成例}
 次に、図3を参照して、撮像素子112aの画素201aの構成例について説明する。図3の上の図は、画素201aを横方向から見た分解図を模式的に示し、下の図は、画素201aを上方向から見た平面図を模式的に示している。
{Configuration Example of Pixel 201a of Image Sensor 112a}
 Next, a configuration example of the pixel 201a of the image sensor 112a will be described with reference to FIG. 3. The upper diagram of FIG. 3 schematically shows an exploded view of the pixel 201a viewed from the side, and the lower diagram schematically shows a plan view of the pixel 201a viewed from above.
 画素201aにおいては、オンチップマイクロレンズ211、波長選択フィルタ212、遮光部213a、及び、光電変換部214L,214Rが順に積層されている。 In the pixel 201a, an on-chip microlens 211, a wavelength selection filter 212, a light shielding unit 213a, and photoelectric conversion units 214L and 214R are sequentially stacked.
 オンチップマイクロレンズ211に入射した光は、オンチップマイクロレンズ211の光軸中心である画素201aの受光面の中央方向へ集光する。そして、波長選択フィルタ212により、入射光の所定の波長帯の成分が透過され、遮光部213aにより遮光されていない光電変換部214L,214Rの受光領域に入射する。遮光部213aは、隣接画素との混色防止及び画素201aの瞳分割の効果を奏する。 The light incident on the on-chip microlens 211 is condensed toward the center of the light receiving surface of the pixel 201a, which is the optical axis center of the on-chip microlens 211. Then, a component of a predetermined wavelength band of incident light is transmitted by the wavelength selection filter 212 and is incident on the light receiving regions of the photoelectric conversion units 214L and 214R that are not shielded by the light shielding unit 213a. The light shielding unit 213a has effects of preventing color mixture with adjacent pixels and pupil division of the pixel 201a.
 光電変換部214L及び214Rは、例えば、それぞれフォトダイオード等の光電変換素子からなる。なお、光電変換部214L及び214Rの構造には、シリコン等の半導体基板だけでなく薄膜等の任意の構造を採用することができる。 The photoelectric conversion units 214L and 214R are each composed of a photoelectric conversion element such as a photodiode, for example. Note that the photoelectric conversion units 214L and 214R may employ any structure such as a thin film as well as a semiconductor substrate such as silicon.
 光電変換部214Lと光電変換部214Rは、所定の間隔を空けて水平方向(行方向、左右方向)に並ぶように配置されている。すなわち、光電変換部214Lは、画素201aの受光面の左方向に偏った位置に配置されている。光電変換部214Lの受光領域は、オンチップマイクロレンズ211に対して左方向に偏心している。光電変換部214Rは、光電変換部214Lとは逆に、画素201aの受光面の右方向に偏った位置に配置されている。光電変換部214Rの受光領域は、オンチップマイクロレンズ211に対して右方向に偏心している。 The photoelectric conversion unit 214L and the photoelectric conversion unit 214R are arranged in a horizontal direction (row direction, left-right direction) with a predetermined interval. That is, the photoelectric conversion unit 214L is disposed at a position that is biased to the left of the light receiving surface of the pixel 201a. The light receiving region of the photoelectric conversion unit 214L is eccentric to the left with respect to the on-chip microlens 211. Opposite to the photoelectric conversion unit 214L, the photoelectric conversion unit 214R is disposed at a position biased to the right of the light receiving surface of the pixel 201a. The light receiving region of the photoelectric conversion unit 214R is eccentric to the right with respect to the on-chip microlens 211.
 従って、光電変換部214Lは、画素201aの受光面のほぼ左半分に入射する光を受光し、受光量に応じた画素信号を出力する。光電変換部214Rは、画素201aの受光面のほぼ右半分に入射する光を受光し、受光量に応じた画素信号を出力する。これにより、画素201aでは、入射光が撮像面内で水平方向(行方向、左右方向)に瞳分割される。 Therefore, the photoelectric conversion unit 214L receives light incident on substantially the left half of the light receiving surface of the pixel 201a, and outputs a pixel signal corresponding to the amount of light received. The photoelectric conversion unit 214R receives light incident on substantially the right half of the light receiving surface of the pixel 201a, and outputs a pixel signal corresponding to the amount of received light. Thereby, in the pixel 201a, the incident light is pupil-divided in the horizontal direction (row direction, left-right direction) within the imaging surface.
 画素201aは、光電変換部214L及び214Rの画素信号を個別に出力したり、加算して出力したりすることができる。光電変換部214L及び214Rから個別に出力された画素信号は、位相差検出用の信号として用いられる。また、2つの画素信号を加算した信号は、画素201aの受光面全体への入射光に基づく画素信号であり、通常の撮影用の信号として用いられる。この通常の撮影用の信号により、撮影画像が生成される。 The pixel 201a can individually output the pixel signals of the photoelectric conversion units 214L and 214R, or can add and output the pixel signals. Pixel signals individually output from the photoelectric conversion units 214L and 214R are used as signals for phase difference detection. The signal obtained by adding the two pixel signals is a pixel signal based on light incident on the entire light receiving surface of the pixel 201a, and is used as a signal for normal photographing. A photographed image is generated by the normal photographing signal.
 Although FIG. 3 shows an example in which the pupil is divided in the horizontal direction, the image sensor 112a also has pixels whose pupils are divided in the vertical direction (column direction, up-down direction) within the imaging surface. Hereinafter, for a pixel whose pupil is divided into two in the vertical direction within the imaging surface, the photoelectric conversion unit arranged on the upper side is referred to as a photoelectric conversion unit 214U, and the photoelectric conversion unit arranged on the lower side is referred to as a photoelectric conversion unit 214D. In addition, hereinafter, when the photoelectric conversion units 214L to 214D do not need to be distinguished from one another, they are simply referred to as the photoelectric conversion unit 214.
 Further, hereinafter, the pixel values indicated by the pixel signals of the photoelectric conversion units 214L, 214R, 214U, and 214D (hereinafter also referred to as phase difference pixel values) are referred to as pixel values qL, qR, qU, and qD, respectively.
 また、以下、位相差画像の座標(x,y)の方向dの位相差画素値をqd(x,y)により表す。方向dには、左方向を示すL、右方向を示すR、上方向を示すU、及び、下方向を示すDのいずれかが設定される。 Further, hereinafter, the phase difference pixel value in the direction d of the coordinates (x, y) of the phase difference image is represented by qd (x, y). In the direction d, any one of L indicating the left direction, R indicating the right direction, U indicating the upward direction, and D indicating the downward direction is set.
 Further, hereinafter, when the wavelength band λ of the phase difference pixel value qd(x, y) needs to be distinguished, it is written as qd(x, y, λ). The wavelength band λ is set to one of λr, denoting the red wavelength band (hereinafter referred to as the R wavelength band), λg, denoting the green wavelength band (hereinafter referred to as the G wavelength band), and λb, denoting the blue wavelength band (hereinafter referred to as the B wavelength band).
 また、以下、位相差画素値qd(x,y,λ)のフレーム時刻tを特に区別する場合、qd(t,x,y,λ)により表す。 Further, hereinafter, when particularly distinguishing the frame time t of the phase difference pixel value qd (x, y, λ), it is represented by qd (t, x, y, λ).
 さらに、以下、撮影画像の画素値(以下、撮影画素値とも称する)を、p(x,y)、p(x,y,λ)、又は、p(t,x,y,λ)により表す。なお、撮影画素値は、同じ画素の位相差画素値qLと位相差画素値qRを加算した画素値、或いは、同じ画素の位相差画素値qUと位相差画素値qDを加算した画素値となる。 Further, hereinafter, the pixel value of the captured image (hereinafter also referred to as a captured pixel value) is represented by p (x, y), p (x, y, λ), or p (t, x, y, λ). . The photographic pixel value is a pixel value obtained by adding the phase difference pixel value qL and the phase difference pixel value qR of the same pixel or a pixel value obtained by adding the phase difference pixel value qU and the phase difference pixel value qD of the same pixel. .
 また、以下、画素201aを光電変換部214により2つに分けた場合、各部分をそれぞれ位相差画素と称する。これに対して、以下、画素201a全体を位相差画素と区別する場合、撮影画素と称する。なお、以下、光電変換部214Lに対応する位相差画素を左方向の位相差画素と称し、光電変換部214Rに対応する位相差画素を右方向の位相差画素と称する。また、以下、光電変換部214Uに対応する位相差画素を上方向の位相差画素と称し、光電変換部214Dに対応する位相差画素を下方向の位相差画素と称する。 Hereinafter, when the pixel 201a is divided into two by the photoelectric conversion unit 214, each part is referred to as a phase difference pixel. On the other hand, hereinafter, when the entire pixel 201a is distinguished from the phase difference pixel, it is referred to as a photographic pixel. Hereinafter, the phase difference pixel corresponding to the photoelectric conversion unit 214L is referred to as a left phase difference pixel, and the phase difference pixel corresponding to the photoelectric conversion unit 214R is referred to as a right direction phase difference pixel. Hereinafter, the phase difference pixel corresponding to the photoelectric conversion unit 214U is referred to as an upward phase difference pixel, and the phase difference pixel corresponding to the photoelectric conversion unit 214D is referred to as a downward phase difference pixel.
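 To make this notation concrete, a minimal Python sketch (the array layout is an assumption for illustration, not something the document specifies) can mirror the relationship between the phase difference pixel values and the captured pixel value:

# q['L'], q['R'], q['U'], q['D'] hold the phase difference pixel values for each
# pupil-division direction, indexed as q[d][t, y, x, c] with c in {0: R, 1: G, 2: B},
# mirroring qd(t, x, y, lambda) in the text.
def captured_pixel_value(q, t, y, x, c, horizontal=True):
    # The captured (full-aperture) pixel value p(t, x, y, lambda) is the sum of
    # the two opposite phase difference pixel values of the same pixel.
    d0, d1 = ('L', 'R') if horizontal else ('U', 'D')
    return q[d0][t, y, x, c] + q[d1][t, y, x, c]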
{画素の配置例}
 図4は、撮像素子112aの画素の配置例を示している。撮像素子112aでは、図4に示される2×2画素を1単位とするベイヤ配列に従って画素が配置される。すなわち、G波長帯の光を検出するG画素(Gr画素及びGb画素)が斜め方向に配置される。また、R波長帯の光を検出するR画素とB波長帯の光を検出するB画素が、G画素とは逆の斜め方向に配置される。なお、Gr画素は、R画素と同じ行に配置されているG画素であり、Gb画素は、B画素と同じ行に配置されているG画素である。
{Pixel arrangement example}
FIG. 4 shows an arrangement example of pixels of the image sensor 112a. In the image sensor 112a, the pixels are arranged according to a Bayer array having the 2 × 2 pixels shown in FIG. 4 as one unit. That is, G pixels (Gr pixels and Gb pixels) that detect light in the G wavelength band are arranged in one diagonal direction, and an R pixel that detects light in the R wavelength band and a B pixel that detects light in the B wavelength band are arranged in the opposite diagonal direction. The Gr pixel is a G pixel arranged in the same row as the R pixels, and the Gb pixel is a G pixel arranged in the same row as the B pixels.
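 As a small supplement (a sketch inferred from this description and from the coordinate examples given further below, not a normative definition of the sensor), the pixel type at a given position of this layout can be expressed as a parity rule in Python:

def bayer_color(row, col):
    # Even rows carry Gr and R pixels, odd rows carry B and Gb pixels;
    # R pixels sit at odd columns of even rows (e.g. row 0, columns 1, 3, 5).
    if row % 2 == 0:
        return 'R' if col % 2 == 1 else 'Gr'
    return 'Gb' if col % 2 == 1 else 'B'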
 B画素の波長選択フィルタ212には、例えば、約405~500nmの波長帯の光を透過するBフィルタが用いられる。Bフィルタは、例えば、B+IRbフィルタと赤外カットフィルタとを積層することにより実現される。 For the wavelength selection filter 212 of the B pixel, for example, a B filter that transmits light in a wavelength band of about 405 to 500 nm is used. The B filter is realized, for example, by stacking a B + IRb filter and an infrared cut filter.
 For the wavelength selection filter 212 of the G pixels, for example, a G filter that transmits light in a wavelength band of about 475 to 600 nm is used. The G filter is realized, for example, by stacking a G+IRg filter and an infrared cut filter.
 For the wavelength selection filter 212 of the R pixels, for example, an R filter that transmits light in a wavelength band of about 580 to 650 nm is used. The R filter is realized, for example, by stacking an R+IRr filter and an infrared cut filter.
 図5は、B+IRbフィルタ、G+IRgフィルタ、及び、R+IRrフィルタの分光特性の例を示すグラフである。グラフの横軸は波長を示し、縦軸は分光透過率を示している。 FIG. 5 is a graph showing examples of spectral characteristics of the B + IRb filter, the G + IRg filter, and the R + IRr filter. The horizontal axis of the graph indicates the wavelength, and the vertical axis indicates the spectral transmittance.
 波形251は、B+IRbフィルタの分光特性の例を示している。B+IRbフィルタは、B波長帯(約405~500nm)だけでなく、近赤外波長帯(以下、IR波長帯と称する)の一部(約800~900nm付近)の光も透過する。 Waveform 251 shows an example of the spectral characteristics of the B + IRb filter. The B + IRb filter transmits not only the B wavelength band (about 405 to 500 nm) but also a part of the near infrared wavelength band (hereinafter referred to as the IR wavelength band) (about 800 to 900 nm).
 波形252は、G+IRgフィルタの分光特性の例を示している。G+IRgフィルタは、G波長帯(約475~600nm)だけでなく、IR波長帯の一部(約800~900nm付近)の光も透過する。 Waveform 252 shows an example of spectral characteristics of the G + IRg filter. The G + IRg filter transmits not only the G wavelength band (about 475 to 600 nm) but also a part of the IR wavelength band (about 800 to 900 nm).
 Waveform 253 shows an example of the spectral characteristics of the R+IRr filter. The R+IRr filter transmits not only light in the R wavelength band (about 580 to 650 nm) but also light in part of the IR wavelength band (around 800 to 900 nm).
 また、分光特性の図示は省略するが、赤外カットフィルタは、例えば、700nm以上の光を遮断する。 Although illustration of spectral characteristics is omitted, the infrared cut filter blocks light of 700 nm or more, for example.
 図6及び図7は、各画素の瞳分割方向の例を示している。図6及び図7に示されるように、撮像素子112aの撮像面において、水平方向に瞳分割された画素と垂直方向に瞳分割された画素が規則的に配置されている。 6 and 7 show examples of the pupil division direction of each pixel. As shown in FIGS. 6 and 7, the pixels divided in the horizontal direction and the pixels divided in the vertical direction are regularly arranged on the imaging surface of the imaging element 112 a.
 More specifically, the Gr pixels are pupil-divided in the horizontal direction, and the Gb pixels are pupil-divided in the vertical direction. R pixels are present every two rows or every two columns, and R pixels pupil-divided in the horizontal direction and R pixels pupil-divided in the vertical direction are arranged so as to alternate in both the horizontal and vertical directions. For example, the R pixel in row 0, column 1 is pupil-divided horizontally, the R pixel in row 0, column 3 is pupil-divided vertically, and the R pixel in row 0, column 5 is pupil-divided horizontally. Likewise, the R pixel in column 1, row 0 is pupil-divided horizontally, the R pixel in column 1, row 2 is pupil-divided vertically, and the R pixel in column 1, row 4 is pupil-divided horizontally. Similarly, B pixels are present every two rows or every two columns, and B pixels pupil-divided in the horizontal direction and B pixels pupil-divided in the vertical direction are arranged so as to alternate in both the horizontal and vertical directions.
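 Restated as code (again only a sketch: the rule for the R pixels reproduces the examples in the preceding paragraph, while the starting phase of the B-pixel alternation is an assumption, since the text only says that it alternates):

def pupil_split_direction(row, col):
    # 'H' means split into left/right photoelectric conversion units,
    # 'V' means split into upper/lower units.
    if row % 2 == 0 and col % 2 == 0:      # Gr pixel
        return 'H'
    if row % 2 == 1 and col % 2 == 1:      # Gb pixel
        return 'V'
    # R pixels (even row, odd column) and B pixels (odd row, even column):
    # the split direction alternates every two rows and every two columns.
    return 'H' if ((row // 2) + (col // 2)) % 2 == 0 else 'V'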
{AF制御処理}
 次に、図8のフローチャートを参照して、カメラ101aにより実行されるAF制御処理について説明する。
{AF control processing}
 Next, the AF control processing executed by the camera 101a will be described with reference to the flowchart of FIG. 8.
 ステップS1において、カメラ101aは、撮影画像用のAE制御を行う。具体的には、撮像素子112aは、撮影画像AE部152から供給される露光時間制御信号に基づいて、露光時間を設定する。AGC部113は、撮影画像AE部152から供給されるAGCゲイン制御信号に基づいて、ゲインを設定する。 In step S1, the camera 101a performs AE control for a captured image. Specifically, the image sensor 112a sets the exposure time based on the exposure time control signal supplied from the captured image AE unit 152. The AGC unit 113 sets the gain based on the AGC gain control signal supplied from the captured image AE unit 152.
 例えば、被写体の照度に応じて、露光時間が設定される。また、露光時間をフレーム内で最長の時間に設定しても感度不足の場合には、ゲインが上げられる。また、2回目以降のステップS1の処理においては、ステップS3の撮影画像のSN比の検出結果に基づいて、撮影画像用のゲイン及び露光時間が設定される。 For example, the exposure time is set according to the illuminance of the subject. If the exposure time is set to the longest time in the frame and the sensitivity is insufficient, the gain is increased. In the second and subsequent processing of step S1, the gain and exposure time for the captured image are set based on the detection result of the S / N ratio of the captured image in step S3.
 ステップS2において、カメラ101aは、撮影画像を取得する。具体的には、撮像素子112aは、撮像の結果得られた撮影画像をAGC部113に供給する。すなわち、撮像素子112aは、各画素の各光電変換部214の画素信号を加算して出力することにより、撮影画像をAGC部113に供給する。 In step S2, the camera 101a acquires a captured image. Specifically, the image sensor 112 a supplies a captured image obtained as a result of imaging to the AGC unit 113. That is, the image sensor 112 a supplies the captured image to the AGC unit 113 by adding and outputting the pixel signals of the photoelectric conversion units 214 of the pixels.
 The AGC unit 113 amplifies the signal level of the captured image with the gain set in the processing of step S1 and supplies it to the ADC unit 114. The ADC unit 114 performs AD conversion on the captured image and supplies the AD-converted captured image to the pixel interpolation unit 115.
 画素補間部115は、撮影画像に対してホワイトバランス処理を行った後、デモザイク処理を行う。すなわち、画素補間部115は、撮影画像の各画素において、R、G、Bの各波長帯の画素値のうち、検出されていない波長帯の画素値を、周囲の画素の画素値を用いて補間する。 The pixel interpolation unit 115 performs demosaic processing after performing white balance processing on the captured image. That is, the pixel interpolation unit 115 uses the pixel values of the surrounding pixels, using the pixel values of the undetected wavelength bands among the pixel values of the R, G, and B wavelength bands in each pixel of the captured image. Interpolate.
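 The interpolation kernel is not specified here, so the following Python sketch is only a minimal illustration of the idea: each missing color sample is filled with the average of the measured samples of that color in a 3 x 3 neighborhood (real demosaic filters are considerably more elaborate):

import numpy as np

def demosaic_average(raw, color_of):
    # raw: 2-D Bayer mosaic; color_of(row, col) returns 'R', 'Gr', 'Gb' or 'B'.
    # Output: H x W x 3 image where missing R/G/B samples are neighbor averages.
    h, w = raw.shape
    out = np.zeros((h, w, 3), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            for c, name in enumerate(('R', 'G', 'B')):
                if color_of(y, x).startswith(name):
                    out[y, x, c] = raw[y, x]
                    continue
                vals = [raw[j, i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if color_of(j, i).startswith(name)]
                out[y, x, c] = np.mean(vals) if vals else 0.0
    return out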
 画素補間部115は、デモザイク処理後の撮影画像を、撮影画像信号処理部116及び撮影画像取得部151に供給する。撮影画像取得部151は、取得した撮影画像を撮影画像AE部152に供給するとともに、焦点検出用画像記憶部136に記憶させる。 The pixel interpolation unit 115 supplies the captured image after the demosaic process to the captured image signal processing unit 116 and the captured image acquisition unit 151. The captured image acquisition unit 151 supplies the acquired captured image to the captured image AE unit 152 and stores it in the focus detection image storage unit 136.
 ステップS3において、撮影画像AE部152は、任意の手法を用いて、撮影画像のSN比を検出する。 In step S3, the captured image AE unit 152 detects an S / N ratio of the captured image using an arbitrary method.
 ステップS4において、カメラ101aは、位相差画像用のAE制御を行う。具体的には、撮像素子112aは、位相差画像AE部154から供給される露光時間制御信号に基づいて、露光時間を設定する。AGC部113は、位相差画像AE部154から供給されるAGCゲイン制御信号に基づいて、ゲインを設定する。 In step S4, the camera 101a performs AE control for the phase difference image. Specifically, the image sensor 112a sets the exposure time based on the exposure time control signal supplied from the phase difference image AE unit 154. The AGC unit 113 sets a gain based on the AGC gain control signal supplied from the phase difference image AE unit 154.
 例えば、被写体の照度に応じて、露光時間が設定される。また、露光時間をフレーム内で最長の時間に設定しても感度不足の場合には、ゲインが上げられる。また、2回目以降のステップS4の処理においては、ステップS6の位相差画像のSN比の検出結果に基づいて、位相差画像用のゲイン及び露光時間が設定される。 For example, the exposure time is set according to the illuminance of the subject. If the exposure time is set to the longest time in the frame and the sensitivity is insufficient, the gain is increased. In the second and subsequent processing of step S4, the phase difference image gain and exposure time are set based on the detection result of the S / N ratio of the phase difference image in step S6.
 ステップS5において、カメラ101aは、位相差画像を取得する。具体的には、撮像素子112aは、撮像の結果得られた位相差画像をAGC部113に供給する。すなわち、撮像素子112aは、各画素の各光電変換部214の画素信号を個別に出力することにより、位相差画像をAGC部113に供給する。 In step S5, the camera 101a acquires a phase difference image. Specifically, the image sensor 112 a supplies a phase difference image obtained as a result of imaging to the AGC unit 113. That is, the image sensor 112a supplies the phase difference image to the AGC unit 113 by individually outputting the pixel signal of each photoelectric conversion unit 214 of each pixel.
 Here, the amount of incident light contributing to each phase difference pixel value of the phase difference image is limited to about half of the amount contributing to each captured pixel value of the captured image. The imaging sensitivity of the phase difference image is therefore lower than that of the captured image, and as a result the SN ratio of the phase difference image is lower than the SN ratio of the captured image.
 AGC部113は、ステップS4の処理で設定されたゲインで位相差画像の信号レベルを増幅し、ADC部114に供給する。ADC部114は、位相差画像をAD変換し、AD変換した位相差画像を画素補間部115に供給する。 The AGC unit 113 amplifies the signal level of the phase difference image with the gain set in the process of step S4 and supplies the amplified signal level to the ADC unit 114. The ADC unit 114 performs AD conversion on the phase difference image and supplies the AD converted phase difference image to the pixel interpolation unit 115.
 The pixel interpolation unit 115 performs demosaic processing. That is, for each phase difference pixel of the phase difference image, the pixel interpolation unit 115 interpolates the pixel value of each of the R, G, and B wavelength bands that is not detected at that pixel, using the pixel values of surrounding phase difference pixels. In doing so, the pixel interpolation unit 115 interpolates using the pixel values of phase difference pixels of the same direction as the phase difference pixel to be interpolated; for example, when the interpolation target is a left-direction phase difference pixel, interpolation uses the pixel values of surrounding left-direction phase difference pixels. When white balance is required for the integration in the wavelength direction, the pixel interpolation unit 115 may, for example, apply the white balance gain of the captured image to perform white balance processing on the phase difference image.
 The pixel interpolation unit 115 supplies the demosaiced phase difference image to the phase difference image acquisition unit 153. The phase difference image acquisition unit 153 supplies the acquired phase difference image to the phase difference image AE unit 154 and stores it in the focus detection image storage unit 136.
 In step S6, the phase difference image AE unit 154 detects the SN ratio of the phase difference image using an arbitrary method. The phase difference image AE unit 154 notifies the phase difference image acquisition unit 153 of the detection result of the SN ratio of the phase difference image.
 In step S7, the phase difference image acquisition unit 153 determines whether or not the SN ratio of the phase difference image is less than a predetermined threshold. If it is determined that the SN ratio of the phase difference image is less than the predetermined threshold, the process proceeds to step S8.
 As described above, the phase difference image has a lower imaging sensitivity than the photographed image. In a situation where the illuminance of the subject is high, however, the camera 101a can compensate for the lack of sensitivity of the phase difference image by performing AE control separately for the capture of the photographed image and the capture of the phase difference image. For example, the camera 101a can set a longer exposure time when capturing the phase difference image to compensate for the insufficient sensitivity, while setting a shorter exposure time when capturing the photographed image to prevent saturation of pixel values.
 On the other hand, in a situation where the illuminance of the subject is low, merely lengthening the exposure time cannot compensate for the lack of sensitivity of the phase difference image. In this case, it is conceivable to compensate for the lack of sensitivity by setting the exposure time to the longest value within the frame time and then raising the gain of the AGC unit 113. However, if the gain is raised too much, noise components are also amplified, so that the SN ratio of the phase difference image instead decreases. As a result, the detection accuracy of the in-focus position using the phase difference image decreases.
 A method of lowering the frame rate to extend the exposure time is also conceivable. However, lowering the frame rate degrades the AF tracking performance.
 Therefore, when the SN ratio of the phase difference image is less than the predetermined threshold, at least one of integration processing in the time direction, the spatial direction, and the wavelength direction is performed on the phase difference image in the processing from step S8 onward. As a result, the signal component of the phase difference image is amplified, the noise component is relatively reduced, and the SN ratio of the phase difference image is improved.
 In step S8, the camera 101a performs time direction integration processing. For example, the camera 101a performs weighted addition of the pixel values of phase difference pixels of the same direction and the same wavelength band at the same position between the phase difference image of the latest frame and the phase difference image of a past frame.
 Here, the amount of movement of the subject (the amount of change of the subject in the time direction) greatly affects the correlation between frames of pixels at the same position. That is, the correlation between frames of pixels at the same position is higher as the amount of motion of the subject between frames is smaller, and lower as the amount of motion of the subject between frames is larger.
 Further, the amount of motion of the subject between frames can be detected more accurately by using the photographed image, whose SN ratio is higher than that of the phase difference image. At the same time, the photographic pixel values of the photographed image and the phase difference pixel values of the phase difference image have a high correlation in the time direction. Therefore, the time direction weight adjustment unit 161 sets the weights used for the time direction integration processing based on the amount of movement of the subject between frames in the photographed image.
 For example, the photographed image acquisition unit 151 reads the photographed images of the latest frame and of a past frame from the focus detection image storage unit 136 and supplies them to the time direction weight adjustment unit 161. The phase difference image acquisition unit 153 reads the phase difference images of the latest frame and of a past frame from the focus detection image storage unit 136 and supplies them to the time direction weight adjustment unit 161. The time direction weight adjustment unit 161 spatially smooths the photographed image of the latest frame and the photographed image of one frame before, using a low-pass filter such as a Gaussian filter or a moving average filter. The time direction weight adjustment unit 161 then takes the difference between the two smoothed photographed images and calculates a motion coefficient of the subject at each pixel based on the result.
 The time direction weight adjustment unit 161 sets a weight for each pixel of the photographed image based on the calculated motion coefficient. Here, the time direction weight adjustment unit 161 decreases the weight for a pixel with a larger motion coefficient (a pixel where the motion of the subject is large), and increases the weight for a pixel with a smaller motion coefficient (a pixel where the motion of the subject is small).
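 A minimal sketch of this weight adjustment, assuming a Gaussian blur for the low-pass filter, the absolute frame difference as the motion coefficient, and an exponential mapping from motion to weight (the mapping function and its scale are illustrative assumptions, not specified in the present embodiment):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def motion_weights(curr_frame, prev_frame, sigma_smooth=2.0, motion_scale=16.0):
    """Per-pixel weights for time-direction integration: larger motion -> smaller weight."""
    # Spatially smooth both captured frames with a low-pass (Gaussian) filter.
    curr_s = gaussian_filter(curr_frame.astype(np.float64), sigma_smooth)
    prev_s = gaussian_filter(prev_frame.astype(np.float64), sigma_smooth)
    # Motion coefficient: absolute difference of the smoothed frames.
    motion = np.abs(curr_s - prev_s)
    # Map motion to a weight in (0, 1]: small motion -> weight near 1, large motion -> near 0.
    return np.exp(-motion / motion_scale)
```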
 The time direction weight adjustment unit 161 supplies information indicating the weight set for each pixel to the time direction integration processing unit 162. The time direction weight adjustment unit 161 also supplies the photographed image and phase difference image of the latest frame, as well as the photographed image and phase difference image of one frame before, to the time direction integration processing unit 162.
 The time direction integration processing unit 162 performs weighted addition in the time direction of each phase difference pixel value of the phase difference image, using the weights set by the time direction weight adjustment unit 161.
 Note that it is difficult for the camera 101a to hold many frames of past images due to hardware limitations and the like. Therefore, the time direction integration processing unit 162 performs the weighted addition in the time direction of each phase difference pixel value of the phase difference image using, for example, an IIR (Infinite Impulse Response) filter as shown in FIG. 9. Specifically, the time direction integration processing unit 162 performs weighted integration processing in the time direction using the IIR filter on the phase difference image of the latest frame according to the following equation (1).
 q'(t) = (1 - k) * q(t) + k * Σq   ... (1)
 Here, q(t) denotes the phase difference pixel value before integration in the frame at time t, and q'(t) denotes the phase difference pixel value after integration in the frame at time t. k denotes the feedback rate. The time direction integration processing of equation (1) is therefore a blend, with feedback rate k, of the phase difference pixel value of the current frame and the phase difference pixel values of past frames.
 This time direction integration processing is performed between phase difference pixel values of the same wavelength band. That is, by the demosaic processing, phase difference pixel values of the R wavelength band, the G wavelength band, and the B wavelength band have each been obtained for the same phase difference pixel. The time direction integration processing unit 162 therefore weights and adds, in the time direction, the phase difference pixel values of the same wavelength band of each phase difference pixel according to equation (1).
 Here, the weight set for each pixel of the photographed image by the time direction weight adjustment unit 161 is used as the feedback rate k. That is, as the feedback rate k for each phase difference pixel of the phase difference image, the weight set for the pixel at the same position in the photographed image is used. Accordingly, the same feedback rate k is used for the two phase difference pixels within the same pixel of the photographed image. For a phase difference pixel where the movement of the subject is large, the feedback rate k is small, and the weight given to the phase difference pixel values of past frames is small. Conversely, for a phase difference pixel where the movement of the subject is small, the feedback rate k is large, and the weight given to the phase difference pixel values of past frames is large.
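 The recursion of equation (1) requires only one accumulated image to be kept in memory. A minimal sketch, assuming the accumulated past term Σq is held as a running state image and the per-pixel feedback rate k comes from the motion-based weights above (class and variable names are illustrative):

```python
import numpy as np

class TimeIntegrator:
    """First-order IIR integration in the time direction, one state image per wavelength
    band and per phase difference direction, following q'(t) = (1 - k)*q(t) + k*state."""
    def __init__(self):
        self.state = None  # accumulated past phase difference image

    def update(self, q, k):
        """q: latest phase difference image; k: per-pixel feedback rate in [0, 1]."""
        if self.state is None:
            self.state = q.astype(np.float64).copy()
        q_int = (1.0 - k) * q + k * self.state  # blend current frame with accumulated past
        self.state = q_int                      # feed the result back for the next frame
        return q_int
```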
 The time direction integration processing unit 162 also performs weighted integration processing in the time direction using the IIR filter on the photographed image of the latest frame, according to the following equation (2).
 p'(t) = (1 - k) * p(t) + k * Σp   ... (2)
 Here, p(t) denotes the photographic pixel value before integration in the frame at time t, and p'(t) denotes the photographic pixel value after integration in the frame at time t. k denotes the feedback rate. The time direction integration processing of equation (2) is therefore a blend, with feedback rate k, of the photographic pixel value of the current frame and the photographic pixel values of past frames.
 This time direction integration processing is performed between photographic pixel values of the same wavelength band. That is, by the demosaic processing, photographic pixel values of the R wavelength band, the G wavelength band, and the B wavelength band have been obtained for the same photographic pixel. The time direction integration processing unit 162 therefore weights and adds the photographic pixel values of the same wavelength band of each photographic pixel according to equation (2).
 The time direction integration processing unit 162 supplies the photographed image and the phase difference image after the time direction integration processing to the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164.
 Note that the method of time direction integration processing is not limited to the method described above. For example, an FIR (Finite Impulse Response) filter may be used instead of the IIR filter.
 Also, for example, when the SN ratio of the phase difference image is equal to or greater than a predetermined threshold, the feedback rate k used for the integration of the phase difference image may be set based on the difference between frames of the phase difference image itself. That is, the feedback rate k for each phase difference pixel of the phase difference image may be set based on the amount of movement of the subject in the phase difference image. The threshold used here is set, for example, to a value lower than the threshold used in the determination processing of step S7.
 In step S9, the spatial direction integration processing unit 164 determines whether or not the SN ratio of the phase difference image is less than a predetermined threshold. Specifically, the spatial direction integration processing unit 164 detects the SN ratio of the phase difference image after the time direction integration processing using an arbitrary method. If the spatial direction integration processing unit 164 determines that the SN ratio of the phase difference image is less than the predetermined threshold, the process proceeds to step S10. The threshold of this determination processing is set, for example, to the same value as the threshold used in the determination processing of step S7.
 In step S10, the camera 101a performs spatial direction integration processing. For example, the camera 101a adds, with weights, the phase difference pixel values of nearby phase difference pixels of the same direction and the same wavelength band within the same frame to the phase difference pixel value of each phase difference pixel of the phase difference image after the time direction integration processing.
 For example, the spatial direction integration processing unit 164 performs weighted integration processing in the spatial direction using an FIR filter having a finite number of taps in the horizontal direction and the vertical direction, as shown in FIG. 10.
 Here, if a fixed-coefficient filter is used regardless of the subject, an effective phase difference image may not be obtained depending on the pattern or design of the subject. For example, typical smoothing filters for improving the SN ratio of an image include the Gaussian filter and the moving average filter. FIG. 11 shows examples of Gaussian filters. Gaussian filters 261 to 263 are examples of one-dimensional Gaussian filters. Gaussian filter 264 is an example of a two-dimensional 5×5 pixel Gaussian filter.
 However, when a smoothing filter is used, the image becomes blurred overall, and the edge components necessary for detecting the amount of phase shift may be weakened.
 On the other hand, typical filters that emphasize and sharpen edges include various differential filters such as the Laplacian filter used for contour enhancement. However, using such filters conversely emphasizes noise components as well, which causes erroneous detection of the amount of phase shift.
 Therefore, the spatial direction integration processing unit 164 uses a variable-coefficient filter that adaptively adjusts the weights according to the pattern or design of the subject. Good examples of such a filter are edge-preserving smoothing filters such as the bilateral filter, the epsilon filter, and the Non-local Means filter. Compared with a fixed-coefficient filter, an edge-preserving smoothing filter requires a larger amount of computation for calculating the weights, but makes it possible to obtain a sharp phase difference image while improving the SN ratio more strongly. In the following, the case of using a bilateral filter will mainly be described as an example.
 The spatial direction weight adjustment unit 163 increases the weight as the spatial correlation between pixels is higher, and decreases the weight as the spatial correlation between pixels is lower. Here, the similarity of the subject in the horizontal and vertical directions contributes greatly to the correlation between pixels within the same frame. That is, the correlation between pixels within the same frame is higher as the similarity of the subject between the pixels is higher, and lower as that similarity is lower. Accordingly, the spatial direction weight adjustment unit 163 increases the weight as the similarity of the subject between the pixels is higher, and decreases the weight as it is lower.
 Here, in order to improve the detection accuracy of the similarity of the subject in the horizontal and vertical directions, it is effective to use the photographed image and the phase difference image in combination.
 The photographed image corresponds to the high-frequency components contained in the pattern of the subject and the like, with blur due to defocusing added.
 The phase difference image consists of components obtained by pupil-dividing the light flux in the horizontal direction or the vertical direction, but within the imaging plane it is not affected by the decentering due to pupil division in the direction orthogonal to the pupil division direction. Therefore, in the direction orthogonal to the pupil division direction, the phase difference image has a high correlation with the photographed image. In the pupil division direction, on the other hand, the phase difference image has a lower sensitivity than the photographed image because the incident direction of the light flux is restricted. However, the phase difference image contains more of the high-frequency components of the subject than the photographed image and is less blurred. Therefore, in the pupil division direction, the phase difference image has a low correlation with the photographed image.
 Thus, the phase difference image has a lower SN ratio than the photographed image, but contains more high-frequency components. In addition, the phase difference image has a high correlation with the photographed image in the direction orthogonal to the pupil division direction, and a low correlation with the photographed image in the pupil division direction.
 Therefore, the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 perform the spatial direction integration processing while using the photographed image and the phase difference image selectively according to the SN ratio of the phase difference image. Specifically, the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 perform the processing separately for the case where the SN ratio of the phase difference image is equal to or greater than a predetermined threshold and the case where it is less than the threshold. The threshold used here is set, for example, to a value lower than the threshold of the determination processing of step S9.
 When the SN ratio of the phase difference image is equal to or greater than the predetermined threshold, the spatial direction weight adjustment unit 163 performs the weighting based on the similarity of the phase difference image in the horizontal and vertical directions. The spatial direction integration processing unit 164 then performs two-dimensional convolution integration processing using those vertical and horizontal weights.
 On the other hand, when the SN ratio of the phase difference image is less than the predetermined threshold, the spatial direction weight adjustment unit 163 performs the weighting for the direction orthogonal to the pupil division direction (the direction of the photoelectric conversion units 214) based on the similarity of the photographed image, which has a higher SN ratio than the phase difference image and a high correlation with it. For the direction parallel to the pupil division direction, the spatial direction weight adjustment unit 163 performs the weighting based on the similarity of the phase difference image. The spatial direction integration processing unit 164 then performs two-dimensional convolution integration processing using those vertical and horizontal weights.
 Specifically, when the SN ratios of both the up-down direction and the left-right direction phase difference pixel values of the phase difference image after the time direction integration processing are equal to or greater than the predetermined threshold, the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 calculate the phase difference pixel value qd''(x, y, λ) of direction d and wavelength band λ at each coordinate (x, y) of the phase difference image by, for example, the following equations (3) and (4). Here, the direction d is one of the up, down, left, and right directions, and the wavelength band λ is one of the R wavelength band, the G wavelength band, and the B wavelength band.
 [Equations (3) and (4)]
 Here, qd'(x, y, λ) is the phase difference pixel value of direction d and wavelength band λ at coordinate (x, y) after the time direction integration processing.
 f(n, σh) is a weight based on the horizontal distance n between coordinate (x, y) and coordinate (x+n, y+m). σh is the standard deviation of the Gaussian filter with respect to the horizontal distance within the imaging plane. This weight is larger as the distance n is smaller and smaller as the distance n is larger, and mainly provides the effect of smoothing the image in the horizontal direction.
 f(m, σv) is a weight based on the vertical distance m between coordinate (x, y) and coordinate (x+n, y+m). σv is the standard deviation of the Gaussian filter with respect to the vertical distance within the imaging plane. This weight is larger as the distance m is smaller and smaller as the distance m is larger, and mainly provides the effect of smoothing the image in the vertical direction.
 Since the correlation of the phase difference differs between the direction parallel to the pupil division direction and the direction orthogonal to it, the distance-based weight components are calculated separately for the horizontal direction and the vertical direction in this way. This makes it possible to control the variances (standard deviation σh and standard deviation σv) independently.
 Wqdhv(x, y, n, m, λ) is a weight based on the difference between the phase difference pixel values of direction d and wavelength band λ at coordinate (x, y) and coordinate (x+n, y+m) after the time direction integration processing. σSq is the standard deviation of the Gaussian filter with respect to the difference between phase difference pixel values within the imaging plane. This weight is larger as the difference between the phase difference pixel values is smaller and smaller as the difference is larger, and mainly provides the effect of edge preservation in the horizontal and vertical directions.
 In general, a luminance difference, rather than a pixel value difference, is often used in Wqdhv(x, y, n, m, λ). However, when a luminance difference is used, even if the spectra (colors) of the phase difference pixel at coordinate (x, y) and the phase difference pixel at coordinate (x+n, y+m) are completely different, the component of the phase difference pixel at coordinate (x+n, y+m) remains whenever there is no luminance difference, which causes erroneous detection. In contrast, by using the pixel value difference for each wavelength band rather than the luminance difference, the component of the phase difference pixel at coordinate (x+n, y+m) of a given wavelength band can be reflected in the phase difference pixel value of the same wavelength band at coordinate (x, y).
 In this way, when the SN ratios of the up-down direction and left-right direction phase difference pixel values of the phase difference image after the time direction integration processing are equal to or greater than the predetermined threshold, the spatial direction integration processing is performed using only the phase difference image.
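 A minimal sketch of this per-band bilateral spatial integration, written as a direct double loop over a small window. The separable Gaussian distance weights with σh and σv and the range weight on the per-band pixel value difference with σSq follow the description of equations (3) and (4); the window radius and the normalization details are illustrative assumptions.

```python
import numpy as np

def bilateral_spatial_integration(qd, sigma_h, sigma_v, sigma_sq, radius=3):
    """Edge-preserving spatial integration of one phase difference plane qd (direction d,
    wavelength band lambda): qd''(x, y) = normalized sum of f(n)*f(m)*Wqd*qd'(x+n, y+m)."""
    h, w = qd.shape
    out = np.empty_like(qd, dtype=np.float64)
    offs = range(-radius, radius + 1)
    # Precompute the separable Gaussian distance weights f(n, sigma_h) and f(m, sigma_v).
    fh = {n: np.exp(-(n * n) / (2.0 * sigma_h ** 2)) for n in offs}
    fv = {m: np.exp(-(m * m) / (2.0 * sigma_v ** 2)) for m in offs}
    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for m in offs:
                for n in offs:
                    yy, xx = y + m, x + n
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    # Range weight on the per-band phase difference pixel value difference.
                    diff = qd[yy, xx] - qd[y, x]
                    w_range = np.exp(-(diff * diff) / (2.0 * sigma_sq ** 2))
                    wgt = fh[n] * fv[m] * w_range
                    num += wgt * qd[yy, xx]
                    den += wgt
            out[y, x] = num / den
    return out
```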
 On the other hand, when the SN ratio of the up-down direction phase difference pixel values of the phase difference image after the time direction integration processing is less than the predetermined threshold, the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 calculate the phase difference pixel value qd''(x, y, λ) of direction d and wavelength band λ at each coordinate (x, y) by the following equations (5) to (7). Here, the direction d is either the up direction or the down direction, and the wavelength band λ is one of the R wavelength band, the G wavelength band, and the B wavelength band.
 [Equations (5) to (7)]
 Here, Wph(x, y, n, m, λ) is a weight based on the difference between the photographic pixel values at coordinate (x, y+m) and coordinate (x+n, y+m) after the time direction integration processing. σSp is the standard deviation of the Gaussian filter with respect to the difference between photographic pixel values within the imaging plane. This weight is larger as the difference between the photographic pixel values is smaller and smaller as the difference is larger, and mainly provides the effect of edge preservation in the horizontal direction.
 Wqdv(x, y, n, m, λ) is a weight based on the difference between the phase difference pixel values of direction d and wavelength band λ at coordinate (x+n, y) and coordinate (x+n, y+m) after the time direction integration processing. This weight is larger as the difference between the phase difference pixel values is smaller and smaller as the difference is larger, and mainly provides the effect of edge preservation in the vertical direction.
 In this way, when the SN ratio of the up-down direction phase difference pixel values of the phase difference image after the time direction integration processing is less than the predetermined threshold, the spatial direction integration processing for the up-down direction phase difference pixel values is performed using both the photographed image and the phase difference image. That is, for the horizontal direction, which is orthogonal to the pupil division direction, the weighting is performed based on the similarity of the photographed image, while for the vertical direction, which is parallel to the pupil division direction, the weighting is performed based on the similarity of the phase difference image.
 Similarly, when the SN ratio of the left-right direction phase difference pixel values of the phase difference image after the time direction integration processing is less than the predetermined threshold, the spatial direction weight adjustment unit 163 and the spatial direction integration processing unit 164 calculate the phase difference pixel value qd''(x, y, λ) of direction d and wavelength band λ at each coordinate (x, y) by the following equations (8) to (10). Here, the direction d is either the left direction or the right direction, and the wavelength band λ is one of the R wavelength band, the G wavelength band, and the B wavelength band.
 [Equations (8) to (10)]
 Wqdh(x, y, n, m, λ) is a weight based on the difference between the phase difference pixel values of direction d and wavelength band λ at coordinate (x, y+m) and coordinate (x+n, y+m) after the time direction integration processing. This weight is larger as the difference between the phase difference pixel values is smaller and smaller as the difference is larger, and mainly provides the effect of edge preservation in the horizontal direction.
 Wpv(x, y, n, m, λ) is a weight based on the difference between the photographic pixel values at coordinate (x+n, y) and coordinate (x+n, y+m) after the time direction integration processing. This weight is larger as the difference between the photographic pixel values is smaller and smaller as the difference is larger, and mainly provides the effect of edge preservation in the vertical direction.
 In this way, when the SN ratio of the left-right direction phase difference pixel values of the phase difference image after the time direction integration processing is less than the predetermined threshold, the spatial direction integration processing for the left-right direction phase difference pixel values is performed using both the photographed image and the phase difference image. That is, for the vertical direction, which is orthogonal to the pupil division direction, the weighting is performed based on the similarity of the photographed image, while for the horizontal direction, which is parallel to the pupil division direction, the weighting is performed based on the similarity of the phase difference image.
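 A minimal sketch of this mixed weighting for left-right (horizontally pupil-divided) phase difference pixels, assuming the range weight along the vertical direction (orthogonal to the pupil division) is taken from the captured image p as in Wpv, and the range weight along the horizontal direction (the pupil division direction) is taken from the phase difference plane qd as in Wqdh. The window size and the exact weight shapes are illustrative assumptions following the preceding description.

```python
import numpy as np

def mixed_guided_integration(qd, p, sigma_h, sigma_v, sigma_sq, sigma_sp, radius=3):
    """Spatial integration of a low-SN left/right phase difference plane qd, guided
    vertically by the captured image p and horizontally by qd itself."""
    h, w = qd.shape
    out = np.empty_like(qd, dtype=np.float64)
    offs = range(-radius, radius + 1)

    def g(d, s):
        return np.exp(-(d * d) / (2.0 * s ** 2))

    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for m in offs:
                for n in offs:
                    yy, xx = y + m, x + n
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    w_dist = g(n, sigma_h) * g(m, sigma_v)         # distance weights f(n), f(m)
                    w_horiz = g(qd[yy, x] - qd[yy, xx], sigma_sq)  # compare (x, y+m) with (x+n, y+m) in qd
                    w_vert = g(p[y, xx] - p[yy, xx], sigma_sp)     # compare (x+n, y) with (x+n, y+m) in p
                    wgt = w_dist * w_horiz * w_vert
                    num += wgt * qd[yy, xx]
                    den += wgt
            out[y, x] = num / den
    return out
```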
 Further, the spatial direction integration processing unit 164 performs weighted integration processing in the spatial direction using an FIR filter on the photographed image after the time direction integration processing, according to the following equations (11) and (12).
 [Equations (11) and (12)]
 Wphv(x, y, n, m, λ) is a weight based on the difference between the photographic pixel values at coordinate (x, y) and coordinate (x+n, y+m) after the time direction integration processing. This weight is larger as the difference between the photographic pixel values is smaller and smaller as the difference is larger, and mainly provides the effect of edge preservation.
 The spatial direction integration processing unit 164 supplies the photographed image and the phase difference image after the spatial direction integration processing to the wavelength direction weight adjustment unit 165 and the wavelength direction integration processing unit 166.
 In step S11, the wavelength direction integration processing unit 166 determines whether or not the SN ratio of the phase difference image is less than a predetermined threshold. Specifically, the wavelength direction integration processing unit 166 detects the SN ratio of the phase difference image after the spatial direction integration processing using an arbitrary method. If the wavelength direction integration processing unit 166 determines that the SN ratio of the phase difference image is less than the predetermined threshold, the process proceeds to step S12. The threshold of this determination processing is set, for example, to the same value as the thresholds used in the determination processing of steps S7 and S9.
 In step S12, the camera 101a performs wavelength direction integration processing. For example, in each phase difference pixel of the phase difference image after the spatial direction integration processing, the camera 101a adds, with weights, the pixel values of the other wavelength bands to the pixel value of the target wavelength band. For example, the camera 101a performs the wavelength direction integration of the R wavelength band phase difference pixel value of a given phase difference pixel by weighting and adding the B wavelength band and G wavelength band phase difference pixel values of the same phase difference pixel to it.
 Here, the axial chromatic aberration and the lateral chromatic aberration (chromatic aberration of magnification) of the lens 111 greatly affect the correlation between the pixel values of different wavelength bands of the same phase difference pixel.
 Specifically, the refractive index of the lens 111 differs depending on the wavelength of the light. In general, light with a short wavelength, such as ultraviolet light, is easily refracted, and light with a long wavelength, such as infrared light, is not easily refracted. Short-wavelength light is focused at a position close to the lens 111 and long-wavelength light is focused at a position far from the lens 111, which gives rise to axial chromatic aberration. Axial chromatic aberration therefore occurs in the direction along the optical axis of the lens 111. The correction for this axial chromatic aberration is performed in the processing of step S16 described later.
 Also, due to the wavelength dependence of the refractive index, light that enters the lens 111 obliquely is focused at different positions on the imaging surface of the image sensor 112a. As a result, for example, as shown in FIG. 12, a difference arises on the imaging surface between the magnifications of the image Lk formed by light of wavelength λk and the image Li formed by light of wavelength λi, both centered on the optical axis c, and lateral chromatic aberration occurs.
 Furthermore, other correlation factors in the wavelength direction include the light quantity distribution for each wavelength band in the subject environment and the reflectance of the subject for each wavelength band. The reflectance for each wavelength band is specific to the subject, and its correlation with the position of the subject is dominant.
 For example, a white subject has similarly high reflectance for light in the B wavelength band, the G wavelength band, and the R wavelength band. A blue subject has a high reflectance only for light in the B wavelength band. A yellow subject has a high reflectance for light in the G wavelength band and the R wavelength band. Thus, in the visible wavelength bands, the correlation between the wavelength bands varies greatly depending on the original color of the subject and the spectrum of the light source.
 Therefore, if signals whose correlation differs greatly between wavelength bands are simply integrated, the signal components of each wavelength band are instead lost as noise. The luminance value is a typical example of this: it mixes the B, G, and R wavelength bands at a constant ratio regardless of the reflectance of the subject for each band. For this reason, a texture whose colors differ greatly but whose luminance values are almost the same cannot be detected from the luminance.
 Therefore, the wavelength direction integration processing unit 166 uses a variable filter that adaptively adjusts the weight for each wavelength band and for each pixel in order to cope with the lateral chromatic aberration, the color pattern of the subject, and the like. For example, the wavelength direction integration processing unit 166 performs weighted integration processing in the wavelength direction using an FIR filter having a finite number of taps in the wavelength direction, as shown in FIG. 13.
 Here, good examples of a filter that adaptively adjusts the weight for each wavelength band and for each pixel according to the color pattern of the subject and the like are edge-preserving smoothing filters such as the bilateral filter, the epsilon filter, and the Non-local Means filter. In the following, the case of using a bilateral filter will mainly be described as an example.
 First, the wavelength direction weight adjustment unit 165 reads a lateral chromatic aberration amount table indicating the amount of lateral chromatic aberration of the lens 111 attached to the camera 101a from the chromatic aberration data storage unit 137.
 FIG. 14 shows an example of the lateral chromatic aberration amount table. The table shows the amount of lateral chromatic aberration of the wavelength indicated on the horizontal axis with respect to the wavelength indicated on the vertical axis. For example, c(λk, λi) indicates the amount of lateral chromatic aberration of wavelength λi with respect to wavelength λk. In the example currently being described, the wavelengths λk and λi are each one of the R wavelength band, the G wavelength band, and the B wavelength band.
 Incidentally, the axial chromatic aberration and the lateral chromatic aberration differ depending on the lens system used. For example, when the lens 111 cannot be replaced, the lateral chromatic aberration amount table for the lens 111 is stored in the chromatic aberration data storage unit 137.
 On the other hand, when the lens 111 can be replaced, for example, the lateral chromatic aberration amount tables of all usable lens systems are stored in the chromatic aberration data storage unit 137 in advance. In addition, for example, identification information for specifying optical characteristics such as the amount of chromatic aberration for each wavelength band is recorded in each lens system in a format readable by the camera 101a. When a lens system is attached to the camera 101a, the AF control unit 121 reads the identification information of that lens system. Based on the identification information, the wavelength direction weight adjustment unit 165 reads the lateral chromatic aberration amount table of the lens system (lens 111) attached to the camera 101a from the chromatic aberration data storage unit 137.
 Also, in a typical lens system, at lens positions deviated from the in-focus position by a certain amount or more, the difference in the spread of the image at the center of the optical axis between light fluxes of different wavelength bands is almost constant regardless of the lens position. It is therefore possible to determine the amount of lateral chromatic aberration between predetermined pairs of wavelength bands for each lens system and to create a lateral chromatic aberration amount table as shown in FIG. 14.
 Then, the wavelength direction weight adjustment unit 165 and the wavelength direction integration processing unit 166 calculate the phase difference pixel value qd'''(λk) of the wavelength band λk of each phase difference pixel of the phase difference image by, for example, the following equation (13). Here, the wavelength band λk is one of the R wavelength band, the G wavelength band, and the B wavelength band.
 [Equation (13)]
 q''(λi) is the phase difference pixel value of the wavelength band λi after the spatial direction integration processing.
 f{c(λk, λi), σc} is a weight based on the amount of lateral chromatic aberration c(λk, λi) between the wavelength band λk and the wavelength band λi. σc is the standard deviation of the Gaussian filter with respect to the amount of lateral chromatic aberration c(λk, λi). This weight is smaller as the amount of lateral chromatic aberration c(λk, λi) is larger and larger as it is smaller, and mainly provides a smoothing effect.
 f{q''(λi) - q''(λk), σλq} is a weight based on the difference between the phase difference pixel values of the wavelength band λi and the wavelength band λk after the spatial direction integration processing. σλq is the standard deviation of the Gaussian filter with respect to the difference between the phase difference pixel values. This weight is larger as the difference between the phase difference pixel values is smaller and smaller as the difference is larger, and mainly provides the effect of edge preservation.
 In this way, weighted integration processing according to the amount of lateral chromatic aberration between the wavelength bands and the difference between the phase difference pixel values is performed.
 As a result, for example, for a white or gray (achromatic) subject, the correlations among the R, G, and B wavelength band components are all high, so the SN ratio can be improved to a level equivalent to using a W pixel with a W filter (transparent filter). Also, for example, for a yellow subject, the SN ratio can be improved by the highly correlated G and R wavelength band components while preventing the mixing of noise from the weakly correlated B wavelength band component.
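 A minimal sketch of this per-pixel wavelength-direction integration for one phase difference pixel, assuming Gaussian-shaped weights on both the lateral chromatic aberration amount c(λk, λi) and the per-band pixel value difference, with a small dictionary standing in for the lateral chromatic aberration amount table of FIG. 14 (the numeric values in the usage example are placeholders, not actual lens data):

```python
import math

def wavelength_integration(q, c_table, target, sigma_c, sigma_lq):
    """Weighted integration in the wavelength direction for one phase difference pixel.

    q       : dict mapping band ('R', 'G', 'B') -> phase difference pixel value q''(band)
    c_table : dict mapping (target_band, other_band) -> lateral chromatic aberration amount
    target  : band whose integrated value q'''(target) is being computed
    """
    num = den = 0.0
    for band, value in q.items():
        c = 0.0 if band == target else c_table[(target, band)]
        w_aberr = math.exp(-(c ** 2) / (2.0 * sigma_c ** 2))                    # small aberration -> large weight
        w_diff = math.exp(-((value - q[target]) ** 2) / (2.0 * sigma_lq ** 2))  # similar value -> large weight
        w = w_aberr * w_diff
        num += w * value
        den += w
    return num / den

# Illustrative usage with placeholder values:
q = {'R': 120.0, 'G': 118.0, 'B': 60.0}
c_table = {('R', 'G'): 0.5, ('R', 'B'): 1.2, ('G', 'R'): 0.5, ('G', 'B'): 0.8,
           ('B', 'R'): 1.2, ('B', 'G'): 0.8}
q_r_integrated = wavelength_integration(q, c_table, 'R', sigma_c=1.0, sigma_lq=20.0)
```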
 The wavelength direction integration processing unit 166 supplies the phase difference image after the wavelength direction integration processing to the phase difference detection unit 133.
 Note that, for example, when the SN ratio of the phase difference image after the spatial direction integration processing is less than a predetermined threshold, the wavelength direction weight adjustment unit 165 and the wavelength direction integration processing unit 166 may calculate the phase difference pixel value qd'''(λk) by, for example, the following equation (14). The threshold used here is set, for example, to a value lower than the threshold used in the determination processing of step S11.
 [Equation (14)]
 f{p''(λi) - p''(λk), σλp} is a weight based on the difference between the photographic pixel values of the wavelength band λi and the wavelength band λk after the spatial direction integration processing. σλp is the standard deviation of the Gaussian filter with respect to the difference between the photographic pixel values. This weight is larger as the difference between the photographic pixel values is smaller and smaller as the difference is larger, and mainly provides the effect of edge preservation.
 That is, when the SN ratio of the phase difference image after the spatial direction integration processing is not sufficient, the weights may be set using the photographic pixel values of the photographed image instead of the phase difference pixel values. In this case, the correlation with the color pattern decreases accordingly, so there is a trade-off between the SN ratio and the correlation.
 Thereafter, the process proceeds to step S13.
 On the other hand, if it is determined in step S11 that the SN ratio of the phase difference image is equal to or greater than the predetermined threshold, the processing of step S12 is skipped, and the process proceeds to step S13. That is, in this case, it is determined that the SN ratio of the phase difference image has reached a level sufficient to detect a reliable phase difference at the time the spatial direction integration processing is completed, and the wavelength direction integration processing is skipped. At this time, the wavelength direction integration processing unit 166 supplies the phase difference image after the spatial direction integration processing to the phase difference detection unit 133.
 Also, if it is determined in step S9 that the SN ratio of the phase difference image is equal to or greater than the predetermined threshold, the processing of steps S10 to S12 is skipped, and the process proceeds to step S13. That is, in this case, it is determined that the SN ratio of the phase difference image has reached a level sufficient to detect a reliable phase difference at the time the time direction integration processing is completed, and the spatial direction integration processing and the wavelength direction integration processing are skipped. At this time, the spatial direction integration processing unit 164 supplies the phase difference image after the time direction integration processing to the phase difference detection unit 133.
 Furthermore, if it is determined in step S7 that the SN ratio of the phase difference image is equal to or greater than the predetermined threshold, the processing of steps S8 to S12 is skipped, and the process proceeds to step S13. That is, in this case, it is determined that the SN ratio of the phase difference image has reached a level sufficient to detect a reliable phase difference without performing any integration processing, and all of the integration processing is skipped. At this time, the phase difference image acquisition unit 153 reads the phase difference image of the latest frame from the focus detection image storage unit 136 and supplies it to the phase difference detection unit 133 via the time direction weight adjustment unit 161 and the time direction integration processing unit 162.
 In step S13, the phase difference detection unit 133 selects the phase difference sequences used for detecting the amount of phase shift. The phase difference sequences and the amount of phase shift will now be described in detail.
 位相差列とは、同じ波長帯かつ同じ方向の位相差画素であって、瞳分割方向と平行な行又は列に並ぶ位相差画素の位相差画素値を画素位置の順に並べたものである。 A phase difference column is a phase difference pixel in the same wavelength band and in the same direction, in which phase difference pixel values of phase difference pixels arranged in a row or column parallel to the pupil division direction are arranged in the order of pixel positions.
 例えば、撮像素子112aの2m行(m=0,1,2,3,・・・)におけるG波長帯λgの左方向の位相差列をQL(2m,λg)とすると、QL(2m,λg)=(qL(0,2m),qL(2,2m),qL(4,2m),・・・)となる。また、撮像素子112aの2m行におけるG波長帯λgの右方向の位相差列QR(2m,λg)とすると、QR(2m,λg)=(qR(0,2m),qR(2,2m),qR(4,2m),・・・)となる。 For example, assuming that the left phase difference column of the G wavelength band λg in 2m rows (m = 0, 1, 2, 3,...) Of the image sensor 112a is QL (2m, λg), QL (2m, λg). ) = (QL (0,2m), qL (2,2m), qL (4,2m),...). Further, assuming that the phase difference column QR (2m, λg) in the right direction of the G wavelength band λg in the 2m row of the image sensor 112a is QR (2m, λg) = (qR (0, 2m), qR (2, 2m). , QR (4,2m),.
 なお、撮像素子112aにおいて、Gr画素は、偶数行に配置されており、いずれも水平方向に瞳分割されている。従って、位相差列QL(2m,λg)及び位相差列QR(2m,λg)は、Gr画素の位相差成分となる。すなわち、位相差列QL(2m,λg)は、2m行のGr画素の左方向の画素値qLからなる位相差列である。位相差列QR(2m,λg)は、2m行のGr画素の右方向の画素値qRからなる位相差列である。 In the image sensor 112a, the Gr pixels are arranged in even-numbered rows, and all are pupil-divided in the horizontal direction. Accordingly, the phase difference sequence QL (2m, λg) and the phase difference sequence QR (2m, λg) are the phase difference components of the Gr pixel. That is, the phase difference sequence QL (2m, λg) is a phase difference sequence composed of pixel values qL in the left direction of 2m rows of Gr pixels. The phase difference sequence QR (2m, λg) is a phase difference sequence composed of pixel values qR in the right direction of 2m rows of Gr pixels.
 Also, for example, if the upward phase difference sequence of the G wavelength band λg in column 2n+1 (n = 0, 1, 2, 3, ...) of the image sensor 112a is denoted QU(2n+1, λg), then QU(2n+1, λg) = (qU(2n+1,1), qU(2n+1,3), qU(2n+1,5), ...). If the downward phase difference sequence of the G wavelength band λg in column 2n+1 is denoted QD(2n+1, λg), then QD(2n+1, λg) = (qD(2n+1,1), qD(2n+1,3), qD(2n+1,5), ...).
 Note that in the image sensor 112a the Gb pixels are arranged in odd-numbered columns, and all of them are pupil-divided in the vertical direction. Accordingly, the phase difference sequence QU(2n+1, λg) and the phase difference sequence QD(2n+1, λg) are phase difference components of the Gb pixels. That is, the phase difference sequence QU(2n+1, λg) consists of the upward pixel values qU of the Gb pixels in column 2n+1, and the phase difference sequence QD(2n+1, λg) consists of the downward pixel values qD of the Gb pixels in column 2n+1.
 Similarly, if the left-direction phase difference sequence of the B wavelength band λb in row 4m+1 (m = 0, 1, 2, 3, ...) of the image sensor 112a is QL(4m+1, λb), then QL(4m+1, λb) = (qL(0,4m+1), qL(4,4m+1), qL(8,4m+1), ...). If the right-direction phase difference sequence of the B wavelength band λb in row 4m+1 is QR(4m+1, λb), then QR(4m+1, λb) = (qR(0,4m+1), qR(4,4m+1), qR(8,4m+1), ...).
 If the left-direction phase difference sequence of the B wavelength band λb in row 4m+3 (m = 0, 1, 2, 3, ...) is QL(4m+3, λb), then QL(4m+3, λb) = (qL(2,4m+3), qL(6,4m+3), qL(10,4m+3), ...). If the right-direction phase difference sequence of the B wavelength band λb in row 4m+3 is QR(4m+3, λb), then QR(4m+3, λb) = (qR(2,4m+3), qR(6,4m+3), qR(10,4m+3), ...).
 If the upward phase difference sequence of the B wavelength band λb in column 4n (n = 0, 1, 2, 3, ...) of the image sensor 112a is QU(4n, λb), then QU(4n, λb) = (qU(4n,3), qU(4n,7), qU(4n,11), ...). If the downward phase difference sequence of the B wavelength band λb in column 4n is QD(4n, λb), then QD(4n, λb) = (qD(4n,3), qD(4n,7), qD(4n,11), ...).
 If the upward phase difference sequence of the B wavelength band λb in column 4n+2 (n = 0, 1, 2, 3, ...) is QU(4n+2, λb), then QU(4n+2, λb) = (qU(4n+2,1), qU(4n+2,5), qU(4n+2,9), ...). If the downward phase difference sequence of the B wavelength band λb in column 4n+2 is QD(4n+2, λb), then QD(4n+2, λb) = (qD(4n+2,1), qD(4n+2,5), qD(4n+2,9), ...).
 If the left-direction phase difference sequence of the R wavelength band λr in row 4m (m = 0, 1, 2, 3, ...) of the image sensor 112a is QL(4m, λr), then QL(4m, λr) = (qL(1,4m), qL(5,4m), qL(9,4m), ...). If the right-direction phase difference sequence of the R wavelength band λr in row 4m is QR(4m, λr), then QR(4m, λr) = (qR(1,4m), qR(5,4m), qR(9,4m), ...).
 If the left-direction phase difference sequence of the R wavelength band λr in row 4m+2 (m = 0, 1, 2, 3, ...) is QL(4m+2, λr), then QL(4m+2, λr) = (qL(3,4m+2), qL(7,4m+2), qL(11,4m+2), ...). If the right-direction phase difference sequence of the R wavelength band λr in row 4m+2 is QR(4m+2, λr), then QR(4m+2, λr) = (qR(3,4m+2), qR(7,4m+2), qR(11,4m+2), ...).
 If the upward phase difference sequence of the R wavelength band λr in column 4n+1 (n = 0, 1, 2, 3, ...) of the image sensor 112a is QU(4n+1, λr), then QU(4n+1, λr) = (qU(4n+1,2), qU(4n+1,6), qU(4n+1,10), ...). If the downward phase difference sequence of the R wavelength band λr in column 4n+1 is QD(4n+1, λr), then QD(4n+1, λr) = (qD(4n+1,2), qD(4n+1,6), qD(4n+1,10), ...).
 If the upward phase difference sequence of the R wavelength band λr in column 4n+3 (n = 0, 1, 2, 3, ...) is QU(4n+3, λr), then QU(4n+3, λr) = (qU(4n+3,0), qU(4n+3,4), qU(4n+3,8), ...). If the downward phase difference sequence of the R wavelength band λr in column 4n+3 is QD(4n+3, λr), then QD(4n+3, λr) = (qD(4n+3,0), qD(4n+3,4), qD(4n+3,8), ...).
 The phase difference sequences are thus divided into twelve types according to the combination of wavelength band and direction: the left-direction sequence QL(y, λr), right-direction sequence QR(y, λr), upward sequence QU(x, λr), and downward sequence QD(x, λr) of the R wavelength band; the left-direction sequence QL(y, λg), right-direction sequence QR(y, λg), upward sequence QU(x, λg), and downward sequence QD(x, λg) of the G wavelength band; and the left-direction sequence QL(y, λb), right-direction sequence QR(y, λb), upward sequence QU(x, λb), and downward sequence QD(x, λb) of the B wavelength band.
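 As a concrete illustration of how such sequences could be extracted, the sketch below builds the pair (QL(2m, λg), QR(2m, λg)) for one Gr row, assuming the left- and right-direction phase difference pixel values are stored as two-dimensional arrays indexed by (row, column); the array names and the two-column stride follow the Gr layout described above but are otherwise hypothetical.

```python
import numpy as np

def phase_difference_pair_g(q_left: np.ndarray, q_right: np.ndarray, m: int):
    """Return (QL(2m, lambda_g), QR(2m, lambda_g)) for the Gr pixels of row 2m.

    q_left / q_right hold the left- and right-direction phase difference pixel
    values of the whole sensor; Gr pixels are assumed to sit on even rows at
    even columns, so the sequences are taken with a stride of two columns.
    """
    row = 2 * m
    ql_seq = q_left[row, 0::2]   # qL(0,2m), qL(2,2m), qL(4,2m), ...
    qr_seq = q_right[row, 0::2]  # qR(0,2m), qR(2,2m), qR(4,2m), ...
    return ql_seq, qr_seq
```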
 The amount of deviation (phase difference) between the image formed by the phase difference sequence QL(y, λr) and the image formed by the phase difference sequence QR(y, λr) of the R wavelength band in the same row is detected as a horizontal phase shift amount. Likewise, the deviation between the images formed by QL(y, λg) and QR(y, λg) of the G wavelength band in the same row, and the deviation between the images formed by QL(y, λb) and QR(y, λb) of the B wavelength band in the same row, are detected as horizontal phase shift amounts.
 Similarly, the amount of deviation (phase difference) between the image formed by the phase difference sequence QU(x, λr) and the image formed by the phase difference sequence QD(x, λr) of the R wavelength band in the same column is detected as a vertical phase shift amount, and the same applies to the G wavelength band pair QU(x, λg) and QD(x, λg) and to the B wavelength band pair QU(x, λb) and QD(x, λb) in the same column.
 FIGS. 15 to 17 show specific examples of the phase shift amount.
 The diagrams on the left side of FIGS. 15 to 17 are schematic views, seen from above, of a light flux in a certain wavelength band λ entering the lens 111. Viewed from the image sensor 112a side, the light flux passing through the left side of the lens 111 (the lower side in the figures) is drawn with a coarse dotted line, and the light flux passing through the right side of the lens 111 (the upper side in the figures) is drawn with a fine dotted line.
 The diagram on the left side of FIG. 15 shows a front pin state in which the lens 111 is focused in front of the subject; more specifically, a state in which the image sensor 112a is displaced, relative to the in-focus position for the light flux of wavelength band λ, in the direction approaching the lens 111.
 The diagram on the left side of FIG. 16 shows an in-focus state in which the lens 111 is focused on the subject; more specifically, a state in which the image sensor 112a is at the in-focus position for the light flux of wavelength band λ.
 The diagram on the left side of FIG. 17 shows a rear pin state in which the lens 111 is focused behind the subject; more specifically, a state in which the image sensor 112a is displaced, relative to the in-focus position for the light flux of wavelength band λ, in the direction away from the lens 111.
 The diagrams on the right side of FIGS. 15 to 17 are graphs showing the relationship between the left-direction phase difference sequence QL and the right-direction phase difference sequence QR of the light flux of wavelength band λ when that light flux is in the state shown in the corresponding left-side diagram. The horizontal axis of each graph indicates the pixel position in a certain row of the phase difference image, and the vertical axis indicates the phase difference pixel value. The waveform of the left-direction sequence QL is drawn with a coarse dotted line, and the waveform of the right-direction sequence QR with a fine dotted line.
 The waveform of the phase difference sequence QL indicates the position of the image of the light flux of wavelength band λ detected by the left-direction phase difference pixels (photoelectric conversion units 214L), and the waveform of the phase difference sequence QR indicates the position of the image detected by the right-direction phase difference pixels (photoelectric conversion units 214R). In FIGS. 15 to 17 the waveforms of QL and QR are drawn close to a symmetric normal distribution, but the actual waveforms vary with the pattern of the subject.
 As shown in the diagram on the right side of FIG. 16, in the in-focus state the waveform of QL and the waveform of QR almost coincide, and the high-frequency components are at their highest. On the other hand, as shown in the diagram on the right side of FIG. 15, in the front pin state the waveform of QL shifts to the left and the waveform of QR shifts to the right compared with the in-focus state. As shown in the diagram on the right side of FIG. 17, in the rear pin state the waveform of QL shifts to the right and the waveform of QR shifts to the left compared with the in-focus state.
 この位相差列QLの波形と位相差列QRの波形との間のずれ量が、水平方向の位相ずれ量となる。なお、G波長帯の水平方向の位相ずれ量は、2列間隔の分解能で検出され、B波長帯及びR波長帯の水平方向の位相ずれ量は、4列間隔の分解能で検出される。従って、Gr画素の位相差成分を用いて、2列間隔の分解能で縦線の検出が可能である。B画素又はR画素の位相差成分を用いて、4列間隔の分解能で縦線の検出が可能である。 The amount of deviation between the waveform of the phase difference row QL and the waveform of the phase difference row QR is the amount of phase deviation in the horizontal direction. Note that the horizontal phase shift amount of the G wavelength band is detected with a resolution of two column intervals, and the horizontal phase shift amount of the B wavelength band and the R wavelength band is detected with a resolution of four column intervals. Therefore, it is possible to detect vertical lines with a resolution of two columns using the phase difference component of the Gr pixel. Using the phase difference component of the B pixel or R pixel, it is possible to detect vertical lines with a resolution of four columns.
 また、水平方向の位相差列(位相差列QL及び位相差列QR)は、縦線など水平方向にコントラスト差が大きい被写体の位相ずれ量の検出に適している。換言すれば、水平方向の位相差列は、縦線等の被写体の水平方向のコントラスト差の検出に適している。一方、水平方向の位相差列は、横線など垂直方向にコントラスト差が大きい被写体の位相ずれ量の検出にはあまり適していない。 Also, the horizontal phase difference sequence (phase difference sequence QL and phase difference sequence QR) is suitable for detecting the phase shift amount of a subject having a large contrast difference in the horizontal direction such as a vertical line. In other words, the horizontal phase difference sequence is suitable for detecting a horizontal contrast difference of a subject such as a vertical line. On the other hand, the phase difference sequence in the horizontal direction is not very suitable for detecting the phase shift amount of a subject having a large contrast difference in the vertical direction such as a horizontal line.
 さらに、前ピン状態と後ピン状態とでは、位相差列QLの波形と位相差列QRの波形との位置関係が逆になる。従って、両者の位置関係により、レンズ111が合焦状態、前ピン状態、又は、後ピン状態のいずれであるかが検出される。 Furthermore, the positional relationship between the waveform of the phase difference sequence QL and the waveform of the phase difference sequence QR is reversed between the front pin state and the rear pin state. Therefore, whether the lens 111 is in a focused state, a front pin state, or a rear pin state is detected based on the positional relationship between the two.
 垂直方向についても、位相差列QUの波形と位相差列QDの波形との間のずれ量が、垂直方向の位相ずれ量として検出される。なお、G波長帯の垂直方向の位相ずれ量は、2行間隔の分解能で検出され、B波長帯及びR波長帯の位相ずれ量は、4行間隔の分解能で検出される。従って、Gr画素の位相差成分を用いて、2行間隔の分解能で横線の検出が可能である。B画素又はR画素の位相差成分を用いて、4行間隔の分解能で横線の検出が可能である。 Also in the vertical direction, the shift amount between the waveform of the phase difference sequence QU and the waveform of the phase difference sequence QD is detected as the phase shift amount in the vertical direction. The phase shift amount in the vertical direction of the G wavelength band is detected with a resolution of two rows, and the phase shift amount of the B wavelength band and the R wavelength band is detected with a resolution of four rows. Accordingly, it is possible to detect a horizontal line with a resolution of two rows using the phase difference component of the Gr pixel. Using the phase difference component of the B pixel or R pixel, it is possible to detect a horizontal line with a resolution of four rows.
 また、垂直方向の位相差列(位相差列QU及び位相差列QD)は、横線など垂直方向にコントラスト差が大きい被写体の位相ずれ量の検出に適している。換言すれば、垂直方向の位相差列は、横線等の被写体の垂直方向のコントラスト差の検出に適している。一方、垂直方向の位相差列は、縦線など水平方向にコントラスト差が大きい被写体の位相ずれ量の検出にはあまり適していない。 The vertical phase difference sequence (phase difference sequence QU and phase difference sequence QD) is suitable for detecting the phase shift amount of a subject having a large contrast difference in the vertical direction such as a horizontal line. In other words, the vertical phase difference sequence is suitable for detecting a vertical contrast difference of a subject such as a horizontal line. On the other hand, the phase difference sequence in the vertical direction is not very suitable for detecting the phase shift amount of a subject having a large contrast difference in the horizontal direction such as a vertical line.
 さらに、前ピン状態と後ピン状態とでは、位相差列QUの波形と位相差列QDの波形との位置関係が逆になる。従って、両者の位置関係により、カメラ101aが合焦状態、前ピン状態、又は、後ピン状態のいずれであるかが検出される。 Furthermore, the positional relationship between the waveform of the phase difference sequence QU and the waveform of the phase difference sequence QD is reversed between the front pin state and the rear pin state. Therefore, whether the camera 101a is in the focused state, the front pin state, or the rear pin state is detected based on the positional relationship between the two.
 そして、後述するように、検出された位相ずれ量に基づいて、合焦位置が求められ、焦点の制御が行われる。 Then, as will be described later, an in-focus position is obtained based on the detected amount of phase shift, and focus control is performed.
 ここで、上述したように、水平方向の位相差列は、縦線等の被写体の水平方向のコントラスト差の検出に適しており、垂直方向の位相差列は、横線等の被写体の垂直方向のコントラスト差の検出に適している。一方、被写体のコントラストが変化する方向は、通常一様ではなく偏りがある。従って、被写体毎に、コントラスト差を検出しやすい方向と検出しにくい方向が生じる。 Here, as described above, the horizontal phase difference sequence is suitable for detecting the horizontal contrast difference of the subject such as a vertical line, and the vertical phase difference sequence is the vertical direction of the subject such as a horizontal line. Suitable for detecting contrast differences. On the other hand, the direction in which the contrast of the subject changes is usually not uniform and biased. Therefore, a direction in which a contrast difference is easily detected and a direction in which it is difficult to detect are generated for each subject.
 The phase difference sequences of the B wavelength band are suited to detecting the contrast of the subject in the B wavelength band, those of the G wavelength band to detecting the contrast in the G wavelength band, and those of the R wavelength band to detecting the contrast in the R wavelength band. Meanwhile, because the reflectance of the subject differs from one wavelength band to another, the contrast of the subject also differs by wavelength band. Consequently, for each subject there are wavelength bands in which a contrast difference is easy to detect and wavelength bands in which it is difficult to detect.
 さらに、同じ種類(同じ波長帯及び方向)の位相差列が、行毎又は列毎に複数存在する。一方、被写体のコントラストは、被写体の場所により異なる。従って、被写体毎に、コントラスト差を検出しやすい場所と検出しにくい場所が生じる。 Furthermore, a plurality of phase difference columns of the same type (same wavelength band and direction) exist for each row or each column. On the other hand, the contrast of the subject varies depending on the location of the subject. Therefore, there are places where it is easy to detect the contrast difference and places where it is difficult to detect for each subject.
 一方、被写体のコントラスト差が大きい方が、位相差列を構成する位相差画素値の値が大きくなり、位相ずれ量の検出精度が向上する。その結果、レンズ111の合焦位置の検出精度が向上し、カメラ101aの合焦精度が向上する。 On the other hand, when the contrast difference of the subject is larger, the value of the phase difference pixel value constituting the phase difference sequence becomes larger, and the detection accuracy of the phase shift amount is improved. As a result, the detection accuracy of the focus position of the lens 111 is improved, and the focus accuracy of the camera 101a is improved.
 Therefore, from among the plurality of phase difference sequences extracted from the phase difference image whose S/N ratio has been improved by the integration processing, the phase difference detection unit 133 selects phase difference sequences of a direction, wavelength band, and position from which a reliable phase difference (that is, phase shift amount) can be detected. More specifically, the phase difference detection unit 133 selects one or more combinations from which a reliable phase difference can be detected, from among the combinations of the phase difference sequences QL and QR of the same wavelength band and the same row and the combinations of the phase difference sequences QU and QD of the same wavelength band and the same column. In this way, a direction, wavelength band, and position suited to phase difference detection are selected in the phase difference image.
 For example, the phase difference detection unit 133 selects the phase difference sequences corresponding to a position, direction, and wavelength band containing a steep edge component of the subject. Alternatively, it selects the phase difference sequences corresponding to a position, direction, and wavelength band with a high S/N ratio. Alternatively, from among the phase difference sequences of the wavelength band that is ultimately to be brought into focus, it selects the phase difference sequences corresponding to a direction and position from which a reliable phase difference can be detected.
 或いは、例えば、位相差検出部133は、焦点を合わせたい位置の位相差列のうち、信頼性の高い位相差を検出することが可能な方向及び波長帯に対応する位相差列を選択する。ここで、焦点を合わせたい位置とは、例えば、ユーザが指定した位置や、人物の顔が写っている位置等である。 Alternatively, for example, the phase difference detection unit 133 selects a phase difference sequence corresponding to a direction and a wavelength band in which a highly reliable phase difference can be detected from among the phase difference sequences at the position to be focused. Here, the position to be focused is, for example, a position designated by the user, a position where a person's face is shown, or the like.
 なお、位相差検出部133は、特に各波長帯の位相差列に優劣や優先順位の設定がない場合、例えば、G波長帯の位相差列を優先して選択するようにしてもよい。これは、人間の目が、G波長帯の光に対して最も感度が高いからである。 Note that the phase difference detection unit 133 may select the phase difference sequence in the G wavelength band with priority, for example, when there is no superiority or inferiority or priority setting in the phase difference sequence in each wavelength band. This is because the human eye is most sensitive to light in the G wavelength band.
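 One possible realization of the selection in step S13 is sketched below, under the assumption that candidate pairs are scored by the edge energy of their sequences and that the G wavelength band receives a small priority bonus; the scoring function, the bonus factor, and all names are illustrative choices, not prescribed by the embodiment.

```python
import numpy as np

def edge_energy(seq: np.ndarray) -> float:
    # Sum of squared first differences: large when the subject shows a steep
    # contrast change along the sequence (assumed reliability measure).
    return float(np.sum(np.diff(np.asarray(seq, dtype=float)) ** 2))

def select_pairs(candidates, top_k=1, prefer_band="G", bonus=1.1):
    """candidates: list of (band, seq_a, seq_b) tuples, where seq_a / seq_b are
    opposing sequences (QL/QR or QU/QD) of the same band and the same line.
    Returns the top_k pairs judged most likely to yield a reliable phase difference."""
    def score(candidate):
        band, a, b = candidate
        s = min(edge_energy(a), edge_energy(b))
        return s * (bonus if band == prefer_band else 1.0)
    return sorted(candidates, key=score, reverse=True)[:top_k]
```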
 ステップS14において、位相差検出部133は、位相ずれ量を検出する。具体的には、位相差検出部133は、ステップS13の処理で選択した同じ組に属する位相差列の相関演算を行うことにより、位相ずれ量を検出する。 In step S14, the phase difference detection unit 133 detects a phase shift amount. Specifically, the phase difference detection unit 133 detects the phase shift amount by performing a correlation calculation of the phase difference sequences belonging to the same group selected in the process of step S13.
 For example, the phase difference detection unit 133 shifts one of the two phase difference sequences belonging to the same pair little by little in the pupil division direction. The phase difference detection unit 133 then detects, for example, the position at which the waveforms of the two phase difference sequences overlap to the greatest degree, or the position at which the peak patterns at the edge portions of the two waveforms coincide most closely. The phase difference detection unit 133 detects the distance between the position before the phase difference sequence was shifted and the detected position as the phase shift amount.
 なお、位相ずれ量の検出方法は、上述した方法に限定されるものではなく、任意の方法を採用することが可能である。 Note that the method of detecting the amount of phase shift is not limited to the method described above, and any method can be employed.
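 A minimal sketch of such a correlation search follows, assuming a sum-of-absolute-differences criterion over integer shifts; the embodiment only requires finding the shift at which the two waveforms best overlap, so the SAD measure, the search range, and the function name are assumptions.

```python
import numpy as np

def detect_phase_shift(seq_a, seq_b, max_shift: int = 16) -> int:
    """Shift seq_b against seq_a along the pupil division direction and return
    the integer shift (in sequence samples) at which the waveforms overlap best."""
    a = np.asarray(seq_a, dtype=float)
    b = np.asarray(seq_b, dtype=float)
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            pa, pb = a[s:], b[:len(b) - s]
        else:
            pa, pb = a[:len(a) + s], b[-s:]
        n = min(len(pa), len(pb))
        if n == 0:
            continue  # no overlap at this shift
        cost = float(np.mean(np.abs(pa[:n] - pb[:n])))  # sum-of-absolute-differences measure
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift
```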
 また、ステップS13の処理で2組以上の位相差列が選択された場合、各組について位相ずれ量が検出される。 Further, when two or more sets of phase difference sequences are selected in the process of step S13, the amount of phase shift is detected for each set.
 位相差検出部133は、位相ずれ量の検出結果を合焦位置検出部134に供給する。 The phase difference detection unit 133 supplies the detection result of the phase shift amount to the focus position detection unit 134.
 In step S15, the focus position detection unit 134 converts the phase shift amount into a defocus amount. For example, the phase shift amount between the waveform of the phase difference sequence QL and the waveform of the phase difference sequence QR corresponds to the length of the base, on the imaging surface of the image sensor 112a, of the isosceles triangle shown shaded in FIGS. 15 and 17. The height of this isosceles triangle corresponds to the amount by which the lens 111 must move in the optical axis direction to achieve focus, that is, to the defocus amount. The defocus amount is therefore proportional to the phase shift amount, and the focus position detection unit 134 obtains the defocus amount by calculating the height of the isosceles triangle from the phase shift amount, which is its base.
 なお、複数の位相差列の組が選択され、複数の位相ずれ量が検出されている場合、例えば、各位相ずれ量に基づいてデフォーカス量が求められる。そして、複数の位相ずれ量により求まるデフォーカス量から、信頼性などを考慮して最終的なデフォーカス量が算出される。例えば、信頼性などに基づく重みを用いて、複数のデフォーカス量を重み付け加算することにより、最終的なデフォーカス量が算出される。 When a plurality of sets of phase difference sequences are selected and a plurality of phase shift amounts are detected, for example, a defocus amount is obtained based on each phase shift amount. Then, the final defocus amount is calculated from the defocus amount obtained from the plurality of phase shift amounts in consideration of reliability and the like. For example, the final defocus amount is calculated by weighting and adding a plurality of defocus amounts using weights based on reliability or the like.
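 The proportional conversion of step S15 and the reliability-weighted combination can be sketched as follows; the conversion coefficient k (set by the geometry of the pupil-divided light paths, i.e. the height-to-base ratio of the shaded isosceles triangle) and the reliability weights are assumed inputs here.

```python
def phase_shift_to_defocus(phase_shift: float, k: float) -> float:
    """Defocus amount is proportional to the phase shift (the triangle's base);
    k is the assumed height-to-base conversion factor of the optical geometry."""
    return k * phase_shift

def combine_defocus(defocus_values, reliabilities):
    """Weighted average of several defocus estimates, weighted by reliability."""
    total_weight = sum(reliabilities)
    return sum(d * w for d, w in zip(defocus_values, reliabilities)) / total_weight
```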
 ステップS16において、合焦位置検出部134は、ステップS15の処理で求めたデフォーカス量に基づいて、合焦位置を求める。 In step S16, the focus position detection unit 134 obtains the focus position based on the defocus amount obtained in the process of step S15.
 Here, as shown in FIGS. 18 to 20, the in-focus position differs for each wavelength band because of axial chromatic aberration. The diagrams on the left side of FIGS. 18 to 20, like those of FIGS. 15 to 17, are schematic views, seen from above, of the light fluxes in the B, G, and R wavelength bands entering the lens 111; each shows the state of the light flux when the light flux of the G wavelength band is in focus. The diagrams on the right side of FIGS. 18 to 20, like those of FIGS. 15 to 17, show the relationship between the left-direction phase difference sequence QL and the right-direction phase difference sequence QR of the light flux of each wavelength band in the state shown in the corresponding left-side diagram.
 図18に示されるように、G波長帯の光束が合焦している場合、G波長帯より短波長で屈折率の高いB波長帯の光束は、後ピン状態になる。一方、図20に示されるように、G波長帯より長波長で屈折率の低いR波長帯の光束は、前ピン状態になる。 As shown in FIG. 18, when the light beam in the G wavelength band is in focus, the light beam in the B wavelength band having a shorter wavelength than the G wavelength band and having a high refractive index is in a back-pin state. On the other hand, as shown in FIG. 20, the light flux in the R wavelength band having a longer wavelength than the G wavelength band and having a low refractive index is in a front pin state.
 Therefore, when the wavelength band used for detecting the phase shift amount (hereinafter referred to as the phase difference detection wavelength band) differs from the wavelength band ultimately used for focusing (hereinafter referred to as the focusing wavelength band), the focus position detection unit 134 corrects the in-focus position obtained from the defocus amount.
 上述したように、軸上色収差は、使用するレンズシステムにより異なる。そこで、例えば、カメラ101aで使用可能な全てのレンズシステムの軸上色収差量表が、倍率色収差量表と同様に色収差データ記憶部137に予め記憶される。 As described above, the axial chromatic aberration varies depending on the lens system used. Therefore, for example, the axial chromatic aberration amount tables of all lens systems that can be used in the camera 101a are stored in advance in the chromatic aberration data storage unit 137 in the same manner as the magnification chromatic aberration amount table.
 FIG. 21 shows an example of the axial chromatic aberration amount table. This table has the same configuration as the magnification chromatic aberration table of FIG. 14: for the wavelength indicated on the vertical axis, it gives the axial chromatic aberration amount of the wavelength indicated on the horizontal axis. For example, a(λk, λi) indicates the axial chromatic aberration amount of wavelength λi with respect to wavelength λk. In the example described here, the wavelengths λk and λi are each one of the R wavelength band, the G wavelength band, and the B wavelength band.
 When the phase difference detection wavelength band differs from the focusing wavelength band, the focus position detection unit 134 reads the axial chromatic aberration amount table of the lens 111 mounted on the camera 101a from the chromatic aberration data storage unit 137 on the basis of the identification information of the lens 111. Next, the focus position detection unit 134 obtains, from the axial chromatic aberration amount table, the axial chromatic aberration amount of the focusing wavelength band with respect to the phase difference detection wavelength band. The focus position detection unit 134 then corrects the in-focus position for the phase difference detection wavelength band to the in-focus position for the focusing wavelength band on the basis of the obtained axial chromatic aberration amount.
 例えば、色温度が低い被写体の場合、R波長帯成分のコントラストが強いため、R波長帯の位相ずれ量の信頼度が高い。一方で、最終的にG波長帯で合焦したい場合が想定される。この場合、例えば、合焦位置検出部134は、R波長帯の位相ずれ量を検出し、検出した位相ずれ量に基づいてR波長帯のデフォーカス量を求める。また、合焦位置検出部134は、R波長帯のデフォーカス量に基づいて、R波長帯の合焦位置を求める。そして、合焦位置検出部134は、R波長帯に対するG波長帯の軸上色収差量に基づいて、R波長帯の合焦位置をG波長帯の合焦位置に補正する。 For example, in the case of a subject with a low color temperature, the contrast of the R wavelength band component is strong, and thus the reliability of the phase shift amount in the R wavelength band is high. On the other hand, the case where it is finally desired to focus on the G wavelength band is assumed. In this case, for example, the focus position detection unit 134 detects the phase shift amount in the R wavelength band, and obtains the defocus amount in the R wavelength band based on the detected phase shift amount. Further, the focus position detection unit 134 obtains the focus position of the R wavelength band based on the defocus amount of the R wavelength band. Then, the focus position detection unit 134 corrects the focus position of the R wavelength band to the focus position of the G wavelength band based on the axial chromatic aberration amount of the G wavelength band with respect to the R wavelength band.
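 A sketch of this correction in step S16 is given below, assuming the axial chromatic aberration table is available as a nested dictionary a[detect_band][focus_band] giving the focus-position offset of the focusing band relative to the detection band; how the offset is signed and scaled is lens-specific and not detailed here, and the numerical values are purely illustrative.

```python
def correct_focus_position(focus_position: float, detect_band: str,
                           focus_band: str, axial_ca_table: dict) -> float:
    """Shift the in-focus position found for detect_band (e.g. 'R') to the
    position appropriate for focus_band (e.g. 'G') using the tabulated
    axial chromatic aberration amount a(detect_band, focus_band)."""
    if detect_band == focus_band:
        return focus_position
    return focus_position + axial_ca_table[detect_band][focus_band]

# Hypothetical usage: phase shift detected in the R band, focusing in the G band.
table = {"R": {"G": -0.02, "B": -0.05}}  # illustrative offsets only
corrected = correct_focus_position(1.23, "R", "G", table)
```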
 合焦位置検出部134は、求めた合焦位置を焦点制御部135に通知する。 The focus position detection unit 134 notifies the focus control unit 135 of the obtained focus position.
 ステップS17において、カメラ101aは、焦点を合わせる。すなわち、焦点制御部135は、レンズ駆動部122を制御して、合焦位置検出部134から通知された合焦位置まで、レンズ111を光軸方向に移動させる。 In step S17, the camera 101a focuses. In other words, the focus control unit 135 controls the lens driving unit 122 to move the lens 111 in the optical axis direction to the focus position notified from the focus position detection unit 134.
 その後、処理はステップS1に戻り、ステップS1以降の処理が実行される。そして、ステップS1以降の処理を繰り返し実行することにより、被写体の変化やレンズ駆動に追従して、適切な位置に焦点が合わせられる。 Thereafter, the process returns to step S1, and the processes after step S1 are executed. Then, by repeatedly executing the processing after step S1, the focus is adjusted to an appropriate position following the change of the subject and lens driving.
 なお、AF制御処理を実行するタイミングや繰り返す間隔は、任意に設定することが可能である。 It should be noted that the timing for executing the AF control process and the repetition interval can be arbitrarily set.
 以上のようにして、時間方向、空間方向、及び、波長方向の積分処理を行うことにより、位相差画像のノイズを低減し、SN比を向上させることができる。また、時間方向、空間方向、及び、波長方向の相関の大小により適切に重みが調整されるので、積分処理により位相差画像にノイズが混入することが防止される。さらに、位相ずれ量の検出に適した波長帯、方向、及び、位置の位相差列が選択される。また、軸上色収差に基づいて合焦位置が補正される。その結果、合焦精度が向上する。また、被写体の照度が低い場合にも、その被写体に正確に焦点を合わせることができる。 As described above, by performing integration processing in the time direction, the spatial direction, and the wavelength direction, noise in the phase difference image can be reduced and the SN ratio can be improved. In addition, since the weight is appropriately adjusted according to the correlation between the time direction, the space direction, and the wavelength direction, it is possible to prevent noise from being mixed into the phase difference image by the integration process. Further, a phase difference sequence of a wavelength band, a direction, and a position suitable for detecting the phase shift amount is selected. Further, the focus position is corrected based on the longitudinal chromatic aberration. As a result, focusing accuracy is improved. Even when the illuminance of the subject is low, the subject can be accurately focused.
 また、位相差画素は、撮影画素に対して受光部の面積が小さいため、感度が低下する。これを避けるために、一般的に、撮影画素で用いる色フィルタの代わりに透過率の高いWフィルタを用いることで、位相差画素の感度を向上させる。これに対して、カメラ101aでは、位相差画素にWフィルタを用いなくても、位相差画像のSN比を向上させることができる。また、Wフィルタを用いた場合に異なる波長帯の光を受光することにより生じる色収差の影響を防止することができる。 Also, the sensitivity of the phase difference pixel decreases because the area of the light receiving portion is smaller than that of the photographic pixel. In order to avoid this, generally, the sensitivity of the phase difference pixel is improved by using a W filter having a high transmittance instead of the color filter used in the photographing pixel. On the other hand, in the camera 101a, the SN ratio of the phase difference image can be improved without using a W filter for the phase difference pixel. In addition, when a W filter is used, it is possible to prevent the influence of chromatic aberration caused by receiving light in different wavelength bands.
 さらに、全ての画素が、撮像用及び位相差検出用の両方に使用可能である。従って、位相差検出専用の画素を用いる場合に生じる欠陥画素による画質の劣化が生じない。また、合焦時と撮影時の親和性が高くなり、制御が容易になる。さらに、位相差検出用に専用のラインセンサ等の部品を設ける必要がなく、コストアップが生じない。また、レンズ駆動等を行わずに合焦位置を検出することができ、合焦速度が速くなる。 Furthermore, all the pixels can be used for both imaging and phase difference detection. Accordingly, there is no deterioration in image quality due to defective pixels that occurs when a pixel dedicated to phase difference detection is used. In addition, the compatibility at the time of focusing and shooting is increased, and control is facilitated. In addition, there is no need to provide a dedicated line sensor or the like for detecting the phase difference, and the cost is not increased. In addition, the focus position can be detected without driving the lens and the focus speed is increased.
 また、位相差画像のSN比が所定の閾値以上になった時点で、後の積分処理が省略されるので、さらに合焦速度を高速化することができる。 Further, when the SN ratio of the phase difference image becomes equal to or greater than a predetermined threshold value, the subsequent integration process is omitted, so that the focusing speed can be further increased.
<2. Modifications of the First Embodiment>
{Modifications concerning the AF control processing}
 In the above description, the integration processing is performed in the order of the time direction, the spatial direction, and the wavelength direction, but the integration processing may be performed in a different order. Alternatively, the order of the integration processing may be changed as appropriate according to the subject, the surrounding environment, and the like.
 さらに、以上の説明では、位相差画像のSN比が所定の閾値以上になった時点で積分処理を終了する例を示したが、例えば、必ず全ての積分処理を行うようにしてもよい。 Furthermore, in the above description, the example in which the integration process is terminated when the S / N ratio of the phase difference image is equal to or greater than a predetermined threshold has been described. However, for example, all the integration processes may be performed without fail.
 また、例えば、時間方向、空間方向、及び、波長方向のいずれか1つ又は2つの積分処理のみを実装するようにしてもよい。 Further, for example, only one or two integration processes in the time direction, the space direction, and the wavelength direction may be implemented.
 さらに、各積分処理における重みの調整方法は、その一例であり、他の方法により重みを調整するようにしてもよい。 Furthermore, the weight adjustment method in each integration process is an example, and the weight may be adjusted by another method.
 また、例えば、各積分処理において、重みを固定した固定係数のフィルタを用いて積分処理を行うようにしてもよい。 Also, for example, in each integration process, the integration process may be performed using a fixed coefficient filter with a fixed weight.
 さらに、例えば、被写体の動き量は、画素の位置と受光波長帯の組み合わせで異なる。従って、時間方向の積分処理において、時間方向だけでなく、空間方向及び波長方向との畳み込み積分を行うようにしてもよい。 Furthermore, for example, the amount of movement of the subject differs depending on the combination of the pixel position and the light receiving wavelength band. Therefore, in the integration process in the time direction, convolution integration not only in the time direction but also in the spatial direction and the wavelength direction may be performed.
 また、例えば、被写体の水平方向及び垂直方向の類似度は、撮影時刻と受光波長帯の組み合わせにより異なる。従って、空間方向の積分処理において、空間方向だけでなく、時間方向及び波長方向との畳み込み積分を行うようにしてもよい。 Also, for example, the similarity of the subject in the horizontal and vertical directions varies depending on the combination of the shooting time and the light receiving wavelength band. Therefore, in the integration process in the spatial direction, not only the spatial direction but also the convolution integration with the time direction and the wavelength direction may be performed.
 Further, the magnification chromatic aberration changes with image height from the optical axis center toward the periphery. Therefore, in the integration processing in the wavelength direction, the magnification chromatic aberration amount c(λk, λi) in the above-described equations (13) and (14) may be corrected according to the distance from the optical axis center.
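 As a rough sketch of such an image-height-dependent correction, one could scale the tabulated amount by a lens-specific growth function of the distance from the optical axis center; the linear model and gain parameter below are purely illustrative assumptions.

```python
def corrected_magnification_ca(c_base: float, radius: float,
                               max_radius: float, gain: float = 1.0) -> float:
    """Scale the tabulated magnification chromatic aberration c(lambda_k, lambda_i)
    according to image height (distance from the optical axis center).
    A linear growth model is assumed for illustration only."""
    return c_base * (1.0 + gain * radius / max_radius)
```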
 また、各積分処理の重みの算出に用いる画像の種類は、任意に選択することができる。例えば、撮影画像と位相差画像の両方を用いても良いし、撮影画像のみを用いてもよいし、位相差画像のみを用いてもよい。 Also, the type of image used for calculating the weight of each integration process can be arbitrarily selected. For example, both the captured image and the phase difference image may be used, only the captured image may be used, or only the phase difference image may be used.
 Furthermore, in the above description, demosaic processing is performed not only on the captured image but also on the phase difference image; however, the demosaic processing of the phase difference image is not strictly necessary. When the demosaic processing of the phase difference image is not performed, it becomes necessary, during the integration processing in the wavelength direction, to calculate phase difference pixel values of wavelength bands other than the wavelength band of the phase difference pixel to be integrated. In this case, for example, the phase difference pixel values of the other wavelength bands may be obtained by simple demosaic processing using the phase difference pixel values of phase difference pixels of those other wavelength bands in the vicinity of the pixel in question.
 また、例えば、デモザイク処理で求めた位相差画素値を位相差列に用いるようにしてもよい。 Further, for example, the phase difference pixel value obtained by demosaic processing may be used for the phase difference sequence.
 Furthermore, in the above, phase difference sequences are selected by a wavelength band, direction, and position suited to detecting the phase shift amount, but it is also possible to select on the basis of only one or two of the wavelength band, the direction, and the position.
 また、例えば、全ての位相差列について位相ずれ量を検出した後、合焦位置の検出に用いる位相ずれ量を選択するようにしてもよい。 Further, for example, after detecting the phase shift amount for all the phase difference sequences, the phase shift amount used for detecting the in-focus position may be selected.
 Furthermore, in the above, the determination of whether to execute each integration processing and the selection of the images used to obtain the integration weights are made on the basis of the S/N ratio of the phase difference image; however, a parameter representing image quality other than the S/N ratio (for example, an edge amount or a contrast amount) may be used, and a plurality of parameters may be used in combination.
{Modifications concerning the image sensor}
 In the above description, each pixel of the image sensor 112a is pupil-divided into two in either the horizontal direction or the vertical direction, but the number of divisions is not limited to two.
 図22は、4つに瞳分割した画素201bの構成例を示している。図22の上の図は、画素201bを横方向から見た分解図を模式的に示し、下の図は、画素201bを上方向から見た平面図を模式的に示している。なお、図中、図3と同じ部分には同じ符号を付している。 FIG. 22 shows a configuration example of the pixel 201b divided into four pupils. The upper diagram in FIG. 22 schematically illustrates an exploded view of the pixel 201b as viewed from the lateral direction, and the lower diagram schematically illustrates a plan view of the pixel 201b as viewed from the upper direction. In addition, in the figure, the same code | symbol is attached | subjected to the same part as FIG.
 画素201bは、図3の画素201aと比較して、遮光部213a、並びに、光電変換部214L及び214Rの代わりに、遮光部213b、並びに、光電変換部214LU乃至214RDが設けられている点が異なる。 The pixel 201b is different from the pixel 201a in FIG. 3 in that a light shielding unit 213b and photoelectric conversion units 214LU to 214RD are provided instead of the light shielding unit 213a and the photoelectric conversion units 214L and 214R. .
 なお、以下、光電変換部214LU乃至214RDを個々に区別する必要がない場合、単に光電変換部214と称する。 Note that, hereinafter, when it is not necessary to individually distinguish the photoelectric conversion units 214LU to 214RD, they are simply referred to as the photoelectric conversion unit 214.
 オンチップマイクロレンズ211に入射した光は、オンチップマイクロレンズ211の光軸中心である画素201bの受光面の中央方向へ集光する。そして、波長選択フィルタ212により、入射光の所定の波長帯の成分が透過され、遮光部213bで遮光されていない光電変換部214LU乃至214RDの受光領域に入射する。遮光部213bは、隣接画素との混色防止及び画素201bの瞳分割の効果を奏する。 The light incident on the on-chip microlens 211 is collected toward the center of the light receiving surface of the pixel 201b, which is the optical axis center of the on-chip microlens 211. Then, a component of a predetermined wavelength band of incident light is transmitted by the wavelength selection filter 212 and is incident on the light receiving regions of the photoelectric conversion units 214LU to 214RD that are not shielded by the light shielding unit 213b. The light shielding unit 213b has the effect of preventing color mixing with adjacent pixels and pupil division of the pixel 201b.
 光電変換部214LU乃至214RDは、例えば、それぞれフォトダイオード等の光電変換素子からなる。光電変換部214LU乃至214RDは、画素201bの受光面を垂直方向(列方向)及び水平方向(行方向)に4分割するように配置されている。すなわち、光電変換部214LUは、画素201bの受光面の左上方向に偏った位置に配置され、光電変換部214LUの受光領域は、オンチップマイクロレンズ211に対して左上方向に偏心している。光電変換部214LDは、画素201bの受光面の左下方向に偏った位置に配置され、光電変換部214LDの受光領域は、オンチップマイクロレンズ211に対して左下方向に偏心している。光電変換部214RUは、画素201bの受光面の右上方向に偏った位置に配置され、光電変換部214RUの受光領域は、オンチップマイクロレンズ211に対して右上方向に偏心している。光電変換部214RDは、画素201bの受光面の右下方向に偏った位置に配置され、光電変換部214RDの受光領域は、オンチップマイクロレンズ211に対して右下方向に偏心している。 The photoelectric conversion units 214LU to 214RD are each composed of a photoelectric conversion element such as a photodiode, for example. The photoelectric conversion units 214LU to 214RD are arranged so that the light receiving surface of the pixel 201b is divided into four parts in the vertical direction (column direction) and the horizontal direction (row direction). That is, the photoelectric conversion unit 214LU is disposed at a position offset in the upper left direction of the light receiving surface of the pixel 201b, and the light receiving region of the photoelectric conversion unit 214LU is eccentric in the upper left direction with respect to the on-chip microlens 211. The photoelectric conversion unit 214LD is disposed at a position deviated in the lower left direction of the light receiving surface of the pixel 201b, and the light receiving region of the photoelectric conversion unit 214LD is decentered in the lower left direction with respect to the on-chip microlens 211. The photoelectric conversion unit 214RU is disposed at a position offset in the upper right direction of the light receiving surface of the pixel 201b, and the light receiving region of the photoelectric conversion unit 214RU is eccentric in the upper right direction with respect to the on-chip microlens 211. The photoelectric conversion unit 214RD is arranged at a position offset in the lower right direction of the light receiving surface of the pixel 201b, and the light receiving region of the photoelectric conversion unit 214RD is eccentric in the lower right direction with respect to the on-chip microlens 211.
 Accordingly, the photoelectric conversion unit 214LU receives light incident on approximately the upper left quarter of the light receiving surface of the pixel 201b and outputs a pixel signal corresponding to the amount of light received; the photoelectric conversion unit 214LD does the same for approximately the lower left quarter, the photoelectric conversion unit 214RU for approximately the upper right quarter, and the photoelectric conversion unit 214RD for approximately the lower right quarter. In this way, the incident light is pupil-divided in the horizontal direction and the vertical direction in the pixel 201b.
 画素201bは、光電変換部214LU乃至214RDの画素信号を個別に出力したり、2以上の画素信号を任意の組み合わせで加算して出力することができる。例えば、光電変換部214LU乃至214RDから個別に出力された画素信号は、位相差検出用の信号として用いられる。また、重心位置がオンチップマイクロレンズ211の中心から偏心するように複数の光電変換部214を選択して、選択した光電変換部214の画素信号を加算した信号は、位相差検出用の信号として用いられる。さらに、4つの画素信号を全て加算した信号は、撮影用の信号として用いられる。 The pixel 201b can individually output the pixel signals of the photoelectric conversion units 214LU to 214RD, or can add and output two or more pixel signals in any combination. For example, pixel signals individually output from the photoelectric conversion units 214LU to 214RD are used as phase difference detection signals. A signal obtained by selecting a plurality of photoelectric conversion units 214 so that the position of the center of gravity is decentered from the center of the on-chip microlens 211 and adding the pixel signals of the selected photoelectric conversion units 214 is used as a phase difference detection signal. Used. Further, a signal obtained by adding all four pixel signals is used as a signal for photographing.
 これにより、例えば、1種類の画素201bにより、左方向、右方向、上方向、及び、下方向の位相差画素値を検出することが可能になる。 Thereby, for example, the phase difference pixel values in the left direction, the right direction, the upper direction, and the lower direction can be detected by one type of pixel 201b.
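 The signal combinations described above could be expressed as follows, assuming the four photoelectric conversion outputs of a pixel 201b are available as scalar values; the variable and key names are illustrative.

```python
def pixel_201b_outputs(lu: float, ld: float, ru: float, rd: float) -> dict:
    """Combine the four photodiode signals of a pixel 201b.

    Sums whose centroid is offset from the microlens center serve as phase
    difference signals; the sum of all four serves as the imaging signal.
    """
    return {
        "left":  lu + ld,            # left-direction phase difference signal
        "right": ru + rd,            # right-direction phase difference signal
        "up":    lu + ru,            # upward phase difference signal
        "down":  ld + rd,            # downward phase difference signal
        "image": lu + ld + ru + rd,  # imaging signal
    }
```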
 In the above description, dual-purpose pixels that serve both as imaging pixels and as phase difference detection pixels are used; however, the present technology can also be applied to the case where imaging-dedicated pixels and phase-difference-detection-dedicated pixels are provided separately.
 図23は、撮像専用画素である画素201cの構成例を示している。図23の上の図は、画素201cを横方向から見た分解図を模式的に示し、下の図は、画素201cを上方向から見た平面図を模式的に示している。なお、図中、図3と同じ部分には同じ符号を付している。 FIG. 23 shows a configuration example of a pixel 201c that is a dedicated pixel for imaging. The upper diagram in FIG. 23 schematically illustrates an exploded view of the pixel 201c as viewed from the lateral direction, and the lower diagram schematically illustrates a plan view of the pixel 201c as viewed from the upper direction. In addition, in the figure, the same code | symbol is attached | subjected to the same part as FIG.
 画素201cは、図3の画素201aと比較して、遮光部213a、並びに、光電変換部214L及び214Rの代わりに、遮光部213c、並びに、光電変換部214Mが設けらている点が異なる。 The pixel 201c is different from the pixel 201a in FIG. 3 in that a light shielding unit 213c and a photoelectric conversion unit 214M are provided instead of the light shielding unit 213a and the photoelectric conversion units 214L and 214R.
 オンチップマイクロレンズ211に入射した光は、オンチップマイクロレンズ211の光軸中心である画素の中央方向へ集光する。そして、波長選択フィルタ212により、入射光の所定の波長帯の成分が透過され、遮光部213cで遮光されていない光電変換部214Mの受光領域に入射する。遮光部213cは、隣接画素との混色防止の効果を奏する。 The light incident on the on-chip microlens 211 is collected toward the center of the pixel, which is the optical axis center of the on-chip microlens 211. Then, a component of a predetermined wavelength band of incident light is transmitted by the wavelength selection filter 212 and is incident on the light receiving region of the photoelectric conversion unit 214M that is not shielded by the light shielding unit 213c. The light shielding part 213c has an effect of preventing color mixing with adjacent pixels.
 The photoelectric conversion unit 214M is formed of, for example, a photoelectric conversion element such as a photodiode. The photoelectric conversion unit 214M is arranged at approximately the center of the light receiving surface of the pixel 201c, and its light receiving region is not decentered with respect to the on-chip microlens 211. The photoelectric conversion unit 214M receives light incident on substantially the entire light receiving surface of the pixel 201c and outputs a pixel signal corresponding to the amount of light received.
 なお、以下、波長選択フィルタ212がRフィルタである画素201cを画素201Rcと称する。以下、波長選択フィルタ212がGフィルタである画素201cを画素201Gcと称する。以下、波長選択フィルタ212がBフィルタである画素201cを画素201Bcと称する。 Hereinafter, the pixel 201c in which the wavelength selection filter 212 is an R filter is referred to as a pixel 201Rc. Hereinafter, the pixel 201c in which the wavelength selection filter 212 is a G filter is referred to as a pixel 201Gc. Hereinafter, the pixel 201c in which the wavelength selection filter 212 is a B filter is referred to as a pixel 201Bc.
FIG. 24 shows a configuration example of the pixel 201dL, which is a phase difference detection dedicated pixel. The upper part of FIG. 24 schematically shows an exploded view of the pixel 201dL seen from the side, and the lower part schematically shows a plan view of the pixel 201dL seen from above. In the figure, the same reference numerals are given to the same parts as in FIG. 3.

The pixel 201dL differs from the pixel 201a in FIG. 3 in that a light shielding portion 213dL is provided instead of the light shielding portion 213a, and the photoelectric conversion unit 214R is not provided.

Light incident on the on-chip microlens 211 is condensed toward the center of the light receiving surface of the pixel 201dL, which is the optical axis center of the on-chip microlens 211. The wavelength selection filter 212 is, for example, a W filter; most wavelength components of the incident light pass through it and enter the light receiving region of the photoelectric conversion unit 214dL not shielded by the light shielding portion 213dL. The light shielding portion 213dL prevents color mixing with adjacent pixels and performs pupil division of the pixel 201dL.

The photoelectric conversion unit 214dL receives light incident on approximately the left half of the light receiving surface of the pixel 201dL and outputs a pixel signal corresponding to the amount of received light. As a result, the incident light on the pixel 201dL is pupil-divided to approximately the left half.
FIG. 25 shows a configuration example of the pixel 201dR, which is a phase difference detection dedicated pixel. The upper part of FIG. 25 schematically shows an exploded view of the pixel 201dR seen from the side, and the lower part schematically shows a plan view of the pixel 201dR seen from above. In the figure, the same reference numerals are given to the same parts as in FIG. 3.

The pixel 201dR differs from the pixel 201a in FIG. 3 in that a light shielding portion 213dR is provided instead of the light shielding portion 213a, and the photoelectric conversion unit 214L is not provided.

Light incident on the on-chip microlens 211 is condensed toward the center of the light receiving surface of the pixel 201dR, which is the optical axis center of the on-chip microlens 211. The wavelength selection filter 212 is, for example, a W filter; most wavelength components of the incident light pass through it and enter the light receiving region of the photoelectric conversion unit 214dR not shielded by the light shielding portion 213dR. The light shielding portion 213dR prevents color mixing with adjacent pixels and performs pupil division of the pixel 201dR.

The photoelectric conversion unit 214dR receives light incident on approximately the right half of the light receiving surface of the pixel 201dR and outputs a pixel signal corresponding to the amount of received light. As a result, the incident light on the pixel 201dR is pupil-divided to approximately the right half.
FIGS. 24 and 25 show examples of pixels that are pupil-divided in the horizontal direction, but there are also pixels that are pupil-divided in the vertical direction. Hereinafter, a pixel including a photoelectric conversion unit 214U arranged at a position biased toward the top of the light receiving surface is referred to as a pixel 201dU, and a pixel including a photoelectric conversion unit 214D arranged at a position biased toward the bottom of the light receiving surface is referred to as a pixel 201dD. Further, when the pixels 201dL to 201dD need not be individually distinguished, they are simply referred to as pixels 201d.

FIGS. 26 to 29 show examples of the pixel arrangement of an image sensor 112b using the pixels 201c and the pixels 201dL to 201dD. FIG. 26 shows an example of the pixel arrangement of the entire image sensor 112b, FIGS. 27 and 28 show examples of the pixel arrangement in units of 2 × 2 pixels, and FIG. 29 shows an example of the pixel arrangement in units of 8 × 8 pixels. This pixel arrangement is based on the disclosure of JP 2010-152161 A, and other pixel arrangements may be adopted.
In pattern 1 shown in FIG. 27, the imaging-only pixels 201Rc to 201Bc are arranged according to the Bayer array.

In patterns 2A to 2D shown in FIG. 28, on the other hand, the phase difference detection dedicated pixels 201dL to 201dD are arranged in place of the pixels 201Rc and 201Bc of pattern 1 in FIG. 27.

Specifically, in pattern 2A, a pixel 201dR is arranged in place of the pixel 201Rc and a pixel 201dL in place of the pixel 201Bc. In pattern 2B, a pixel 201dL is arranged in place of the pixel 201Rc and a pixel 201dR in place of the pixel 201Bc. That is, in patterns 2A and 2B, the pixels 201dL and 201dR, which are pupil-divided in the horizontal direction, are arranged diagonally adjacent to each other in place of the R pixel and the B pixel.

In pattern 2C, a pixel 201dD is arranged in place of the pixel 201Rc and a pixel 201dU in place of the pixel 201Bc. In pattern 2D, a pixel 201dU is arranged in place of the pixel 201Rc and a pixel 201dD in place of the pixel 201Bc. That is, in patterns 2C and 2D, the pixels 201dU and 201dD, which are pupil-divided in the vertical direction, are arranged diagonally adjacent to each other in place of the R pixel and the B pixel.

Then, as indicated by the thick frames in FIGS. 26 and 29, one of patterns 2A and 2B and one of patterns 2C and 2D are arranged in each 8 × 8 pixel region.
As a result, the pixels 201dL are arranged at coordinates (8n, 16m+14) and (8n+1, 16m+7), the pixels 201dR at coordinates (8n, 16m+6) and (8n+1, 16m+15), the pixels 201dU at coordinates (16n+5, 8m+3) and (16n+12, 8m+2), and the pixels 201dD at coordinates (16n+4, 8m+2) and (16n+13, 8m+3), where n = 0, 1, 2, ... and m = 0, 1, 2, ....
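The coordinate rules above can be checked with a short script. The following is a minimal sketch and not part of the specification: it simply enumerates the positions of the pixels 201dL to 201dD on a hypothetical sensor of a given width and height; the function name and the dictionary layout are illustrative assumptions.

```python
# Minimal sketch (illustrative only): enumerate the coordinates of the
# phase difference detection pixels 201dL-201dD following the rules above,
# with n, m = 0, 1, 2, ... and (x, y) limited to the sensor size.
def phase_pixel_positions(width, height):
    positions = {"dL": [], "dR": [], "dU": [], "dD": []}
    for n in range(width // 8 + 1):
        for m in range(height // 16 + 1):
            candidates = {
                "dL": [(8 * n, 16 * m + 14), (8 * n + 1, 16 * m + 7)],
                "dR": [(8 * n, 16 * m + 6), (8 * n + 1, 16 * m + 15)],
            }
            for key, coords in candidates.items():
                positions[key] += [(x, y) for x, y in coords if x < width and y < height]
    for n in range(width // 16 + 1):
        for m in range(height // 8 + 1):
            candidates = {
                "dU": [(16 * n + 5, 8 * m + 3), (16 * n + 12, 8 * m + 2)],
                "dD": [(16 * n + 4, 8 * m + 2), (16 * n + 13, 8 * m + 3)],
            }
            for key, coords in candidates.items():
                positions[key] += [(x, y) for x, y in coords if x < width and y < height]
    return positions
```

For example, phase_pixel_positions(16, 16) returns (0, 14), (1, 7), (8, 14), and (9, 7) for the pixels 201dL.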
Integration processing is then performed on the phase difference pixel values of the pixels 201dL to 201dD by the same method as described above. However, since the wavelength selection filters 212 of the pixels 201dL to 201dD are all W filters, integration in the wavelength direction is not performed; only integration in the time direction and the spatial direction is executed.
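As a rough illustration of this time- and space-only integration, the following sketch averages the phase difference pixel value of a pixel 201d over recent frames and over neighbouring same-direction pixels. The uniform weights are an assumption made purely for illustration; the actual processing uses the adaptive weighting described earlier.

```python
import numpy as np

def integrate_time_space(history, neighbours):
    """history:    phase difference pixel values of the same pixel over recent
                   frames (newest last).
       neighbours: values of nearby pixels 201d with the same pupil-division
                   direction in the newest frame.
       Returns the integrated phase difference pixel value."""
    samples = np.asarray(list(history) + list(neighbours), dtype=float)
    return samples.mean()   # uniform weights here; adaptive weights in practice
```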
As a result, even when the phase difference detection dedicated pixels 201dL to 201dD are used, the SN ratio of the phase difference sequences composed of the phase difference pixel values of the pixels 201dL to 201dD can be appropriately increased, and the focusing accuracy is improved.
<3. Second Embodiment>

Next, a second embodiment of the present technology will be described with reference to FIGS. 30 to 35. The first embodiment of the present technology used an image plane phase difference image sensor having sensitivity only in the visible light wavelength band. In the second embodiment, an image plane phase difference image sensor having sensitivity extending to the invisible light wavelength band is used.
{Configuration Example of Image Sensor 112c}

FIGS. 30 to 32 show a configuration example of the image sensor 112c used in the camera 101b (not shown) in the second embodiment. The camera 101b is obtained by replacing the image sensor 112a of the camera 101a with the image sensor 112c.
The image sensor 112c differs from the image sensor 112a shown in FIGS. 4 to 7 in that IR pixels, each provided with an IR filter that transmits near-infrared light of about 700 to 1100 nm, are arranged in place of the Gr pixels. The IR filter used for the IR pixels is realized, for example, by stacking the B+IRb filter having the spectral characteristics of the waveform 251 in FIG. 5 described above and the R+IRr filter having the spectral characteristics of the waveform 253.

As shown in FIG. 32, the IR pixels are arranged in the odd rows and odd columns. Furthermore, IR pixels pupil-divided in the horizontal direction and IR pixels pupil-divided in the vertical direction are arranged alternately in the row direction and the column direction.
{AF Control Processing}

Next, the AF control processing of the camera 101b using the image sensor 112c will be described. When the image sensor 112c is used, the AF control processing is executed according to the flowchart of FIG. 8, as in the case of the image sensor 112a.

Specifically, in steps S1 to S12, processing similar to that described above is executed. However, in steps S8, S10, and S12, integration processing of the phase difference pixel values of the IR wavelength band is also executed. The integration processing of the IR wavelength band phase difference pixel values is performed by the same method as for the phase difference pixel values of the other wavelength bands.
In step S13, the phase difference detection unit 133 selects the phase difference sequences to be used for detecting the phase shift amount.
Here, for example, let QL(4m+1, λir) denote the leftward phase difference sequence of the IR wavelength band λir in row 4m+1 of the image sensor 112c; then QL(4m+1, λir) = (qL(1, 4m+1), qL(5, 4m+1), qL(9, 4m+1), ...). Similarly, the rightward phase difference sequence of the IR wavelength band λir in row 4m+1 is QR(4m+1, λir) = (qR(1, 4m+1), qR(5, 4m+1), qR(9, 4m+1), ...).

For row 4m+3 of the image sensor 112c, the leftward phase difference sequence of the IR wavelength band λir is QL(4m+3, λir) = (qL(3, 4m+3), qL(7, 4m+3), qL(11, 4m+3), ...), and the rightward phase difference sequence is QR(4m+3, λir) = (qR(3, 4m+3), qR(7, 4m+3), qR(11, 4m+3), ...).

For column 4n+1 of the image sensor 112c, the upward phase difference sequence of the IR wavelength band λir is QU(4n+1, λir) = (qU(4n+1, 3), qU(4n+1, 7), qU(4n+1, 11), ...), and the downward phase difference sequence is QD(4n+1, λir) = (qD(4n+1, 3), qD(4n+1, 7), qD(4n+1, 11), ...).

For column 4n+3 of the image sensor 112c, the upward phase difference sequence of the IR wavelength band λir is QU(4n+3, λir) = (qU(4n+3, 1), qU(4n+3, 5), qU(4n+3, 9), ...), and the downward phase difference sequence is QD(4n+3, λir) = (qD(4n+3, 1), qD(4n+3, 5), qD(4n+3, 9), ...).
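Assuming the integrated IR phase difference pixel values are held in two-dimensional maps qL[y, x] and qR[y, x] (an illustrative data layout, not taken from the specification), the row-wise IR phase difference sequences above can be extracted as in the following sketch.

```python
import numpy as np

def ir_row_sequences(qL, qR, m):
    """Left/right IR phase difference sequences QL and QR on row 4m+1."""
    y = 4 * m + 1
    xs = np.arange(1, qL.shape[1], 4)          # columns 1, 5, 9, ...
    return qL[y, xs], qR[y, xs]

def ir_row_sequences_shifted(qL, qR, m):
    """Left/right IR phase difference sequences QL and QR on row 4m+3."""
    y = 4 * m + 3
    xs = np.arange(3, qL.shape[1], 4)          # columns 3, 7, 11, ...
    return qL[y, xs], qR[y, xs]
```

The upward and downward sequences QU and QD on columns 4n+1 and 4n+3 can be obtained in the same way by swapping the roles of rows and columns.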
For example, when the illuminance of visible light is sufficient, the phase difference detection unit 133 selects the phase difference sequences to be used for detecting the phase shift amount from among the horizontal or vertical phase difference sequences of the R, G, or B wavelength band, as in the first embodiment described above.

On the other hand, when the illuminance of visible light is insufficient, such as during night shooting, the SN ratio of the phase difference sequences of each of the R, G, and B wavelength bands becomes low, and the detection accuracy of the phase shift amount decreases.
For example, FIGS. 33 to 35 compare the phase shift amounts of the G wavelength band and the IR wavelength band when the illuminance of visible light is insufficient. The right-hand diagrams of FIGS. 33 to 35 schematically show, as viewed from above, a light flux in the G wavelength band and a light flux in the IR wavelength band entering the lens 111; the G wavelength band light flux is indicated by a solid line, and the IR wavelength band light flux by a dotted line. The left-hand diagrams of FIGS. 33 to 35 show examples of the phase shift amounts of the G wavelength band and the IR wavelength band when the light fluxes are in the state shown in the corresponding right-hand diagram.

FIG. 33 shows the case where the IR wavelength band light flux is in a back-focus state, FIG. 34 the case where the IR wavelength band light flux is in a front-focus state, and FIG. 35 the case where the G wavelength band light flux is in focus.

As shown in FIGS. 33 to 35, when the visible light illuminance is low, the phase difference pixel values of the G wavelength band phase difference sequences become extremely small, and the detection accuracy of the phase shift amount decreases. For example, FIGS. 33 and 34 show the maximum value (upper arrow) and minimum value (lower arrow) of the phase shift amount that may be detected; the detected phase shift amount may fall anywhere between these two values.
On the other hand, when the phase difference image AE unit 154 performs the same AE control for all wavelength bands, for example, the phase difference pixel values of the IR wavelength band phase difference sequences become relatively large. In this case, the exposure time and gain are the same for the G wavelength band and the IR wavelength band, so the amount of noise contained is comparable.

Thus, in an environment where the visible light illuminance is low, the phase difference pixel values of the IR wavelength band phase difference sequences may be larger, and their SN ratio higher, than those of the phase difference sequences of the visible light wavelength bands. Therefore, when the SN ratio of the IR wavelength band phase difference sequences is higher than that of the visible light phase difference sequences, the phase difference detection unit 133 selects the phase difference sequences to be used for detecting the phase shift amount from among the horizontal or vertical phase difference sequences of the IR wavelength band.
As shown in FIG. 35, when the G wavelength band is in focus, the contrast of the subject image in the G wavelength band becomes almost maximum, so the phase difference pixel values of the G wavelength band phase difference sequences become larger than in FIGS. 33 and 34. Conversely, by focusing in the G wavelength band, the phase difference pixel values of the IR wavelength band phase difference sequences may become smaller than in FIGS. 33 and 34.

Therefore, for example, when focusing in the G wavelength band using the IR wavelength band phase difference sequences causes the SN ratio of the G wavelength band phase difference sequences to exceed that of the IR wavelength band phase difference sequences, the phase difference detection unit 133 selects the G wavelength band phase difference sequences at the next focusing.

When a near-infrared (monochrome) image is captured, the phase difference detection unit 133 selects the phase difference sequences to be used for detecting the phase shift amount from among the horizontal or vertical phase difference sequences of the IR wavelength band, without using the phase difference sequences of the R, G, and B wavelength bands.
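The band-selection rule described in the last few paragraphs can be summarized by the following sketch. The SN ratio estimates and the decision structure are assumptions made for illustration; the specification only states the conditions under which the IR wavelength band or a visible wavelength band is chosen.

```python
def select_detection_band(snr, monochrome_ir=False):
    """snr: dict mapping 'R', 'G', 'B', 'IR' to the estimated SN ratio of the
    corresponding phase difference sequences (illustrative interface)."""
    if monochrome_ir:
        return "IR"                       # near-infrared (monochrome) shooting
    best_visible = max(("R", "G", "B"), key=lambda band: snr[band])
    if snr["IR"] > snr[best_visible]:
        return "IR"                       # e.g. low visible illuminance at night
    return best_visible                   # sufficient visible illuminance
```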
In step S14, the phase shift amount is detected by the same processing as in the first embodiment.

In step S15, the phase shift amount is converted into a defocus amount by the same processing as in the first embodiment.
In step S16, the in-focus position is obtained by the same processing as in the first embodiment. That is, the in-focus position detection unit 134 obtains the in-focus position based on the defocus amount obtained in the processing of step S15. Furthermore, when the phase difference detection wavelength band used for detecting the phase shift amount differs from the focusing wavelength band finally used for focusing, the in-focus position detection unit 134 corrects the in-focus position using the axial chromatic aberration amount table shown in FIG. 21.

For example, when the IR wavelength band is the phase difference detection wavelength band and the G wavelength band is the focusing wavelength band, the in-focus position is corrected based on the axial chromatic aberration of the G wavelength band with respect to the IR wavelength band. Since the axial chromatic aberration between light in the IR wavelength band and visible light is large, this correction of the in-focus position is particularly effective.

When a near-infrared (monochrome) image is captured, the in-focus position obtained from the phase shift amount of the IR wavelength band is used as it is, without correction.
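A minimal sketch of this correction, assuming the axial chromatic aberration amounts of FIG. 21 are available as a lookup table keyed by the pair of wavelength bands (the table format and function name are assumptions made for illustration):

```python
def corrected_focus_position(focus_position, detect_band, focus_band, axial_shift):
    """Correct the in-focus position when the phase difference detection
    wavelength band differs from the focusing wavelength band."""
    if detect_band == focus_band:
        return focus_position             # e.g. near-infrared (monochrome) shooting
    # axial_shift[(detect_band, focus_band)]: axial chromatic aberration amount
    return focus_position + axial_shift[(detect_band, focus_band)]
```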
In step S17, focusing is performed by the same processing as in the first embodiment.

Thereafter, the processing returns to step S1, and the processing from step S1 onward is executed.

In this way, even when the visible light illuminance is low, such as during night shooting, the in-focus position can be detected accurately and the subject can be brought into focus.
Lens systems in which axial chromatic aberration is corrected up to the IR wavelength band are rare and expensive. In the second embodiment of the present technology, accurate focusing using light in the IR wavelength band can be achieved without using such an expensive lens system.

In addition, there is no need to provide a mechanism for inserting and removing an infrared cut filter between shooting and focusing, a line sensor for detecting infrared light, or the like, so that cost reduction can be realized.
<4. Modification of Second Embodiment>

{Modification of the image sensor}

For example, as shown in FIG. 36, W+IR pixels may be arranged in place of the IR pixels in the pixel arrangement of FIG. 31.
A waveform 301 in FIG. 37 is a graph showing an example of the spectral characteristics of a bandpass filter (hereinafter referred to as a W+IR filter) provided in the W+IR pixels. The horizontal axis of the graph indicates wavelength, and the vertical axis indicates spectral transmittance.

The W+IR filter transmits visible light in a wavelength band including the B, G, and R wavelength bands, and light in the IR wavelength band (only around 800 to 900 nm). Accordingly, the sensitivity of the W+IR pixels using the W+IR filter is higher than that of the other pixels.
When only the IR wavelength band component of the pixel value of a W+IR pixel is used, the pixel value IR of the IR component can be obtained by subtracting the pixel values of the surrounding R, G, and B pixels from the pixel value of the W+IR pixel, as shown in the following equation (15):

IR = (W+IR) - (B + G + R)   ... (15)

where W+IR, B, G, and R denote the pixel values of the W+IR pixel, the B pixel, the G pixel, and the R pixel, respectively.
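As a sketch, equation (15) amounts to the following; how the surrounding R, G, and B pixel values are interpolated to the position of the W+IR pixel is not specified here and is left open.

```python
def ir_component(w_ir, r, g, b):
    """Equation (15): IR = (W+IR) - (B + G + R)."""
    return w_ir - (b + g + r)
```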
<5. Third Embodiment>

Next, a third embodiment of the present technology will be described with reference to FIGS. 38 and 39.

In the second embodiment described above, an infrared cut filter is stacked on the wavelength selection filters of the R, G, and B pixels, so light in the IR wavelength band is blocked, whereas in the IR pixels no infrared cut filter is stacked on the wavelength selection filter and light in the IR wavelength band is received.

Consequently, a step corresponding to the film thickness of the infrared cut filter occurs at the boundary between an IR pixel and the pixels of the visible light wavelength bands (R, G, and B pixels). Obliquely incident light crossing this step causes color mixing of the IR wavelength band component into the visible light wavelength band pixels. The influence of this color mixing becomes greater as the pixel pitch becomes smaller due to the miniaturization of the image sensor accompanying the increase in the number of pixels.

The third embodiment makes it possible to prevent this color mixing from occurring.
{Configuration Example of Image Sensor 112d}

FIG. 38 shows an example of the pixel arrangement of an image sensor 112d (not shown) used in the camera 101c (not shown) in the third embodiment. The camera 101c is obtained by replacing the image sensor 112a of the camera 101a with the image sensor 112d.

Compared with the pixel arrangement of FIG. 31 of the second embodiment, the pixel arrangement of FIG. 38 differs in that B+IR pixels, G+IR pixels, and R+IR pixels are arranged in place of the B pixels, G pixels, and R pixels.
The B+IR pixels are provided with a B+IR filter that transmits the B wavelength band (about 405 to 500 nm) and part of the IR wavelength band (around 800 to 900 nm). The B+IR filter is realized, for example, by stacking the B+IRb filter having the spectral characteristics of the waveform 251 in FIG. 5 described above and the W+IR filter having the spectral characteristics of the waveform 301 in FIG. 37 described above.

The G+IR pixels are provided with a G+IR filter that transmits the G wavelength band (about 475 to 600 nm) and part of the IR wavelength band (around 800 to 900 nm). The G+IR filter is realized, for example, by stacking the G+IRg filter having the spectral characteristics of the waveform 252 in FIG. 5 and the W+IR filter.

The R+IR pixels are provided with an R+IR filter that transmits the R wavelength band (about 580 to 650 nm) and part of the IR wavelength band (around 800 to 900 nm). The R+IR filter is realized, for example, by stacking the R+IRr filter having the spectral characteristics of the waveform 253 in FIG. 5 and the W+IR filter.

The IR pixels are provided with an IR filter that transmits only part of the IR wavelength band (around 800 to 900 nm). The IR filter is realized, for example, by stacking a B+IRb filter, an R+IRr filter, and a W+IR filter.
FIG. 39 is a graph showing the spectral characteristics of the B+IR pixels, G+IR pixels, and R+IR pixels. The horizontal axis of the graph indicates wavelength, and the vertical axis indicates spectral transmittance. A waveform 401 indicates the spectral characteristics of the B+IR pixels, a waveform 402 those of the G+IR pixels, and a waveform 403 those of the R+IR pixels.

Here, the spectral characteristics of the IR wavelength band transmitted by the filters of the B+IR, G+IR, R+IR, and IR pixels are normalized by the W+IR filter and are almost identical (around 800 to 900 nm). Therefore, as shown in the following equations (16) to (18), the pixel values B, G, and R of the visible light region are obtained by subtracting the pixel value IR of a neighboring IR pixel from the pixel value B+IR of a B+IR pixel, the pixel value G+IR of a G+IR pixel, and the pixel value R+IR of an R+IR pixel, respectively:
B = (B+IR) - IR   ... (16)
G = (G+IR) - IR   ... (17)
R = (R+IR) - IR   ... (18)
This makes it possible to obtain a color image with excellent color reproducibility, and prevents the IR wavelength band component from mixing into the pixel values of the visible light wavelength bands and causing color mixing.

When a near-infrared (monochrome) image is captured, the IR component of each pixel can be used.

The W+IR filter can be formed on-chip on the image sensor 112d, for example, or can be installed in part of the optical system of the camera 101c.

In the third embodiment, the light receiving range of the IR wavelength band is narrower than in the second embodiment. As a result, when the in-focus position is detected using the IR wavelength band phase difference sequences, the accuracy of the in-focus position correction using the axial chromatic aberration becomes higher, while the SN ratio of the IR wavelength band phase difference pixel values decreases. However, as described above, the SN ratio of the IR wavelength band phase difference pixel values is improved by performing the integration processing in the time direction, the spatial direction, and the wavelength direction, and as a result the focusing accuracy is improved.
<6. Modification of Third Embodiment>

{Modification of the image sensor}

For example, as shown in FIG. 40, W+IR pixels may be arranged in place of the IR pixels in the pixel arrangement of FIG. 38. The W+IR pixels are provided with, for example, a W+IR filter having the spectral characteristics of the waveform 301 in FIG. 37 described above.
Here, when only the IR wavelength band component of the pixel value of a W+IR pixel is used, the pixel value IR of the IR component is calculated from the pixel value W+IR of the W+IR pixel and the pixel values B+IR, G+IR, and R+IR of the surrounding B+IR, G+IR, and R+IR pixels, as in the following equation (19):

IR = ((B+IR) + (G+IR) + (R+IR) - (W+IR)) / 2   ... (19)

Then, by applying the pixel value IR obtained by equation (19) to equations (16) to (18) described above, the pixel values B, G, and R of the visible light region are calculated.
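A hedged sketch combining equations (16) to (19): the IR component is first estimated from a W+IR pixel and its neighbouring B+IR, G+IR, and R+IR values, and the visible components are then obtained by subtracting it. Interpolation of the neighbouring values to a common pixel position is omitted for brevity.

```python
def separate_components(b_ir, g_ir, r_ir, w_ir):
    ir = (b_ir + g_ir + r_ir - w_ir) / 2.0    # equation (19)
    b = b_ir - ir                             # equation (16)
    g = g_ir - ir                             # equation (17)
    r = r_ir - ir                             # equation (18)
    return r, g, b, ir
```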
<7. Fourth Embodiment>

Next, a fourth embodiment of the present technology will be described with reference to FIG. 41.

The fourth embodiment differs from the first to third embodiments described above in the configuration of the image sensor. Specifically, the fourth embodiment uses a stacked spectral type image sensor.
{Configuration Example of Pixel 501a of Image Sensor 112e}

FIG. 41 shows a configuration example of a pixel 501a of an image sensor 112e (not shown) used in the camera 101d (not shown) in the fourth embodiment. The camera 101d is obtained by replacing the image sensor 112a of the camera 101a with the image sensor 112e.

The upper part of FIG. 41 schematically shows an exploded view of the pixel 501a seen from the side, and the lower part schematically shows a plan view of the pixel 501a seen from above.

In the pixel 501a, an on-chip microlens 511, a light shielding portion 512a, and photoelectric conversion layers 513L and 513R are stacked in this order from the top. In the photoelectric conversion layer 513L, photoelectric conversion units 513BL to 513IRL are stacked so as to substantially overlap one another in order from the top. In the photoelectric conversion layer 513R, photoelectric conversion units 513BR to 513IRR are stacked so as to substantially overlap one another in order from the top.
Light incident on the on-chip microlens 511 is condensed toward the center of the light receiving surface of the pixel 501a, which is the optical axis center of the on-chip microlens 511, and enters the light receiving regions of the photoelectric conversion units 513BL and 513BR not shielded by the light shielding portion 512a. The light shielding portion 512a prevents color mixing with adjacent pixels and performs pupil division of the pixel 501a.

The photoelectric conversion units 513BL to 513IRL and 513BR to 513IRR are each made of, for example, a photoelectric conversion element such as a photodiode.

The photoelectric conversion units 513BL and 513BR are arranged side by side in the horizontal direction (row direction, left-right direction) with a predetermined interval between them. That is, the photoelectric conversion unit 513BL is disposed at a position biased toward the left of the light receiving surface of the pixel 501a, and its light receiving region is decentered to the left with respect to the on-chip microlens 511. The photoelectric conversion unit 513BR is disposed at a position biased toward the right of the light receiving surface of the pixel 501a, and its light receiving region is decentered to the right with respect to the on-chip microlens 511.

Like the photoelectric conversion unit 513BL, the photoelectric conversion units 513GL to 513IRL are disposed at positions biased toward the left of the light receiving surface of the pixel 501a, and their light receiving regions are decentered to the left with respect to the on-chip microlens 511. Like the photoelectric conversion unit 513BR, the photoelectric conversion units 513GR to 513IRR are disposed at positions biased toward the right of the light receiving surface of the pixel 501a, and their light receiving regions are decentered to the right with respect to the on-chip microlens 511.
The photoelectric conversion units 513BL and 513BR are sensitive to light in the B wavelength band; that is, they absorb light in the B wavelength band, transmit light in the other wavelength bands, and each output a pixel signal corresponding to the amount of received B wavelength band light.

Similarly, the photoelectric conversion units 513GL and 513GR are sensitive to light in the G wavelength band, the photoelectric conversion units 513RL and 513RR to light in the R wavelength band, and the photoelectric conversion units 513IRL and 513IRR to light in the IR wavelength band. Each absorbs light in its own wavelength band, transmits light in the other wavelength bands, and outputs a pixel signal corresponding to the amount of received light.
In this way, in the pixel 501a, the incident light is pupil-divided in the horizontal direction (row direction, left-right direction). In addition, the pixel 501a separates the incident light in the stacking direction perpendicular to the light receiving surface, simultaneously receives light in the B, G, R, and IR wavelength bands, and can independently output a pixel signal for each wavelength band.
The pixel 501a can output the pixel signals of the photoelectric conversion units 513BL and 513BR either individually or as a sum, and the same applies to the photoelectric conversion units 513GL and 513GR, 513RL and 513RR, and 513IRL and 513IRR. In each wavelength band, the pixel signals output individually from the left and right photoelectric conversion units are used as signals for phase difference detection, and the signal obtained by adding the two pixel signals is used as a signal for normal imaging.
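The dual use of each left/right pair of stacked photoelectric conversion units can be sketched as follows; the function and variable names are illustrative assumptions.

```python
def read_band(signal_left, signal_right):
    """Per-wavelength-band readout of the pixel 501a: the left/right signals are
    used separately for phase difference detection and summed for imaging."""
    phase_pair = (signal_left, signal_right)      # phase difference detection
    imaging_value = signal_left + signal_right    # normal imaging signal
    return phase_pair, imaging_value
```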
Accordingly, the image sensor 112e can receive light of the different wavelength bands independently in each pixel and output, for each wavelength band, an imaging pixel value and a phase difference pixel value corresponding to the amount of received light. Therefore, in the image sensor 112e, unlike the image sensors 112a to 112d of the first to third embodiments, there is no need to arrange wavelength selection filters with different transmission wavelength bands in a mosaic pattern on the imaging surface. In addition, there is no need to interpolate, by demosaic processing, pixel values of wavelength bands other than the one each pixel receives. The spatial resolution of the light of each wavelength band is therefore improved, and the detection accuracy of the in-focus position is improved whichever wavelength band's phase difference sequences are used.

Although FIG. 41 shows an example in which the pupil is divided in the horizontal direction, the image sensor 112e also includes pixels that are pupil-divided in the vertical direction (column direction, up-down direction).

Also when this image sensor 112e is used, the SN ratio of the phase difference image can be improved by performing the integration processing on the phase difference pixel values of each phase difference pixel as described above, and as a result the focusing accuracy is improved.
<8. Modification of Fourth Embodiment>

{Modification of the image sensor}

The above description shows an example in which each stacked spectral type pixel of the image sensor 112e is pupil-divided into two in either the horizontal or the vertical direction, but the number of divisions and the division directions are not limited to this example.
FIG. 42 shows a configuration example of a stacked spectral type pixel 501b that is pupil-divided into four. The upper part of FIG. 42 schematically shows an exploded view of the pixel 501b seen from the side, and the lower part schematically shows a plan view of the pixel 501b seen from above. In the figure, the same reference numerals are given to the same parts as in FIG. 41.
The pixel 501b differs from the pixel 501a in FIG. 41 in that a light shielding portion 512b and photoelectric conversion layers 513LU (not shown), 513LD, 513RU (not shown), and 513RD are provided instead of the light shielding portion 512a and the photoelectric conversion layers 513L and 513R.

In the photoelectric conversion layer 513LU, a photoelectric conversion unit 513BLU, a photoelectric conversion unit 513GLU (not shown), a photoelectric conversion unit 513RLU (not shown), and a photoelectric conversion unit 513IRLU (not shown) are stacked so as to substantially overlap one another in order from the top. In the photoelectric conversion layer 513LD, photoelectric conversion units 513BLD, 513GLD, 513RLD, and 513IRLD are stacked so as to substantially overlap one another in order from the top.

In the photoelectric conversion layer 513RU, a photoelectric conversion unit 513BRU, a photoelectric conversion unit 513GRU (not shown), a photoelectric conversion unit 513RRU (not shown), and a photoelectric conversion unit 513IRRU (not shown) are stacked so as to substantially overlap one another in order from the top. In the photoelectric conversion layer 513RD, photoelectric conversion units 513BRD, 513GRD, 513RRD, and 513IRRD are stacked so as to substantially overlap one another in order from the top.

The light shielding portion 512b has substantially the same shape as the light shielding portion 213b of the pixel 201b in FIG. 22.
The photoelectric conversion units 513BLU, 513BLD, 513BRU, and 513BRD are arranged on the light receiving surface of the pixel 501b at substantially the same positions as the photoelectric conversion units 214LU, 214LD, 214RU, and 214RD on the light receiving surface of the pixel 201b in FIG. 22.

Accordingly, the photoelectric conversion units 513BLU to 513IRLU are disposed at positions biased toward the upper left of the light receiving surface of the pixel 501b, and their light receiving regions are decentered to the upper left with respect to the on-chip microlens 511. The photoelectric conversion units 513BLD to 513IRLD are disposed at positions biased toward the lower left, and their light receiving regions are decentered to the lower left with respect to the on-chip microlens 511.

The photoelectric conversion units 513BRU to 513IRRU are disposed at positions biased toward the upper right of the light receiving surface of the pixel 501b, and their light receiving regions are decentered to the upper right with respect to the on-chip microlens 511. The photoelectric conversion units 513BRD to 513IRRD are disposed at positions biased toward the lower right, and their light receiving regions are decentered to the lower right with respect to the on-chip microlens 511.
The photoelectric conversion units 513BLU to 513BRD are sensitive to light in the B wavelength band, the photoelectric conversion units 513GLU to 513GRD to light in the G wavelength band, the photoelectric conversion units 513RLU to 513RRD to light in the R wavelength band, and the photoelectric conversion units 513IRLU to 513IRRD to light in the IR wavelength band. Each of them absorbs light in its own wavelength band, transmits light in the other wavelength bands, and outputs a pixel signal corresponding to the amount of received light.
In this way, in the pixel 501b, the incident light is pupil-divided in the horizontal and vertical directions. In addition, the pixel 501b separates the incident light in the stacking direction perpendicular to the light receiving surface, simultaneously receives light in the B, G, R, and IR wavelength bands, and can independently output a pixel signal for each wavelength band.

The pixel 501b can output the pixel signals of the photoelectric conversion units 513BLU to 513BRD either individually or added together in an arbitrary combination. Similarly, the pixel signals of the photoelectric conversion units 513GLU to 513GRD, of the photoelectric conversion units 513RLU to 513RRD, and of the photoelectric conversion units 513IRLU to 513IRRD can each be output individually or added together in an arbitrary combination.

Stacked spectral type pixels can also be used when imaging-only pixels and phase difference detection dedicated pixels are provided separately.
FIG. 43 shows a configuration example of a pixel 501c, which is an imaging-only pixel. The upper part of FIG. 43 schematically shows an exploded view of the pixel 501c seen from the side, and the lower part schematically shows a plan view of the pixel 501c seen from above. In the figure, the same reference numerals are given to the same parts as in FIG. 41.

The pixel 501c differs from the pixel 501a in FIG. 41 in that a light shielding portion 512c and a photoelectric conversion layer 513M are provided instead of the light shielding portion 512a and the photoelectric conversion layers 513L and 513R. In the photoelectric conversion layer 513M, photoelectric conversion units 513BM to 513IRM are stacked so as to substantially overlap one another in order from the top.

The light shielding portion 512c has substantially the same shape as the light shielding portion 213c of the pixel 201c in FIG. 23.

The photoelectric conversion unit 513BM is arranged on the light receiving surface of the pixel 501c at substantially the same position as the photoelectric conversion unit 214M on the light receiving surface of the pixel 201c in FIG. 23. Accordingly, the light receiving regions of the photoelectric conversion units 513BM to 513IRM are not decentered with respect to the on-chip microlens 511.
The photoelectric conversion unit 513BM is sensitive to light in the B wavelength band, the photoelectric conversion unit 513GM to light in the G wavelength band, the photoelectric conversion unit 513RM to light in the R wavelength band, and the photoelectric conversion unit 513IRM to light in the IR wavelength band. Each of them absorbs light in its own wavelength band, transmits light in the other wavelength bands, and outputs a pixel signal corresponding to the amount of received light.
FIG. 44 shows a configuration example of a pixel 501dL, which is a pixel dedicated to phase difference detection. The upper part of FIG. 44 schematically shows an exploded view of the pixel 501dL seen from the side, and the lower part schematically shows a plan view of the pixel 501dL seen from above. In the figure, the same parts as in FIG. 41 are denoted by the same reference numerals.
The pixel 501dL differs from the pixel 501a in FIG. 41 in that a light shielding portion 512dL is provided instead of the light shielding portion 512a, and the photoelectric conversion layer 513R is not provided.
The light shielding portion 512dL has substantially the same shape as the light shielding portion 213dL of the pixel 201dL in FIG. 24.
The photoelectric conversion unit 513BL receives B wavelength band light incident on approximately the left half of the light receiving surface of the pixel 501dL and outputs a pixel signal corresponding to the amount of light received. Likewise, the photoelectric conversion units 513GL, 513RL, and 513IRL receive G, R, and IR wavelength band light, respectively, incident on approximately the left half of the light receiving surface of the pixel 501dL, and each outputs a pixel signal corresponding to the amount of light received. As a result, in the pixel 501dL, the incident light is pupil-divided to approximately the left half.
FIG. 45 shows a configuration example of a pixel 501dR, which is a pixel dedicated to phase difference detection. The upper part of FIG. 45 schematically shows an exploded view of the pixel 501dR seen from the side, and the lower part schematically shows a plan view of the pixel 501dR seen from above. In the figure, the same parts as in FIG. 41 are denoted by the same reference numerals.
The pixel 501dR differs from the pixel 501a in FIG. 41 in that a light shielding portion 512dR is provided instead of the light shielding portion 512a, and the photoelectric conversion layer 513L is not provided.
The light shielding portion 512dR has substantially the same shape as the light shielding portion 213dR of the pixel 201dR in FIG. 25.
The photoelectric conversion unit 513BR receives B wavelength band light incident on approximately the right half of the light receiving surface of the pixel 501dR and outputs a pixel signal corresponding to the amount of light received. Likewise, the photoelectric conversion units 513GR, 513RR, and 513IRR receive G, R, and IR wavelength band light, respectively, incident on approximately the right half of the light receiving surface of the pixel 501dR, and each outputs a pixel signal corresponding to the amount of light received. As a result, in the pixel 501dR, the incident light is pupil-divided to approximately the right half.
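For illustration only, the following is a minimal sketch of how an image formed from left-biased pixel values (for example, from pixels such as 501dL) might be compared with an image formed from right-biased pixel values (for example, from pixels such as 501dR) to estimate a horizontal phase difference. The SAD-based search, the shift range, and the function name are assumptions for this sketch and do not represent the correlation computation actually used by the embodiment.

```python
import numpy as np

def detect_phase_difference(left_img, right_img, max_shift=16):
    """Estimate the horizontal phase difference between an image built from
    left-pupil phase difference pixel values and one built from right-pupil
    values, by minimising the sum of absolute differences over a shift range.
    """
    left = np.asarray(left_img, dtype=np.float64)
    right = np.asarray(right_img, dtype=np.float64)
    best_shift, best_cost = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, shift, axis=1)
        # ignore the wrapped-around columns at the image borders
        valid = slice(max_shift, left.shape[1] - max_shift)
        cost = np.abs(left[:, valid] - shifted[:, valid]).sum()
        if cost < best_cost:
            best_cost, best_shift = cost, shift
    return best_shift
```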
<9. Other Modifications>
 Hereinafter, among the modifications of the above-described embodiments of the present technology, modifications other than those already described will be explained.
{Modifications related to the light receiving wavelength band}
 The visible and invisible wavelength bands in the above description are merely examples, and the types and number of wavelength bands to be detected can be changed arbitrarily.
For example, in the above description, the image sensors of the embodiments detect visible light in the R, G, and B wavelength bands, but visible light in other wavelength bands (for example, a yellow wavelength band) may be detected instead. The number and combination of the visible wavelength bands to be detected may also be changed.
Similarly, in the above description, the image sensors of the second to fourth embodiments detect light in the IR wavelength band, but invisible light in other wavelength bands (for example, ultraviolet or infrared wavelength bands) may be detected instead. The number and combination of the invisible wavelength bands to be detected may also be changed.
Furthermore, in a pixel that detects both visible light and invisible light (for example, the G+IR pixel in FIG. 38), the number and combination of the visible and invisible wavelength bands to be detected may also be changed.
For example, in FIGS. 4, 31, and 36, a Ye pixel that detects a yellow wavelength band (hereinafter referred to as the Ye wavelength band) and a W pixel may be provided instead of the G pixel and the B pixel. Similarly, in FIGS. 38 and 40, a Ye+IR pixel that detects the Ye wavelength band and part of the IR wavelength band, and a W+IR pixel, may be used instead of the G+IR pixel and the B+IR pixel.
{Other modifications}
 In the above description, the signals of the phase difference pixels are added within the pixel and output as the signal of an imaging pixel; however, the signals of the phase difference pixels may instead be added outside the pixel.
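As a minimal sketch of this modification, assuming the left- and right-pupil phase difference pixel values are available as arrays of the same shape and using an illustrative function name, the addition outside the pixel could look as follows.

```python
import numpy as np

def add_phase_difference_signals(pd_left, pd_right):
    """Add left- and right-pupil phase difference pixel values outside the
    pixel to obtain the value corresponding to the whole light receiving
    surface (i.e. the signal of an ordinary imaging pixel)."""
    return np.asarray(pd_left, dtype=np.float64) + np.asarray(pd_right, dtype=np.float64)

# usage sketch: imaging_plane = add_phase_difference_signals(left_plane, right_plane)
```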
{Scope of application of the present technology}
 The present technology can be applied to various devices and components that perform AF. For example, it can be applied not only to a stand-alone camera but also to various devices equipped with a camera (for example, smartphones and mobile phones). It can also be applied, for example, to devices and components that drive a lens from the outside, and to devices and components that control AF from outside a camera provided with a lens and an image sensor.
The present technology can also be applied to shooting both still images and moving images.
{Configuration example of a computer}
 The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions when various programs are installed.
FIG. 46 is a block diagram showing a configuration example of the hardware of a computer that executes the above-described series of processes by a program.
In the computer, a CPU (Central Processing Unit) 701, a ROM (Read Only Memory) 702, and a RAM (Random Access Memory) 703 are connected to one another by a bus 704.
An input/output interface 705 is further connected to the bus 704. An input unit 706, an output unit 707, a storage unit 708, a communication unit 709, and a drive 710 are connected to the input/output interface 705.
The input unit 706 includes a keyboard, a mouse, a microphone, and the like. The output unit 707 includes a display, a speaker, and the like. The storage unit 708 includes a hard disk, a nonvolatile memory, and the like. The communication unit 709 includes a network interface and the like. The drive 710 drives a removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the CPU 701 loads the program stored in, for example, the storage unit 708 into the RAM 703 via the input/output interface 705 and the bus 704 and executes it, whereby the above-described series of processes is performed.
The program executed by the computer (CPU 701) can be provided by being recorded on the removable medium 711, for example as packaged media. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
In the computer, the program can be installed in the storage unit 708 via the input/output interface 705 by loading the removable medium 711 into the drive 710. The program can also be received by the communication unit 709 via a wired or wireless transmission medium and installed in the storage unit 708. Alternatively, the program can be installed in advance in the ROM 702 or the storage unit 708.
The program executed by the computer may be a program in which the processes are performed in time series in the order described in this specification, or a program in which the processes are performed in parallel or at necessary timing, such as when a call is made.
In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), regardless of whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
Furthermore, the embodiments of the present technology are not limited to the embodiments described above, and various modifications can be made without departing from the gist of the present technology.
For example, the present technology can take a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
In addition, each step described in the above flowcharts can be executed by one device or shared among a plurality of devices.
Furthermore, when a plurality of processes are included in one step, the plurality of processes included in that one step can be executed by one device or shared among a plurality of devices.
The effects described in this specification are merely examples and are not limiting; other effects may also be obtained.
In addition, for example, the present technology can also take the following configurations.
(1)
 An image processing apparatus including:
 an integration processing unit that adaptively weights and integrates a phase difference pixel value, which is a pixel value based on light incident on a partial region at a biased position of a light receiving surface of a pixel, in at least one of a time direction, a spatial direction, and a wavelength direction; and
 a phase difference detection unit that detects, on the basis of the integrated phase difference pixel values, a phase difference between an image composed of a plurality of the phase difference pixel values corresponding to a plurality of first partial regions biased in the same direction and an image composed of a plurality of the phase difference pixel values corresponding to a plurality of second partial regions biased in the direction opposite to the first partial regions.
(2)
 The image processing apparatus according to (1), further including a focus position detection unit that detects an in-focus position on the basis of the phase difference and, when the wavelength band used for detecting the phase difference differs from the wavelength band to be focused, corrects the in-focus position on the basis of axial chromatic aberration between the wavelength bands.
(3)
 The image processing apparatus according to (2), further including a focus control unit that controls the position of the focus on the basis of the in-focus position.
(4)
 The image processing apparatus according to (1) or (2), in which the phase difference detection unit selects, on the basis of a predetermined condition, at least one of a direction, a wavelength band, and a position for detecting the phase difference.
(5)
 The image processing apparatus according to any one of (1) to (4), in which the integration processing unit integrates the phase difference pixel values in the time direction using weights based on the movement of a subject between frames of an image composed of pixel values based on light incident on the entire light receiving surface.
(6)
 The image processing apparatus according to any one of (1) to (5), in which the integration processing unit separately calculates a weight based on the horizontal distance between pixels and a weight based on the vertical distance between pixels, and performs the integration in the spatial direction using the two calculated weights.
(7)
 The image processing apparatus according to any one of (1) to (6), in which, when a predetermined parameter representing the image quality of a first image composed of the phase difference pixel values satisfies a predetermined condition, the integration processing unit integrates the phase difference pixel values in the spatial direction using weights based on the similarity between pixels of the first image, and, when the parameter of the first image does not satisfy the condition, the integration processing unit integrates the phase difference pixel values in the spatial direction using weights based on the similarity between pixels of the first image for the direction parallel to the direction of the partial region corresponding to the phase difference pixel value, and weights based on the similarity between pixels of a second image composed of pixel values based on light incident on the entire light receiving surface for the direction orthogonal to the direction of the partial region corresponding to the phase difference pixel value.
(8)
 The image processing apparatus according to any one of (1) to (7), in which the integration processing unit integrates the phase difference pixel values in the wavelength direction by adding the phase difference pixel values of different wavelength bands of the same pixel using weights based on lateral chromatic aberration between the wavelength bands.
(9)
 The image processing apparatus according to (8), in which the integration processing unit further integrates the phase difference pixel values in the wavelength direction using weights based on the differences between the phase difference pixel values of different wavelength bands of the same pixel.
(10)
 The image processing apparatus according to any one of (1) to (9), in which the integration processing unit performs two or more of the integration processes in the time direction, the spatial direction, and the wavelength direction in a predetermined order, and does not perform the subsequent integration processes when a predetermined parameter representing the image quality of an image composed of the phase difference pixel values satisfies a predetermined condition.
(11)
 The image processing apparatus according to any one of (1) to (10), further including an image sensor in which a plurality of first phase difference pixels are arranged, each first phase difference pixel being a pixel including at least one of a first photoelectric conversion unit at a position biased in a first direction of the light receiving surface and a second photoelectric conversion unit at a position biased in a second direction opposite to the first direction of the light receiving surface.
(12)
 The image processing apparatus according to (11), in which a plurality of second phase difference pixels are further arranged in the image sensor, each second phase difference pixel being a pixel including at least one of a third photoelectric conversion unit at a position biased in a third direction orthogonal to the first direction and the second direction of the light receiving surface and a fourth photoelectric conversion unit at a position biased in a fourth direction opposite to the third direction of the light receiving surface.
(13)
 The image processing apparatus according to (12), in which the first phase difference pixel includes the first photoelectric conversion unit and the second photoelectric conversion unit, the second phase difference pixel includes the third photoelectric conversion unit and the fourth photoelectric conversion unit, and the first phase difference pixels and the second phase difference pixels are arranged in a predetermined pattern in the image sensor.
(14)
 The image processing apparatus according to any one of (11) to (13), in which the first phase difference pixels include a first pixel that receives invisible light in a first wavelength band.
(15)
 The image processing apparatus according to (14), in which the first phase difference pixels further include a second pixel that receives invisible light in the first wavelength band and visible light in a second wavelength band.
(16)
 The image processing apparatus according to any one of (11) to (13), in which the first phase difference pixels include a first pixel that receives visible light in a first wavelength band, a second pixel that receives visible light in a second wavelength band, and a third pixel that receives visible light in a third wavelength band including the first wavelength band and the second wavelength band and invisible light in a fourth wavelength band.
(17)
 The image processing apparatus according to any one of (11) to (13), in which the first phase difference pixels include a first pixel that receives visible light in a first wavelength band and invisible light in a second wavelength band, a second pixel that receives visible light in a third wavelength band and invisible light in the second wavelength band, and a third pixel that receives visible light in a fourth wavelength band including the first wavelength band and the third wavelength band and invisible light in the second wavelength band.
(18)
 The image processing apparatus according to any one of (1) to (10), further including an image sensor in which a plurality of phase difference pixels are arranged, each phase difference pixel being a pixel including at least one of a first photoelectric conversion layer composed of a plurality of photoelectric conversion units that are stacked at a position biased in a first direction of the light receiving surface and that receive light of mutually different wavelength bands, and a second photoelectric conversion layer composed of a plurality of photoelectric conversion units that are stacked at a position biased in a second direction opposite to the first direction of the light receiving surface and that receive light of mutually different wavelength bands.
(19)
 An image processing method including:
 an integration processing step of adaptively weighting and integrating a phase difference pixel value, which is a pixel value based on light incident on a partial region at a biased position of a light receiving surface of a pixel, in at least one of a time direction, a spatial direction, and a wavelength direction; and
 a phase difference detection step of detecting, on the basis of the integrated phase difference pixel values, a phase difference between an image composed of a plurality of the phase difference pixel values corresponding to a plurality of first partial regions biased in the same direction and an image composed of a plurality of the phase difference pixel values corresponding to a plurality of second partial regions biased in the direction opposite to the first partial regions.
(20)
 A program for causing a computer to execute processing including:
 an integration processing step of adaptively weighting and integrating a phase difference pixel value, which is a pixel value based on light incident on a partial region at a biased position of a light receiving surface of a pixel, in at least one of a time direction, a spatial direction, and a wavelength direction; and
 a phase difference detection step of detecting, on the basis of the integrated phase difference pixel values, a phase difference between an image composed of a plurality of the phase difference pixel values corresponding to a plurality of first partial regions biased in the same direction and an image composed of a plurality of the phase difference pixel values corresponding to a plurality of second partial regions biased in the direction opposite to the first partial regions.
101a to 101d camera, 111 lens, 112a to 112e image sensor, 113 AGC unit, 115 pixel interpolation unit, 121 AF control unit, 122 lens driving unit, 132 integration processing unit, 133 phase difference detection unit, 134 in-focus position detection unit, 135 focus control unit, 137 chromatic aberration data storage unit, 152 captured image AE unit, 154 phase difference image AE unit, 161 time direction weight adjustment unit, 162 time direction integration processing unit, 163 spatial direction weight adjustment unit, 164 spatial direction integration processing unit, 165 wavelength direction weight adjustment unit, 166 wavelength direction integration processing unit, 201a to 201dR pixel, 211 on-chip microlens, 212 wavelength selection filter, 213a to 213dR light shielding portion, 214L to 214M photoelectric conversion unit, 501a to 501dR pixel, 511 on-chip microlens, 512a to 512dR light shielding portion, 513BL to 513IRM photoelectric conversion unit

Claims (20)

1.  An image processing apparatus comprising:
     an integration processing unit that adaptively weights and integrates a phase difference pixel value, which is a pixel value based on light incident on a partial region at a biased position of a light receiving surface of a pixel, in at least one of a time direction, a spatial direction, and a wavelength direction; and
     a phase difference detection unit that detects, on the basis of the integrated phase difference pixel values, a phase difference between an image composed of a plurality of the phase difference pixel values corresponding to a plurality of first partial regions biased in the same direction and an image composed of a plurality of the phase difference pixel values corresponding to a plurality of second partial regions biased in the direction opposite to the first partial regions.
2.  The image processing apparatus according to claim 1, further comprising a focus position detection unit that detects an in-focus position on the basis of the phase difference and, when the wavelength band used for detecting the phase difference differs from the wavelength band to be focused, corrects the in-focus position on the basis of axial chromatic aberration between the wavelength bands.
3.  The image processing apparatus according to claim 2, further comprising a focus control unit that controls the position of the focus on the basis of the in-focus position.
4.  The image processing apparatus according to claim 1, wherein the phase difference detection unit selects, on the basis of a predetermined condition, at least one of a direction, a wavelength band, and a position for detecting the phase difference.
5.  The image processing apparatus according to claim 1, wherein the integration processing unit integrates the phase difference pixel values in the time direction using weights based on the movement of a subject between frames of an image composed of pixel values based on light incident on the entire light receiving surface.
6.  The image processing apparatus according to claim 1, wherein the integration processing unit separately calculates a weight based on the horizontal distance between pixels and a weight based on the vertical distance between pixels, and performs the integration in the spatial direction using the two calculated weights.
7.  The image processing apparatus according to claim 1, wherein, when a predetermined parameter representing the image quality of a first image composed of the phase difference pixel values satisfies a predetermined condition, the integration processing unit integrates the phase difference pixel values in the spatial direction using weights based on the similarity between pixels of the first image, and, when the parameter of the first image does not satisfy the condition, the integration processing unit integrates the phase difference pixel values in the spatial direction using weights based on the similarity between pixels of the first image for the direction parallel to the direction of the partial region corresponding to the phase difference pixel value, and weights based on the similarity between pixels of a second image composed of pixel values based on light incident on the entire light receiving surface for the direction orthogonal to the direction of the partial region corresponding to the phase difference pixel value.
8.  The image processing apparatus according to claim 1, wherein the integration processing unit integrates the phase difference pixel values in the wavelength direction by adding the phase difference pixel values of different wavelength bands of the same pixel using weights based on lateral chromatic aberration between the wavelength bands.
9.  The image processing apparatus according to claim 8, wherein the integration processing unit further integrates the phase difference pixel values in the wavelength direction using weights based on the differences between the phase difference pixel values of different wavelength bands of the same pixel.
10.  The image processing apparatus according to claim 1, wherein the integration processing unit performs two or more of the integration processes in the time direction, the spatial direction, and the wavelength direction in a predetermined order, and does not perform the subsequent integration processes when a predetermined parameter representing the image quality of an image composed of the phase difference pixel values satisfies a predetermined condition.
11.  The image processing apparatus according to claim 1, further comprising an image sensor in which a plurality of first phase difference pixels are arranged, each first phase difference pixel being a pixel including at least one of a first photoelectric conversion unit at a position biased in a first direction of the light receiving surface and a second photoelectric conversion unit at a position biased in a second direction opposite to the first direction of the light receiving surface.
12.  The image processing apparatus according to claim 11, wherein a plurality of second phase difference pixels are further arranged in the image sensor, each second phase difference pixel being a pixel including at least one of a third photoelectric conversion unit at a position biased in a third direction orthogonal to the first direction and the second direction of the light receiving surface and a fourth photoelectric conversion unit at a position biased in a fourth direction opposite to the third direction of the light receiving surface.
13.  The image processing apparatus according to claim 12, wherein the first phase difference pixel includes the first photoelectric conversion unit and the second photoelectric conversion unit, the second phase difference pixel includes the third photoelectric conversion unit and the fourth photoelectric conversion unit, and the first phase difference pixels and the second phase difference pixels are arranged in a predetermined pattern in the image sensor.
14.  The image processing apparatus according to claim 11, wherein the first phase difference pixels include a first pixel that receives invisible light in a first wavelength band.
15.  The image processing apparatus according to claim 14, wherein the first phase difference pixels further include a second pixel that receives invisible light in the first wavelength band and visible light in a second wavelength band.
16.  The image processing apparatus according to claim 11, wherein the first phase difference pixels include a first pixel that receives visible light in a first wavelength band, a second pixel that receives visible light in a second wavelength band, and a third pixel that receives visible light in a third wavelength band including the first wavelength band and the second wavelength band and invisible light in a fourth wavelength band.
17.  The image processing apparatus according to claim 11, wherein the first phase difference pixels include a first pixel that receives visible light in a first wavelength band and invisible light in a second wavelength band, a second pixel that receives visible light in a third wavelength band and invisible light in the second wavelength band, and a third pixel that receives visible light in a fourth wavelength band including the first wavelength band and the third wavelength band and invisible light in the second wavelength band.
18.  The image processing apparatus according to claim 1, further comprising an image sensor in which a plurality of phase difference pixels are arranged, each phase difference pixel being a pixel including at least one of a first photoelectric conversion layer composed of a plurality of photoelectric conversion units that are stacked at a position biased in a first direction of the light receiving surface and that receive light of mutually different wavelength bands, and a second photoelectric conversion layer composed of a plurality of photoelectric conversion units that are stacked at a position biased in a second direction opposite to the first direction of the light receiving surface and that receive light of mutually different wavelength bands.
19.  An image processing method comprising:
     an integration processing step of adaptively weighting and integrating a phase difference pixel value, which is a pixel value based on light incident on a partial region at a biased position of a light receiving surface of a pixel, in at least one of a time direction, a spatial direction, and a wavelength direction; and
     a phase difference detection step of detecting, on the basis of the integrated phase difference pixel values, a phase difference between an image composed of a plurality of the phase difference pixel values corresponding to a plurality of first partial regions biased in the same direction and an image composed of a plurality of the phase difference pixel values corresponding to a plurality of second partial regions biased in the direction opposite to the first partial regions.
20.  A program for causing a computer to execute processing comprising:
     an integration processing step of adaptively weighting and integrating a phase difference pixel value, which is a pixel value based on light incident on a partial region at a biased position of a light receiving surface of a pixel, in at least one of a time direction, a spatial direction, and a wavelength direction; and
     a phase difference detection step of detecting, on the basis of the integrated phase difference pixel values, a phase difference between an image composed of a plurality of the phase difference pixel values corresponding to a plurality of first partial regions biased in the same direction and an image composed of a plurality of the phase difference pixel values corresponding to a plurality of second partial regions biased in the direction opposite to the first partial regions.
PCT/JP2015/085942 2015-01-07 2015-12-24 Image processing device, image processing method, and program WO2016111175A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015001515 2015-01-07
JP2015-001515 2015-01-28

Publications (1)

Publication Number Publication Date
WO2016111175A1 true WO2016111175A1 (en) 2016-07-14

Family

ID=56355875

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/085942 WO2016111175A1 (en) 2015-01-07 2015-12-24 Image processing device, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2016111175A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009224913A (en) * 2008-03-14 2009-10-01 Canon Inc Image device and its control method
JP2010271499A (en) * 2009-05-20 2010-12-02 Canon Inc Imaging apparatus
JP2011007862A (en) * 2009-06-23 2011-01-13 Fujitsu Ltd Voice recognition device, voice recognition program and voice recognition method
JP2012155221A (en) * 2011-01-27 2012-08-16 Canon Inc Imaging apparatus and its control method
JP2014038164A (en) * 2012-08-13 2014-02-27 Olympus Imaging Corp Image processor
JP2014041202A (en) * 2012-08-21 2014-03-06 Nikon Corp Focus detection device and imaging device
JP2014072541A (en) * 2012-09-27 2014-04-21 Nikon Corp Image sensor and image pick-up device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018116267A (en) * 2017-01-13 2018-07-26 キヤノン株式会社 Focus detection device and method, and imaging device
JP7022575B2 (en) 2017-01-13 2022-02-18 キヤノン株式会社 Focus detectors and methods, and imaging devices
WO2019082568A1 (en) * 2017-10-24 2019-05-02 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging device and electronic apparatus
US11290665B2 (en) 2017-10-24 2022-03-29 Sony Semiconductor Solutions Corporation Solid-state imaging device and electronic apparatus
CN112866553A (en) * 2019-11-12 2021-05-28 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and computer readable storage medium
CN112866553B (en) * 2019-11-12 2022-05-17 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and computer readable storage medium
WO2023079842A1 (en) * 2021-11-08 2023-05-11 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging device, imaging system, and imaging processing method

Similar Documents

Publication Publication Date Title
JP5398346B2 (en) Imaging apparatus and signal processing apparatus
EP3261328B1 (en) Image processing apparatus, image processing method, and computer-readable storage medium
JP5012495B2 (en) IMAGING ELEMENT, FOCUS DETECTION DEVICE, FOCUS ADJUSTMENT DEVICE, AND IMAGING DEVICE
US20170070694A1 (en) Imaging apparatus, image processing method, and program
US9521312B2 (en) Focus detection apparatus, control method for the same, and image capture apparatus
US10681286B2 (en) Image processing apparatus, imaging apparatus, image processing method, and recording medium
US20160080727A1 (en) Depth measurement apparatus, imaging apparatus, and depth measurement method
US20190028640A1 (en) Image processing apparatus, imaging apparatus, control methods thereof, and storage medium for generating a display image based on a plurality of viewpoint images
CN107960120B (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
WO2016111175A1 (en) Image processing device, image processing method, and program
CN107431755B (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
US10681278B2 (en) Image capturing apparatus, control method of controlling the same, and storage medium for determining reliability of focus based on vignetting resulting from blur
CN110741632A (en) Imaging device, imaging element, and image processing method
CN110312957B (en) Focus detection apparatus, focus detection method, and computer-readable storage medium
JP2014032214A (en) Imaging apparatus and focus position detection method
JP6555990B2 (en) Distance measuring device, imaging device, and distance measuring method
JP2009047734A (en) Imaging apparatus and image processing program
WO2019202984A1 (en) Image capturing device, distance measuring method, distance measuring program, and recording medium
JP2014215436A (en) Image-capturing device, and control method and control program therefor
KR20170015170A (en) Image capture apparatus and method for controlling the same
US10326926B2 (en) Focus detection apparatus and method, and image capturing apparatus
JP2019057908A (en) Imaging apparatus and control method thereof
CN113596431B (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
US10964739B2 (en) Imaging apparatus and control method thereof
US10868953B2 (en) Image processing device capable of notifying an effect of emphasizing a subject before performing imaging, control method thereof, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15877048

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15877048

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP