WO2017163588A1 - Image processing apparatus, image pickup apparatus, and control methods therefor, and program - Google Patents


Info

Publication number
WO2017163588A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
viewpoint
changing
processing apparatus
image processing
Prior art date
Application number
PCT/JP2017/002504
Other languages
French (fr)
Japanese (ja)
Inventor
Akihiko Ueda
Koichi Fukuda
Yuki Yoshimura
Original Assignee
Canon Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2016112103A (JP6757184B2)
Application filed by Canon Inc.
Priority to CN201780019656.XA (CN108886581B)
Priority to RU2018137046A (RU2707066C1)
Priority to SG11201808036XA
Priority to KR1020187029815A (KR102157491B1)
Priority to DE112017001458.1T (DE112017001458T5)
Publication of WO2017163588A1
Priority to PH12018502032A
Priority to US16/137,801 (US10924665B2)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/34Systems for automatic generation of focusing signals using different areas in a pupil plane
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/958Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging

Definitions

  • the present invention relates to an image processing apparatus, an imaging apparatus, a control method thereof, and a program.
  • Non-Patent Document 1 discloses an imaging apparatus using an imaging element in which one microlens and a plurality of divided photoelectric conversion units are formed for one pixel. This imaging device is configured such that each of the divided photoelectric conversion units receives a light beam that has passed through different pupil partial regions of the photographing lens via one microlens, thereby realizing so-called pupil division. For this reason, the output signal is equivalent to Light Field (LF) data including spatial distribution of light intensity and angle distribution information, and a plurality of viewpoint images can be obtained.
  • Non-Patent Document 1 also discloses a refocusing technique in which the acquired LF data is used to generate a composite image formed on a virtual plane different from the imaging surface, thereby changing the focus position (also called the in-focus position) of the captured image after shooting.
  • However, Non-Patent Document 1 does not consider a method for operating, in parallel, the display of a viewpoint-changed image and the display of an image whose focus position has been changed.
  • The present invention has been made in view of the above problem, and provides a technique that makes it possible to operate, in parallel, an operation for displaying a viewpoint-changed image and an operation for displaying an image whose focus position has been changed, based on a plurality of viewpoint images.
  • To this end, the image processing apparatus of the present invention has the following configuration: acquisition means for acquiring an image signal including intensity and angle information of incident light; operation means for receiving an operation for changing the viewpoint and an operation for changing the focus position; and processing means for generating, based on a plurality of viewpoint images obtained from the image signal, a display image in which the viewpoint is changed according to the operation for changing the viewpoint and the focus position is changed according to the operation for changing the focus position.
  • According to the present invention, an operation for displaying a viewpoint-changed image and an operation for displaying an image whose focus position has been changed can be performed in parallel.
  • FIG. 1 is a block diagram illustrating an example functional configuration of a digital camera as an example of an image processing apparatus according to an embodiment of the present invention.
  • A block diagram illustrating an example functional configuration of the image processing unit according to the first embodiment.
  • A diagram schematically illustrating the pixel arrangement according to the first embodiment.
  • A plan view and a cross-sectional view schematically illustrating a pixel according to the first embodiment, and a diagram explaining the correspondence between the pixel and pupil division.
  • A diagram explaining an example of the light intensity distribution inside a pixel according to the first embodiment.
  • A diagram explaining an example of the pupil intensity distribution according to the first embodiment.
  • A diagram explaining the relationship between the image sensor according to the first embodiment and pupil division, and a diagram explaining the relationship between the defocus amount and the image shift amount of the first viewpoint image and the second viewpoint image.
  • A diagram illustrating an example of the contrast distribution of a captured image according to the first embodiment.
  • A diagram explaining an example of parallax enhancement in which the difference between viewpoint images according to the first embodiment is enlarged, and a diagram explaining the outline of the refocus processing according to the first embodiment.
  • A diagram explaining the relationship between the image sensor according to the first embodiment and pupil division.
  • Diagrams explaining pupil shift at peripheral image heights of the image sensor according to the first embodiment.
  • A flowchart showing a series of operations related to viewpoint movement and focus adjustment for a captured image according to the first embodiment.
  • A flowchart showing a series of operations of the viewpoint image operation processing according to the first embodiment.
  • A flowchart showing a series of operations of the development processing according to the first embodiment.
  • A flowchart showing a series of operations of the parallax image operation processing according to the second embodiment.
  • A diagram schematically illustrating a UI for viewpoint movement and focus adjustment according to the third embodiment.
  • A flowchart showing a series of operations of the parallax image operation processing according to the fourth embodiment.
  • Diagrams schematically illustrating UI examples for viewpoint movement and focus adjustment according to the fourth embodiment.
  • An arbitrary apparatus may transmit LF data and operation contents to a server device (including a virtual machine) provided with processing means such as a processor on the Internet or a local network, and part or all of the processing of the LF data may be executed on the server device.
  • In that case, the configuration may be such that the arbitrary apparatus receives the processing result from the server device and displays it.
  • FIG. 1 is a block diagram illustrating a functional configuration example of a digital camera 100 as an example of an image processing apparatus according to the present embodiment.
  • One or more of the functional blocks shown in FIG. 1 may be realized by hardware such as an ASIC or a programmable logic array (PLA), by a programmable processor such as a CPU or an MPU executing software, or by a combination of software and hardware. Therefore, in the following description, even where different functional blocks are named as the operating subjects, they may be realized by the same hardware.
  • the first lens group 101 includes, for example, a zoom lens that constitutes the imaging optical system, is disposed at the tip of the imaging optical system, and is held so as to be movable back and forth in the optical axis direction.
  • the shutter 102 includes a diaphragm, and adjusts the amount of light incident on the image sensor 107 during photographing by adjusting the aperture diameter. Also, when a still image is taken, it functions as a shutter that adjusts the exposure time.
  • The shutter 102 and the second lens group 103 constituting the imaging optical system move forward and backward in the optical axis direction as a unit, and realize a magnification-varying (zoom) function in conjunction with the forward and backward movement of the first lens group 101.
  • the third lens group 105 includes, for example, a focus lens that forms an imaging optical system, and performs focus adjustment by advancing and retracting in the optical axis direction.
  • the optical element 106 includes an optical low-pass filter, and reduces false colors and moire in the captured image.
  • the image sensor 107 includes an image sensor composed of, for example, a CMOS photosensor and a peripheral circuit, and is disposed on the imaging surface of the imaging optical system.
  • The zoom actuator 111 includes a driving device that produces the forward and backward movement of the first lens group 101 through the third lens group 105: by rotating a cam cylinder (not shown), it moves the first lens group 101 through the third lens group 105 forward and backward in the optical axis direction.
  • the aperture shutter actuator 112 includes a drive device that generates the operation of the shutter 102, and controls the aperture diameter and shutter operation of the shutter 102 according to the control of the aperture shutter drive unit 128.
  • the focus actuator 114 includes a driving device that generates an advance / retreat operation of the third lens group 105, and performs focus adjustment by driving the third lens group 105 forward / backward in the optical axis direction.
  • the illuminating device 115 includes an electronic flash for illuminating a subject at the time of photographing.
  • The auxiliary light emitting unit 116 includes a light emitting device for AF auxiliary light, and projects an image of a mask having a predetermined aperture pattern onto the subject field through a light projecting lens, thereby improving the focus detection capability for dark or low-contrast subjects.
  • The control unit 121 includes a CPU (or MPU), a ROM, and a RAM, controls each unit of the entire digital camera 100 by loading and executing a program stored in the ROM, and executes a series of operations such as AF, shooting, image processing, and recording.
  • the control unit 121 may include an A / D converter, a D / A converter, a communication interface circuit, and the like.
  • The control unit 121 also functions as a display control unit that controls the contents displayed on the display unit 131, and may execute processing of the image processing unit 125 in its place.
  • the electronic flash control unit 122 includes a control circuit or a control module, and controls lighting of the illumination device 115 in synchronization with the photographing operation.
  • the auxiliary light driving unit 123 controls the lighting of the auxiliary light emitting unit 116 in synchronization with the focus detection operation.
  • the image sensor driving unit 124 controls the imaging operation of the image sensor 107 and A / D-converts the acquired image signal and transmits it to the control unit 121.
  • The image processing unit 125 performs processing such as gamma conversion, color interpolation, and JPEG compression on the image acquired by the image sensor 107.
  • the focus driving unit 126, the aperture shutter driving unit 128, and the zoom driving unit 129 each include a control circuit or a control module.
  • the focus driving unit 126 controls the focus actuator 114 based on the focus detection result.
  • the aperture shutter drive unit 128 controls the aperture shutter actuator 112 at a predetermined timing of the photographing operation.
  • the zoom drive unit 129 controls the zoom actuator 111 according to the zoom operation of the photographer.
  • the display unit 131 includes a display device such as an LCD, and displays, for example, information on the shooting mode of the camera, a preview image before shooting and a confirmation image after shooting, a display image in a focused state at the time of focus detection, and the like.
  • the operation unit 132 includes a switch group for operating the digital camera 100, and includes, for example, a power switch, a release (shooting trigger) switch, a zoom operation switch, a shooting mode selection switch, and the like.
  • the control unit 121 controls each unit of the digital camera 100 in order to execute an operation corresponding to the user operation.
  • the recording medium 133 includes, for example, a detachable flash memory, and records captured images.
  • the communication unit 134 includes a communication circuit or a module, and establishes communication with an external device (for example, a server installed outside) using a communication method compliant with a predetermined standard.
  • the communication unit 134 performs upload / download of image data, reception of a result of a predetermined process performed by the external device with respect to the uploaded image data, and the like.
  • the image acquisition unit 151 stores image data read from the recording medium 133.
  • The image data consists of an image obtained by combining a first viewpoint image and a second viewpoint image described later (also referred to as an A+B image), together with the first viewpoint image.
  • the subtraction unit 152 generates a second viewpoint image by subtracting the first viewpoint image from the A + B image.
  • the shading processing unit 153 corrects the light amount change due to the image heights of the first viewpoint image and the second viewpoint image.
  • the operation information acquisition unit 154 receives adjustment values for viewpoint movement and refocus changed by the user, and supplies adjustment values operated by the user to the viewpoint change processing unit 155 and the refocus processing unit 156.
  • the viewpoint change processing unit 155 synthesizes an image in which the viewpoint is changed by changing the addition ratio (weighting) of the first viewpoint image and the second viewpoint image. Although details will be described later, an image in which the depth of field is enlarged or reduced can be generated by the processing of the viewpoint change processing unit 155.
  • the refocus processing unit 156 generates a composite image by shift-adding the first viewpoint image and the second viewpoint image in the pupil division direction, and generates images at different focus positions. Details of the processing by the refocus processing unit 156 will be described later.
  • the image processing unit 125 performs development processing by the configuration of a white balance unit 157, a demosaicing unit 158, a gamma conversion unit 159, and a color adjustment unit 160 described below.
  • The white balance unit 157 performs white balance processing; specifically, it applies a gain to each of R, G, and B so that R, G, and B in a white region become equal.
  • The demosaicing unit 158 interpolates, at each pixel, the mosaic image data of the two of the three primary colors missing at that pixel, generating color image data in which R, G, and B are present at every pixel.
  • Specifically, the demosaicing unit 158 interpolates the target pixel from the pixels around it; color image data of the three primary colors R, G, and B is generated for each pixel as a result of this interpolation.
  • the gamma conversion unit 159 applies gamma correction processing to the color image data of each pixel to generate color image data matched with the display characteristics of the display unit 131, for example.
  • the color adjustment unit 160 applies various color adjustment processes such as noise reduction, saturation enhancement, hue correction, and edge enhancement, which are processes for improving the appearance of the image, to the color image data.
  • the compression unit 161 compresses the color-adjusted color image data by a method based on a predetermined compression method such as JPEG, and reduces the data size of the color image data when recording.
  • the output unit 163 outputs the above-described color image data, compressed image data, or display data for a user interface.
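  • As a rough illustration of the order of these development stages, the following sketch (Python with NumPy) chains simplified white balance and gamma conversion steps; the function names, the per-channel gain values, and the power-law gamma are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def white_balance(rgb, gains):
    """Apply per-channel gains so that R, G, and B in a white region match."""
    return rgb * np.asarray(gains, dtype=float).reshape(1, 1, 3)

def gamma_convert(rgb, gamma=2.2):
    """Map linear values to display-referred values (simplified power law)."""
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)

def develop(demosaiced_rgb, gains=(2.0, 1.0, 1.5)):
    """Development pipeline sketch: white balance -> gamma -> (color adjust)."""
    img = white_balance(demosaiced_rgb, gains)  # white balance unit 157
    img = gamma_convert(img)                    # gamma conversion unit 159
    # Color adjustment (noise reduction, saturation, edge enhancement) and
    # JPEG compression would follow (color adjustment unit 160, compression
    # unit 161).
    return img
```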
  • FIG. 3 shows a two-dimensionally arranged pixel array in a range of 4 columns ⁇ 4 rows, and further shows a subpixel array included in each pixel in a range of 8 columns ⁇ 4 rows.
  • In each 2-column × 2-row pixel group, a pixel 200R having R (red) spectral sensitivity is located at the upper left, pixels 200G having G (green) spectral sensitivity at the upper right and lower left, and a pixel 200B having B (blue) spectral sensitivity at the lower right.
  • each pixel has a sub-pixel 201 and a sub-pixel 202 arranged in 2 columns ⁇ 1 row.
  • FIG. 4A is a plan view of the pixel 200G viewed from the light-receiving surface side (+z side) of the image sensor 107, and FIG. 4B is a cross-sectional view of the a–a section of FIG. 4A viewed from the −y side.
  • The pixel 200G includes a photoelectric conversion unit 301 and a photoelectric conversion unit 302, formed by dividing the pixel into NH (= 2) parts in the x direction and NV (= 1) parts in the y direction.
  • the photoelectric conversion unit 301 and the photoelectric conversion unit 302 correspond to the sub-pixel 201 and the sub-pixel 202, respectively.
  • The pixel 200G has a microlens 305 on its light-receiving side (+z direction) for condensing incident light, and is configured so that a light beam entering through the microlens 305 is received by the photoelectric conversion unit 301 or the photoelectric conversion unit 302.
  • The photoelectric conversion unit 301 and the photoelectric conversion unit 302 may each be a p-i-n photodiode in which an intrinsic layer is sandwiched between a p-type layer and an n-type layer or, if necessary, a p-n junction photodiode with the intrinsic layer omitted.
  • The color filter 306 is disposed between the microlens 305 and the photoelectric conversion units 301 and 302, and passes light in a predetermined frequency band.
  • FIG. 4B shows an example in which one color filter 306 is provided for the pixel 200G; if necessary, a color filter with a different spectral transmittance may be provided for each sub-pixel, or the color filter may be omitted.
  • In the photoelectric conversion units, electron–hole pairs are generated according to the amount of received light and separated by the depletion layer. The negatively charged electrons are accumulated in the n-type layer, while the holes are discharged outside the image sensor 107 through the p-type layer 300, which is connected to a constant-voltage source (not shown). The electrons accumulated in the n-type layers of the photoelectric conversion units 301 and 302 are transferred via a transfer gate to a capacitance unit (FD) and converted into a voltage signal.
  • FIG. 5 shows the correspondence between a cross-sectional view of the pixel 200G of FIG. 4A, taken along line a–a and viewed from the +y side, and the exit pupil plane of the imaging optical system.
  • the x-axis and y-axis of the cross-sectional view of the pixel 200G are inverted with respect to FIGS. 4A to 4B in order to correspond to the coordinate axis of the exit pupil plane.
  • the pupil partial area 501 of the sub-pixel 201 represents a pupil area that can be received by the sub-pixel 201.
  • The pupil partial region 501 of the sub-pixel 201 has its centroid decentered toward the +X side on the pupil plane, and is approximately conjugate, via the microlens, with the light-receiving surface of the photoelectric conversion unit 301, whose centroid is decentered in the −x direction.
  • the pupil partial area 502 of the sub-pixel 202 represents a pupil area that can be received by the sub-pixel 202.
  • The pupil partial region 502 of the sub-pixel 202 has its centroid decentered toward the −X side on the pupil plane, and is approximately conjugate, via the microlens, with the light-receiving surface of the photoelectric conversion unit 302, whose centroid is decentered in the +x direction.
  • the pupil region 500 is a pupil region that can receive light in the entire pixel 200G when the photoelectric conversion unit 301 and the photoelectric conversion unit 302 (subpixel 201 and subpixel 202) are all combined.
  • FIGS. 6A and 6B show examples of the light intensity distribution when light is incident on the microlens 305 formed in the pixel 200G. FIG. 6A shows the light intensity distribution in a cross section parallel to the microlens optical axis, and FIG. 6B shows the light intensity distribution, at the microlens focal position, in a cross section perpendicular to the microlens optical axis.
  • In FIGS. 6A and 6B, H denotes the convex-side surface of the microlens 305, f the focal length of the microlens, nFδ the movable range of the focal position by refocusing (described later), and φ the maximum angle of the incident light beam.
  • The incident light is condensed at the focal position by the microlens 305; however, owing to diffraction due to the wave nature of light, the diameter of the condensed spot cannot be made smaller than the diffraction limit Δ, and the condensed spot of the microlens is about 1 μm.
  • Consequently, the pupil partial region 501, which is conjugate with the light-receiving surface of the photoelectric conversion unit 301 via the microlens 305 (and likewise the pupil partial region 502 for the photoelectric conversion unit 302), is not clearly pupil-divided because of diffraction blur; instead, it has a light-reception-rate distribution (pupil intensity distribution).
  • the pupil intensity distribution in this pixel 200G is as shown in FIG. 7 schematically showing the pupil coordinates on the horizontal axis and the light reception rate on the vertical axis.
  • The pupil intensity distribution 701 (solid line) shows an example of the pupil intensity distribution of the pupil partial region 501 in FIG. 5 along the X axis, and the pupil intensity distribution 702 (broken line) shows an example of that of the pupil partial region 502 along the X axis.
  • the pupil partial area 501 and the pupil partial area 502 have gentle pupil intensity peaks at different pupil positions, indicating that the light passing through the microlens 305 is gently divided into pupils.
  • Each light flux that has passed through a different pupil partial region enters each pixel of the image sensor 107, arranged on the imaging surface 800, at a different angle.
  • the light is received by the subpixel 201 (photoelectric conversion unit 301) and the subpixel 202 (photoelectric conversion unit 302) of each pixel divided by 2 ⁇ 1. That is, the image sensor 107 has a plurality of pixels arranged with a plurality of sub-pixels configured to receive light beams that pass through different pupil partial regions of the imaging optical system.
  • the light reception signals of the sub-pixels 201 of each pixel are collected to generate a first viewpoint image
  • the light reception signals of the sub-pixels 202 of each pixel are collected to obtain the second viewpoint image.
  • The first viewpoint image and the second viewpoint image are Bayer-array images; a demosaicing process may be applied to them as necessary.
  • Although FIG. 8A shows an example in which the pupil region is divided into two in the horizontal direction, pupil division may also be performed in the vertical direction, depending on how the sub-pixels are divided.
  • The present invention is not limited to this configuration; this embodiment and the other embodiments are applicable as long as a plurality of viewpoint images can be acquired by a known technique. For example, as in Japanese Patent Application Laid-Open No. …, a plurality of cameras with different viewpoints may collectively be regarded as the image sensor 107. Alternatively, a configuration may be used in which the image from the imaging optical system is re-imaged on a microlens array (called re-imaging because light from the once-formed, now diffusing image is imaged again) and an image pickup element is provided on that image plane. Furthermore, a method of inserting a mask (gain modulation element) with an appropriate pattern into the optical path of the photographing optical system can also be used.
  • FIG. 8B schematically shows the relationship between the defocus amount between the first viewpoint image and the second viewpoint image and the image shift amount between the first viewpoint image and the second viewpoint image.
  • An image sensor 107 (not shown in FIG. 8B) is arranged on the imaging surface 800, and, as described above, the exit pupil of the imaging optical system is divided into two, the pupil partial region 501 and the pupil partial region 502.
  • The magnitude |d| of the defocus amount d represents the distance from the imaging position of the subject to the imaging surface 800. The sign of d is negative (d < 0) when the imaging position of the subject is on the subject side of the imaging surface 800 (the front focus state), and positive (d > 0) when the imaging position is on the side of the imaging surface 800 opposite the subject (the rear focus state). The front focus state (d < 0) and the rear focus state (d > 0) together constitute the defocus state (|d| > 0).
  • In the front focus state (d < 0), of the light from the subject 802, the light flux that has passed through the pupil partial region 501 (pupil partial region 502) is condensed once and then spreads to a width Γ1 (Γ2) around the centroid position G1 (G2) of the flux, producing a blurred image on the imaging surface 800.
  • the blurred image is received by the sub-pixel 201 (sub-pixel 202) constituting each pixel arranged in the image sensor, and a first viewpoint image (second viewpoint image) is generated. Therefore, the first viewpoint image (second viewpoint image) is recorded as a subject image in which the subject 802 is blurred by the width ⁇ 1 ( ⁇ 2) at the gravity center position G1 (G2) on the imaging surface 800.
  • The blur width Γ1 (Γ2) of the subject image increases roughly in proportion to the magnitude |d| of the defocus amount. Likewise, the magnitude |p| of the image shift amount p of the subject image between the first viewpoint image and the second viewpoint image (that is, the difference G1 − G2 of the centroid positions of the light fluxes) increases roughly in proportion to |d|. In the rear focus state (d > 0), the image shift direction of the subject image between the first viewpoint image and the second viewpoint image is opposite to that in the front focus state, but the tendency is the same.
  • Accordingly, as the magnitude of the defocus amount of the first and second viewpoint images, or of the imaging signal obtained by adding the first and second viewpoint images, increases, the amount of image shift between the first viewpoint image and the second viewpoint image increases.
  • The viewpoint image correction processing and refocus processing proceed in three stages. In the first stage, the viewpoint change processing unit 155 calculates a contrast distribution representing the level of contrast from the pixel values of the captured image. In the second stage, based on the calculated contrast distribution, the viewpoint change processing unit 155 applies, for each pixel, a conversion that enlarges the difference between the plurality of viewpoint images (the first viewpoint image and the second viewpoint image) to enhance the parallax, generating a plurality of corrected viewpoint images (a first corrected viewpoint image and a second corrected viewpoint image). In the third stage, the refocus processing unit 156 relatively shift-adds the plurality of corrected viewpoint images (the first corrected viewpoint image and the second corrected viewpoint image) to generate a refocus image.
  • Let i and j be integers, let the position at the j-th row and i-th column of the image sensor 107 be denoted (j, i), and let the first and second viewpoint images of the pixel at position (j, i) be denoted A0(j, i) and B0(j, i), respectively.
  • First, the viewpoint change processing unit 155 calculates the luminance Y(j, i) from the Bayer-array captured image I(j, i) according to Equation (1), aligning the color centroids of R, G, and B at each position (j, i).
  • Next, it applies a band-pass filter [1, 2, −1, −4, −1, 2, 1] to the luminance Y(j, i) in the horizontal direction (the column, i, direction), which is the pupil division direction, to calculate the horizontal high-frequency component dY(j, i).
  • If necessary, it may also apply a high-frequency cut filter such as [1, 1, 1, 1, 1, 1, 1] in the vertical direction (the row, j, direction), which is not the pupil division direction, to suppress vertical high-frequency noise.
  • The viewpoint change processing unit 155 then calculates the normalized horizontal high-frequency component dZ(j, i) according to Equation (2).
  • the reason why the constant Y0 is added to the denominator is to prevent the expression (2) from diverging by dividing by zero.
  • the viewpoint change processing unit 155 may suppress high-frequency noise by applying a high-frequency cut filter process to the luminance Y (j, i) as necessary before normalization by Expression (2).
  • the viewpoint change processing unit 155 calculates the contrast distribution C (j, i) according to the equation (3).
  • the first line of Expression (3) indicates that the contrast distribution C (j, i) is set to 0 when the luminance of the captured image is lower than the predetermined luminance Yc.
  • the third line of Equation (3) indicates that the contrast distribution C (j, i) is set to 1 when the normalized high-frequency component dZ (j, i) is larger than the predetermined value Zc.
  • the second line of Expression (3) indicates that a value obtained by normalizing dZ (j, i) with Zc is a contrast distribution C (j, i).
  • the contrast distribution C (j, i) takes a value in the range of [0, 1]. The closer to 0, the lower the contrast, and the closer to 1, the higher the contrast.
  • FIG. 9 shows an example of the contrast distribution C (j, i) of the captured image obtained by Expression (3).
  • the white portion indicates that there are many horizontal high-frequency components and the contrast is high
  • the black portion indicates that there are few horizontal high-frequency components and the contrast is low.
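  • The following sketch implements the contrast-distribution calculation as described above. Since Equations (1) to (3) themselves are not reproduced in the text, the luminance is assumed precomputed and the forms of dZ and C follow the verbal description; the constants Yc, Zc, and Y0 are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve1d

def contrast_distribution(Y, Yc=0.05, Zc=0.3, Y0=0.01):
    """Contrast distribution C(j, i) in [0, 1] from a luminance image Y(j, i)."""
    # Horizontal (pupil-division direction) band-pass filter from the text.
    bpf = np.array([1, 2, -1, -4, -1, 2, 1], dtype=float)
    dY = convolve1d(Y, bpf, axis=1)    # horizontal high-frequency component
    dZ = np.abs(dY) / (Y + Y0)         # normalized component; Y0 avoids /0
    C = np.clip(dZ / Zc, 0.0, 1.0)     # dZ/Zc, saturating at 1 when dZ > Zc
    C[Y < Yc] = 0.0                    # low-luminance positions are set to 0
    return C
```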
  • parallax enhancement processing of the parallax image
  • an image shift distribution of the viewpoint image is calculated.
  • The image shift distribution is obtained by performing a correlation operation on the image pair consisting of the first viewpoint image A0 and the second viewpoint image B0 and calculating the relative positional shift amount between the pair. Various methods are known for the correlation operation; for example, the viewpoint change processing unit 155 can obtain a correlation value by summing the absolute values of the differences between the pair of images, as shown in Equation (4).
  • A0i and B0i represent the luminance of the i-th pixel of the first viewpoint image A0 and the second viewpoint image B0, respectively.
  • Ni is a number representing the number of pixels used in the calculation, and is appropriately set according to the minimum calculation range of the image shift distribution.
  • The viewpoint change processing unit 155 calculates, as the image shift amount, the k that minimizes COR(k) in Equation (4). That is, with the pair of images shifted relatively by k pixels, the absolute difference between the i-th pixels of A0 and B0 in the row direction is taken and summed over a plurality of pixels in the row direction; the k for which this sum COR(k) is smallest is regarded as the image shift amount between A0 and B0.
  • More generally, the correlation calculation is defined by Equation (5).
  • A0ij and B0ij represent the luminance at the j-th row and i-th pixel of the first viewpoint image A0 and the second viewpoint image B0, respectively.
  • ni represents the number of pixels used in the calculation, and nj represents the number of rows, in the column direction, of the image pair over which the correlation calculation is performed.
  • the viewpoint change processing unit 155 calculates k that minimizes the COR (k) in the equation (5) as the image deviation amount in the same manner as the equation (4). Note that the subscript k is added only to i and is independent of j. This corresponds to performing the correlation calculation while moving the two-dimensional image only in the pupil division direction.
  • the viewpoint change processing unit 155 can calculate the image shift amount of each region of the first viewpoint image A0 and the second viewpoint image B0 according to the equation (5), and calculate the image shift distribution.
  • In the present embodiment, the refocus processing applies the sharpening described later only to high-contrast portions. Therefore, in the contrast distribution calculation described above, the correlation calculation of Equation (5) need not be performed for regions where the contrast distribution C(j, i) is 0 (that is, positions whose luminance is lower than the predetermined luminance Yc).
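  • A minimal sketch of the correlation of Equation (4), under the assumptions that COR(k) is a sum of absolute differences over the overlapping pixels and that normalizing by the overlap length is acceptable; the two-dimensional Equation (5) would additionally sum the same quantity over nj rows.

```python
import numpy as np

def image_shift_1d(a0, b0, k_max=8):
    """Return the relative shift k (in pixels) that minimizes COR(k) for a
    pair of luminance rows a0, b0 from the two viewpoint images."""
    n = len(a0)
    best_k, best_cor = 0, np.inf
    for k in range(-k_max, k_max + 1):
        # Overlapping region of the pair shifted relatively by k pixels.
        if k >= 0:
            diff = a0[k:] - b0[:n - k]
        else:
            diff = a0[:n + k] - b0[-k:]
        cor = np.sum(np.abs(diff)) / diff.size  # normalize by overlap length
        if cor < best_cor:
            best_cor, best_k = cor, k
    return best_k
```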
  • As described with reference to FIG. 7, in the pupil division performed by the per-pixel microlens and the plurally divided photoelectric conversion units, the pupil is divided only gently because of diffraction blur. In a plurality of viewpoint images corresponding to such a gently divided pupil intensity distribution, the effective F-number in the pupil division direction does not become sufficiently dark (large), so the effective depth of focus is difficult to deepen.
  • Therefore, in the present embodiment, the viewpoint change processing unit 155 performs parallax enhancement processing on the plurality of viewpoint images (the first viewpoint image and the second viewpoint image), enlarging the difference between the viewpoint images for each pixel to emphasize the parallax.
  • the viewpoint change processing unit 155 generates a plurality of corrected viewpoint images (first corrected viewpoint image and second corrected viewpoint image) by the parallax enhancement processing.
  • Specifically, the viewpoint change processing unit 155 enlarges the difference between the first viewpoint image A0(j, i) and the second viewpoint image B0(j, i) according to Equations (6) and (7), generating a first corrected viewpoint image A(j, i) and a second corrected viewpoint image B(j, i).
  • k (0 ⁇ k ⁇ 1) and ⁇ (0 ⁇ ⁇ ⁇ 1) are real numbers.
  • FIG. 10 shows an example in which the difference between the viewpoint images is enlarged at a predetermined position by the parallax enhancement processing.
  • Examples of the first viewpoint image A0 (101) and the second viewpoint image B0 (102) before the parallax enhancement processing are shown by broken lines, and examples of the first corrected viewpoint image A (103) and the second corrected viewpoint image B (104) after the parallax enhancement processing by Equations (6) and (7) are shown by solid lines.
  • The horizontal axis indicates the 1152nd to 1156th pixels in units of sub-pixels, and the vertical axis indicates the magnitude of the parallax at each pixel.
  • Portions where the difference between the viewpoint images is small change little (for example, near the 1154th pixel), while portions where the difference is large are further enlarged (for example, near the 1153rd and 1155th pixels); thus the parallax is emphasized.
  • In this way, the viewpoint change processing unit 155 generates, from the plurality of viewpoint images, a plurality of corrected viewpoint images in which the differences between the viewpoint images are enlarged and the parallax is emphasized. Note that by performing the calculations using the signals of the plural sub-pixels contained in each pixel, as in Equations (6) and (7), the viewpoint change processing unit 155 can keep the load of the parallax enhancement processing low.
  • In Equations (6) and (7), increasing the value of k to strengthen the parallax enhancement increases the parallax between the plurality of corrected viewpoint images (the first corrected viewpoint image and the second corrected viewpoint image). Increasing k can therefore darken (increase) the effective F-number in the division direction and deepen the effective depth of focus in the division direction. However, excessive parallax enhancement increases the noise of the corrected viewpoint images and lowers the S/N.
  • the strength of the parallax enhancement conversion is adjusted in a region-adaptive manner based on the contrast distribution C (j, i). For example, in the region with high contrast, the viewpoint change processing unit 155 increases the parallax enhancement intensity by increasing the parallax, and darkens (increases) the effective F value in the division direction. On the other hand, in a low contrast area, the intensity of parallax enhancement is weakened to maintain S / N, and the S / N reduction is suppressed.
  • By increasing the parallax between the plurality of corrected viewpoint images (the first corrected viewpoint image and the second corrected viewpoint image), the effective F-number in the division direction can be darkened (increased) and the effective depth of focus in the division direction can be deepened.
  • A refocus image is then generated using the plurality of corrected viewpoint images (the first corrected viewpoint image and the second corrected viewpoint image), whereby the refocus effect (the change in the image caused by refocusing) can be emphasized.
  • Note that the viewpoint change processing unit 155 can also suppress S/N degradation by, as necessary, making the parallax enhancement stronger in high-luminance regions of the captured image than in low-luminance regions. Similarly, making the parallax enhancement stronger in regions with many high-frequency components than in regions with few high-frequency components likewise suppresses S/N degradation.
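  • Since the exact forms of Equations (6) and (7) are not reproduced in the text, the following sketch uses one plausible difference-amplifying conversion with the described behavior: small differences between the viewpoint images change little while large ones are enlarged, and the strength can be made region-adaptive through the contrast distribution C.

```python
import numpy as np

def enhance_parallax(a0, b0, k=0.5, C=None):
    """Generate corrected viewpoint images by enlarging the per-pixel
    difference between the first (a0) and second (b0) viewpoint images.
    k (0 <= k <= 1) sets the enhancement strength; if C (the contrast
    distribution) is given, the enhancement weakens where contrast is low."""
    gain = k if C is None else k * C
    d = a0 - b0                    # per-pixel difference between viewpoints
    a = a0 + 0.5 * gain * d        # push A away from B ...
    b = b0 - 0.5 * gain * d        # ... and B away from A, enlarging parallax
    return a, b
```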
  • FIG. 11 schematically shows the first corrected viewpoint image Ai and the second corrected viewpoint image Bi consisting of the signals of the i-th pixels, in the column direction, of the image sensor 107 arranged on the imaging surface 800. The first corrected viewpoint image Ai contains the light reception signal of the light flux incident on the i-th pixel at the principal ray angle θa (corresponding to the pupil partial region 501 in FIG. 8A), and the second corrected viewpoint image Bi contains that of the light flux incident at the principal ray angle θb (corresponding to the pupil partial region 502 in FIG. 8A). That is, the first corrected viewpoint image Ai and the second corrected viewpoint image Bi contain incident angle information in addition to light intensity distribution information.
  • Consequently, the refocus processing unit 156 can generate a refocus image on a given virtual imaging plane. Specifically, it translates the first corrected viewpoint image Ai along the angle θa and the second corrected viewpoint image Bi along the angle θb to the virtual imaging plane 810, and adds the translated corrected viewpoint images pixel by pixel to generate the refocus image on the virtual imaging plane 810. In the example of FIG. 11, translating the first corrected viewpoint image Ai along the angle θa to the virtual imaging plane 810 corresponds to shifting it by +0.5 pixels in the column direction, and translating the second corrected viewpoint image Bi along the angle θb corresponds to shifting it by −0.5 pixels. Hence the combination of Ai and Bi on the virtual imaging plane 810 is obtained by shifting the two images relatively by +1 pixel, and the refocus image on the virtual imaging plane 810 can be generated by adding Ai and the shifted image Bi+1 pixel by pixel.
  • More generally, the refocus processing unit 156 shift-adds the first corrected viewpoint image A and the second corrected viewpoint image B according to Equation (8) with an integer shift amount s, thereby generating the refocus image I(j, i; s) on the virtual imaging plane corresponding to each integer shift amount s.
  • Note that the refocus processing unit 156 may, as necessary, first apply demosaicing to the first corrected viewpoint image A and the second corrected viewpoint image B and perform the shift-add processing on the demosaiced corrected viewpoint images. It may also, as necessary, generate interpolated signals between the pixels of the first corrected viewpoint image A and the second corrected viewpoint image B to produce a refocus image corresponding to a non-integer shift amount. In this way, the position of the virtual imaging plane can be changed at a finer granularity.
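  • Reading the shift-add description as I(j, i; s) = A(j, i) + B(j, i + s) for an integer shift s (an assumed form; Equation (8) itself is not reproduced in the text), the refocus image can be sketched as:

```python
import numpy as np

def refocus(A, B, s):
    """Shift-add refocus: add the first corrected viewpoint image A to the
    second corrected viewpoint image B shifted by s pixels in the column
    direction (border wrap-around from np.roll is ignored for brevity)."""
    return A + np.roll(B, -s, axis=1)  # out[j, i] = A[j, i] + B[j, i + s]

# s = 0 reproduces the A+B image on the actual imaging surface; in the example
# of FIG. 11, s = +1 corresponds to the virtual imaging plane 810.
```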
  • the first corrected viewpoint image A and the second corrected viewpoint image B are shift-added to generate a refocus image on the virtual imaging plane. Since the images of the first corrected viewpoint image A and the second corrected viewpoint image B are shifted by shift addition, the relative shift amount (also referred to as image shift amount) with respect to the image before the refocus processing can be known.
  • the integer shift amount s by the refocus processing described above corresponds to this image shift amount.
  • the refocus processing unit 156 can realize the contour enhancement of the subject in the refocus image by performing the sharpness process on the region corresponding to the image shift amount s.
  • Unsharp mask processing applies a blur filter to a local region (the original signal) centered on the pixel of interest and reflects the difference between the pixel values before and after the blurring in the pixel value of the pixel of interest, thereby realizing contour enhancement.
  • the unsharp mask process for the pixel value P to be processed is calculated according to Equation (9).
  • P ′ is a pixel value after application of the processing
  • R is a radius of the blurring filter
  • T is an application amount (%).
  • F (i, j, R) is a pixel value obtained by applying a blurring filter having a radius R to the pixel P (i, j).
  • a known method such as Gaussian blur can be used.
  • Gaussian blur is a process of averaging by applying weighting according to a Gaussian distribution according to the distance from the pixel to be processed, and a natural processing result can be obtained.
  • the size of the radius R of the blurring filter is related to the wavelength of the frequency on the image to which the sharpness processing is to be applied. That is, as R is smaller, a finer pattern is emphasized, and as R is larger, a gentler pattern is emphasized.
  • The application amount T(i, j) varies the amount of contour enhancement applied by the unsharp mask processing according to the image shift distribution. Specifically, let pred(i, j) be the image shift amount at each pixel position and s the shift amount used in the refocus processing; the application amount T is increased in regions where pred(i, j) is close to s (within a predetermined number of pixels), that is, regions that are in focus on the virtual imaging plane.
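  • The following sketch combines Equation (9), P' = P + (P − F(i, j, R)) × T/100, with an image-shift-adaptive application amount; the linear falloff of T with |pred − s| and the tolerance of one pixel are assumptions consistent with the description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_unsharp(P, pred, s, R=2.0, T_max=100.0, tol=1.0):
    """P: image to sharpen; pred: per-pixel image shift amount pred(i, j);
    s: shift amount used by the refocus processing; R: blur radius;
    T_max: maximum application amount in percent."""
    F = gaussian_filter(P, sigma=R)     # blurred local signal F(i, j, R)
    # Application amount T(i, j): large where |pred - s| is small, i.e. in
    # regions that are in focus on the virtual imaging plane.
    T = T_max * np.clip(1.0 - np.abs(pred - s) / tol, 0.0, 1.0)
    return P + (P - F) * (T / 100.0)    # Eq. (9)
```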
  • the refocusable range represents a range of a focus position that can be changed by the refocus process.
  • FIG. 13 schematically illustrates a refocusable range according to the present embodiment.
  • Let the permissible circle of confusion be δ and the aperture value of the imaging optical system be F; the depth of field at the aperture value F is ±Fδ. By contrast, the effective depth of field of each first corrected viewpoint image (second corrected viewpoint image) is ±NH·Fδ, NH times deeper, and the in-focus range is expanded NH times.
  • Within this effective depth of field, the refocus processing unit 156 can readjust (refocus) the focus position after shooting by the refocus processing that translates the first corrected viewpoint image (second corrected viewpoint image) along the principal ray angle θa (θb) shown in FIG. 11. The defocus amount d from the imaging surface for which the focus position can be readjusted (refocused) after shooting is limited, and the refocusable range of the defocus amount d is approximately the range of Expression (10). Here, the permissible circle of confusion is defined by δ = 2ΔX (the reciprocal of the Nyquist frequency 1/(2ΔX) of the pixel period ΔX).
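  • Putting the quantities above together, a plausible reconstruction of Expression (10) (the expression itself is not reproduced in the text) is:

```latex
% Refocusable range: the defocus amount d must stay within the effective
% depth of focus of each corrected viewpoint image.
\lvert d \rvert \;\lesssim\; N_{H}\, F\, \delta,
\qquad \delta = 2\,\Delta X
```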
  • FIGS. 14A to 14C show the principle of the viewpoint movement processing. An image sensor 107 (not shown) is arranged on the imaging surface 600, and, as before, the exit pupil of the imaging optical system is divided into two, the pupil partial region 501 and the pupil partial region 502.
  • FIG. 14A shows an example in which the blurred image Γ1+Γ2 of a foreground subject q2 overlaps the in-focus image p1 of the main subject q1 (a condition also referred to as foreground blur covering the main subject).
  • 14B and 14C show the example shown in FIG. 14A separately for the light beam passing through the pupil partial region 501 and the light beam passing through the pupil partial region 502 of the imaging optical system.
  • The light flux from the main subject q1 passes through the pupil partial region 501 and forms the image p1 in a focused state, while the light flux from the near-side subject q2 passes through the pupil partial region 501 and, being in a defocused state, spreads as the blurred image Γ1. These light fluxes are received by the sub-pixels 201 of different pixels of the image sensor 107, generating the first viewpoint image.
  • In the first viewpoint image, the image p1 of the main subject q1 and the blurred image Γ1 of the foreground subject q2 are received without overlapping. This is because, in the predetermined region (near the image p1 of the subject q1), the first viewpoint image is the viewpoint image, among the plurality of viewpoint images (the first viewpoint image and the second viewpoint image), in which the closest subject (the blurred image Γ1 of the subject q2) is captured over the narrowest range. In other words, in the first viewpoint image, the blurred image Γ1 of the subject q2 appears little, and the contrast evaluation value is highest.
  • Meanwhile, the light flux from the main subject q1 passes through the pupil partial region 502 and forms the image p1 in a focused state, while the light flux from the near-side subject q2 passes through the pupil partial region 502 and, being in a defocused state, spreads as the blurred image Γ2. These light fluxes are received by the sub-pixels 202 of the pixels of the image sensor 107, generating the second viewpoint image. In the second viewpoint image, the image p1 of the main subject q1 and the blurred image Γ2 of the foreground subject q2 are received overlapping each other.
  • the viewpoint change processing unit 155 inputs the above-described first viewpoint image A (j, i) and second viewpoint image B (j, i).
  • Next, a table function T(j, i) corresponding to a predetermined region R and its boundary width σ is calculated. The table function T(j, i) takes the value 1 inside the predetermined region R and 0 outside it, and changes roughly continuously from 1 to 0 across the boundary width σ of the predetermined region R.
  • the viewpoint change processing unit 155 may set the predetermined area to be circular or any other shape as necessary, and may set a plurality of predetermined areas and a plurality of boundary widths.
  • Next, the viewpoint change processing unit 155 calculates the first weight coefficient Wa(j, i) of the first viewpoint image A(j, i) from a real coefficient w (−1 ≤ w ≤ 1) according to Equation (12A), and the second weight coefficient Wb(j, i) of the second viewpoint image B(j, i) according to Equation (12B).
  • The viewpoint change processing unit 155 then generates an output image I(j, i) according to Equation (13), using the first viewpoint image A(j, i), the second viewpoint image B(j, i), the first weight coefficient Wa(j, i), and the second weight coefficient Wb(j, i).
  • the viewpoint change processing unit 155 may generate the output image Is (j, i) according to the equation (14A) or the equation (14B) in combination with the refocus processing using the shift amount s.
  • the output image Is (j, i) output in this way is an image whose viewpoint has moved and an image whose focus position has been readjusted (refocused).
  • In this way, a plurality of viewpoint images are weighted and combined to generate the output image. That is, to reduce the foreground blur over the main subject using Equation (13), the viewpoint change processing unit 155 sets, in the vicinity of the image p1, the first weight coefficient Wa of the first viewpoint image, in which the overlap between the image p1 and the blurred image Γ1 is small, larger than the second weight coefficient Wb of the second viewpoint image, in which the overlap between the image p1 and the blurred image Γ2 is large, and generates the output image.
  • Generally, in a predetermined region of the image, the viewpoint change processing unit 155 minimizes the weight coefficient of the viewpoint image in which the closest subject is captured over the widest range, or maximizes the weight coefficient of the viewpoint image in which the closest subject is captured over the narrowest range. In addition, in a predetermined region of the output image, it minimizes the weight coefficient of the viewpoint image with the smallest contrast evaluation value, or maximizes the weight coefficient of the viewpoint image with the largest contrast evaluation value.
  • In regions other than the predetermined region, where the viewpoint movement processing is not performed, the viewpoint change processing unit 155 may generate the output image by adding the first and second weight coefficients substantially evenly so as not to change the blur shape of the imaging optical system. A method for generating an output image in which the weight coefficients (that is, the addition ratio) are changed according to the user's designation is described later; the user may also designate the predetermined region in which the viewpoint movement processing is performed.
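  • A minimal sketch of the weighted combination of Equations (12A) to (13), assuming the weight forms Wa = 1 − wT and Wb = 1 + wT (the equations themselves are not reproduced in the text; these forms merely satisfy the described behavior, reducing to an even A + B addition where T = 0):

```python
import numpy as np

def move_viewpoint(A, B, T, w):
    """A, B: first and second viewpoint images; T: table function in [0, 1]
    marking the predetermined region R; w: real coefficient (-1 <= w <= 1).
    w < 0 weights the first viewpoint image more heavily inside the region,
    which reduces foreground blur over the main subject in the example above."""
    Wa = 1.0 - w * T          # first weight coefficient (Eq. (12A)-like)
    Wb = 1.0 + w * T          # second weight coefficient (Eq. (12B)-like)
    return Wa * A + Wb * B    # output image I(j, i) (Eq. (13)-like)
```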
  • 15A to 15C show the relationship between the pupil partial areas (501, 502) received by the sub-pixel 201 and the sub-pixel 202 of each pixel and the exit pupil 400 of the imaging optical system.
  • FIG. 15A shows a case where the exit pupil distance Dl of the imaging optical system and the set pupil distance Ds of the image sensor 107 are the same.
  • the exit pupil 400 of the imaging optical system is divided into pupils approximately equally by the pupil partial area 501 and the pupil partial area 502 in the same manner for both the central image height and the peripheral image height.
  • FIG. 15B shows a case where the exit pupil distance Dl of the imaging optical system is shorter than the set pupil distance Ds of the image sensor 107.
  • In this case, at peripheral image heights, the exit pupil 400 of the imaging optical system is divided non-uniformly by the pupil partial region 501 and the pupil partial region 502. At the peripheral image height on one side, the effective aperture value of the first viewpoint image corresponding to the pupil partial region 501 becomes smaller (brighter) than the effective aperture value of the second viewpoint image corresponding to the pupil partial region 502. At the peripheral image height on the opposite side, conversely, the effective aperture value of the first viewpoint image corresponding to the pupil partial region 501 becomes larger (darker) than that of the second viewpoint image corresponding to the pupil partial region 502.
  • FIG. 15C shows the case where the exit pupil distance Dl of the imaging optical system is longer than the set pupil distance Ds of the image sensor 107. Also in this case, at the peripheral image height, the exit pupil 400 of the imaging optical system is non-uniformly pupil-divided by the pupil partial area 501 and the pupil partial area 502.
• In FIG. 15C, at one peripheral image height, the effective aperture value of the first viewpoint image corresponding to the pupil partial area 501 becomes larger (darker) than the effective aperture value of the second viewpoint image corresponding to the pupil partial area 502, while at the opposite peripheral image height it conversely becomes smaller (brighter) than that of the second viewpoint image.
• As the pupil division becomes non-uniform, the effective F-numbers (effective aperture values) of the first viewpoint image and the second viewpoint image also become non-uniform; as a result, the blur in one of the first and second viewpoint images spreads more widely while the blur in the other spreads less.
• Therefore, in a predetermined area of the output image, it is desirable that the viewpoint change processing unit 155, as necessary, minimize the weighting coefficient of the viewpoint image with the smallest effective aperture value, or maximize the weighting coefficient of the viewpoint image with the largest effective aperture value. By performing such viewpoint movement processing, front blur over the main subject can be reduced.
• Each viewpoint image is an image obtained through half of the original pupil region; in the case of pupil division into two in the horizontal direction, the aperture diameter in the horizontal direction is therefore halved, so the depth of field in the horizontal direction is quadrupled.
• As a result, the first viewpoint image or the second viewpoint image has a depth of field that is, as a vertical-and-horizontal average, twice that of the image (A+B image) obtained by combining the first viewpoint image and the second viewpoint image.
• The viewpoint change processing unit 155 can therefore generate an image with an enlarged depth of field by generating a composite image in which the addition ratio of the first viewpoint image and the second viewpoint image is changed to other than 1:1. Further, by applying the above-described unsharp mask processing based on the contrast distribution and the image shift distribution to the image whose addition ratio has been changed, the viewpoint change processing unit 155 can enhance the sharpness of the composite image with the enlarged depth of field.
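• The following is a hedged sketch of this depth enlargement (Python; SciPy assumed available): the addition ratio is moved away from 1:1, and a plain uniform unsharp mask stands in for the contrast- and image-shift-guided sharpening described above. The parameter values are illustrative, not taken from the patent.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def deepened_composite(a_image, b_image, x, sigma=2.0, amount=0.5):
        # Move the A:B addition ratio away from 1:1; weighting toward a
        # single viewpoint deepens the depth of field, since each
        # viewpoint image sees only half of the pupil.
        composite = (1.0 + x) * a_image + (1.0 - x) * b_image
        # Uniform unsharp mask as a stand-in for the adaptive sharpening
        # (based on contrast and image shift distributions) in the text.
        blurred = gaussian_filter(composite, sigma=sigma)
        return composite + amount * (composite - blurred)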
• In step S101, the image sensor 107 performs imaging in accordance with an instruction from the control unit 121.
• Next, the image sensor 107 outputs the parallax image data. Specifically, the image sensor 107 outputs the above-described viewpoint images (the A+B image and the A image) as image data in a single file format.
  • the recording medium 133 temporarily stores image data output from the image sensor 107.
• The image processing unit 125 reads the parallax image data in accordance with an instruction from the control unit 121. For example, the image processing unit 125 acquires the image data stored in the recording medium 133 using the image acquisition unit 151. At this time, the image processing unit 125 generates the B image from the A+B image so as to obtain, for example, a first viewpoint image (A image) that is an image of the left viewpoint and a second viewpoint image (B image) that is an image of the right viewpoint.
• The control unit 121 controls the operation unit 132 and the output of the image processing unit 125 to perform the viewpoint image operation processing described later, that is, viewpoint movement and focus adjustment on the captured image.
• When the viewpoint image operation processing is completed, the control unit 121 ends the series of processes.
• In S201, the control unit 121 causes the display unit 131 to display a user interface (hereinafter simply referred to as a UI) including a viewpoint movement UI and a focus adjustment UI, together with the captured image.
• In S202, the control unit 121 determines whether or not to perform viewpoint movement based on a user operation input via the operation unit 132.
• If the input user operation indicates that viewpoint movement is to be performed, the control unit 121 determines that viewpoint movement is performed and advances the processing to S203.
• On the other hand, if the input user operation does not indicate viewpoint movement, the control unit 121 determines that the viewpoint is not to be moved and advances the processing to S207.
• In S203, the control unit 121 acquires, via the operation unit 132, a user operation for operating the viewpoint movement UI.
• An example of the viewpoint movement UI displayed on the display unit 131 is shown in FIG. 19A.
  • an image (a captured image or a viewpoint image) is displayed in a part of the area 1000 that forms the UI.
• In the present embodiment, the viewpoint-moved image is generated using only the left and right viewpoint images.
• Therefore, the viewpoint movement UI arranges the slider 1001 and the slider bar 1002 in the horizontal direction so that the user can operate the operation member in the direction in which the viewpoint changes; as a result, the user can operate the viewpoint movement more intuitively.
• In S204, the control unit 121 uses the image processing unit 125 to generate a composite image in which the addition ratio of the viewpoint images is changed. Specifically, the image processing unit 125 acquires the position of the slider 1001 designated in S203 via the operation information acquisition unit 154, and generates an image whose viewpoint has been moved by changing the addition ratio of the first viewpoint image and the second viewpoint image in accordance with the position of the slider 1001 and combining them (that is, viewpoint movement processing).
• For this viewpoint movement processing, when the value at the right end of the slider bar 1002 is defined as 1, the value at the center as 0, and the value at the left end as -1, the image processing unit 125 changes the ratio between the first viewpoint image and the second viewpoint image to (1 + x) : (1 - x) when the slider 1001 is at position x.
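• As a minimal sketch (names hypothetical), the S204 mapping from slider position to addition ratio can be written as follows in Python; x is assumed to be normalized to [-1, 1] as described above.

    def viewpoint_moved_image(a_image, b_image, x):
        # x = +1 at the right end of the slider bar, 0 at the center,
        # -1 at the left end; the A:B ratio becomes (1 + x) : (1 - x).
        # The weights always sum to 2, matching plain A + B addition.
        return (1.0 + x) * a_image + (1.0 - x) * b_image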
• In S205, the control unit 121 uses the image processing unit 125 to apply development processing to the image synthesized in S204.
• The development processing will be described later with reference to its flowchart.
• In S206, the control unit 121 displays the image to which the development processing was applied in S205 on the display unit 131.
• In S207, the control unit 121 determines whether or not to perform focus adjustment based on a user operation input via the operation unit 132. If the input user operation indicates that focus adjustment is to be performed, the control unit 121 determines that focus adjustment is performed and advances the processing to S208. On the other hand, if the input user operation does not indicate focus adjustment, the control unit 121 determines that focus adjustment is not performed and ends the series of processes.
  • FIG. 19A shows an example of the focus adjustment UI.
• While the viewpoint movement UI provides a slider bar in the direction in which the viewpoint is moved, the focus adjustment UI is installed in a different direction (at a different angle) from the viewpoint movement UI.
• Specifically, the slider bar 1003 and the slider 1004 of the focus adjustment UI are arranged in a direction orthogonal to the slider bar 1002 of the viewpoint movement UI (that is, in the vertical direction).
• The control unit 121 controls focus adjustment so that the back-focus state becomes stronger when the slider 1004 is moved upward, and so that the front-focus state becomes stronger when it is moved downward.
  • the focus adjustment range corresponds to the refocusable range described above, and is calculated according to Equation (10).
• In step S209, the control unit 121 uses the image processing unit 125 to calculate the focus adjustment position based on the slider position designated in step S208, and performs the above-described refocus processing.
• That is, the image processing unit 125 determines the defocus amount (or shift amount) within the refocusable range based on the position of the slider 1004 with respect to the slider bar 1003.
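• A minimal sketch of the refocus step (Python with NumPy), assuming the horizontal pupil division of this embodiment; the integer shift s is assumed to come from the slider-to-defocus mapping just described, and the wrap-around edge handling of np.roll is a simplification.

    import numpy as np

    def refocus(a_image, b_image, s):
        # Shift the first viewpoint image by the integer amount s along
        # the pupil-division (horizontal) axis and add the second
        # viewpoint image, re-imaging the scene on a virtual plane.
        return np.roll(a_image, s, axis=1) + b_image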
• In step S210, the control unit 121 performs development processing using the image processing unit 125. Then, in step S211, the control unit 121 displays the developed image on the display unit 131, ends the series of operations related to the parallax image operation processing, and returns the processing to the caller.
• In step S301, the image processing unit 125 performs white balance processing, applying a gain to each of the R, G, and B colors so that the R, G, and B values in a white region become equal.
• In step S302, the image processing unit 125 performs demosaicing processing. Specifically, the image processing unit 125 interpolates the input image in each defined direction and then performs direction selection, thereby generating, for each pixel, a color image signal containing the interpolation results for the three primary colors R, G, and B.
• In step S303, the image processing unit 125 performs gamma processing.
• In step S304, the image processing unit 125 performs various color adjustment processes, such as noise reduction, saturation enhancement, hue correction, and edge enhancement, to improve the appearance of the image.
• In step S305, the image processing unit 125 compresses the color image signal color-adjusted in step S304 using a predetermined method such as JPEG, and outputs the compressed image data.
• Finally, the control unit 121 records the image data output from the image processing unit 125 on the recording medium 133, ends the series of operations related to the development processing, and returns the processing to the caller.
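• A condensed, hedged sketch of this S301-S305 development flow follows (Python); demosaicing and color adjustment are reduced to identity placeholders, since only their order and roles are described above, and the JPEG compression and recording steps are omitted.

    import numpy as np

    def apply_white_balance(img, gains):
        # S301: per-channel gains so that whites come out with equal R, G, B.
        return img * np.asarray(gains)

    def demosaic(img):
        # S302: directional interpolation and direction selection (placeholder).
        return img

    def gamma_correct(img, gamma=2.2):
        # S303: simple power-law tone curve as a stand-in.
        return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

    def color_adjust(img):
        # S304: noise reduction, saturation, hue, edge enhancement (placeholder).
        return img

    def develop(rgb, gains=(2.0, 1.0, 1.5)):
        # Steps in the order of the flowchart; compression (S305) and
        # recording (handled by the control unit) are omitted here.
        return color_adjust(gamma_correct(demosaic(apply_white_balance(rgb, gains))))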
  • FIG. 19B shows an example in which a composite image that has been subjected to the refocus processing by operating the focus adjustment UI is displayed.
• Here, the control unit 121 performs focus adjustment (refocus processing) using the image processing unit 125 so that the front-focus state becomes stronger, and displays the resulting output image.
• FIG. 19C shows an example in which the slider 1001 of the viewpoint movement UI has been moved further to the right relative to the preceding figure.
• In this case, the control unit 121 displays an output image in which the depth of field is enlarged by performing the viewpoint movement processing (using the image processing unit 125).
• FIG. 19D shows an example in which the viewpoint movement processing is performed by moving the slider 1001 of the viewpoint movement UI to the left relative to FIG. 19B, and a composite image in which the depth of field is enlarged is displayed.
  • FIG. 20 shows an example in which a UI that can change the degree of enhancement in the above-described parallax enhancement processing and sharpness processing is further added.
  • the control unit 121 arranges a slider 1005 and a slider bar 1006 that can change the degree of emphasis.
  • the operation on the slider 1005 changes the parameter corresponding to the variable k in the above-described parallax enhancement processing or the application amount T in the sharpness processing, and changes the degree of enhancement in the displayed composite image.
• In the present embodiment, the viewpoint movement UI is arranged in the horizontal direction in order to perform the viewpoint movement processing in the horizontal direction based on the signal obtained by an image sensor in which each pixel is divided in the horizontal direction.
• However, if an image sensor in which each pixel is divided in a different direction is used, the direction in which the viewpoint movement UI is arranged may be aligned with that division direction (for example, the vertical direction).
• Further, the focus adjustment UI may be arranged in a direction different from the viewpoint movement UI so that the distinction between the two UIs becomes clearer, or it may be kept in the vertical direction so that the operation on the focus position can be performed more intuitively.
• As described above, in the present embodiment, an operation for moving the viewpoint and an operation on the focus position are received, and a composite image corresponding to those operations is generated and displayed.
• In this way, the user can perform viewpoint movement, depth of field enlargement, and focus position adjustment (refocus) in parallel.
• In other words, an operation for displaying a viewpoint-changed image and an operation for displaying an image with a changed focus position can be performed in parallel.
• Further, in the present embodiment, the viewpoint movement UI is arranged so as to be operable in the horizontal direction. In this way, the direction in which the viewpoint can be moved matches the direction in which the user operates, so the user can operate more intuitively.
• (Embodiment 2) Next, Embodiment 2 will be described.
• The configuration of the digital camera 100 of the present embodiment is the same as in Embodiment 1; only a part of the parallax image operation processing differs. For this reason, the same reference numerals are assigned to the same components and redundant descriptions are omitted; the differences will be mainly described.
• In step S401, the control unit 121 determines whether to display the image data in the vertical position. For example, the control unit 121 refers to the metadata of the input image to determine whether the image was captured in the vertical position. If the control unit 121 determines that the image was captured in the vertical position, it advances the processing to S402 to display it in the vertical position; if it determines that the image was not captured in the vertical position, it advances the processing to S403 to display it in the horizontal position.
• Note that the determination in S401 may instead be made based on the user setting the vertical position display via a button of the operation unit 132 or the like, or information indicating the pixel division direction of the image sensor may be acquired from the metadata and the display orientation determined according to that division direction.
• In S402, the control unit 121 displays the image in the vertical position, displays the viewpoint movement UI so as to be operable in the vertical direction, and displays the focus adjustment UI so as to be operable in the horizontal direction.
• In S403, the control unit 121 displays the image in the horizontal position, displays the viewpoint movement UI so as to be operable in the horizontal direction, and displays the focus adjustment UI so as to be operable in the vertical direction.
• Thereafter, the control unit 121 performs the processing of S202 to S211 in the same manner as in Embodiment 1, and returns the processing to the caller.
• As described above, in the present embodiment, the viewpoint movement UI and the focus adjustment UI are dynamically switched according to whether the input image is displayed in the vertical position or the horizontal position. In this way, even when captured images have different display orientations, the user can operate in a direction that matches the direction in which the viewpoint of the captured image can be moved.
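• A minimal sketch of this orientation-dependent layout; the boolean flag is assumed to come from the S401 metadata check, and the returned strings are illustrative only.

    def ui_axes(is_vertical_display):
        # S402: vertical (portrait) display -> viewpoint UI vertical,
        #       focus UI horizontal.
        # S403: horizontal (landscape) display -> the reverse.
        if is_vertical_display:
            return {"viewpoint_ui": "vertical", "focus_ui": "horizontal"}
        return {"viewpoint_ui": "horizontal", "focus_ui": "vertical"}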
• (Embodiment 3) Next, Embodiment 3 will be described.
• Embodiment 3 differs in that an image sensor in which each pixel is divided into two both in the horizontal direction and in the vertical direction is used. The configuration of the digital camera 100 is otherwise the same as in Embodiment 1. Accordingly, the same components are denoted by the same reference numerals and redundant descriptions are omitted; the differences will be mainly described.
  • FIG. 22 shows an array of pixels in a range of 4 columns ⁇ 4 rows and an array of subpixels in a range of 8 columns ⁇ 8 rows for the image sensor 107 of the present embodiment.
  • each pixel includes sub-pixels 221 to 224 arranged in 2 columns ⁇ 2 rows.
• The image sensor 107 can acquire a captured image (sub-pixel signals) by having a large number of the 4-column × 4-row pixels (8-column × 8-row sub-pixels) shown in FIG. 22 arranged on its surface.
• FIG. 23A shows a plan view of one pixel 200G shown in FIG. 22 viewed from the light receiving surface side (+z side) of the image sensor 107, and FIG. 23B shows a cross-sectional view of the a-a cross section of FIG. 23A viewed from the −y side.
• In the pixel 200G, photoelectric conversion units 2301 to 2304, divided into NH parts (two) in the x direction and NV parts (two) in the y direction, are formed.
  • the photoelectric conversion units 2301 to 2304 correspond to the subpixels 221 to 224, respectively.
• The first viewpoint image is generated by collecting the light reception signals of the sub-pixels 221 of the pixels; likewise, the second, third, and fourth viewpoint images are generated by collecting the light reception signals of the sub-pixels 222, 223, and 224 of the pixels, respectively.
• Note that the first to fourth viewpoint images are images with a Bayer array, and demosaicing processing may be applied to the first to fourth viewpoint images as necessary.
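• A minimal sketch of collecting the four viewpoint images (Python with NumPy); the placement of the sub-pixels 221 to 224 within each pixel's 2 × 2 block is an assumption made for illustration.

    import numpy as np

    def split_four_viewpoints(raw):
        # raw: (2H, 2W) sub-pixel array in which each pixel occupies a
        # 2x2 block (sub-pixels 221-224). Strided slicing collects one
        # sub-pixel per pixel, so each output is an (H, W) image that
        # keeps the Bayer array of the pixel grid.
        a = raw[0::2, 0::2]  # sub-pixel 221 -> first viewpoint image
        b = raw[0::2, 1::2]  # sub-pixel 222 -> second viewpoint image
        c = raw[1::2, 0::2]  # sub-pixel 223 -> third viewpoint image
        d = raw[1::2, 1::2]  # sub-pixel 224 -> fourth viewpoint image
        return a, b, c, d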
• Next, the viewpoint change processing unit 155 performs contrast processing as in Embodiment 1. That is, it calculates the luminance Y(j, i) according to Expression (1) for the captured image I(j, i) with the Bayer array, and further calculates the high frequency component dY(j, i), the high frequency component dZ(j, i), and the contrast distribution C(j, i).
  • the viewpoint change processing unit 155 performs parallax enhancement processing of the viewpoint image.
• Specifically, the viewpoint change processing unit 155 performs a conversion for parallax enhancement that enlarges the differences among the first viewpoint image A0(j, i) to the fourth viewpoint image D0(j, i) according to Expressions (15) and (16).
• By this processing, the viewpoint change processing unit 155 generates the first to fourth modified viewpoint images A(j, i) to D(j, i).
• Next, the refocus processing unit 156 performs refocus processing using the corrected viewpoint images output by the viewpoint change processing unit 155. Specifically, the refocus processing unit 156 shift-adds the first to fourth corrected viewpoint images A to D according to Expression (17) with an integer shift amount s.
• The refocus image I(j, i; s) is thus generated while the Bayer array is maintained.
• Thereafter, the image processing unit 125 performs demosaicing processing on the generated refocus image I(j, i; s).
• Alternatively, the demosaicing processing may first be applied to the first to fourth corrected viewpoint images, and the refocus processing unit 156 may then generate a refocus image by shift-adding the demosaiced first to fourth corrected viewpoint images. Further, the refocus processing unit 156 may, as necessary, generate interpolated signals between the pixels of the first to fourth corrected viewpoint images and generate a refocus image corresponding to a non-integer shift amount.
• For the image shift distribution, the viewpoint change processing unit 155 moves the two-dimensional image by k pixels only in the vertical pupil division direction and takes the difference between the pixels of the first viewpoint image A0 and the third viewpoint image C0; the correlation calculation equation, accumulated over a plurality of rows, is defined by Expression (18).
• Here, A0ij and C0ij represent the luminance of the i-th pixel in the j-th column of the first viewpoint image A0 and the third viewpoint image C0, respectively.
• Further, ni represents the number of pixels used in the calculation, and nj represents the number, in the column direction, of the pair of images over which the correlation calculation is performed.
• The viewpoint change processing unit 155 calculates, as the image shift amount, the k that minimizes COR′(k) in Expression (18). Note that the subscript k is added only to j and is independent of i; this corresponds to performing the correlation calculation while moving the two-dimensional image only in the vertical pupil division direction. In this way, the viewpoint change processing unit 155 can generate an image shift distribution by calculating the image shift amount of each region of the first viewpoint image A0 and the third viewpoint image C0. Although A0 and C0 are used in the present embodiment, the correlation calculation may instead be performed using B0 and D0, or using a signal obtained by adding A0 and B0 and a signal obtained by adding C0 and D0.
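• A minimal sum-of-absolute-differences search in the spirit of Expression (18), which is not reproduced here; the evaluation window bounds ni and nj and the wrap-around shift are simplifications.

    import numpy as np

    def vertical_image_shift(a0, c0, k_max, ni, nj):
        # Evaluate the correlation cost for each candidate shift k,
        # applied along the vertical (pupil-division) axis only, and
        # return the k that minimizes it, i.e. the image shift amount.
        best_k, best_cost = 0, np.inf
        for k in range(-k_max, k_max + 1):
            cost = np.abs(np.roll(a0, k, axis=0)[:ni, :nj] - c0[:ni, :nj]).sum()
            if cost < best_cost:
                best_k, best_cost = k, cost
        return best_k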
• Next, the viewpoint change processing unit 155 calculates the weighting coefficient of each viewpoint image according to Expressions (19A) to (19D) with a real coefficient w (−1 ≤ w ≤ 1).
• Here, Wa(j, i) is the first weighting coefficient of the first viewpoint image A(j, i),
• Wb(j, i) is the second weighting coefficient of the second viewpoint image B(j, i),
• Wc(j, i) is the third weighting coefficient of the third viewpoint image C(j, i), and
• Wd(j, i) is the fourth weighting coefficient of the fourth viewpoint image D(j, i).
• The viewpoint change processing unit 155 then generates the output image I(j, i) from the viewpoint images and their corresponding weighting coefficients according to Expression (20).
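• A minimal sketch of the roles of Expressions (19A) to (19D) and (20), which are not reproduced here: per-pixel weight maps derived from the real coefficient w are applied to the four viewpoint images and summed.

    def combine_four_viewpoints(a, b, c, d, wa, wb, wc, wd):
        # Output image in the role of Expression (20): a per-pixel
        # weighted sum of the four (corrected) viewpoint images. The
        # weight maps wa..wd are assumed to come from Expressions
        # (19A)-(19D) with the real coefficient w (-1 <= w <= 1).
        return wa * a + wb * b + wc * c + wd * d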
• Next, the configuration of the viewpoint movement UI and the focus adjustment UI according to the present embodiment will be described with reference to FIG. 24.
• In the present embodiment, pupil division is performed in two directions, the horizontal direction and the vertical direction, so the user can move the viewpoint both vertically and horizontally.
• Therefore, biaxial sliders and slider bars are provided so that the user can operate in each direction.
• Specifically, a horizontal slider bar 3001 and a slider 3002 are arranged for horizontal viewpoint movement, and a vertical slider bar 4001 and a slider 4002 are arranged for vertical viewpoint movement.
• For focus adjustment, a slider bar 5001 and a slider 5002 are arranged in a direction different from the viewpoint movement UI.
• In this example, the focus adjustment UI is arranged so as to pass through the intersection of the viewpoint movement UIs arranged in a cross shape, but it may be arranged at another position. By moving the sliders in the two viewpoint movement directions in this way, the weighting coefficients of the first to fourth viewpoint images can be changed, and images with different viewpoints can be generated.
• Note that the viewpoint movement UI and the focus adjustment UI can be operated in parallel (simultaneously), as in Embodiment 1.
• As described above, according to the present embodiment, an operation for moving the viewpoint in two dimensions (the horizontal and vertical directions) and an operation on the focus position are received, and a composite image corresponding to those operations is generated and displayed.
• In this way, the user can perform two-dimensional viewpoint movement and focus position adjustment (refocus) in parallel.
• (Embodiment 4) Next, Embodiment 4 will be described.
• In the present embodiment, the notation of the UI for operating the viewpoint images indicates, for example, the direction in which the viewpoint of the viewpoint image changes in response to the operation.
• The configuration of the digital camera 100 of the present embodiment is the same as in Embodiment 1; only a part of the parallax image operation processing differs. For this reason, the same reference numerals are assigned to the same components and redundant descriptions are omitted; the differences will be mainly described.
• In step S501, the control unit 121 determines whether to display the input image in the vertical position, in order to match the UI notation for operating the parallax images with the vertical position display of the image. For example, the control unit 121 refers to the metadata of the input image to determine whether the input image was captured in the vertical position. If the control unit 121 determines that the input image is to be displayed in the vertical position, it advances the processing to S502 so as to use the UI notation for the vertical position.
• In step S502, the control unit 121 further determines the rotation angle of the image.
• For example, the control unit 121 determines the angle (for example, a vertical position rotated 90 degrees to the right, or a vertical position rotated 90 degrees to the left) with reference to the metadata of the input image. If the control unit 121 determines that the input image was captured with 90-degree right rotation, it advances the processing to S504 in order to display it rotated 90 degrees to the right on the display unit 131.
• On the other hand, if the control unit 121 determines that the input image was captured with 90-degree left rotation, it advances the processing to S505 in order to display it rotated 90 degrees to the left.
• Note that the determinations in S501 and S502 may instead be made based on the user setting the vertical position display via a button of the operation unit 132 or the like, or information indicating the pixel division direction of the image sensor may be acquired and the UI notation for operating the parallax images determined according to that division direction.
• In step S503, the control unit 121 displays the image in the horizontal position and displays the UI for the viewpoint movement operation with horizontal (left and right) notation (FIG. 26).
• In step S504, the control unit 121 displays the image in the vertical position rotated 90 degrees to the right, and displays the notation of the UI for the viewpoint movement operation with the left end of the slider at the top and the right end at the bottom (FIG. 27).
• In step S505, the control unit 121 displays the image in the vertical position rotated 90 degrees to the left, and displays the notation of the UI for the viewpoint movement operation with the left end of the slider at the bottom and the right end at the top (FIG. 28).
• In this manner, the control unit 121 can switch the notation of the UI for the viewpoint movement operation in accordance with the direction of the viewpoint movement UI, and can also switch the notation in accordance with the rotation angle even for the same vertical position.
• The control unit 121 thereafter performs the processing of S202 to S211 shown in FIG. 25B in the same manner as in Embodiment 1, and returns the processing to the caller.
• As described above, in the present embodiment, the notation of the viewpoint movement UI is dynamically switched according to whether the input image is displayed in the vertical position or the horizontal position, and according to its rotation angle. In this way, even when captured images have different display orientations, the user can operate in a direction that matches the direction in which the viewpoint of the captured image can be moved.
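• A minimal sketch of the S503 to S505 label mapping; the orientation strings are hypothetical stand-ins for the metadata-derived state.

    def viewpoint_slider_labels(orientation):
        # Keep the slider's notation aligned with the physical viewpoint
        # direction after the image is rotated for display.
        if orientation == "landscape":              # S503
            return ("left", "right")
        if orientation == "rotated_90_right":       # S504
            return ("top", "bottom")                # left end shown at the top
        if orientation == "rotated_90_left":        # S505
            return ("bottom", "top")                # left end shown at the bottom
        raise ValueError("unknown orientation")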
• The present invention can also be realized by processing in which a program that implements one or more functions of the above-described embodiments is supplied to a system or apparatus via a network or storage medium, and one or more processors in a computer of the system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that implements one or more functions.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

An image processing apparatus according to the present invention is provided with: acquisition means for acquiring an image signal including the intensity of, and angle information regarding, an incident light beam; operation means for receiving a viewpoint change operation and a focus position change operation; and processing means for, on the basis of a plurality of viewpoint images acquired on the basis of the image signal, changing a viewpoint in accordance with the viewpoint change operation and generating a display image whose focus position has been changed in accordance with the focus position change operation.

Description

Image processing apparatus, imaging apparatus, and control methods therefor, and program

 The present invention relates to an image processing apparatus, an imaging apparatus, control methods therefor, and a program.

 An imaging apparatus capable of capturing a plurality of viewpoint images simultaneously by recording light fluxes that have passed through different pupil regions of a photographing lens is known. Non-Patent Document 1 discloses an imaging apparatus using an image sensor in which one microlens and a photoelectric conversion unit divided into a plurality of parts are formed for each pixel. This image sensor is configured so that each of the divided photoelectric conversion units receives, via the single microlens, a light flux that has passed through a different pupil partial region of the photographing lens, thereby realizing so-called pupil division. The output signals are therefore equivalent to light field (LF) data containing the spatial distribution of light intensity and its angular distribution information, and a plurality of viewpoint images can be obtained.

 Non-Patent Document 1 also discloses a refocusing technique in which the acquired LF data is used to generate a composite image formed on a virtual plane different from the imaging plane, so that the focus position (also called the in-focus position) of a captured image can be changed after shooting.

 Meanwhile, there are cases where it is desired to perform, in parallel, an operation for displaying an image whose viewpoint has been changed (a viewpoint-changed image) based on a plurality of viewpoint images and an operation for displaying an image whose focus position has been changed by the refocusing technique. However, Non-Patent Document 1 does not consider a method of operating the display of a viewpoint-changed image and the display of a focus-changed image in parallel.

 The present invention has been made in view of the above problems. That is, it provides a technique that makes it possible to operate, in parallel, an operation for displaying a viewpoint-changed image and an operation for displaying an image whose focus position has been changed, based on a plurality of viewpoint images.

 To solve this problem, an image processing apparatus of the present invention has, for example, the following configuration: acquisition means for acquiring an image signal including the intensity and angle information of incident light rays; operation means for receiving an operation for changing the viewpoint and an operation for changing the focus position; and processing means for generating, based on a plurality of viewpoint images obtained from the image signal, a display image whose viewpoint is changed in accordance with the viewpoint-changing operation and whose focus position is changed in accordance with the focus-position-changing operation.

 According to the present invention, an operation for displaying a viewpoint-changed image and an operation for displaying an image with a changed focus position can be performed in parallel based on a plurality of viewpoint images.

 Other features and advantages of the present invention will become apparent from the following description with reference to the accompanying drawings. In the accompanying drawings, the same or similar components are denoted by the same reference numerals.
 The accompanying drawings are included in the specification, constitute a part thereof, illustrate embodiments of the present invention, and are used together with their description to explain the principles of the present invention. The drawings are as follows:
• Block diagram illustrating a functional configuration example of a digital camera as an example of an image processing apparatus according to an embodiment of the present invention
• Block diagram illustrating a functional configuration example of the image processing unit according to Embodiment 1
• Diagram schematically illustrating the pixel array according to Embodiment 1
• Plan view and cross-sectional view schematically illustrating a pixel according to Embodiment 1
• Diagram explaining the outline of pupil division according to Embodiment 1
• Diagram explaining an example of the light intensity distribution inside a pixel according to Embodiment 1
• Diagram explaining an example of the pupil intensity distribution according to Embodiment 1
• Diagram explaining the relationship between the image sensor according to Embodiment 1 and pupil division
• Diagram explaining the relationship between the defocus amount and the image shift amount in the first and second viewpoint images
• Diagram showing an example of the contrast distribution of a captured image according to Embodiment 1
• Diagram explaining an example of parallax enhancement that enlarges the differences between viewpoint images according to Embodiment 1
• Diagram explaining the outline of the refocus processing according to Embodiment 1
• Diagram explaining the outline of the unsharpness processing according to Embodiment 1
• Diagram explaining the refocusable range according to Embodiment 1
• Diagram explaining the principle of the viewpoint movement processing according to Embodiment 1
• Diagram explaining the pupil shift at peripheral image heights of the image sensor according to Embodiment 1
• Flowchart showing a series of operations related to viewpoint movement and focus adjustment operations on a captured image according to Embodiment 1
• Flowchart showing a series of operations of the viewpoint image operation processing according to Embodiment 1
• Flowchart showing a series of operations of the development processing according to Embodiment 1
• Diagrams schematically showing the viewpoint movement and focus adjustment UI according to Embodiment 1
• Diagram schematically showing an example in which an enhancement-degree UI is added to the UI according to Embodiment 1
• Flowchart showing a series of operations of the parallax image operation processing according to Embodiment 2
• Diagram schematically showing the pixel array according to Embodiment 3
• Plan view and cross-sectional view schematically showing a pixel according to Embodiment 3
• Diagram schematically showing the viewpoint movement and focus adjustment UI according to Embodiment 3
• Flowchart showing a series of operations of the parallax image operation processing according to Embodiment 4
• Diagrams schematically showing notation examples of the viewpoint movement and focus adjustment UI according to Embodiment 4
(Embodiment 1)
 Hereinafter, an exemplary embodiment of the present invention will be described in detail with reference to the drawings. In the following, as an example of an image processing apparatus, an example in which the present invention is applied to an arbitrary digital camera capable of acquiring LF data will be described. However, the present invention is not limited to digital cameras and can be applied to any device capable of processing acquired LF data. Such devices may include, for example, mobile phones, game machines, tablet terminals, personal computers, watch-type or eyeglass-type information terminals, surveillance systems, in-vehicle systems, medical systems such as endoscopes, and robots capable of providing images. A configuration may also be included in which an arbitrary device transmits LF data and operation details to a server device (including a virtual machine) equipped with processing means such as a processor on the Internet or a local network, and part or all of the processing on the LF data is executed on the server device. In this case, the arbitrary device may receive and display the processing result from the server device.
(Overall configuration of digital camera 100)
 FIG. 1 is a block diagram illustrating an example of the functional configuration of a digital camera 100 as an example of the image processing apparatus of the present embodiment. One or more of the functional blocks shown in FIG. 1 may be realized by hardware such as an ASIC or a programmable logic array (PLA), or by a programmable processor such as a CPU or MPU executing software, or by a combination of software and hardware. Therefore, in the following description, even where different functional blocks are described as the operating entities, they may be realized by the same hardware.
 The first lens group 101 includes, for example, a zoom lens constituting the imaging optical system, is disposed at the front end of the imaging optical system, and is held so as to be movable back and forth in the optical axis direction. The shutter 102 includes a diaphragm, and adjusts the amount of light incident on the image sensor 107 at the time of shooting by adjusting its aperture diameter; when a still image is shot, it also functions as a shutter that adjusts the exposure time. The shutter 102 and the second lens group 103 constituting the imaging optical system move back and forth integrally in the optical axis direction and, in conjunction with the back-and-forth movement of the first lens group 101, provide a variable magnification action (zoom function).
 The third lens group 105 includes, for example, a focus lens constituting the imaging optical system, and performs focus adjustment by moving back and forth in the optical axis direction. The optical element 106 includes an optical low-pass filter and reduces false colors and moire in the captured image. The image sensor 107 includes, for example, an image sensor composed of a CMOS photosensor and peripheral circuits, and is disposed on the imaging plane of the imaging optical system.
 The zoom actuator 111 includes a driving device that produces the back-and-forth movement of the first lens group 101 to the second lens group 103, and drives them back and forth in the optical axis direction by rotating a cam cylinder (not shown). The aperture-shutter actuator 112 includes a driving device that produces the operation of the shutter 102, and controls the aperture diameter and shutter operation of the shutter 102 under the control of the aperture-shutter driving unit 128. The focus actuator 114 includes a driving device that produces the back-and-forth movement of the third lens group 105, and performs focus adjustment by driving the third lens group 105 back and forth in the optical axis direction.
 The illumination device 115 includes an electronic flash for illuminating the subject at the time of shooting, and is, for example, a flash illumination device using a xenon tube or an illumination device including an LED that emits light continuously. The auxiliary light emitting unit 116 includes a light emitting device for AF auxiliary light, and projects an image of a mask having a predetermined aperture pattern onto the object field via a light projecting lens, thereby improving the focus detection capability for dark or low-contrast subjects.
 The control unit 121 includes a CPU (or MPU), a ROM, and a RAM, and controls each unit of the entire digital camera 100 by loading a program stored in the ROM into the RAM and executing it, thereby executing a series of operations such as AF, shooting, image processing, and recording. The control unit 121 may also include an A/D converter, a D/A converter, a communication interface circuit, and the like. Furthermore, the control unit 121 includes a function as a display control unit that controls the display contents shown on the display unit 131, and may execute the processing performed by the image processing unit 125 in its place.
 The electronic flash control unit 122 includes a control circuit or control module, and controls the lighting of the illumination device 115 in synchronization with the shooting operation. The auxiliary light driving unit 123 controls the lighting of the auxiliary light emitting unit 116 in synchronization with the focus detection operation. The image sensor driving unit 124 controls the imaging operation of the image sensor 107, A/D-converts the acquired image signal, and transmits it to the control unit 121. The image processing unit 125 is an image processing circuit that performs processing such as γ conversion, color interpolation, and JPEG compression on the image acquired by the image sensor 107.
 The focus driving unit 126, the aperture-shutter driving unit 128, and the zoom driving unit 129 each include a control circuit or control module. The focus driving unit 126 controls the focus actuator 114 based on the focus detection result. The aperture-shutter driving unit 128 controls the aperture-shutter actuator 112 at predetermined timings of the shooting operation. The zoom driving unit 129 controls the zoom actuator 111 in accordance with the photographer's zoom operation.
 The display unit 131 includes a display device such as an LCD and displays, for example, information on the shooting mode of the camera, a preview image before shooting, a confirmation image after shooting, and a display image of the in-focus state at the time of focus detection. The operation unit 132 includes a group of switches for operating the digital camera 100, such as a power switch, a release (shooting trigger) switch, a zoom operation switch, and a shooting mode selection switch. When the operation unit 132 transmits an input user operation to the control unit 121, the control unit 121 controls each unit of the digital camera 100 to execute the operation corresponding to the user operation. The recording medium 133 includes, for example, a removable flash memory, and records captured images.
 The communication unit 134 includes a communication circuit or module, and establishes communication with an external apparatus (for example, an externally installed server) using a communication method compliant with a predetermined standard. The communication unit 134 uploads and downloads image data to and from the external apparatus, receives the results of predetermined processing performed by the external apparatus on the uploaded image data, and so on.
(Configuration of the image processing unit 125)
 Next, the configuration of the image processing unit 125 will be described with reference to FIG. 2. The image acquisition unit 151 holds image data read from the recording medium 133. The image data is composed of an image obtained by combining a first viewpoint image and a second viewpoint image, described later (also referred to as an A+B image), and the first viewpoint image.
 The subtraction unit 152 generates the second viewpoint image by subtracting the first viewpoint image from the A+B image. The shading processing unit 153 corrects the change in light amount due to image height in the first and second viewpoint images. The operation information acquisition unit 154 receives the adjustment values for viewpoint movement and refocusing changed by the user, and supplies the adjustment values operated by the user to the viewpoint change processing unit 155 and the refocus processing unit 156.
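 As a one-line sketch of the role of the subtraction unit 152 (assuming the A+B image and the A image are aligned numeric arrays; the names are illustrative):

    def second_viewpoint(a_plus_b, a_image):
        # B = (A + B) - A: recover the second viewpoint image from the
        # recorded pair without storing it separately.
        return a_plus_b - a_image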
 The viewpoint change processing unit 155 combines the images so as to change the viewpoint by changing the addition ratio (weighting) of the first viewpoint image and the second viewpoint image. As will be described in detail later, the processing of the viewpoint change processing unit 155 can generate an image with an enlarged or reduced depth of field. The refocus processing unit 156 generates a composite image by shift-adding the first viewpoint image and the second viewpoint image in the pupil division direction, thereby generating images at different focus positions. The processing by the refocus processing unit 156 will also be described in detail later.
 The image processing unit 125 also performs development processing with the configuration of the white balance unit 157, demosaicing unit 158, gamma conversion unit 159, and color adjustment unit 160 described below. The white balance unit 157 performs white balance processing; specifically, it applies a gain to each of the R, G, and B colors so that the R, G, and B values in a white region become equal. By performing this white balance processing before the demosaicing processing, it is possible, when calculating saturation, to avoid the saturation becoming higher than that of a false color due to color cast or the like, and thus to prevent erroneous determination.
 The demosaicing unit 158 interpolates, at each pixel, the color mosaic image data of the two missing primary colors, thereby generating a color image in which R, G, and B color image data are complete at every pixel. The demosaicing unit 158 performs interpolation for a pixel of interest using the pixels around it, and then generates color image data of the three primary colors R, G, and B as the interpolation result for each pixel.
 The gamma conversion unit 159 applies gamma correction processing to the color image data of each pixel to generate color image data matched to, for example, the display characteristics of the display unit 131. The color adjustment unit 160 applies to the color image data various color adjustment processes for improving the appearance of the image, such as noise reduction, saturation enhancement, hue correction, and edge enhancement.
 The compression unit 161 compresses the color-adjusted color image data by a method compliant with a predetermined compression scheme such as JPEG, reducing the data size of the color image data for recording. The output unit 163 outputs the above-described color image data, the compressed image data, or display data for the user interface.
(Configuration of the image sensor 107)
 The arrangement of the pixels and sub-pixels of the image sensor 107 according to the present embodiment will be described with reference to FIG. 3. FIG. 3 shows the two-dimensional pixel array in a range of 4 columns × 4 rows, and also shows the sub-pixel array contained in those pixels in a range of 8 columns × 4 rows.
 In the pixel array shown in FIG. 3, each 2-column × 2-row pixel group has a pixel 200R with R (red) spectral sensitivity at the upper left, pixels 200G with G (green) spectral sensitivity at the upper right and lower left, and a pixel 200B with B (blue) spectral sensitivity at the lower right. Furthermore, each pixel has a sub-pixel 201 and a sub-pixel 202 arranged in 2 columns × 1 row.
 By arranging a large number of the 4-column × 4-row pixels (8-column × 4-row sub-pixels) shown in FIG. 3 in a plane, a captured image (or focus detection signal) can be acquired. In the image sensor 107, for example, the pixel pitch P is 4 μm, the number of pixels N is 5575 horizontal columns × 3725 vertical rows (about 20.75 million pixels), the column-direction pitch PS of the sub-pixels is 2 μm, and the number of sub-pixels NS is 11150 horizontal columns × 3725 vertical rows (about 41.5 million sub-pixels).
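 The stated pixel and sub-pixel counts can be verified directly; this tiny check is illustrative only:

    # 5575 columns x 3725 rows of pixels, 11150 x 3725 sub-pixels:
    assert 5575 * 3725 == 20_766_875     # about 20.75 million pixels
    assert 11150 * 3725 == 41_533_750    # about 41.5 million sub-pixels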
 図3に示した画素200Gの構造をより詳細に説明する。図4Aは、画素200Gを、撮像素子107の受光面側(+z側)から見た平面図を、図4Bは、図4Aのa-a断面を-y側から見た断面図を、それぞれ示している。 The structure of the pixel 200G shown in FIG. 3 will be described in more detail. 4A is a plan view of the pixel 200G viewed from the light receiving surface side (+ z side) of the image sensor 107, and FIG. 4B is a cross-sectional view of the aa cross section of FIG. 4A viewed from the −y side. ing.
 画素200Gは、x方向にNH分割(2分割)、y方向にNV分割(1分割)された光電変換部301と光電変換部302とを有するように構成される。光電変換部301と光電変換部302とは、副画素201と副画素202とにそれぞれ対応する。 The pixel 200G is configured to include a photoelectric conversion unit 301 and a photoelectric conversion unit 302 that are divided into NH in the x direction (two divisions) and NV in the y direction (one division). The photoelectric conversion unit 301 and the photoelectric conversion unit 302 correspond to the sub-pixel 201 and the sub-pixel 202, respectively.
 また、画素200Gは、画素の受光側(+z方向)に入射光を集光するためのマイクロレンズ305を有し、マイクロレンズ305を通過して入射した光束が、光電変換部301又は光電変換部302により受光されるように構成される。光電変換部301と光電変換部302は、p型層とn型層の間にイントリンシック層を挟んだpin構造フォトダイオードとしてもよいし、必要に応じて、イントリンシック層を省略し、pn接合フォトダイオードとしてもよい。カラーフィルタ306は、マイクロレンズ305と、光電変換部301および光電変換部302との間に配置され、所定の周波数の光を通過させる。図4Bでは、カラーフィルタ306を画素200Gに対して1つ設ける例を示しているが、必要に応じて、副画素毎に分光透過率の異なるカラーフィルタを設けてもよく、或いはカラーフィルタを省略してもよい。 Further, the pixel 200G has a microlens 305 for condensing incident light on the light receiving side (+ z direction) of the pixel, and a light beam incident through the microlens 305 is converted into the photoelectric conversion unit 301 or the photoelectric conversion unit. 302 is configured to receive light. The photoelectric conversion unit 301 and the photoelectric conversion unit 302 may be a pin structure photodiode in which an intrinsic layer is sandwiched between a p-type layer and an n-type layer, or an intrinsic layer may be omitted and a pn junction if necessary. A photodiode may be used. The color filter 306 is disposed between the microlens 305, the photoelectric conversion unit 301, and the photoelectric conversion unit 302, and allows light having a predetermined frequency to pass therethrough. FIG. 4B shows an example in which one color filter 306 is provided for the pixel 200G. However, if necessary, a color filter having a different spectral transmittance may be provided for each sub-pixel, or the color filter may be omitted. May be.
In the photoelectric conversion units 301 and 302, electron-hole pairs are generated according to the amount of received light and separated in the depletion layer. The negatively charged electrons are accumulated in the n-type layer, while the holes are discharged to the outside of the image sensor 107 through the p-type layer 300 connected to a constant voltage source (not shown). The electrons accumulated in the n-type layers of the photoelectric conversion units 301 and 302 are transferred to a capacitance unit (FD) via a transfer gate and converted into a voltage signal.
(Relationship between the pixel structure of the image sensor 107 and pupil division)
Next, the correspondence between the pixel structure of the image sensor 107 shown in FIGS. 4A and 4B and pupil division will be described with reference to FIG. 5. FIG. 5 shows the correspondence between the a-a cross section of the pixel 200G of FIG. 4A as seen from the +y side and the exit pupil plane of the imaging optical system. In FIG. 5, the x-axis and y-axis of the cross-sectional view are inverted with respect to FIGS. 4A and 4B so as to match the coordinate axes of the exit pupil plane.
The pupil partial region 501 of the sub-pixel 201 represents the pupil region over which the sub-pixel 201 can receive light. The centroid of the pupil partial region 501 is decentered toward the +X side on the pupil plane, and the region is in an approximately conjugate relationship, via the microlens, with the light-receiving surface of the photoelectric conversion unit 301, whose centroid is decentered in the −x direction.
Similarly, the pupil partial region 502 of the sub-pixel 202 represents the pupil region over which the sub-pixel 202 can receive light. The centroid of the pupil partial region 502 is decentered toward the −X side on the pupil plane, and the region is in an approximately conjugate relationship, via the microlens, with the light-receiving surface of the photoelectric conversion unit 302, whose centroid is decentered in the +x direction. The pupil region 500 is the pupil region over which the entire pixel 200G can receive light when the photoelectric conversion units 301 and 302 (sub-pixels 201 and 202) are combined.
The behavior of light incident on the pixel 200G configured as described above will now be explained more concretely. FIGS. 6A and 6B show examples of the light intensity distribution when light is incident on the microlens 305 formed in the pixel 200G. FIG. 6A shows the light intensity distribution in a cross section parallel to the optical axis of the microlens, and FIG. 6B shows the light intensity distribution, at the focal position of the microlens, in a cross section perpendicular to the optical axis. In FIG. 6A, H denotes the convex surface of the microlens 305, f the focal length of the microlens, nFΔ the movable range of the focal position by refocusing described later, and φ the maximum angle of the incident light flux.
The incident light is condensed at the focal position by the microlens 305; however, owing to diffraction caused by the wave nature of light, the diameter of the condensed spot cannot become smaller than the diffraction limit Δ and remains finite, as shown in FIGS. 6A and 6B. For example, when the light-receiving surface of the photoelectric conversion unit 301 is about 1 to 2 μm across, the condensed spot of the microlens is also about 1 μm. Consequently, the pupil partial region 501, which is conjugate with the light-receiving surface of the photoelectric conversion unit 301 via the microlens 305 (likewise the pupil partial region 502 for the photoelectric conversion unit 302), exhibits a light-reception rate distribution (pupil intensity distribution) in which the pupil is not divided sharply, because of diffraction blur.
FIG. 7 schematically shows the pupil intensity distribution of the pixel 200G, with pupil coordinates on the horizontal axis and light-reception rate on the vertical axis. The pupil intensity distribution 701 (solid line) is an example along the X axis of the pupil partial region 501 of FIG. 5, and the pupil intensity distribution 702 (broken line) is an example along the X axis of the pupil partial region 502. The pupil partial regions 501 and 502 have gentle peaks of pupil intensity at different pupil positions, showing that the light passing through the microlens 305 is divided gently between the pupils.
FIG. 8A shows the correspondence between pupil division and pixels at different positions on the image sensor 107. Light fluxes that have passed through the different pupil partial regions (501 and 502) pass through the imaging surface 800 and enter each pixel of the image sensor 107 at different angles, where they are received by the sub-pixel 201 (photoelectric conversion unit 301) and the sub-pixel 202 (photoelectric conversion unit 302) of each 2 × 1-divided pixel. In other words, the image sensor 107 has an array of pixels, each provided with a plurality of sub-pixels configured to receive light fluxes passing through different pupil partial regions of the imaging optical system.
With the image sensor 107 configured in this way, a first viewpoint image can be generated by collecting the light-reception signals of the sub-pixels 201 of the pixels, and a second viewpoint image can be generated by collecting the light-reception signals of the sub-pixels 202. That is, a plurality of viewpoint images, one per pupil partial region, can be generated from the input image acquired by the pixels of the image sensor 107. In the present embodiment, the first and second viewpoint images are each Bayer-array images, so demosaicing may be applied to them as needed. Furthermore, by adding and reading out the signals of the sub-pixels 201 and 202 for each pixel of the image sensor, a captured image with a resolution of N effective pixels can be generated. The present embodiment describes an example that uses a captured image generated from the plurality of viewpoint images (the first and second viewpoint images). Although FIG. 8A shows an example in which the pupil region is divided into two in the horizontal direction, the pupil may be divided in the vertical direction depending on how the sub-pixels are divided. Moreover, the present and other embodiments are applicable to any configuration that can acquire a plurality of viewpoint images by a known technique. For example, as in Japanese Patent Laid-Open No. 2011-22796, a plurality of cameras with different viewpoints may collectively be regarded as the image sensor 107. Alternatively, unlike the optical system of FIG. 1, the light flux from the imaging optical system may be focused on a microlens array with the image sensor placed on the resulting imaging plane, so that the object plane and the image sensor are conjugate. It is also possible to re-form the light flux from the imaging optical system on a microlens array (called re-imaging because a once-formed image in a diffusing state is imaged again) and place the image sensor on the resulting imaging plane. A method of inserting a mask with an appropriate pattern (a gain modulation element) into the optical path of the imaging optical system can also be used.
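As a minimal illustration of this read-out, the following Python sketch (the function name and the raw-data layout are hypothetical, assuming the two sub-pixels of each pixel are stored side by side along the column direction) separates a raw frame into the two viewpoint images and the captured image:

```python
import numpy as np

def split_viewpoints(raw):
    """Separate a raw frame whose pixels each contain a 2 x 1 pair of
    sub-pixels (layout assumed: sub-pixel 201 and sub-pixel 202
    interleaved along the column direction) into the first viewpoint
    image A0, the second viewpoint image B0 and the captured image."""
    a0 = raw[:, 0::2].astype(np.float64)  # sub-pixel 201 signals -> A0(j, i)
    b0 = raw[:, 1::2].astype(np.float64)  # sub-pixel 202 signals -> B0(j, i)
    return a0, b0, a0 + b0                # I(j, i) = A0(j, i) + B0(j, i)

# Example with the sub-pixel counts given above (11150 columns x 3725 rows):
# a0, b0, img = split_viewpoints(np.zeros((3725, 11150)))
```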
(Relationship between the defocus amount and the image shift amount between parallax images)
Next, the relationship between the defocus amount and the image shift amount between the first and second viewpoint images output from the image sensor 107 will be described. FIG. 8B schematically shows the relationship between the defocus amount of the first and second viewpoint images and the image shift amount between them. The image sensor 107 (not shown in FIG. 8B) is placed on the imaging surface 800, and the exit pupil of the imaging optical system is divided into the pupil partial regions 501 and 502, as in FIGS. 5 and 8A.
The magnitude |d| of the defocus amount d represents the distance from the imaging position of the subject to the imaging surface 800. The sign of d is negative (d < 0) when the imaging position of the subject is on the subject side of the imaging surface 800 (the front-focus state), and positive (d > 0) when it is on the side opposite to the subject (the rear-focus state). In the in-focus state, where the imaging position of the subject lies on the imaging surface, d = 0. FIG. 8B shows an example in which the subject 801 is in the in-focus state (d = 0) and an example in which the subject 802 is in the front-focus state (d < 0). The front-focus state (d < 0) and the rear-focus state (d > 0) are together referred to as the defocus state (|d| > 0).
In the front-focus state (d < 0), of the light flux from the subject 802, the flux that has passed through the pupil partial region 501 (502) is condensed once and then spreads to a width Γ1 (Γ2) around the centroid position G1 (G2) of the flux, forming a blurred image on the imaging surface 800. The blurred image is received by the sub-pixels 201 (202) of the pixels arrayed on the image sensor, and the first viewpoint image (second viewpoint image) is generated. The first viewpoint image (second viewpoint image) therefore records, at the centroid position G1 (G2) on the imaging surface 800, a subject image in which the subject 802 is blurred to the width Γ1 (Γ2). The blur width Γ1 (Γ2) of the subject image increases roughly in proportion to the magnitude |d| of the defocus amount d. Likewise, the magnitude |p| of the image shift amount p of the subject image between the first and second viewpoint images (= the difference G1 − G2 between the centroid positions of the fluxes) increases roughly in proportion to |d|. In the rear-focus state (d > 0), the direction of the image shift between the first and second viewpoint images is opposite to that in the front-focus state, but the behavior is otherwise the same.
Accordingly, in the present embodiment, the magnitude of the image shift amount between the first and second viewpoint images increases as the magnitude of the defocus amount increases, whether the defocus amount is that of the first and second viewpoint images themselves or that of the imaging signal obtained by adding them.
(Viewpoint image correction and refocusing)
Next, the viewpoint image correction processing and refocus processing according to the present embodiment will be described. In the refocus processing of this embodiment, as a first stage, the viewpoint change processing unit 155 calculates a contrast distribution representing the level of contrast from the pixel values of the captured image. As a second stage, based on the calculated contrast distribution, the viewpoint change processing unit 155 applies, pixel by pixel, a transformation that enlarges the differences between the plurality of viewpoint images (the first and second viewpoint images) to enhance the parallax, thereby generating a plurality of corrected viewpoint images (the first and second corrected viewpoint images). As a third stage, the refocus processing unit 156 relatively shifts and adds the corrected viewpoint images to generate a refocus image.
In the following, with i and j as integers, the position at the j-th row and i-th column of the image sensor 107 is denoted (j, i). The first viewpoint image of the pixel at position (j, i) is denoted A0(j, i), the second viewpoint image B0(j, i), and the captured image I(j, i) = A0(j, i) + B0(j, i).
(First stage: calculation of the contrast distribution)
The viewpoint change processing unit 155 calculates, from the Bayer-array captured image I(j, i), the luminance Y(j, i) in accordance with Equation (1), aligning the color centroids of R, G, and B at each position (j, i).
[Equation (1)]
Next, the viewpoint change processing unit 155 applies a Laplacian-type filter such as [1, 2, −1, −4, −1, 2, 1] to the luminance Y(j, i) in the horizontal direction (column direction i), which is the pupil division direction, to calculate the horizontal high-frequency component dY(j, i). As needed, it may apply a high-frequency cut filter such as [1, 1, 1, 1, 1, 1, 1] in the vertical direction (row direction j), which is not the pupil division direction, to suppress vertical high-frequency noise.
Next, the viewpoint change processing unit 155 calculates the normalized horizontal high-frequency component dZ(j, i) in accordance with Equation (2). The constant Y0 is added to the denominator to prevent Equation (2) from diverging through division by zero. As needed, the viewpoint change processing unit 155 may apply a high-frequency cut filter to the luminance Y(j, i) to suppress high-frequency noise before the normalization of Equation (2).
dZ(j, i) = |dY(j, i)| / (Y(j, i) + Y0)   ... (2)
The viewpoint change processing unit 155 then calculates the contrast distribution C(j, i) in accordance with Equation (3). The first line of Equation (3) states that C(j, i) is 0 where the luminance of the captured image is lower than the predetermined luminance Yc. The third line states that C(j, i) is 1 where the normalized high-frequency component dZ(j, i) is larger than the predetermined value Zc. Otherwise (the second line of Equation (3)), C(j, i) is the value of dZ(j, i) normalized by Zc.
C(j, i) = 0              (Y(j, i) < Yc)
C(j, i) = dZ(j, i) / Zc  (Y(j, i) ≥ Yc and dZ(j, i) ≤ Zc)
C(j, i) = 1              (Y(j, i) ≥ Yc and dZ(j, i) > Zc)   ... (3)
Here, the contrast distribution C(j, i) takes values in the range [0, 1]; the closer to 0, the lower the contrast, and the closer to 1, the higher the contrast.
FIG. 9 shows an example of the contrast distribution C(j, i) of a captured image obtained by Equation (3). In this contrast distribution, white portions indicate many horizontal high-frequency components and high contrast, while black portions indicate few horizontal high-frequency components and low contrast.
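The first stage can be sketched in Python as follows; the constants Y0, Yc, and Zc, and the use of the absolute value of dY so that C lands in [0, 1], are assumptions of this sketch, since the text does not fix them:

```python
import numpy as np

# Y0, Yc and Zc are not specified in this text; placeholder values only.
Y0, Yc, Zc = 256.0, 64.0, 0.25

def contrast_distribution(y):
    """Contrast distribution C(j, i) per Equations (2)-(3), from the
    luminance Y(j, i) (assumed to be a float array)."""
    lap = np.array([1, 2, -1, -4, -1, 2, 1], dtype=np.float64)
    # Laplacian-type filtering in the horizontal (pupil-division) direction
    dy = np.apply_along_axis(lambda row: np.convolve(row, lap, mode='same'), 1, y)
    dz = np.abs(dy) / (y + Y0)      # Equation (2): normalized high-frequency component
    c = np.clip(dz / Zc, 0.0, 1.0)  # Equation (3), second and third lines
    c[y < Yc] = 0.0                 # Equation (3), first line: low-luminance regions
    return c
```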
(Second stage: parallax enhancement of the parallax images)
Next, the parallax enhancement processing of the parallax images will be described. In the parallax enhancement processing, the image shift distribution of the viewpoint images is calculated first. The image shift distribution is obtained by performing a correlation operation on the pair of images consisting of the first viewpoint image A0 and the second viewpoint image B0 and computing their relative positional shift. Various known methods exist for the correlation operation; for example, the viewpoint change processing unit 155 can obtain a correlation value of the images by summing the absolute differences of the pair of images as in Equation (4).
COR(k) = Σ (i = 1 to ni) | A0(i + k) − B0(i) |   ... (4)
Here, A0(i) and B0(i) denote the luminance of the i-th pixel of the first viewpoint image A0 and the second viewpoint image B0, respectively, and ni is the number of pixels used in the operation, set appropriately according to the minimum computation range of the image shift distribution.
The viewpoint change processing unit 155 calculates, as the image shift amount, the k that minimizes COR(k) in Equation (4). That is, with the pair of images shifted by k pixels, the absolute difference between each i-th A0 pixel and B0 pixel in the row direction is taken and summed over a plurality of pixels in the row direction. The viewpoint change processing unit 155 then regards the k at which the summed value COR(k) is smallest as the image shift amount between A0 and B0, obtaining a shift amount of k pixels.
In contrast, when a two-dimensional image is moved by k pixels only in the pupil division direction, the differences between the pixels of the first viewpoint image A0 and those of the second viewpoint image B0 are taken, and the summation is performed over a plurality of columns, the correlation operation is defined by Equation (5).
COR(k) = Σ (j = 1 to nj) Σ (i = 1 to ni) | A0(j, i + k) − B0(j, i) |   ... (5)
Here, A0(j, i) and B0(j, i) denote the luminance of the i-th pixel in the j-th column of the first viewpoint image A0 and the second viewpoint image B0, respectively; ni is the number of pixels used in the operation, and nj is the number of columns of the pair of images over which the correlation operation is performed.
As with Equation (4), the viewpoint change processing unit 155 calculates, as the image shift amount, the k that minimizes COR(k) in Equation (5). Note that the shift k is added only to the index i and is independent of j; this corresponds to performing the correlation operation while moving the two-dimensional image only in the pupil division direction. By calculating the image shift amount of each region of the first viewpoint image A0 and the second viewpoint image B0 according to Equation (5), the viewpoint change processing unit 155 can obtain the image shift distribution.
In the refocus processing of this embodiment described later, the sharpness processing described later is applied only to high-contrast portions. Therefore, the correlation operation of Equation (5) need not be performed for regions where the contrast distribution C(j, i) obtained above is 0 (that is, positions whose luminance is lower than the predetermined luminance Yc).
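A minimal Python sketch of the correlation of Equation (5), under the simplification that the whole frame is treated as a single block, might look as follows; applying the same search per local block would yield the image shift distribution pred(i, j) used later:

```python
import numpy as np

def image_shift_amount(a0, b0, k_max=8):
    """Image shift amount per Equation (5): the k minimizing the sum of
    absolute differences, with the shift k applied to the column index i
    only. Border wrap-around from np.roll is ignored for brevity."""
    best_k, best_cor = 0, np.inf
    for k in range(-k_max, k_max + 1):
        cor = np.abs(np.roll(a0, -k, axis=1) - b0).sum()  # COR(k)
        if cor < best_cor:
            best_cor, best_k = cor, k
    return best_k
```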
Next, a concrete example of the parallax enhancement processing will be described. As the example of the pupil intensity distribution in FIG. 7 shows, pupil division by a per-pixel microlens and a photoelectric conversion unit divided into plural parts is gentle because of diffraction blur. With a plurality of viewpoint images corresponding to such gently divided pupil intensity distributions, the effective F-number in the pupil division direction does not become sufficiently dark (large), so the effective depth of focus is difficult to deepen.
In the present embodiment, therefore, the viewpoint change processing unit 155 applies to the plurality of viewpoint images (the first and second viewpoint images), pixel by pixel, processing that enlarges the differences between the viewpoint images to enhance the parallax. Through this parallax enhancement processing, it generates a plurality of corrected viewpoint images (the first and second corrected viewpoint images).
The viewpoint change processing unit 155 enlarges the differences between the first viewpoint image A0(j, i) and the second viewpoint image B0(j, i) in accordance with Equations (6) and (7) to generate the first corrected viewpoint image A(j, i) and the second corrected viewpoint image B(j, i). Here, k (0 ≤ k ≤ 1) and α (0 ≤ α ≤ 1) are real numbers.
[Equation (6)]
[Equation (7)]
FIG. 10 shows an example in which the differences between the viewpoint images are enlarged at a given position by the parallax enhancement processing. The first viewpoint image A0 (101) and the second viewpoint image B0 (102) before the parallax enhancement are drawn with broken lines, and the first corrected viewpoint image A (103) and the second corrected viewpoint image B (104) after the parallax enhancement of Equations (6) and (7) are drawn with solid lines. The horizontal axis shows the 1152nd to 1156th pixels in sub-pixel units, and the vertical axis shows the magnitude of the parallax at each pixel. The parallax enhancement changes little where the differences between the viewpoint images were small (for example, near the 1154th pixel), but enlarges them further where they were large (for example, near the 1153rd and 1155th pixels), so that the parallax is emphasized.
In this way, in the present embodiment, the viewpoint change processing unit 155 generates, for each of the plurality of viewpoint images, a corrected viewpoint image in which the differences between the viewpoint images are enlarged and the parallax is emphasized. By operating on the signals of the sub-pixels contained within each pixel, as in Equations (6) and (7), the viewpoint change processing unit 155 can keep the load of the parallax enhancement processing low.
In Equation (6), increasing the value of k to strengthen the parallax enhancement increases the parallax between the corrected viewpoint images (the first and second corrected viewpoint images). Increasing k therefore darkens (enlarges) the effective F-number in the division direction and deepens the effective depth of focus in that direction. However, excessively strong parallax enhancement increases the noise of the corrected viewpoint images and lowers the S/N.
For this reason, in the present embodiment, the strength of the parallax-enhancing transformation is adjusted region-adaptively based on the contrast distribution C(j, i). For example, in high-contrast regions the viewpoint change processing unit 155 strengthens the parallax enhancement to increase the parallax, darkening (enlarging) the effective F-number in the division direction. In low-contrast regions, it weakens the parallax enhancement to maintain the S/N and suppress its degradation. In this way, the parallax between the corrected viewpoint images can be increased, the effective F-number in the division direction darkened (enlarged), and the effective depth of focus in the division direction deepened. Furthermore, in the refocus processing described later, generating the refocus image from the corrected viewpoint images improves the refocus effect (emphasizes the change in the image produced by refocusing).
As needed, the viewpoint change processing unit 155 can, for example, suppress S/N degradation by making the parallax enhancement stronger in high-luminance regions of the captured image than in low-luminance regions. Likewise, making the parallax enhancement stronger in regions with many high-frequency components than in regions with few can similarly suppress S/N degradation.
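Since Equations (6) and (7) themselves are not reproduced above, the following sketch only illustrates the idea with an assumed difference-amplification form whose strength is modulated by the contrast distribution C(j, i); it should not be read as the patented transformation:

```python
import numpy as np

def enhance_parallax(a0, b0, c, k_max=0.5):
    """Region-adaptive parallax enhancement (assumed form, Equations (6)
    and (7) are not reproduced in this text)."""
    k = k_max * c            # strong where contrast C(j, i) is high,
                             # weak where it is low, preserving S/N
    a = a0 + k * (a0 - b0)   # first corrected viewpoint image A(j, i)
    b = b0 + k * (b0 - a0)   # second corrected viewpoint image B(j, i)
    return a, b
```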
(Third stage: refocus processing)
Refocus processing in the pupil division direction (the column or horizontal direction) using the plurality of corrected viewpoint images (the first and second corrected viewpoint images) will be described with reference to FIG. 11. FIG. 11 schematically shows the first corrected viewpoint image Ai and the second corrected viewpoint image Bi, which contain the signal of the i-th pixel in the column direction of the image sensor 107 placed on the imaging surface 800. The first corrected viewpoint image Ai contains the light-reception signal of the flux incident on the i-th pixel at the principal ray angle θa (corresponding to the pupil partial region 501 of FIG. 8A). The second corrected viewpoint image Bi contains the light-reception signal of the flux incident on the i-th pixel at the principal ray angle θb (corresponding to the pupil partial region 502 of FIG. 8A). That is, the corrected viewpoint images Ai and Bi carry incident angle information in addition to light intensity distribution information.
Because this incident angle information is available, the refocus processing unit 156 can generate a refocus image on a given virtual imaging plane. Specifically, it translates the first corrected viewpoint image Ai along the angle θa and the second corrected viewpoint image Bi along the angle θb to the virtual imaging plane 810, and then adds the translated corrected viewpoint images pixel by pixel to generate the refocus image on the virtual imaging plane 810. In the example of FIG. 11, translating Ai along θa to the virtual imaging plane 810 corresponds to shifting it by +0.5 pixels in the column direction, while translating Bi along θb corresponds to shifting it by −0.5 pixels in the column direction. In other words, the combination of Ai and Bi on the virtual imaging plane 810 in FIG. 11 is obtained by shifting Ai and Bi relative to each other by +1 pixel. The refocus image on the virtual imaging plane 810 can therefore be generated by adding Ai and the shifted Bi+1 pixel by pixel.
In this manner, with an integer shift amount s, the refocus processing unit 156 shift-adds the first corrected viewpoint image A and the second corrected viewpoint image B in accordance with Equation (8), generating the refocus image I(j, i; s) on the virtual imaging plane corresponding to each integer shift amount s.
I(j, i; s) = A(j, i) + B(j, i + s)   ... (8)
In the present embodiment, the first corrected viewpoint image A and the second corrected viewpoint image B are in a Bayer array, so the refocus processing unit 156 performs the shift addition of Equation (8) for each identical color with a shift amount that is a multiple of 2, s = 2n (n: integer). That is, it generates the refocus image I(j, i; s) while preserving the Bayer arrangement of the image, and then applies demosaicing to the generated refocus image I(j, i; s). Alternatively, as needed, the refocus processing unit 156 may first demosaic the corrected viewpoint images A and B and perform the shift addition using the demosaiced images. Furthermore, as needed, it may generate interpolated signals between the pixels of A and B to produce a refocus image corresponding to a non-integer shift amount. In this way, a refocus image in which the position of the virtual imaging plane is changed at a finer granularity can be generated.
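A minimal sketch of the shift addition of Equation (8), assuming two-dimensional arrays and an even shift amount, is:

```python
import numpy as np

def refocus(a, b, s):
    """Shift-add refocusing per Equation (8): I(j, i; s) = A(j, i) + B(j, i + s).
    An even shift (s = 2n) keeps the Bayer arrangement; wrap-around at the
    image border is ignored for brevity."""
    assert s % 2 == 0, "use s = 2n to preserve the Bayer array"
    return a + np.roll(b, -s, axis=1)
```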
Next, the sharpness processing that the refocus processing unit 156 applies to generate a more effective refocus image, and the calculation of the refocusable range, will be described.
(Sharpness processing)
As described above, in the refocus processing the first corrected viewpoint image A and the second corrected viewpoint image B are shift-added to generate the refocus image on the virtual imaging plane. Since the shift addition displaces the images of A and B, the amount of displacement relative to the image before refocus processing (also called the image displacement amount) is known.
Here, the integer shift amount s of the refocus processing described above corresponds to this image displacement amount. The refocus processing unit 156 can therefore realize contour enhancement of the subject in the refocus image by applying sharpness processing to the regions corresponding to the image displacement amount s.
The sharpness processing according to the present embodiment uses, for example, the unsharp mask processing outlined in FIG. 12. Unsharp mask processing applies a blur filter to a local region (the original signal) centered on the pixel of interest and reflects the difference between the pixel values before and after the blurring in the pixel value of the pixel of interest, thereby realizing contour enhancement.
The unsharp mask processing for a pixel value P to be processed is calculated according to Equation (9), where P′ is the pixel value after processing, R is the radius of the blur filter, and T is the application amount (%).
P′(i, j) = P(i, j) + { P(i, j) − F(i, j, R) } × T(i, j) / 100   ... (9)
In Equation (9), F(i, j, R) is the pixel value obtained by applying a blur filter of radius R to the pixel P(i, j). A known method, such as Gaussian blur, can be used for the blur filter. Gaussian blur averages pixels with weights following a Gaussian distribution according to the distance from the pixel being processed, giving a natural-looking result. The radius R of the blur filter relates to the wavelength of the spatial frequency on the image to which the sharpness processing is to be applied: the smaller R is, the finer the patterns that are emphasized; the larger R is, the coarser the patterns. The application amount T(i, j) varies the strength of the contour enhancement by the unsharp mask according to the image shift distribution. Specifically, with pred(i, j) as the image shift amount at each pixel position and s as the shift amount of the refocus processing, the application amount T is made large in regions where |s − pred(i, j)| is small (for example, within 1 pixel of image shift), that is, regions in focus on the virtual imaging plane. Conversely, T is made small in regions where |s − pred(i, j)| is large (for example, where the image shift amount is 3 pixels or more). In this way, contours can be enhanced at the focus position or near-focus regions where the defocus amount is small, while unsharp mask processing is not applied (or blurring is applied) to blurred regions where the defocus amount is large. The effect of moving the focus position by the refocus processing can thus be further emphasized.
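A sketch of this adaptive unsharp mask, assuming a Gaussian blur for F(i, j, R) and a linear fall-off of T between the 1-pixel and 3-pixel thresholds mentioned above (the text only gives the two endpoints, so the interpolation is an assumption), might be:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_unsharp(img, pred, s, radius=2.0, t_max=1.0):
    """Unsharp mask per Equation (9) with an application amount T(i, j)
    that falls off with |s - pred(i, j)| (assumed linear fall-off)."""
    blurred = gaussian_filter(img, sigma=radius)       # F(i, j, R), Gaussian blur
    dist = np.abs(s - pred)
    t = t_max * np.clip((3.0 - dist) / 2.0, 0.0, 1.0)  # T: full near focus, 0 at >= 3 px
    return img + t * (img - blurred)                   # P' = P + T * (P - F)
```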
(Calculation of the refocusable range)
The refocusable range is the range of focus positions that can be changed by the refocus processing. FIG. 13, for example, schematically shows the refocusable range according to the present embodiment. With δ as the permissible circle of confusion and F as the aperture value of the imaging optical system, the depth of field at the aperture value F is ±Fδ. In contrast, the effective aperture value F01 (F02) in the horizontal direction of the pupil partial region 501 (502), narrowed by the NH × NV (2 × 1) division, becomes darker: F01 = NH·F. The effective depth of field of each first corrected viewpoint image (second corrected viewpoint image) is ±NH·Fδ, which is NH times deeper, and the in-focus range widens NH times. That is, within the effective depth of field ±NH·Fδ, a subject image that is in focus is acquired in each first corrected viewpoint image (second corrected viewpoint image). The refocus processing unit 156 can therefore readjust (refocus) the focus position after shooting by the refocus processing that translates the first corrected viewpoint image (second corrected viewpoint image) along the principal ray angle θa (θb) shown in FIG. 11. In other words, the defocus amount d from the imaging surface for which the focus position can be readjusted (refocused) after shooting is limited, and the refocusable range of the defocus amount d is roughly the range of Equation (10):
|d| ≤ NH·Fδ   ... (10)
The permissible circle of confusion δ is defined, for example, by δ = 2ΔX (the reciprocal of the Nyquist frequency 1/(2ΔX) of the pixel period ΔX). Calculating the refocusable range in this way makes it possible to match the operable range when the focus position is changed (refocused) by a user operation. Moreover, since the rays (subjects) that can be brought into focus by the refocus processing are known in advance, it is also possible, for example, to control the shooting conditions, such as the state of the imaging optical system, so that a given subject falls within the refocusable range and then shoot again.
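As a small worked example of Equation (10), assuming δ = 2ΔX:

```python
def refocusable_range(n_h, f_number, pixel_period):
    """Approximate refocusable defocus range per Equation (10):
    |d| <= N_H * F * delta, with delta = 2 * pixel_period (= 2 * dX)."""
    bound = n_h * f_number * 2.0 * pixel_period
    return -bound, bound

# e.g. refocusable_range(2, 2.8, 4e-6) -> about +/- 44.8 um of defocus
```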
(Viewpoint movement processing)
Next, the viewpoint movement processing according to the present embodiment, executed by the viewpoint change processing unit 155, will be described. Viewpoint movement processing is processing for reducing the blur from a non-main subject on the near side when that blur covers the main subject.
FIGS. 14A to 14C illustrate the principle of the viewpoint movement processing. In these figures, the image sensor 107 (not shown) is placed on the imaging surface 600, and the exit pupil of the imaging optical system is divided into the pupil partial regions 501 and 502, as in FIG. 11.
FIG. 14A shows an example in which the in-focus image p1 of a main subject q1 is captured overlapping the blurred image Γ1+Γ2 of a nearer subject q2 (also called front-blur covering of the main subject). FIGS. 14B and 14C show the example of FIG. 14A separated into the light flux passing through the pupil partial region 501 and the light flux passing through the pupil partial region 502 of the imaging optical system. In FIG. 14B, the flux from the main subject q1 passes through the pupil partial region 501 and forms the image p1 in focus, while the flux from the nearer subject q2 passes through the pupil partial region 501 and spreads into the blurred image Γ1 in the defocus state. These fluxes are received by the sub-pixels 201 of different pixels of the image sensor 107, and the first viewpoint image is generated. As shown in FIG. 14B, in the first viewpoint image the image p1 of the main subject q1 and the blurred image Γ1 of the nearer subject q2 are received without overlapping. The first viewpoint image is thus the viewpoint image in which, in the given region (near the image p1 of the subject q1), the near-side subject (the blurred image Γ1 of the subject q2) is captured over the narrowest range among the plurality of viewpoint images (the first and second viewpoint images). In other words, in that region, the first viewpoint image is the one in which the blurred image Γ1 of the subject q2 appears least and whose contrast evaluation value is the largest.
In FIG. 14C, on the other hand, the flux from the main subject q1 passes through the pupil partial region 502 and forms the image p1 in focus, while the flux from the nearer subject q2 passes through the pupil partial region 502 and spreads into the blurred image Γ2 in the defocus state. These fluxes are received by the sub-pixels 202 of the pixels of the image sensor 107, and the second viewpoint image is generated. As shown in FIG. 14C, in the second viewpoint image the image p1 of the main subject q1 and the blurred image Γ2 of the nearer subject q2 are received overlapping. The second viewpoint image is thus the viewpoint image in which, in the given region (near the image p1 of the subject q1), the near-side subject (the blurred image Γ2 of the subject q2) is captured over the widest range among the plurality of viewpoint images. In other words, in that region, the second viewpoint image is the one in which the blurred image Γ2 of the subject q2 appears most and whose contrast evaluation value is the smallest.
Accordingly, near the image p1, the front-blur covering of the main subject can be reduced by adding the images with a larger weight on the first viewpoint image, in which the image p1 and the blurred image Γ1 overlap little, and a smaller weight on the second viewpoint image, in which the image p1 and the blurred image Γ2 overlap much.
Next, the processing by which the viewpoint change processing unit 155 superimposes the first and second viewpoint images using weights will be described. The viewpoint change processing unit 155 takes as input the first viewpoint image A(j, i) and the second viewpoint image B(j, i) described above.
As a first step, the viewpoint change processing unit 155 sets a predetermined region R = [j1, j2] × [i1, i2] in which the viewpoint is to be moved and a boundary width σ of the predetermined region, and then calculates a table function T(j, i) corresponding to R and σ in accordance with Equation (11).
[Equation (11)]
The table function T(j, i) is 1 inside the predetermined region R and 0 outside it, changing roughly continuously from 1 to 0 over the boundary width σ of the region R. As needed, the viewpoint change processing unit 155 may use a circular or any other arbitrarily shaped predetermined region, and may set a plurality of predetermined regions and a plurality of boundary widths.
As a second step, using a real coefficient w (−1 ≤ w ≤ 1), the viewpoint change processing unit 155 calculates the first weight coefficient Wa(j, i) of the first viewpoint image A(j, i) in accordance with Equation (12A), and the second weight coefficient Wb(j, i) of the second viewpoint image B(j, i) in accordance with Equation (12B).
Wa(j, i) = 1 + w·T(j, i)   ... (12A)
Wb(j, i) = 1 − w·T(j, i)   ... (12B)
As a third step, the viewpoint change processing unit 155 generates the output image I(j, i) from the first viewpoint image A(j, i), the second viewpoint image B(j, i), the first weight coefficient Wa(j, i), and the second weight coefficient Wb(j, i), in accordance with Equation (13).
I(j, i) = Wa(j, i)·A(j, i) + Wb(j, i)·B(j, i)   ... (13)
Alternatively, in combination with the refocus processing using the shift amount s, the viewpoint change processing unit 155 may generate the output image Is(j, i) in accordance with Equation (14A) or Equation (14B).
[Equation (14A)]
[Equation (14B)]
The output image Is(j, i) produced in this way is an image whose viewpoint has been moved and whose focus position has been readjusted (refocused).
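The three steps can be sketched as follows; the concrete form of the table function of Equation (11) and the exact weight expressions of Equations (12A)/(12B) are not reproduced above, so the forms below are assumptions consistent with the description:

```python
import numpy as np

def table_function(shape, j1, j2, i1, i2, sigma):
    """T(j, i): 1 inside R = [j1, j2] x [i1, i2] and 0 outside, with a
    roughly continuous 1 -> 0 transition of width sigma (a clipped-distance
    ramp is assumed in place of Equation (11))."""
    jj, ii = np.indices(shape)
    dj = np.maximum(np.maximum(j1 - jj, jj - j2), 0)  # distance to R along j
    di = np.maximum(np.maximum(i1 - ii, ii - i2), 0)  # distance to R along i
    return np.clip(1.0 - np.hypot(dj, di) / sigma, 0.0, 1.0)

def move_viewpoint(a, b, t, w):
    """Output image per Equation (13), with the weights taken as
    Wa = 1 + w*T and Wb = 1 - w*T, so that Wa = Wb = 1 (even addition)
    outside the region and the blur shape there is unchanged."""
    return (1.0 + w * t) * a + (1.0 - w * t) * b
```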
In this way, the output image is generated by combining the plurality of viewpoint images, each multiplied by a weight coefficient that varies continuously according to the region of the output image. That is, when reducing the front-blur covering of the main subject using Equation (13), the viewpoint change processing unit 155 generates the output image by making, near the image p1, the first weight coefficient Wa of the first viewpoint image (in which the image p1 and the blurred image Γ1 overlap little) larger than the second weight coefficient Wb of the second viewpoint image (in which the image p1 and the blurred image Γ2 overlap much).
In other words, in the predetermined region of the image, the viewpoint change processing unit 155 minimizes the weight coefficient of the viewpoint image in which the near-side subject is captured over the widest range, or maximizes the weight coefficient of the viewpoint image in which the near-side subject is captured over the narrowest range. Equivalently, in the predetermined region of the output image, it minimizes the weight coefficient of the viewpoint image with the smallest contrast evaluation value, or maximizes that of the viewpoint image with the largest contrast evaluation value.
As needed, outside the predetermined region where the viewpoint movement processing is performed, the viewpoint change processing unit 155 may add the weight coefficients of the viewpoint images (the first and second weight coefficients) substantially evenly to generate the output image, so as not to change the blur shape of the imaging optical system. A method of generating an output image whose weight coefficients (that is, addition ratios) are changed according to a user's designation will be described later; the user may also designate the predetermined region in which the viewpoint movement processing is performed.
(Viewpoint movement processing with respect to pupil shift)
Next, pupil shift at peripheral image heights of the image sensor 107 will be described. FIGS. 15A to 15C show the relationship between the pupil partial regions (501, 502) from which the sub-pixels 201 and 202 of each pixel receive light and the exit pupil 400 of the imaging optical system.
 FIG. 15A shows the case where the exit pupil distance Dl of the imaging optical system equals the set pupil distance Ds of the image sensor 107. In this case, the exit pupil 400 of the imaging optical system is divided substantially evenly by the pupil partial region 501 and the pupil partial region 502, at the central image height and at peripheral image heights alike.
 By contrast, FIG. 15B shows the case where the exit pupil distance Dl of the imaging optical system is shorter than the set pupil distance Ds of the image sensor 107. In this case, at peripheral image heights the exit pupil 400 is divided unevenly between the pupil partial region 501 and the pupil partial region 502. In the example of FIG. 15B, the effective aperture value of the first viewpoint image corresponding to the pupil partial region 501 is smaller (brighter) than that of the second viewpoint image corresponding to the pupil partial region 502. At the image height on the opposite side (not shown), conversely, the effective aperture value of the first viewpoint image is larger (darker) than that of the second viewpoint image.
 FIG. 15C shows the case where the exit pupil distance Dl of the imaging optical system is longer than the set pupil distance Ds of the image sensor 107. In this case too, at peripheral image heights the exit pupil 400 is divided unevenly between the pupil partial regions 501 and 502. In the example of FIG. 15C, the effective aperture value of the first viewpoint image corresponding to the pupil partial region 501 is larger (darker) than that of the second viewpoint image corresponding to the pupil partial region 502; at the image height on the opposite side (not shown), it is conversely smaller (brighter).
 That is, as pupil division becomes uneven at peripheral image heights due to pupil shift, the effective F-numbers of the first and second viewpoint images also become uneven. Consequently, blur spreads more widely in one of the two viewpoint images and less widely in the other.
 Accordingly, in a predetermined region of the output image, the viewpoint change processing unit 155 desirably assigns, as needed, the smallest weighting coefficient to the viewpoint image with the smallest effective aperture value, or the largest weighting coefficient to the viewpoint image with the largest effective aperture value. Performing such viewpoint movement processing can reduce foreground-blur contamination of the main subject.
 (Depth-of-field expansion processing)
 Next, the depth expansion processing by the viewpoint change processing unit 155 will be described, referring again to FIG. 14B. As described above, in FIG. 14B the image that has passed through the pupil partial region 501 is the first viewpoint image, and the image that has passed through the pupil partial region 502 is the second viewpoint image. As is clear from the figure, each viewpoint image is obtained through half of the original pupil region, so with pupil division into two in the horizontal direction the horizontal aperture diameter is halved. The horizontal depth of field therefore becomes four times as deep. On the other hand, since this embodiment has no pupil division in the vertical direction, the vertical depth of field does not change. The first or second viewpoint image is therefore an image whose depth of field is, averaged over the vertical and horizontal directions, twice that of the image obtained by combining the first and second viewpoint images (the A+B image).
 In this way, the viewpoint change processing unit 155 can generate an image with an expanded depth of field by changing the addition ratio of the first and second viewpoint images to something other than 1:1 when generating the composite image. Furthermore, the viewpoint change processing unit 155 applies the unsharp mask processing described above, which uses the contrast distribution and the image shift distribution, to the image in which the addition ratio of the first or second viewpoint image has been changed. Doing so produces a composite image whose depth of field is expanded and whose edges are emphasized. In the depth expansion processing, as in the viewpoint movement processing, a predetermined region may be processed according to the user's designation. When the composite image generated from the viewpoint images is output from the viewpoint change processing unit 155, the development processing described above is applied to it, and the developed image is output from the image processing unit 125.
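 As a rough sketch (an addition of this edit, with assumed names and parameters), changing the addition ratio and then sharpening might look as follows; the patent modulates the unsharp-mask amount by the contrast and image-shift distributions, which is omitted here, and the brightness-preserving factor of 2 is likewise an assumption.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def expand_depth(A, B, ratio):
        # Non-1:1 addition ratio (0 <= ratio <= 1); the factor 2 keeps overall
        # brightness comparable to the A+B image.
        return 2.0 * (ratio * A + (1.0 - ratio) * B)

    def unsharp_mask(img, amount, sigma=2.0):
        # Plain unsharp masking; sigma and amount are illustrative values only.
        return img + amount * (img - gaussian_filter(img, sigma))

    A = np.random.rand(480, 640)   # first viewpoint image (stand-in data)
    B = np.random.rand(480, 640)   # second viewpoint image (stand-in data)
    sharpened = unsharp_mask(expand_depth(A, B, ratio=0.8), amount=0.5)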
 (A series of operations related to viewpoint movement and focus adjustment of a captured image)
 Next, a series of operations related to viewpoint movement and focus adjustment of a captured image will be described with reference to FIG. 16. This processing is started when, for example, a release switch included in the operation unit 132 is pressed by the user. It is realized by the control unit 121 loading a program stored in a ROM (not shown) into the work area of the RAM, executing it, and controlling each unit such as the image processing unit 125.
 In S101, the image sensor 107 captures an image in accordance with an instruction from the control unit 121. In S102, the image sensor 107 outputs parallax image data; specifically, it outputs the viewpoint images described above (the A+B image and the A image) as image data in a single file format. The recording medium 133 temporarily stores the image data output from the image sensor 107.
 In S103, the image processing unit 125 reads the parallax image data in accordance with an instruction from the control unit 121. For example, the image processing unit 125 acquires the image data stored in the recording medium 133 using the image acquisition unit 151. At this point, the image processing unit 125 generates a B image from the A+B image, thereby obtaining, for example, a first viewpoint image (A image) corresponding to the left viewpoint and a second viewpoint image (B image) corresponding to the right viewpoint. In S104, the control unit 121 controls the operation unit 132 and the output of the image processing unit 125 to perform the viewpoint image operation processing described later, that is, viewpoint movement and focus adjustment on the captured image. When the viewpoint image operation processing ends, the control unit 121 ends this series of processing.
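 Since the recorded file holds the A+B image and the A image, recovering the B image is a simple subtraction; the sketch below (with assumed names) mirrors this step of S103.

    import numpy as np

    def split_views(a_plus_b, a):
        # B = (A+B) - A: recover the second viewpoint image from the recorded
        # pair, yielding the left/right viewpoint images used in S103.
        return a_plus_b - a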
 (A series of operations related to viewpoint image operation processing)
 Next, a series of operations related to the viewpoint image operation processing in S104 will be described with reference to the flowchart shown in FIG. 17. In the following description, operation of the user interface for viewpoint movement (and for changing the depth of field) — the viewpoint movement UI — is performed before operation of the user interface for focus adjustment — the focus adjustment UI. However, the focus adjustment UI operation may be performed before the viewpoint movement UI operation.
 In S201, the control unit 121 causes the display unit 131 to display a user interface (simply called the UI) comprising the viewpoint movement UI and the focus adjustment UI, together with the captured image.
 In S202, the control unit 121 determines, based on a user operation input via the operation unit 132, whether to perform viewpoint movement. If the input user operation indicates that viewpoint movement is to be performed, the control unit 121 determines that viewpoint movement is to be performed and advances the processing to S203; otherwise, it determines that viewpoint movement is not to be performed and advances the processing to S207.
 In S203, the control unit 121 further acquires, via the operation unit 132, a user operation for operating the viewpoint movement UI. FIG. 19A shows an example of the viewpoint movement UI displayed on the display unit 131. In the example of FIG. 19A, an image (a captured image or a viewpoint image) is displayed in a region 1000 that forms part of the UI. As described above, this embodiment obtains images whose pupil is divided into two in the horizontal direction, so a viewpoint image is generated using only the left and right viewpoint images.
 The viewpoint movement UI arranges a slider 1001 and a slider bar 1002 horizontally so that the user can operate the control in the direction in which the viewpoint changes. This allows the user to operate the viewpoint movement more intuitively.
 In S204, the control unit 121 uses the image processing unit 125 to generate a composite image in which the addition ratio of the viewpoint images has been changed. Specifically, the image processing unit 125 acquires, via the operation information acquisition unit 154, the position of the slider 1001 designated in S203, and generates a viewpoint-moved image by changing the addition ratio of the first and second viewpoint images according to the position of the slider 1001 and combining them (that is, viewpoint movement processing). Defining the value at the right end of the slider bar 1002 as 1, the center as 0, and the left end as −1, the image processing unit 125 sets the addition ratio of the first viewpoint image to the second viewpoint image to (1+x):(1−x) when the slider 1001 is at position x.
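 The slider-to-ratio mapping of S204 can be written directly; normalizing the (1+x):(1−x) ratio so that the weights sum to 1 is an assumption of this sketch, as is the function name.

    def addition_weights(x):
        # x is the slider position: right end 1, center 0, left end -1 (S204).
        if not -1.0 <= x <= 1.0:
            raise ValueError("slider position out of range")
        return (1.0 + x) / 2.0, (1.0 - x) / 2.0   # weights in ratio (1+x):(1-x)

    # addition_weights(0.0) -> (0.5, 0.5); addition_weights(1.0) -> (1.0, 0.0)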
 In S205, the control unit 121 uses the image processing unit 125 to apply development processing to the image combined in S204. The development processing is described later with reference to the flowchart of FIG. 18. In S206, the control unit 121 displays the image to which the development processing was applied in S205 on the display unit 131.
 In S207, the control unit 121 determines, based on a user operation input via the operation unit 132, whether to perform focus adjustment. If the input user operation indicates that focus adjustment is to be performed, the control unit 121 determines that focus adjustment is to be performed and advances the processing to S208; otherwise, it determines that focus adjustment is not to be performed and ends this series of processing.
 In S208, the control unit 121 further acquires, via the operation unit 132, a user operation for operating the focus adjustment UI. FIG. 19A also shows an example of the focus adjustment UI. Whereas the viewpoint movement UI described above sets its slider bar in the direction in which the viewpoint moves, the focus adjustment UI is placed in a direction different from that of the viewpoint movement (at a different angle). In the example of FIG. 19A, the slider bar 1003 and slider 1004 of the focus adjustment UI are set in the direction orthogonal to the slider bar 1002 of the viewpoint movement UI (that is, the vertical direction). For example, the control unit 121 controls the focus adjustment so that moving the slider 1004 upward strengthens the back-focus state and moving it downward strengthens the front-focus state. The focus adjustment range corresponds to the refocusable range described above and is calculated according to Equation (10).
 In S209, the control unit 121 uses the image processing unit 125 to calculate the focus adjustment position based on the slider position designated in S208 and performs the refocus processing described above. The image processing unit 125 determines the defocus amount (or shift amount) within the refocusable range based on the position of the slider 1004 on the slider bar 1003. In S210, the control unit 121 performs development processing using the image processing unit 125. In S211, the control unit 121 causes the display unit 131 to display the developed image, ends the series of operations related to the parallax image operation processing, and returns the processing to the caller.
 (A series of operations related to development processing)
 Next, the development processing in S205 and S210 will be described with reference to FIG. 18. In S301, the image processing unit 125 performs white balance processing by applying gains to the R, G, and B colors so that R, G, and B in white regions become equal. In S302, the image processing unit 125 performs demosaicing: it interpolates the input image in each of the prescribed directions, then performs direction selection, generating as the interpolation result a color image signal of the three primary colors R, G, and B for each pixel.
 In S303, the image processing unit 125 performs gamma processing. In S304, it performs various color adjustment processes for improving the appearance of the image, such as noise reduction, saturation enhancement, hue correction, and edge enhancement. In S305, the image processing unit 125 compresses the color-adjusted color image signal by a predetermined method such as JPEG and outputs the compressed image data. In S306, the control unit 121 records the image data output from the image processing unit 125 on the recording medium 133, ends the series of operations related to the development processing, and returns the processing to the caller.
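 The ordering of the development steps in FIG. 18 can be summarized as a small pipeline; in this sketch the demosaicing, color adjustment, and compression stages are stubbed out as comments, and the gains and gamma value are illustrative assumptions, not values from the patent.

    import numpy as np

    def white_balance(img, gains=(2.0, 1.0, 1.5)):
        # S301: per-channel gains so that R, G, B match in white regions
        # (illustrative gains; real values depend on the scene).
        return img * np.asarray(gains)

    def gamma(img, g=2.2):
        # S303: simple power-law encoding.
        return np.clip(img, 0.0, 1.0) ** (1.0 / g)

    def develop(img):
        img = white_balance(img)
        # S302: demosaic via directional interpolation + direction selection
        img = gamma(img)
        # S304: noise reduction, saturation, hue correction, edge enhancement
        # S305: compress (e.g. JPEG) and output
        return img

    raw = np.random.rand(480, 640, 3)   # stand-in RGB data
    out = develop(raw)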
 (Examples of the viewpoint movement UI and focus adjustment UI)
 Next, with reference to FIGS. 19B to 19E, examples of operating the viewpoint movement UI and focus adjustment UI described above, and of the resulting composite images, will be described. FIG. 19B shows an example of displaying a composite image refocused by operating the focus adjustment UI. In response to a user operation moving the slider 1004 downward, the control unit 121 performs focus adjustment (refocus processing), using the image processing unit 125, so that the front-focus state is strengthened, and displays the output image.
 FIG. 19C shows an example in which, starting from FIG. 19B, the slider 1001 of the viewpoint movement UI is moved further to the right and a composite image subjected to viewpoint movement processing is displayed. In response to the user operation moving the slider 1001 rightward, the control unit 121 performs viewpoint movement processing (using the image processing unit 125) and displays an output image with an expanded depth of field. FIG. 19D shows an example in which, starting from FIG. 19B, the slider 1001 is moved leftward, viewpoint movement processing is performed, and a composite image with an expanded depth of field is displayed. By arranging the sliders and slider bars as operation members so that the viewpoint movement UI and the focus adjustment UI can be used in parallel in this way, the user can perform viewpoint movement, depth-of-field expansion, and refocus processing in parallel (simultaneously).
 FIG. 20 further shows an example in which a UI for changing the degree of enhancement in the parallax enhancement processing and sharpness processing described above has been added. In addition to the UI shown in FIGS. 19A to 19D, the control unit 121 arranges a slider 1005 and a slider bar 1006 with which the degree of enhancement can be changed. Operating the slider 1005 changes the parameter corresponding to the variable k in the parallax enhancement processing described above, or to the application amount T in the sharpness processing, thereby changing the degree of enhancement in the displayed composite image.
 In this embodiment, the viewpoint movement UI is arranged horizontally because the viewpoint movement processing is performed in the horizontal direction, based on signals obtained from an image sensor in which each pixel is split by a vertical boundary (that is, pupil-divided horizontally). If the division direction of the image sensor is not horizontal (for example, vertical), the viewpoint movement UI may be arranged to match that division direction (for example, vertically). In that case, the focus adjustment UI may be placed in a direction different from the viewpoint movement UI to make the distinction between the two UIs clearer, or it may remain vertical so that operation of the focus position stays more intuitive.
 As described above, after an image signal containing the intensity and angle information of incident light rays is acquired, an operation for moving the viewpoint and an operation for manipulating the focus position are accepted, and a composite image corresponding to those operations is generated and displayed. This allows the user to perform viewpoint movement, depth-of-field expansion, and focus position adjustment (refocus) in parallel. In other words, based on a plurality of viewpoint images, an operation for displaying a viewpoint-changed image and an operation for displaying an image with a changed focus position can be performed in parallel.
 Furthermore, by additionally accepting an operation for changing the degree of enhancement, enhancement processing of the focus-adjusted composite image can also be performed in parallel. In addition, when the pixels of the image sensor that acquires the image signal containing the intensity and angle information of incident light rays are each divided into plural parts in the horizontal direction, the viewpoint movement UI is arranged so as to be operable in the horizontal direction. The direction in which the viewpoint can move then matches the direction in which the user operates, so the user can operate more intuitively.
 (Embodiment 2)
 Next, Embodiment 2 will be described. Embodiment 2 describes an example of operating viewpoint images while switching the image between portrait (vertical) and landscape (horizontal) display. The configuration of the digital camera 100 of this embodiment is the same as that of Embodiment 1; only part of the parallax image operation processing differs. The same components are therefore given the same reference numerals, duplicate descriptions are omitted, and the differences are described.
 (A series of operations related to viewpoint image operation processing)
 The viewpoint image operation processing according to this embodiment will be described with reference to FIG. 21. In S401, the control unit 121 determines whether to display the image data in the portrait orientation. The control unit 121 determines, for example by referring to the metadata of the input image, whether the image was shot in the portrait orientation. If so, the processing advances to S402 to perform portrait display; otherwise, it advances to S403 to perform landscape display. The determination in S401 may instead cause portrait display when the user has set it via a button or the like of the operation unit 132, or the display orientation may be determined according to information, obtained from the metadata, indicating the division direction of the pixels of the image sensor.
 In S402, the control unit 121 displays the image in the portrait orientation, displays the viewpoint movement UI so that it can be operated in the vertical direction, and displays the focus adjustment UI so that it can be operated in the horizontal direction. In S403, conversely, the control unit 121 displays the image in the landscape orientation, displays the viewpoint movement UI so that it can be operated in the horizontal direction, and displays the focus adjustment UI so that it can be operated in the vertical direction. The control unit 121 then performs the processing of S202 to S211 in the same manner as in Embodiment 1 and returns the processing to the caller.
 As described above, in this embodiment the viewpoint movement UI and the focus adjustment UI are switched dynamically according to whether the input image is displayed in the portrait or landscape orientation. Even when captured images with different display orientations exist, the user can thus operate in a direction that matches the direction in which the viewpoint of the captured image can move.
 (Embodiment 3)
 Next, Embodiment 3 will be described. Embodiment 3 differs in that it uses an image sensor in which each pixel is divided into two both horizontally and vertically. Apart from this point, the configuration of the digital camera 100 is the same as in Embodiment 1; the same components are given the same reference numerals, duplicate descriptions are omitted, and the differences are described.
 The arrangement of the pixels and sub-pixels of the image sensor 107 according to this embodiment will be described with reference to FIG. 22. FIG. 22 shows the pixel array of the image sensor 107 of this embodiment over a range of 4 columns × 4 rows, and the sub-pixel array over a range of 8 columns × 8 rows.
 In this embodiment, in the 2-column × 2-row pixel group 200 shown in FIG. 22, a pixel 200R with R (red) spectral sensitivity is placed at the upper left, pixels 200G with G (green) spectral sensitivity at the upper right and lower left, and a pixel 200B with B (blue) spectral sensitivity at the lower right. Each pixel is further composed of sub-pixels 221 to 224 arranged in 2 columns × 2 rows.
 The image sensor 107 has a large number of the 4-column × 4-row pixels (8-column × 8-row sub-pixels) shown in FIG. 22 arranged on its surface, enabling acquisition of a captured image (sub-pixel signals). For example, the image sensor 107 has a pixel period P of 4 μm, a pixel count N of 5575 columns × 3725 rows ≈ 20.75 million pixels, a sub-pixel period PSUB of 2 μm, and a sub-pixel count NSUB of 11150 columns × 7450 rows ≈ 83 million pixels.
 FIG. 23A shows a plan view of one pixel 200G shown in FIG. 22 as seen from the light-receiving surface side (+z side) of the image sensor 107, and FIG. 23B shows a cross-sectional view of the a–a section of FIG. 23A as seen from the −y side. As shown in FIG. 23A, in the pixel 200G of this embodiment, photoelectric conversion units 2301 to 2304 are formed, divided NH ways (into two) in the x direction and NV ways (into two) in the y direction. The photoelectric conversion units 2301 to 2304 correspond to the sub-pixels 221 to 224, respectively.
 In this embodiment, the first viewpoint image is generated by collecting the light reception signals of the sub-pixels 221 of the respective pixels. Similarly, the second viewpoint image is generated from the light reception signals of the sub-pixels 222, the third viewpoint image from those of the sub-pixels 223, and the fourth viewpoint image from those of the sub-pixels 224. In this embodiment, the first to fourth viewpoint images are each Bayer-array images, and demosaicing may be applied to them as needed.
 Letting j and i be integers, let (j, i) denote the position in the j-th row and i-th column of the image sensor 107, and let A0(j, i), B0(j, i), C0(j, i), and D0(j, i) denote the first, second, third, and fourth viewpoint images of the pixel at position (j, i). The captured image I is then I(j, i) = A0(j, i) + B0(j, i) + C0(j, i) + D0(j, i).
 (Viewpoint image correction and refocusing)
 As in Embodiment 1, the viewpoint change processing unit 155 performs contrast processing. That is, it calculates the luminance Y(j, i) of the Bayer-array captured image I(j, i) according to Equation (1). The viewpoint change processing unit 155 also calculates the high-frequency component dY(j, i), the high-frequency component dZ(j, i), and the contrast distribution C(j, i).
 Next, the viewpoint change processing unit 155 performs parallax enhancement processing of the viewpoint images. For the first viewpoint image A0(j, i) through the fourth viewpoint image D0(j, i), it applies a conversion that enlarges the differences between the viewpoint images according to Equations (15) and (16), enhancing the parallax. Through this processing, the viewpoint change processing unit 155 generates the first corrected viewpoint image A(j, i) through the fourth corrected viewpoint image D(j, i). Here, kAB, kAC, kAD, kBC, kBD, and kCD are real numbers satisfying 0 ≤ kAB, kAC, kAD, kBC, kBD, kCD ≤ 1.
[Equations (15) and (16): the parallax-enhancing conversion from A0(j, i)–D0(j, i) to the corrected viewpoint images A(j, i)–D(j, i), parameterized by kAB–kCD; the equation images are not reproduced here.]
 The refocus processing unit 156 then performs refocus processing using the corrected viewpoint images output by the viewpoint change processing unit 155. Specifically, for an integer shift amount s, the refocus processing unit 156 shift-adds the first corrected viewpoint image A through the fourth corrected viewpoint image D according to Equation (17).
[Equation (17): shift addition of the corrected viewpoint images A–D with the integer shift amount s, yielding the refocus image I(j, i; s); the equation image is not reproduced here.]
 That is, a refocus image I(j, i; s) on the virtual imaging plane corresponding to the integer shift amount s can be generated. Since the first through fourth corrected viewpoint images are Bayer arrays, the shift addition of Equation (17) is performed for each same color with a shift amount s = 2n (n: integer), a multiple of 2, so that the refocus image I(j, i; s) is generated while the Bayer array is preserved. The image processing unit 125 applies demosaicing to the generated refocus image I(j, i; s).
 Alternatively, demosaicing may be applied to the first through fourth corrected viewpoint images as needed, and the refocus processing unit 156 may generate the refocus image by performing the shift addition on the demosaiced corrected viewpoint images. The refocus processing unit 156 may also, as needed, generate interpolation signals between the pixels of the first through fourth corrected viewpoint images and generate a refocus image corresponding to a non-integer shift amount.
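 Since the equation images are not reproduced, the following two-view shift-and-add sketch only illustrates the principle behind Equation (17); the axis choice, the averaging, and the wrap-around shift via np.roll are assumptions (a real implementation would crop the edges, and for Bayer-array inputs s must be a multiple of 2 so that same-color samples align).

    import numpy as np

    def refocus(A, B, s):
        # Shift one viewpoint image by s pixels along the pupil-division axis
        # and add it to the other; varying s moves the virtual imaging plane.
        return 0.5 * (A + np.roll(B, s, axis=1))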
 (Image shift distribution)
 Next, the image shift distribution of the viewpoint images in this embodiment will be described. The image shift distribution in the horizontal pupil division direction is the same as in Embodiment 1 and is therefore omitted; the image shift distribution in the vertical pupil division direction is described below.
 The viewpoint change processing unit 155 moves the two-dimensional image by k pixels only in the vertical pupil division direction and takes the difference between the pixels of the first viewpoint image A0 and those of the third viewpoint image C0. The correlation calculation, summed over a plurality of rows, is therefore defined by Equation (18).
COR′(k) = Σ_{j=1}^{nj} Σ_{i=1}^{ni} | A0_{(j+k)i} − C0_{ji} |    (18)
Here, A0ji and C0ji denote the luminance of the i-th pixel in the j-th column of the first viewpoint image A0 and the third viewpoint image C0, respectively; ni is the number of pixels used in the calculation, and nj is the number, in the column direction, of the pair of images on which the correlation calculation is performed.
 The viewpoint change processing unit 155 calculates, as the image shift amount, the k that minimizes COR′(k) in Equation (18). Note that the subscript k is added only to j and is unrelated to i; this corresponds to performing the correlation calculation while moving the two-dimensional image only in the vertical pupil division direction. By calculating the image shift amount of each region of the first viewpoint image A0 and the third viewpoint image C0 in this way, the viewpoint change processing unit 155 can generate the image shift distribution. Although A0 and C0 are used in this embodiment, the correlation calculation may instead use B0 and D0, or use the sum of A0 and B0 against the sum of C0 and D0.
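 A direct transcription of this search for the k minimizing COR′(k) might look as follows; the row/column axis convention and the wrap-around shift are assumptions of this sketch (a real implementation would restrict the sums to valid, non-wrapped samples).

    import numpy as np

    def vertical_image_shift(A0, C0, k_max=8):
        # Evaluate COR'(k) = sum |A0[j+k, i] - C0[j, i]| for each candidate k
        # and return the k with the smallest value (Equation (18)).
        def cor(k):
            return np.abs(np.roll(A0, -k, axis=0) - C0).sum()
        return min(range(-k_max, k_max + 1), key=cor)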
 (Depth-of-field expansion processing)
 The viewpoint change processing unit 155 calculates the weighting coefficient of each viewpoint image according to Equations (19A) to (19D), using a real coefficient w (−1 ≤ w ≤ 1).
[Equations (19A)–(19D): the first to fourth weighting coefficients as functions of the real coefficient w; the equation images are not reproduced here.]
Here, Wa(j, i) is the first weighting coefficient of the first viewpoint image A(j, i), Wb(j, i) the second weighting coefficient of the second viewpoint image B(j, i), Wc(j, i) the third weighting coefficient of the third viewpoint image C(j, i), and Wd(j, i) the fourth weighting coefficient of the fourth viewpoint image D(j, i).
 The viewpoint change processing unit 155 generates the output image I(j, i) from each viewpoint image and its corresponding weighting coefficient according to Equation (20).
I(j, i) = Wa(j, i)·A(j, i) + Wb(j, i)·B(j, i) + Wc(j, i)·C(j, i) + Wd(j, i)·D(j, i)    (20)
 (Examples of the viewpoint movement UI and focus adjustment UI)
 The configuration of the viewpoint movement UI and focus adjustment UI according to this embodiment will be described with reference to FIG. 24. In the configuration of this embodiment, the pupil is divided in two directions, horizontal and vertical, so the user can move the viewpoint both vertically and horizontally. This embodiment therefore provides two-axis sliders and slider bars with which the user operates in each direction.
 A horizontal slider bar 3001 and slider 3002 are arranged for horizontal viewpoint movement, and a vertical slider bar 4001 and slider 4002 for vertical viewpoint movement. The focus adjustment UI places its slider bar 5001 and slider 5002 in a direction different from the viewpoint movement UI. Although the focus adjustment UI is placed so as to pass through the intersection of the cross-shaped viewpoint movement UI, it may be placed elsewhere. By moving the sliders in the two viewpoint-movement directions, the weighting coefficients of the first to fourth viewpoint images are changed and images with different viewpoints can be generated. As in Embodiment 1, the viewpoint movement UI and the focus adjustment UI can be operated in parallel (simultaneously).
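 Because Equations (19A)–(19D) are not reproduced here, the bilinear mapping below from the two slider positions to the four weights of Equation (20) is purely an assumption, chosen only so that the weights remain non-negative and sum to 1.

    def four_view_weights(x, y):
        # x, y in [-1, 1]: horizontal and vertical viewpoint-movement sliders.
        wa = (1 + x) * (1 + y) / 4.0
        wb = (1 - x) * (1 + y) / 4.0
        wc = (1 + x) * (1 - y) / 4.0
        wd = (1 - x) * (1 - y) / 4.0
        return wa, wb, wc, wd   # sums to 1 for all x, y

    # Equation (20): I = wa*A + wb*B + wc*C + wd*D with these weights.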
 As described above, in this embodiment, after an image signal containing the intensity and angle information of incident light rays is acquired, an operation for moving the viewpoint two-dimensionally (horizontally and vertically) and an operation for manipulating the focus position are accepted, and a composite image corresponding to those operations is generated and displayed. When an image whose viewpoint has been moved two-dimensionally can be generated from the input image signal, the user can thus perform two-dimensional viewpoint movement and focus position adjustment (refocus) in parallel.
 (Embodiment 4)
 Next, Embodiment 4 will be described. Embodiment 4 describes an example of controlling the UI notation (the labels attached to the viewpoint movement UI and the like described above) when operating viewpoint images while switching the image between portrait and landscape display. The notation of the UI for operating viewpoint images indicates, for example, the direction in which the viewpoint of the viewpoint image changes in response to the operation. The configuration of the digital camera 100 of this embodiment is the same as that of Embodiment 1; only part of the parallax image operation processing differs. The same components are therefore given the same reference numerals, duplicate descriptions are omitted, and the differences are described.
 (A series of operations related to viewpoint image operation processing)
 The viewpoint image operation processing according to this embodiment will be described with reference to FIGS. 25A and 25B. In S501 of FIG. 25A, the control unit 121 determines whether to display the input image in the portrait orientation, in order to decide whether to match the notation of the parallax image operation UI to portrait display of the image. The control unit 121 determines, for example by referring to the metadata of the input image, whether the image was shot in the portrait orientation; if it determines that the input image is to be displayed in the portrait orientation, the processing advances to S502 to use the portrait notation, and otherwise to S503 to use the landscape notation. In S502, the control unit 121 further determines the rotation angle of the image, for example by referring to the metadata to judge the angle of the captured image (for example, a portrait orientation rotated 90 degrees clockwise or 90 degrees counterclockwise). If the control unit 121 determines that the input image was shot rotated 90 degrees clockwise, the processing advances to S504 so that the display unit 131 uses the notation for 90-degree clockwise rotation; if it determines that the captured image is not rotated 90 degrees clockwise (and is to be displayed in the portrait orientation), the processing advances to S505 to use the notation for 90-degree counterclockwise rotation. The determinations in S501 and S502 may instead use the portrait notation when the user has set portrait display via a button or the like of the operation unit 132, or the notation of the parallax image operation UI may be determined according to information, obtained from the metadata, indicating the division direction of the pixels of the image sensor.
 In S503, the control unit 121 displays the image in the landscape orientation and displays the notation of the viewpoint movement UI as the landscape notation (left and right) (FIG. 26). In S504, the control unit 121 displays the image in the portrait orientation rotated 90 degrees clockwise and displays the notation of the viewpoint movement UI with the left side of the slider as up and the right side as down (FIG. 27). In S505, the control unit 121 displays the image in the portrait orientation rotated 90 degrees counterclockwise and displays the notation of the viewpoint movement UI with the left side of the slider as down and the right side as up (FIG. 28). In this way, the control unit 121 can switch the notation of the viewpoint movement UI according to its orientation, and can also switch the notation according to the rotation angle even for the same portrait orientation. After S503 to S505, the control unit 121 performs the processing of S202 to S211 shown in FIG. 25B in the same manner as in Embodiment 1 and returns the processing to the caller.
 As described above, in this embodiment the notation of the viewpoint movement UI is switched dynamically according to whether the input image is displayed in the portrait or landscape orientation and according to its rotation angle. Even when captured images with different display orientations exist, the user can thus operate in a direction that matches the direction in which the viewpoint of the captured image can move.
 (Other embodiments)
 The present invention can also be realized by processing in which a program that implements one or more functions of the above embodiments is supplied to a system or apparatus via a network or storage medium, and one or more processors in a computer of the system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that implements one or more functions.
 The present invention is not limited to the above embodiments, and various changes and modifications are possible without departing from the spirit and scope of the present invention. Accordingly, the following claims are appended to make the scope of the present invention public.
 This application claims priority based on Japanese Patent Application No. 2016-060897 filed on March 24, 2016, and Japanese Patent Application No. 2016-112103 filed on June 3, 2016, the entire contents of which are incorporated herein by reference.
121: control unit; 125: image processing unit; 131: display unit; 132: operation unit; 151: image acquisition unit; 155: viewpoint change processing unit; 156: refocus processing unit

Claims (18)

  1.  An image processing apparatus comprising:
     acquisition means for acquiring an image signal containing intensity and angle information of incident light rays;
     operation means for accepting an operation for changing a viewpoint and an operation for changing a focus position; and
     processing means for generating, based on a plurality of viewpoint images obtained from the image signal, a display image whose viewpoint is changed according to the operation for changing the viewpoint and whose focus position is changed according to the operation for changing the focus position.
  2.  The image processing apparatus according to claim 1, wherein the operation means accepts the operation for changing the viewpoint along a predetermined direction when a display image whose viewpoint has been changed in the predetermined direction can be generated based on the image signal.
  3.  The image processing apparatus according to claim 1, wherein the operation means accepts the operation for changing the focus position along a direction different from a predetermined direction when a display image whose viewpoint has been changed in the predetermined direction can be generated based on the image signal.
  4.  The image processing apparatus according to claim 2 or 3, wherein the operation means accepts the operation for changing the viewpoint such that the change of the viewpoint toward the predetermined direction increases with distance from a predetermined position along the predetermined direction, and the change of the viewpoint toward the opposite direction increases with distance from the predetermined position along the predetermined direction toward the opposite side.
  5.  The image processing apparatus according to any one of claims 2 to 4, wherein the predetermined direction is the horizontal direction or the vertical direction.
  6.  The image processing apparatus according to claim 5, wherein, when a display image whose viewpoint can be changed in the horizontal direction in a landscape orientation is displayed in a portrait orientation, the operation means accepts the operation for changing the viewpoint in the vertical direction.
  7.  The image processing apparatus according to claim 6, further comprising display means for displaying a notation indicating the direction in which the viewpoint is changed according to the operation for changing the viewpoint, wherein the display means varies the notation according to the direction in which the operation means accepts the operation for changing the viewpoint.
  8.  The image processing apparatus according to any one of claims 2 to 7, wherein the operation means accepts the operation for changing the focus position such that the change of the focus position increases as the operation moves away from a predetermined position in the direction in which the operation is accepted.
  9.  The image processing apparatus according to any one of claims 1 to 8, wherein the operation means accepts the operation for changing the viewpoint and the operation for changing the focus position while a display image based on the image signal is displayed.
  10.  The image processing apparatus according to any one of claims 1 to 9, wherein the operation means further accepts an operation for changing the degree of edge enhancement in the display image, and the processing means performs edge enhancement on an in-focus region of the display image according to the degree of edge enhancement.
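     To make the mechanism of claim 10 concrete, here is a minimal sketch of edge enhancement gated to in-focus regions. The unsharp-mask approach, the defocus_map input, and all names are illustrative assumptions; the patent does not prescribe a particular enhancement method.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def enhance_in_focus(image: np.ndarray, defocus_map: np.ndarray,
                             strength: float,
                             focus_threshold: float = 0.5) -> np.ndarray:
            """Sharpen only the regions whose estimated defocus is small.

            image:       H x W grayscale display image in [0, 1]
            defocus_map: H x W per-pixel defocus magnitude (0 = in focus)
            strength:    degree of edge enhancement chosen by the user
            """
            blurred = gaussian_filter(image, sigma=1.5)
            high_freq = image - blurred                 # unsharp-mask detail layer
            in_focus = (defocus_map < focus_threshold)  # mask of focused pixels
            out = image + strength * high_freq * in_focus
            return np.clip(out, 0.0, 1.0)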
  11.  The image processing apparatus according to any one of claims 1 to 10, wherein the processing means varies the weighting used for adding the plurality of viewpoint images in response to the operation for changing the viewpoint, and adds the plurality of viewpoint images using the weighting to generate the display image.
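     Claim 11's viewpoint synthesis by weighted addition can be sketched as follows for the simplest two-viewpoint case; the linear weights driven by a single slider value t are an assumption, since the claim requires only that the weighting vary with the operation.

        import numpy as np

        def synthesize_viewpoint(view_left: np.ndarray, view_right: np.ndarray,
                                 t: float) -> np.ndarray:
            """Blend two viewpoint images with weights driven by the UI operation.

            t = 0.0 reproduces the left viewpoint, t = 1.0 the right one,
            and intermediate values shift the effective viewpoint between them.
            """
            w_left, w_right = 1.0 - t, t  # weights vary with the operation
            return w_left * view_left + w_right * view_right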
  12.  The image processing apparatus according to any one of claims 1 to 11, wherein the processing means changes a shift amount between the plurality of viewpoint images in response to the operation for changing the focus position, and adds the plurality of viewpoint images shifted by the shift amount to generate the display image.
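     Similarly, claim 12's shift-and-add refocusing can be sketched for two viewpoint images; the wrap-around behavior of np.roll at the image borders is a simplification that a real implementation would replace with padding.

        import numpy as np

        def refocus(view_left: np.ndarray, view_right: np.ndarray,
                    shift_px: int) -> np.ndarray:
            """Change the apparent focus position by shifting one viewpoint
            image relative to the other before averaging.

            shift_px is derived from the focus-change operation; objects whose
            disparity equals shift_px come into focus in the result.
            """
            shifted = np.roll(view_right, shift_px, axis=1)  # horizontal shift
            return 0.5 * (view_left + shifted)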
  13.  An imaging apparatus comprising:
     an image sensor in which pixels each including a plurality of sub-pixels are arranged two-dimensionally, the image sensor outputting an image signal including intensity and angle information of incident light based on signals output from the sub-pixels;
     operation means for accepting an operation for changing a viewpoint and an operation for changing a focus position; and
     processing means for generating, based on a plurality of viewpoint images obtained from the image signal, a display image whose viewpoint is changed in response to the operation for changing the viewpoint and whose focus position is changed in response to the operation for changing the focus position,
     wherein, in each pixel of the image sensor, a plurality of the sub-pixels are arranged in a predetermined direction, and
     the operation means accepts the operation for changing the viewpoint along the predetermined direction when the display image whose viewpoint has been changed in the predetermined direction can be generated based on the image signal.
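     As an illustration of the sensor described in claim 13, the sketch below assumes a raw frame in which each pixel's two sub-pixels occupy adjacent columns (a dual-pixel-style layout chosen for illustration, not specified by the patent) and separates it into the viewpoint images the processing means operates on.

        import numpy as np

        def split_viewpoints(raw: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
            """Split a raw frame into two viewpoint images.

            raw: H x (2*W) array where columns alternate the left/right
                 sub-pixels of each pixel (hypothetical layout).
            Each sub-pixel sees the scene through a different half of the
            exit pupil, which encodes the angle information in the signal.
            """
            view_left = raw[:, 0::2].astype(np.float32)   # even columns
            view_right = raw[:, 1::2].astype(np.float32)  # odd columns
            return view_left, view_right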
  14.  The imaging apparatus according to claim 13, wherein the operation means accepts the operation for changing the viewpoint along the predetermined direction when the image signal captured in a landscape orientation with the imaging apparatus is displayed as the display image in a landscape orientation, and accepts the operation for changing the viewpoint along a direction perpendicular to the predetermined direction when the image signal captured in a landscape orientation is displayed as the display image in a portrait orientation.
  15.  The imaging apparatus according to claim 14, further comprising display means for displaying a notation indicating the direction in which the viewpoint is changed in response to the operation for changing the viewpoint, wherein the display means varies the notation according to the rotation angle from the portrait or landscape orientation in which the image signal was captured.
  16.  A control method for an image processing apparatus, comprising:
     an acquisition step in which acquisition means acquires an image signal including intensity and angle information of incident light;
     an operation step in which operation means accepts an operation for changing a viewpoint and an operation for changing a focus position; and
     a processing step in which processing means generates, based on a plurality of viewpoint images obtained from the image signal, a display image whose viewpoint is changed in response to the operation for changing the viewpoint and whose focus position is changed in response to the operation for changing the focus position.
  17.  A control method for an imaging apparatus having an image sensor in which pixels each including a plurality of sub-pixels are arranged two-dimensionally, the image sensor outputting an image signal including intensity and angle information of incident light based on signals output from the sub-pixels, the method comprising:
     an acquisition step in which acquisition means acquires the image signal output from the image sensor;
     an operation step in which operation means accepts an operation for changing a viewpoint and an operation for changing a focus position; and
     a processing step in which processing means generates, based on a plurality of viewpoint images obtained from the image signal, a display image whose viewpoint is changed in response to the operation for changing the viewpoint and whose focus position is changed in response to the operation for changing the focus position,
     wherein, in each pixel of the image sensor, a plurality of the sub-pixels are arranged in a predetermined direction, and
     in the operation step, the operation for changing the viewpoint is accepted along the predetermined direction when the display image whose viewpoint has been changed in the predetermined direction can be generated based on the image signal.
  18.  A program for causing a computer to execute each step of the control method for an image processing apparatus according to claim 16.
PCT/JP2017/002504 2016-03-24 2017-01-25 Image processing apparatus, image pickup apparatus, and control methods therefor, and program WO2017163588A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
CN201780019656.XA CN108886581B (en) 2016-03-24 2017-01-25 Image processing apparatus, image pickup apparatus, and control method thereof
RU2018137046A RU2707066C1 (en) 2016-03-24 2017-01-25 Image processing device, image creation device, their control methods and program
SG11201808036XA SG11201808036XA (en) 2016-03-24 2017-01-25 Image processing apparatus, imaging apparatus, and control methods thereof
KR1020187029815A KR102157491B1 (en) 2016-03-24 2017-01-25 Image processing apparatus, imaging apparatus and control method thereof, storage medium
DE112017001458.1T DE112017001458T5 (en) 2016-03-24 2017-01-25 Image processing apparatus, imaging apparatus and control method thereof
PH12018502032A PH12018502032A1 (en) 2016-03-24 2018-09-21 Image processing apparatus, imaging apparatus, and control methods thereof
US16/137,801 US10924665B2 (en) 2016-03-24 2018-09-21 Image processing apparatus, imaging apparatus, control methods thereof, and storage medium for generating a display image based on a plurality of viewpoint images

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2016060897 2016-03-24
JP2016-060897 2016-03-24
JP2016-112103 2016-06-03
JP2016112103A JP6757184B2 (en) 2016-03-24 2016-06-03 Image processing equipment, imaging equipment and their control methods and programs

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/137,801 Continuation US10924665B2 (en) 2016-03-24 2018-09-21 Image processing apparatus, imaging apparatus, control methods thereof, and storage medium for generating a display image based on a plurality of viewpoint images

Publications (1)

Publication Number Publication Date
WO2017163588A1 true WO2017163588A1 (en) 2017-09-28

Family

ID=59900206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/002504 WO2017163588A1 (en) 2016-03-24 2017-01-25 Image processing apparatus, image pickup apparatus, and control methods therefor, and program

Country Status (1)

Country Link
WO (1) WO2017163588A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009211335A (en) * 2008-03-04 2009-09-17 Nippon Telegr & Teleph Corp <Ntt> Virtual viewpoint image generation method, virtual viewpoint image generation apparatus, virtual viewpoint image generation program, and recording medium from which same recorded program can be read by computer
JP2013110556A (en) * 2011-11-21 2013-06-06 Olympus Corp Prenoptic camera
JP2015115818A (en) * 2013-12-12 2015-06-22 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP2015198340A (en) * 2014-04-01 2015-11-09 キヤノン株式会社 Image processing system and control method therefor, and program

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110388880A (en) * 2018-04-20 2019-10-29 株式会社基恩士 Form measuring instrument and form measuring method

Similar Documents

Publication Publication Date Title
JP6757184B2 (en) Image processing equipment, imaging equipment and their control methods and programs
US10681286B2 (en) Image processing apparatus, imaging apparatus, image processing method, and recording medium
JP6789833B2 (en) Image processing equipment, imaging equipment, image processing methods and programs
CN107465866B (en) Image processing apparatus and method, image capturing apparatus, and computer-readable storage medium
JP6972266B2 (en) Image processing method, image processing device, and image pickup device
CN107431755B (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
JP6516510B2 (en) Image processing apparatus, imaging apparatus, image processing method, image processing program, and storage medium
JP6254843B2 (en) Image processing apparatus and control method thereof
JP6976754B2 (en) Image processing equipment and image processing methods, imaging equipment, programs
US10868953B2 (en) Image processing device capable of notifying an effect of emphasizing a subject before performing imaging, control method thereof, and medium
JP7204357B2 (en) Imaging device and its control method
WO2017163588A1 (en) Image processing apparatus, image pickup apparatus, and control methods therefor, and program
WO2016143913A1 (en) Image processing method, image processing device, and image pickup apparatus
JP6800648B2 (en) Image processing device and its control method, program and imaging device
JP2020171050A (en) Image processing apparatus, imaging apparatus, image processing method, and storage medium
JP6817855B2 (en) Image processing equipment, imaging equipment, image processing methods, and programs
US10964739B2 (en) Imaging apparatus and control method thereof

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 11201808036X

Country of ref document: SG

ENP Entry into the national phase

Ref document number: 20187029815

Country of ref document: KR

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17769635

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17769635

Country of ref document: EP

Kind code of ref document: A1