WO2013146996A1 - Image processing device, imaging device, and image processing method - Google Patents
Image processing device, imaging device, and image processing method
- Publication number
- WO2013146996A1 (PCT/JP2013/059206)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- parallax
- amount
- image
- viewpoint images
- imaging
- Prior art date
Links
- 238000003384 imaging method Methods 0.000 title claims abstract description 137
- 238000012545 processing Methods 0.000 title claims abstract description 84
- 238000003672 processing method Methods 0.000 title claims description 9
- 238000012937 correction Methods 0.000 claims abstract description 132
- 238000004364 calculation method Methods 0.000 claims abstract description 31
- 210000001747 pupil Anatomy 0.000 claims description 53
- 230000003287 optical effect Effects 0.000 claims description 24
- 238000011156 evaluation Methods 0.000 claims description 11
- 238000006243 chemical reaction Methods 0.000 claims description 7
- 230000002708 enhancing effect Effects 0.000 claims description 4
- 239000003086 colorant Substances 0.000 claims description 2
- 238000000034 method Methods 0.000 description 45
- 230000006870 function Effects 0.000 description 38
- 230000000875 corresponding effect Effects 0.000 description 26
- 238000004891 communication Methods 0.000 description 18
- 238000001514 detection method Methods 0.000 description 13
- 239000004973 liquid crystal related substance Substances 0.000 description 12
- 238000010586 diagram Methods 0.000 description 9
- 230000008569 process Effects 0.000 description 7
- 238000005259 measurement Methods 0.000 description 6
- 230000033001 locomotion Effects 0.000 description 5
- 230000001133 acceleration Effects 0.000 description 4
- 230000004888 barrier function Effects 0.000 description 4
- 230000001276 controlling effect Effects 0.000 description 4
- 230000004907 flux Effects 0.000 description 3
- 239000011521 glass Substances 0.000 description 3
- 238000010295 mobile communication Methods 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 230000006835 compression Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000012790 confirmation Methods 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 238000005314 correlation function Methods 0.000 description 1
- 230000006837 decompression Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005401 electroluminescence Methods 0.000 description 1
- 230000005674 electromagnetic induction Effects 0.000 description 1
- 239000012634 fragment Substances 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 238000000691 measurement method Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 229910044991 metal oxide Inorganic materials 0.000 description 1
- 150000004706 metal oxides Chemical class 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000010897 surface acoustic wave method Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/133—Equalising the characteristics of different image components, e.g. their average brightness or colour balance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
- G02B7/34—Systems for automatic generation of focusing signals using different areas in a pupil plane
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
- G03B13/34—Power focusing
- G03B13/36—Autofocus systems
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
- G03B35/10—Stereoscopic photography by simultaneous recording having single camera with stereoscopic-base-defining system
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
- H04N13/218—Image signal generators using stereoscopic image cameras using a single 2D image sensor using spatial multiplexing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
- H04N13/225—Image signal generators using stereoscopic image cameras using a single 2D image sensor using parallax barriers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/282—Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/703—SSIS architectures incorporating pixels for producing signals other than image signals
- H04N25/704—Pixels specially adapted for focusing, e.g. phase difference pixel sets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/672—Focus control based on electronic image sensor signals based on the phase difference signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/673—Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/702—SSIS architectures characterised by non-identical, non-equidistant or non-planar pixel layout
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/703—SSIS architectures incorporating pixels for producing signals other than image signals
Definitions
- The present invention relates to an image processing device, an imaging device, and an image processing method, and in particular to detecting, from a plurality of viewpoint images captured by pupil division, a parallax that is robust to noise and equivalent to the parallax sensed by a person.
- Stereoscopic imaging devices including an imaging element that performs imaging by a pupil division method are known.
- A well-known method of calculating parallax by matching the left and right images outputs, as the parallax, the positional shift of the block that minimizes the sum of squared pixel-value differences. In this way, parallax is generally calculated by a correlation method, that is, a method based on the correlation of pixel values between the left-eye image and the right-eye image.
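The correlation method described here can be sketched in a few lines. This is an illustrative implementation, not code from the patent; the block size and search range are arbitrary choices.

```python
import numpy as np

def block_matching_parallax(left, right, block=8, max_shift=16):
    """Estimate per-block parallax by minimizing the sum of squared
    differences (SSD) between a block in the left image and
    horizontally shifted blocks in the right image."""
    h, w = left.shape
    disparity = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = left[y0:y0 + block, x0:x0 + block].astype(float)
            best_ssd, best_d = np.inf, 0
            for d in range(-max_shift, max_shift + 1):
                x1 = x0 + d
                if x1 < 0 or x1 + block > w:
                    continue  # candidate block would leave the image
                cand = right[y0:y0 + block, x1:x1 + block].astype(float)
                ssd = np.sum((ref - cand) ** 2)
                if ssd < best_ssd:
                    best_ssd, best_d = ssd, d
            disparity[by, bx] = best_d
    return disparity
```

Applied to pupil-divided left/right images, this yields the correlation-based parallax per block; the point of the invention is that this value still needs correction to match human-perceived parallax.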
- Patent Document 1 discloses a phase-difference image sensor in which microlenses with upper openings and microlenses with lower openings are provided so that a left-eye image and a right-eye image can be obtained, and further discloses obtaining distance information by matching the left-eye image and the right-eye image with a correlation method.
- Patent Document 2 discloses a phase-difference image sensor in which a left-eye image and a right-eye image are obtained by arranging photoelectric conversion elements on the left and right sides of a single cylindrical microlens.
- Patent Document 3 discloses obtaining a plurality of viewpoint images (a first image and a second image) by pupil-division imaging, applying different image processing to each viewpoint image, and performing each image processing so that the difference in image quality between the processed viewpoint images is reduced.
- Patent Document 4 discloses acquiring a plurality of viewpoint images (a first parallax image and a second parallax image) by pupil-division imaging, selecting a restoration filter corresponding to the defocus amount and image height at each position in the viewpoint images, and restoring the viewpoint images by deconvolving each position with the selected restoration filter.
- JP 2002-191060 A; JP 2011-515045 T; WO 2011/118089; WO 2011/118077
- The left-eye image and the right-eye image are equivalent to images obtained by applying mirror-symmetric half-moon filters. Therefore, when block matching is performed between such a left-eye image and right-eye image, there is a problem that a parallax slightly different from the parallax sensed by a person is calculated.
- A typical parallax calculation method (one of the so-called correlation methods) performs block matching and treats the positional shift that minimizes the sum of squared pixel-value differences as the correct answer.
- A person, however, perceives the shift of the peak position of the signal between the left and right images as the parallax.
- Since the correlation method does not necessarily output the shift of the peak position as its answer, its result may differ slightly from the parallax sensed by a person.
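The gap between the two measures is easy to reproduce numerically. The following sketch is not from the patent; the kernel shapes and the 5-pixel offset are invented values. It blurs a 1-D point source with mirror-image asymmetric (half-moon-like) kernels and shows that the SSD-minimizing shift differs from the shift of the intensity peaks.

```python
import numpy as np

# 1-D toy model of pupil-divided images of a point source.
n = 200
left = np.zeros(n)
right = np.zeros(n)
left[100:104] = [4, 3, 2, 1]    # left-leaning blur, peak at x=100
right[105:109] = [1, 2, 3, 4]   # mirrored blur, peak at x=108

# Correlation method: shift the right signal and minimize SSD.
shifts = np.arange(0, 12)
ssd = [np.sum((left - np.roll(right, -s)) ** 2) for s in shifts]
correlation_parallax = int(shifts[np.argmin(ssd)])

# Human-like measure: shift between the intensity peaks.
peak_parallax = int(np.argmax(right) - np.argmax(left))

print(correlation_parallax, peak_parallax)  # the two values disagree
```

With these asymmetric kernels the SSD minimum lands at a shift that blends the two blur shapes, while the peak-to-peak distance is larger; this is exactly the discrepancy the invention corrects.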
- Patent Documents 1 and 2 merely disclose phase-difference image sensors and say nothing about obtaining a parallax that is robust to noise and equivalent to the parallax sensed by a person.
- Patent Document 3 discloses performing image processing so that the difference in image quality between viewpoint images is reduced, but says nothing about obtaining a parallax that is robust to noise and equivalent to the parallax sensed by a person.
- Patent Document 4 discloses a restoration process that deconvolves each position in a plurality of viewpoint images, but likewise says nothing about obtaining a parallax that is robust to noise and equivalent to the parallax sensed by a person. That is, with the techniques of Patent Documents 1 to 4, it is difficult to obtain a parallax equivalent to the parallax sensed by a person.
- The present invention has been made in view of these circumstances, and an object thereof is to provide an image processing device, an imaging device, and an image processing method capable of obtaining a parallax that is robust to noise and equivalent to the parallax sensed by a person.
- To this end, the present invention provides an image processing device comprising: image acquisition means for acquiring a plurality of viewpoint images with different viewpoints generated by pupil-division imaging; first parallax amount calculation means for calculating a first parallax amount between the viewpoint images; storage means storing parallax correction information; and second parallax amount calculation means for calculating a second parallax amount by correcting the first parallax amount to the shift amount, in the parallax direction, of the object image.
- Here, the “parallax direction” is the direction in the viewpoint images corresponding to the line connecting a viewer's two eyes; in a phase-difference image sensor, it is the phase-difference direction of the phase-difference pixels. In other words, the direction connecting corresponding points between the viewpoint images is the “parallax direction”.
- According to the invention, the first parallax amount calculation means calculates the first parallax amount as a provisional answer for the parallax between the viewpoint images, and this provisional answer is then corrected, so that a parallax equivalent to the parallax sensed by a person can be obtained.
- This is because the parallax sensed by a person is the shift amount, in the parallax direction, of the actual object image.
- The shift amount in the parallax direction of the object image is the shift of the peak position of the pixel values of the object image between the viewpoint images.
- That is, the second parallax amount is calculated by correcting the first parallax amount to the shift amount of the peak position of the pixel values of the object image.
- the first parallax amount calculating means calculates the first parallax amount by correlation.
- The relationship between the parallax calculated by correlation matching and the shift amount in the parallax direction of the actual object image (the shift of the peak position) is stored in advance. When an arbitrary subject is imaged by pupil division, a provisional answer (the first parallax amount) between the viewpoint images is obtained by correlation matching, and, based on this provisional answer and the parallax correction information, the provisional answer is corrected to the shift amount in the parallax direction of the actual object image (the second parallax amount). A parallax that is robust to noise and equivalent to the parallax sensed by a person can thereby be obtained.
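The correction step can be sketched as a table lookup. The table values below are invented for illustration; an actual device would calibrate them in advance (per imaging-surface position, color channel, and aperture value, as described below).

```python
import numpy as np

# Hypothetical calibration: correlation-based parallax (first parallax
# amount) paired with the true peak-position shift (second parallax
# amount) measured for known point sources.
measured_parallax_lut = np.array([-8.0, -4.0, 0.0, 4.0, 8.0])
peak_shift_lut = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])

def correct_parallax(first_parallax):
    """Map the first (correlation-based) parallax amount to the second
    parallax amount by interpolating the stored correction table."""
    return np.interp(first_parallax, measured_parallax_lut, peak_shift_lut)

print(correct_parallax(4.0))  # -> 5.0 (a calibrated point)
print(correct_parallax(2.0))  # -> 2.5 (interpolated between entries)
```

Because the heavy lifting (block matching) is still done by the noise-robust correlation method, the corrected result keeps that robustness while matching the human-perceived peak shift.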
- The plurality of viewpoint images are captured by pupil division with an imaging element having an imaging surface on which a plurality of photoelectric conversion elements are arranged. The storage unit stores parallax correction information for each position on the imaging surface, and the second parallax amount calculation unit acquires from the storage unit the parallax correction information for the imaging-surface position corresponding to each position in the viewpoint images and calculates the second parallax amount based on the acquired information.
- The storage unit may store parallax correction information for each color channel of the pixels of the viewpoint images, in which case the second parallax amount calculation unit acquires the per-channel parallax correction information from the storage unit and calculates the second parallax amount for each color channel based on it.
- Alternatively, when the viewpoint images are composed of pixels of a plurality of colors including green pixels, the storage unit may store only the parallax correction information corresponding to the green pixels, and the second parallax amount calculation unit calculates the second parallax amount for both the green pixels and the other pixels based on that information.
- The plurality of viewpoint images may be captured with a photographing optical system having a variable aperture value. In that case, the storage unit stores parallax correction information for each aperture value, and the second parallax amount calculation means acquires the parallax correction information using the aperture value at the time of imaging as a parameter and calculates the second parallax amount based on it.
- A parallax enhancing unit may be provided that increases the second parallax amount of the viewpoint images calculated by the second parallax amount calculation unit.
- A multi-viewpoint image generation means may be provided that generates, from the plurality of viewpoint images acquired by the image acquisition unit, a multi-viewpoint image having a different number of viewpoints.
- The first parallax amount calculation means calculates, by block matching between the viewpoint images, an evaluation value indicating the degree of matching in predetermined pixel or sub-pixel units, and sets the positional shift that minimizes the evaluation value as the first parallax amount.
- The first parallax amount calculation means calculates the evaluation value as the sum of squared pixel-value differences between the viewpoint images or as the sum of the pixel-value differences between them.
- The storage means stores the parallax correction information as a lookup table or as a calculation formula.
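As an illustration of the "calculation formula" alternative, a formula can be fitted to the same kind of calibration data that would otherwise populate a lookup table. The calibration pairs below are hypothetical values, not data from the patent.

```python
import numpy as np

# Hypothetical calibration pairs: first (correlation-based) parallax
# versus the true peak-position shift.
first = np.array([-8.0, -4.0, 0.0, 4.0, 8.0])
second = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])

# Store the correction as a formula (here a linear fit a*x + b)
# instead of keeping the whole table.
slope, intercept = np.polyfit(first, second, 1)

def correct_by_formula(first_parallax):
    """Second parallax amount from the fitted calculation formula."""
    return slope * first_parallax + intercept
```

A formula is compact and extrapolates beyond the calibrated range, while a lookup table can capture nonlinear behavior exactly at the calibrated points; which to store is a design choice.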
- The present invention also provides an imaging apparatus comprising the above image processing device and imaging means for performing pupil-division imaging.
- The imaging unit has a photographing optical system including a focus lens, and autofocus processing means is provided that controls the position of the focus lens based on the second parallax amount calculated by the second parallax amount calculation unit.
- The present invention also provides an image processing method comprising: acquiring a plurality of viewpoint images with different viewpoints generated by pupil-division imaging; calculating a first parallax amount between the viewpoint images; and calculating a second parallax amount using parallax correction information, stored in advance in a storage unit, that represents the relationship between the parallax amount and the shift amount in the parallax direction of the corresponding object image between the viewpoint images resulting from pupil-division imaging.
- FIG. 1 is a perspective view showing an embodiment of a stereoscopic imaging apparatus to which an image processing apparatus according to the present invention is applied.
- Rear view of the stereoscopic imaging device
- Diagram showing a configuration example of the image sensor of the stereoscopic imaging device
- Enlarged view of the main part of the image sensor
- Block diagram showing an embodiment of the internal configuration of the stereoscopic imaging device
- Explanatory diagram showing an example of the left-eye image and right-eye image captured by the image sensor with pupil division
- Explanatory diagram showing a point light source located nearer than the in-focus position being imaged by pupil division
- Explanatory diagram of a point light source imaged by normal imaging with a conventional imaging device
- Explanatory diagram of a point light source imaged by pupil-division imaging with the imaging device of this example
- Explanatory diagram showing an example of the left-eye and right-eye images obtained by imaging, with pupil division, a point light source nearer than the in-focus position
- Explanatory diagram showing an example of the left-eye and right-eye images obtained by imaging, with pupil division, a point light source farther than the in-focus position
- Explanatory diagram showing the difference between the calculated parallax and the shift amount
- Block diagram showing a functional internal configuration example of the CPU and memory
- Explanatory diagram used to explain the relationship between the distance of the point light source, the calculated parallax, and the peak position shift
- Graph showing an example of the correspondence between the calculated parallax and the parallax correction amount
- Graph showing an example of the correspondence between the calculated parallax and the parallax correction amount for each color channel
- Explanatory diagram used to explain the relationship between the position in the screen, the calculated parallax, and the peak position shift, showing the left-eye and right-eye images obtained by imaging a point light source with pupil division
- Graph showing an example of the correspondence, for each position in the screen, between the calculated parallax and the parallax correction amount
- External view of a smartphone as a portable electronic device that is another embodiment of the image processing apparatus according to the invention
- Block diagram showing the configuration of the smartphone
- FIG. 1 is a perspective view showing an embodiment of a stereoscopic imaging apparatus to which an image processing apparatus according to the present invention is applied.
- FIG. 2 is a rear view of the stereoscopic imaging apparatus.
- The stereoscopic imaging device 10 (3D digital camera) is a digital camera that receives light passing through a lens with an imaging element, converts it into a digital signal, and records the signal on a recording medium such as a memory card.
- The stereoscopic imaging device 10 has a photographing lens 12, a strobe 1, and the like disposed on its front surface, and a shutter button 2, a power/mode switch 3, a mode dial 4, and the like disposed on its upper surface.
- A 3D liquid crystal monitor 30 for 3D display, a zoom button 5, a cross button 6, a MENU/OK button 7, a playback button 8, a BACK button 9, and the like are arranged on the back of the camera.
- The photographing lens 12 is a retractable zoom lens, which extends from the camera body when the camera mode is set to the shooting mode with the power/mode switch 3.
- The strobe 1 emits strobe light toward the main subject.
- The shutter button 2 is a two-stroke switch with so-called “half-press” and “full-press” positions: half-pressing it activates AE/AF, and fully pressing it executes shooting.
- The power/mode switch 3 serves both as a power switch for turning the stereoscopic imaging device 10 on and off and as a mode switch for setting its mode, and slides between an “OFF position”, a “playback position”, and a “shooting position”. The device is turned on by sliding the switch to the “playback position” or “shooting position” and turned off by sliding it to the “OFF position”. Sliding the switch to the “playback position” sets the playback mode, and sliding it to the “shooting position” sets the shooting mode.
- The mode dial 4 functions as shooting-mode setting means for setting the shooting mode of the stereoscopic imaging device 10; depending on its position, the shooting mode is set, for example, to a “planar image shooting mode” for capturing a planar image, a “stereoscopic image shooting mode” for capturing a stereoscopic (3D) image, or a “moving image shooting mode” for capturing a moving image.
- The 3D liquid crystal monitor 30 is a stereoscopic display unit that can display stereoscopic images (a left-eye image and a right-eye image) as directional images with predetermined directivity by means of a parallax barrier. When a stereoscopic image is input to the 3D liquid crystal monitor 30, a parallax barrier with a pattern of alternating light-transmitting and light-shielding portions at a predetermined pitch is generated on its parallax barrier display layer, and strip-shaped fragments of the left and right images are displayed alternately on the image display surface below it.
- The form of the 3D liquid crystal monitor 30 is not limited to this; any display that makes the left-eye and right-eye images recognizable as a stereoscopic image may be used, for example one using a lenticular lens, or one viewed through dedicated glasses such as polarizing glasses or liquid crystal shutter glasses so that the left-eye and right-eye images are seen individually.
- the zoom button 5 functions as zoom instruction means for instructing zooming, and includes a tele button 5T for instructing zooming to the telephoto side and a wide button 5W for instructing zooming to the wide angle side.
- Operating the tele button 5T or the wide button 5W in the shooting mode changes the focal length of the photographing lens 12; operating them in the playback mode enlarges or reduces the image being played back.
- The cross button 6 is an operation unit for inputting instructions in four directions (up, down, left, and right) and functions as a button (cursor-moving operation means) for selecting items from a menu screen and for selecting various setting items from each menu. Its left and right keys function as frame-advance (forward/reverse) buttons in the playback mode.
- The MENU/OK button 7 is an operation key combining the function of a menu button, for instructing that a menu be displayed on the screen of the 3D liquid crystal monitor 30, and the function of an OK button, for confirming and executing a selection.
- The playback button 8 switches to the playback mode, in which still images or moving images of recorded stereoscopic (3D) or planar (2D) images are displayed on the 3D liquid crystal monitor 30.
- The BACK button 9 functions as a button for canceling an input operation or returning to the previous operation state.
- The photographing lens 12 is an imaging optical system composed of a number of lenses, including a focus lens and a zoom lens. In the shooting mode, image light representing the subject is formed on the light-receiving surface of the image sensor 16 via the photographing lens 12.
- FIG. 3 is a diagram illustrating a configuration example of the image sensor 16.
- The image sensor 16 is configured as a CCD image sensor for detecting images having parallax (a plurality of viewpoint images). It has odd-line pixels (also called main pixels or A-plane pixels) and even-line pixels (also called sub-pixels or B-plane pixels), each arranged in a matrix, and the two planes of image signals (the plurality of viewpoint images) photoelectrically converted by the main pixels and sub-pixels can be read out independently.
- FIG. 4 is an enlarged view of a main part of the image sensor 16 functioning as a phase difference image sensor.
- a light shielding member 16A is disposed on the front surface side (microlens L side) of the photodiode PD of the main pixel of the image sensor 16, as shown in FIG. 4B.
- a light shielding member 16B is disposed on the front side of the photodiode PD of the subpixel.
- the microlens L and the light shielding members 16A and 16B function as pupil dividing means.
- the light shielding member 16A shields the left half of the light receiving surface of the main pixel (photodiode PD). Therefore, only the left side of the optical axis of the light beam passing through the exit pupil of the photographing lens 12 is received by the main pixel.
- the light shielding member 16B shields the right half of the light receiving surface of the subpixel (photodiode PD). Therefore, only the right side of the optical axis of the light beam passing through the exit pupil of the photographing lens 12 is received by the sub-pixel. In this way, the light beam passing through the exit pupil is divided into the right and left by the microlens L and the light shielding members 16A and 16B, which are pupil dividing means, and enter into the main pixel and the subpixel, respectively.
- the in-focus portions of the left and right light beams are imaged at the same position on the image sensor 16 (in phase), whereas the front- and rear-focused portions are incident on different positions on the image sensor 16 (out of phase). Accordingly, the subject image corresponding to the left half light beam and the subject image corresponding to the right half light beam can be acquired as parallax images (a left eye image and a right eye image) having different parallaxes.
- the image sensor 16 of this embodiment is a CCD image sensor, but is not limited thereto, and may be a CMOS type image sensor.
- FIG. 5 is a block diagram showing an embodiment of the internal configuration of the stereoscopic imaging apparatus 10.
- the stereoscopic imaging apparatus 10 records captured images on a memory card 54, and the overall operation of the apparatus is controlled by a central processing unit (CPU) 40.
- the stereoscopic imaging device 10 is provided with operation units 38 such as a shutter button, a mode dial, a playback button, a MENU / OK key, a cross key, a zoom button, a BACK key, and the like.
- a signal from the operation unit 38 is input to the CPU 40, and the CPU 40 controls each circuit of the stereoscopic imaging device 10 based on the input signal; for example, it performs lens driving control, aperture driving control, photographing operation control, image processing control, image data recording / reproduction control, display control of the 3D liquid crystal monitor 30, and the like.
- the operation unit 38 is provided with a parallax amount setting unit for user-set parallax correction.
- the photographing lens 12 is an imaging optical system composed of a large number of lenses.
- the diaphragm 14 is composed of, for example, five diaphragm blades, and the diaphragm driving unit 34 controls the diaphragm value (F value) from F2 to F8 continuously or stepwise.
- the light flux that has passed through the photographing lens 12 and the diaphragm 14 is imaged on the image sensor 16, and signal charges are accumulated in the image sensor 16.
- the signal charge accumulated in the image sensor 16 is read out as a voltage signal corresponding to the signal charge based on a read signal applied from the image sensor control unit 32.
- the voltage signal read from the image sensor 16 is applied to the analog signal processing unit 18.
- the analog signal processing unit 18 performs correlated double sampling processing on the voltage signal output from the image sensor 16 in order to reduce the noise (particularly thermal noise) included in the output signal: accurate pixel data is obtained by taking the difference between the feedthrough component level and the pixel signal component level included in the output signal for each pixel. The R, G, and B signals of each pixel are thus sampled and held, amplified, and then applied to the A/D converter 20.
- the A / D converter 20 converts R, G, and B signals that are sequentially input into digital R, G, and B signals and outputs them to the image input controller 22.
- the digital signal processing unit 24 performs predetermined signal processing on the digital image signals input via the image input controller 22, such as offset processing, gain control processing including white balance correction and sensitivity correction, gamma correction processing, synchronization processing, YC processing, and edge enhancement processing.
- the main image data read from the main pixels of the odd lines of the image sensor 16 is processed as left eye image data
- the sub image data read from the sub pixels of the even lines is processed as right eye image data.
- the left eye image data and right eye image data (3D image data) processed by the digital signal processing unit 24 are input to the VRAM 50.
- the VRAM 50 includes an A area and a B area for recording 3D image data each representing a 3D image for one frame.
- 3D image data representing a 3D image for one frame is rewritten alternately in the A area and the B area.
- the written 3D image data is read from an area other than the area in which the 3D image data is rewritten in the A area and the B area of the VRAM 50.
- the 3D image data read from the VRAM 50 is encoded by the video encoder 28 and output to the 3D liquid crystal monitor 30 provided on the back of the camera, whereby the 3D subject image is continuously displayed on the display screen of the 3D liquid crystal monitor 30.
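The A/B-area alternation described for the VRAM 50 can be sketched as a classic double buffer; this is only a conceptual illustration of the scheme (write to one area while reading from the other), not the actual VRAM 50 implementation:

```python
class DoubleBuffer:
    """Minimal sketch of the A/B-area alternation described for the
    VRAM 50: writes alternate between two areas, while reads always
    come from the most recently completed area."""

    def __init__(self):
        self.areas = [None, None]  # the A area and the B area
        self.write_idx = 0

    def write(self, frame):
        self.areas[self.write_idx] = frame
        self.write_idx ^= 1  # the next write goes to the other area

    def read(self):
        # Read from the area other than the one being rewritten next.
        return self.areas[self.write_idx ^ 1]
```

With this scheme the reader never observes a half-written frame, because a frame is only exposed through `read()` after its `write()` has completed.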
- the CPU 40 starts the AF operation and the AE operation, and moves the focus lens in the optical axis direction via the lens driving unit 36 so that the focus lens comes to the in-focus position.
- the AF processing unit 42 is a part that performs contrast AF processing or phase difference AF processing.
- in contrast AF processing, an AF evaluation value indicating the in-focus state is calculated by extracting the high-frequency component of the image in a predetermined focus area from at least one of the left-eye image and the right-eye image and integrating that high-frequency component. AF control is performed by moving the focus lens in the photographing lens 12 so that the AF evaluation value is maximized.
- in phase difference AF processing, the phase difference between the images corresponding to the main pixels and sub-pixels in a predetermined focus area of the left eye image and the right eye image is detected, and the defocus amount is obtained based on information indicating this phase difference. AF control is performed by moving the focus lens in the taking lens 12 so that the defocus amount becomes zero.
- the CPU 40 moves the zoom lens forward and backward in the optical axis direction via the lens driving unit 36 in accordance with the zoom command from the zoom button 5 to change the focal length.
- the image data output from the A / D converter 20 when the shutter button 2 is half-pressed is taken into the AE detection unit 44.
- the AE detection unit 44 integrates the G signals of the entire screen or integrates the G signals that are weighted differently in the central portion and the peripheral portion of the screen, and outputs the integrated value to the CPU 40.
- the CPU 40 calculates the brightness of the subject (the shooting Ev value) from the integrated value input from the AE detection unit 44, and determines the F value of the diaphragm 14 and the electronic shutter (shutter speed) of the image sensor 16 based on the shooting Ev value according to a predetermined program diagram.
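Conceptually, a program diagram maps a shooting Ev value to an (F value, shutter speed) pair. The sketch below uses the standard APEX relation Ev = Av + Tv (with Av = 2·log2(F) and Tv = −log2(shutter seconds)) and clamps the aperture to the F2–F8 range stated for the diaphragm 14; the even split of Ev between Av and Tv is an arbitrary illustrative policy, not the camera's actual diagram:

```python
import math


def choose_exposure(ev):
    """Pick (F value, shutter seconds) from a toy program diagram for a
    given shooting Ev value. Splitting Ev evenly between aperture (Av)
    and shutter (Tv) is an illustrative policy; real program diagrams
    are tuned per camera."""
    # Clamp Av so the F value stays within the stated F2..F8 range.
    av = min(max(ev / 2.0, 2 * math.log2(2.0)), 2 * math.log2(8.0))
    tv = ev - av
    f_value = 2.0 ** (av / 2.0)   # Av = 2 * log2(F)
    shutter = 2.0 ** (-tv)        # Tv = -log2(shutter seconds)
    return f_value, shutter
```

For a bright scene (large Ev) the shutter speed absorbs whatever exposure reduction the clamped aperture cannot provide.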
- the element denoted by reference numeral 46 is a known face detection circuit for detecting the face of a person within the shooting angle of view and setting the area including the face as the AF area and AE area (for example, see JP-A-9-101579).
- the element denoted by reference numeral 47 is a ROM (EEPROM) that stores, in addition to the camera control program, the defect information of the image sensor 16, and various parameters and tables used for image processing: an image processing program for correcting the stereoscopic effect of the left eye image and the right eye image according to the present invention (parallax correction), calculation formulas or lookup tables for calculating filter coefficients, and the parameters of the calculation formulas or the information for determining the lookup tables corresponding to the degree of parallax or parallax enhancement.
- when the shutter button 2 is half-pressed, the AE operation and the AF operation are completed. When the shutter button is then pressed to the second stage (fully pressed), the image data for the two images output from the A/D converter 20 in response to the press, the left viewpoint image (main image) corresponding to the main pixels and the right viewpoint image (sub image) corresponding to the sub-pixels, is input from the image input controller 22 to the memory (SDRAM) 48 and temporarily stored.
- the two pieces of image data temporarily stored in the memory 48 are appropriately read out by the digital signal processing unit 24.
- in the synchronization processing, the spatial shift of the color signals accompanying the arrangement of the primary color filters is interpolated. Predetermined signal processing is then performed, including the image processing for parallax correction and edge enhancement according to the present invention, and YC processing (processing for generating the luminance data and color difference data of the image data).
- the YC processed image data (YC data) is stored in the memory 48 again.
- the two pieces of YC data stored in the memory 48 are each output to the compression / decompression processing unit 26, subjected to predetermined compression processing such as JPEG (Joint Photographic Experts Group), and then stored in the memory 48 again.
- a multi-picture file (MP file: a file in a format in which a plurality of images are connected) is generated from the two pieces of YC data (compressed data) stored in the memory 48, and the MP file is recorded on the memory card 54 via the media controller 52.
- the stereoscopic imaging device 10 can acquire not only a stereoscopic image (3D image) but also a planar image (2D image).
- FIG. 6 shows an example of the left-eye image 60L and the right-eye image 60R having different viewpoints generated by the imaging device 16 by pupil division imaging.
- Reference numeral 61 denotes a short-distance object image
- reference numeral 62 denotes a medium-distance object image
- reference numeral 63 denotes a long-distance object image.
- the amount of deviation in the parallax direction (the horizontal direction in the figure) between the object images with the same reference numeral in the left eye image 60L and the right eye image 60R (that is, the deviation between the object images denoted by numeral 61, between those denoted by numeral 62, and between those denoted by numeral 63) corresponds to the actual parallax.
- the rough flow consists of the following [1] parallax measurement and [2] parallax correction.
- [1] Parallax measurement: block matching is performed by correlation calculation between the plurality of acquired viewpoint images (the left-eye image 60L and the right-eye image 60R generated by pupil division imaging), and the parallax (also referred to as the "parallax amount") is calculated.
- [2] Parallax correction: based on the calculated parallax of the plurality of viewpoint images (the left eye image 60L and the right eye image 60R) and the parallax correction information corresponding to the calculated parallax, the calculated parallax is corrected to the amount of deviation in the parallax direction of the actual object image (the actual parallax).
- a correlation method is known as a parallax measurement method. For example, matching can be performed by scanning a kernel of 15 pixels in the x direction and 1 pixel in the y direction, and finding the position where the sum of squares of the pixel value differences is smallest.
- the measurement result of parallax can be expressed as a monochrome image (distance image) that becomes darker as the distance increases.
- the parallax can be measured in sub-pixel units. For details, see Arai et al., "Optimization of correlation function and sub-pixel estimation method in image block matching" (Information Processing Society of Japan, Vol. 2004, No. 40 (CVIM-144), pp. 33-40) and the like.
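The block matching and sub-pixel estimation described above can be sketched as follows. This is a minimal illustration: the 15x1 kernel follows the text, while the search range, image layout, and parabola-fit refinement are common choices assumed here, not details fixed by this document.

```python
import numpy as np


def block_match_ssd(left, right, y, x, kernel_w=15, search=10):
    """Estimate the parallax at (y, x) by scanning a kernel_w x 1 kernel
    over the right image and minimizing the sum of squared differences
    (SSD), then refining to sub-pixel precision with a parabola fit."""
    half = kernel_w // 2
    ref = left[y, x - half:x + half + 1].astype(np.float64)
    costs = []
    for d in range(-search, search + 1):
        cand = right[y, x - half + d:x + half + 1 + d].astype(np.float64)
        costs.append(np.sum((ref - cand) ** 2))
    costs = np.array(costs)
    i = int(np.argmin(costs))
    disparity = i - search
    # Sub-pixel refinement: fit a parabola through the minimum cost and
    # its two neighbours (one common sub-pixel estimation method).
    if 0 < i < len(costs) - 1:
        c0, c1, c2 = costs[i - 1], costs[i], costs[i + 1]
        denom = c0 - 2 * c1 + c2
        if denom > 0:
            disparity += 0.5 * (c0 - c2) / denom
    return disparity
```

For a synthetic pair in which the right image is the left image shifted by 3 pixels, the function returns a disparity close to 3.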
- in parallax measurement, there is generally the problem that a slight difference exists between the parallax value calculated by correlation using block matching and the parallax value perceived by a person.
- FIG. 10 shows an example of a left-eye image 71L and a right-eye image 71R obtained by imaging, by the pupil division method, a point light source that exists at a position nearer than the in-focus position.
- FIG. 11 shows an example of a left-eye image 72L and a right-eye image 72R obtained by imaging, by the pupil division method, a point light source that exists at a position farther than the in-focus position.
- when the point light source is at the in-focus position, an ideal pulse-shaped image signal is obtained as indicated by reference numeral 12A, but if the point light source exists nearer than the in-focus position, a left-eye image 71L and a right-eye image 71R with parallax are obtained as indicated by reference numeral 12B.
- the parallax is calculated by performing block matching, and the right-eye image 71R is moved in the x-axis direction by the calculated parallax. Even so, as indicated by reference numeral 12D, the peak position of the pixel value of the left-eye image 71L does not always match the peak position of the pixel value of the right-eye image 71R.
- it is therefore preferable to use a lookup table (or calculation formula) that takes the calculated parallax (parallax amount) obtained as the measurement result as an input parameter and outputs a correction amount for correcting the calculated parallax (the deviation amount d of reference numeral 12C).
- FIG. 12 shows the case where parallax correction information (a lookup table or calculation formula) representing the correspondence between the calculated parallax and the correction amount of the calculated parallax (the difference between the calculated parallax and the peak position deviation amount) is used. This parallax correction information indirectly represents the correspondence between the calculated parallax and the shift amount of the peak position (the shift amount of the actual object image in the parallax direction); parallax correction information (a lookup table or calculation formula) that directly represents the correspondence between the calculated parallax and the deviation amount of the peak position may be used instead.
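A minimal sketch of such a lookup-table correction is shown below, using linear interpolation between sample points. The three sample values are the measured values quoted for FIG. 14 later in the text; the linear interpolation scheme is an illustrative assumption.

```python
import numpy as np

# Calculated-parallax -> correction-amount table. The sample points are
# the measured values quoted in the text for FIG. 14; intermediate
# parallaxes are filled in by linear interpolation.
CALC_PARALLAX = np.array([2.64, 4.66, 7.90])   # pixels
CORRECTION    = np.array([0.14, 0.23, 0.29])   # pixels


def correct_parallax(calculated):
    """Return the corrected parallax (the peak-position shift amount)
    for a parallax value obtained by correlation (block matching)."""
    amount = np.interp(calculated, CALC_PARALLAX, CORRECTION)
    return calculated + amount
```

At the table's sample points this reproduces the quoted peak-position shift amounts (e.g. 2.64 → 2.78 pixels).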
- FIG. 13 is a block diagram showing a functional internal configuration example of the CPU 40 and the memory 48 (storage means) in the present embodiment.
- the memory 48 stores parallax correction information representing the relationship between the parallax calculated by correlation between point light source images in the point light source images of a plurality of viewpoints generated by imaging a point light source by pupil division (the calculated parallax) and the shift amount of the peak value of the pixel value of the point light source images (the shift amount of the actual object image in the parallax direction).
- that is, the parallax correction information represents the relationship between the calculated parallax between the viewpoint images of a plurality of viewpoint images generated by imaging an arbitrary subject by pupil division and the shift amount of the peak position of the pixel value between those viewpoint images (the shift amount of the actual object image in the parallax direction).
- the memory 48 of the present example stores, as the parallax correction information, a lookup table (or function calculation formula) that associates the calculated parallax with the correction amount d (the difference between the calculated parallax and the peak position deviation amount).
- the CPU 40 includes a parallax calculating unit 40a that calculates the parallax between viewpoint images (the calculated parallax) in the plurality of viewpoint images generated by pupil-division imaging with the image sensor 16; a parallax correction unit 40b that corrects the calculated parallax to the shift amount of the peak position (the shift amount of the actual object image in the parallax direction) based on the calculated parallax obtained by the parallax calculating unit 40a and the parallax correction information stored in the memory 48; and a parallax enhancing unit 40c that increases the parallax of the plurality of viewpoint images corrected by the parallax correction unit 40b.
- the parallax correction unit 40b of this example acquires a correction amount for correcting the calculated parallax obtained by correlation, based on the lookup table (or function calculation formula) stored in the memory 48, and corrects the calculated parallax to the shift amount of the peak position based on the acquired correction amount.
- alternatively, the parallax correction unit 40b may correct the calculated parallax directly to the shift amount of the peak position based on the parallax correction information.
- that is, the stereoscopic imaging apparatus 10 includes: first parallax amount calculating means (the parallax calculating unit 40a) that calculates the first parallax amount (the calculated parallax) between the viewpoint images of the plurality of viewpoint images; storage means (the memory 48) storing parallax correction information representing the relationship between the first parallax amount and the corresponding amount of shift in the parallax direction of the actual object image between the viewpoint images of the plurality of viewpoint images resulting from pupil division imaging; second parallax amount calculating means (the parallax correction unit 40b) that calculates the second parallax amount by correcting the first parallax amount to the amount of deviation in the parallax direction of the actual object image; and parallax enhancing means 40c that increases the second parallax amounts of the plurality of viewpoint images calculated by the second parallax amount calculating means.
- in this example, the first parallax amount is calculated by performing matching between the viewpoint images (acquiring correspondences) by the arithmetic processing of the correlation method, but the present invention also includes the case where arithmetic processing other than the correlation method is used: for example, the first parallax amount may be calculated by matching feature points between the viewpoint images. Likewise, in this example the first parallax amount is corrected to the shift amount of the peak position, but the present invention includes the case where the second parallax amount is calculated using information other than the peak position shift that indicates the amount of deviation in the parallax direction of the actual object image.
- examples of matching between viewpoint images include the KLT (Kanade-Lucas-Tomasi) method, the SIFT (Scale-Invariant Feature Transform) method, and the like.
- the KLT method is more robust than the correlation method when there is a difference in the overall brightness of the images.
- the SIFT method, although it requires more processing time, has the advantage that it can cope with rotation and enlargement / reduction of the image. It is therefore preferable to select an appropriate calculation method according to the required processing speed, the variety of imaging scenes, and the like.
- the evaluation value of the degree of coincidence between blocks of the two viewpoint images is obtained by summing the squares of the differences between each pixel value L(i, j) of the target block of the left eye image and the corresponding pixel value R(i, j) of the target block of the right eye image; the smaller this sum of squares, the higher the degree of matching between the blocks.
- the calculation of the evaluation value of the degree of coincidence is not limited to the sum of squared differences (SSD); cross-correlation (CC), normalized cross-correlation (NCC), or other measures may also be used.
- the correlation in the present invention is not limited to the above example; various methods (correlation methods) that calculate the parallax by performing block matching between a plurality of viewpoint images and analyzing the correlation between the images in units of pixels or sub-pixels can be used.
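Two of the coincidence measures named above, SSD and NCC, can be sketched as follows. The formulas are the standard textbook definitions; the flat-array block shape and the mean-centering in NCC are illustrative choices.

```python
import numpy as np


def ssd(a, b):
    """Sum of squared differences: smaller means a better match."""
    return float(np.sum((a - b) ** 2))


def ncc(a, b):
    """Normalized cross-correlation: closer to 1 means a better match.
    Unlike SSD, it is insensitive to gain and offset differences between
    the two blocks (assumes the blocks are not constant)."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

NCC's gain/offset invariance is why normalized measures are often preferred when the two viewpoint images differ in overall brightness.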
- FIG. 14 shows an example of point light source images of a plurality of viewpoints obtained by placing point light sources at a plurality of positions that are nearer than the in-focus position and at different distances from the stereoscopic imaging apparatus 10, and imaging the point light sources at each position by the pupil division method.
- the calculated parallax and the size of the blur shape of the point light source increase as the distance from the stereoscopic imaging apparatus 10 increases.
- for the first multi-viewpoint image (the first left-eye image 101L and the first right-eye image 101R), the calculated parallax was 2.64 pixels, the shift amount of the peak position of the pixel value was 2.78 pixels, and the parallax correction amount was 0.14 pixels.
- for the second multi-viewpoint image (the second left-eye image 102L and the second right-eye image 102R), the calculated parallax was 4.66 pixels, the shift amount of the peak position of the pixel value was 4.89 pixels, and the parallax correction amount was 0.23 pixels.
- for the third multi-viewpoint image (the third left-eye image 103L and the third right-eye image 103R), the calculated parallax was 7.90 pixels, the shift amount of the peak position of the pixel value was 8.19 pixels, and the parallax correction amount was 0.29 pixels.
- the calculated parallax is the parallax indicating the amount of deviation between the position of the point light source image in the left eye image and the position of the point light source image in the right eye image, calculated by correlation between the point light source image in the left eye image and the point light source image in the right eye image.
- the shift amount of the peak position of the pixel value (the shift amount of the actual object image in the parallax direction) is the amount of deviation between the peak position of the pixel value in the left-eye image and the peak position of the pixel value in the right-eye image, and indicates the actual amount of deviation of the point light source image.
- the parallax correction amount is the difference between the calculated parallax and the shift amount of the peak position of the pixel value.
- FIG. 15 is a graph showing an example of the correspondence relationship between the calculated parallax and the parallax correction amount.
- the parallax correction information is generated by placing a point light source at a plurality of positions with different distances, imaging the point light source by the pupil division method at each position, obtaining the parallax correction amount that is the difference between the parallax calculated by correlation (the calculated parallax) and the shift amount of the peak position of the pixel value (the shift amount of the actual object image in the parallax direction), and deriving, by interpolation processing, information indicating the parallax correction amount for each arbitrary parallax.
- the parallax correction amount has a certain correlation with respect to the parallax calculated by the correlation (calculated parallax).
- the relationship between the calculated parallax and the parallax correction amount is obtained by actual measurement in advance and stored as a lookup table (LUT).
- the relationship between the calculated parallax and the parallax correction amount may be used as a function, and the parallax correction amount may be calculated using the parallax calculated by the correlation as a parameter.
- the parallax correction amount may be calculated by simulation.
- note that the direct relationship between the calculated parallax and the amount of deviation in the parallax direction of the actual object image may be stored as a lookup table (LUT), or that direct relationship may be expressed as a function.
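When the relationship is expressed as a function rather than a table, one way to obtain it is least-squares fitting of the measured (calculated parallax, correction amount) pairs. The sketch below reuses the three FIG. 14 sample values; the quadratic degree is an illustrative choice (with only three points it interpolates them exactly).

```python
import numpy as np

# Measured (calculated parallax, correction amount) pairs, e.g. obtained
# by imaging a point light source at several distances. The values are
# the three samples quoted in the text for FIG. 14.
calc = np.array([2.64, 4.66, 7.90])
corr = np.array([0.14, 0.23, 0.29])

# Express the relationship as a function: a quadratic least-squares fit.
coeffs = np.polyfit(calc, corr, 2)


def correction_amount(parallax):
    """Evaluate the fitted correction function at a calculated parallax."""
    return float(np.polyval(coeffs, parallax))
```

Compared with a lookup table, a fitted function needs only a few coefficients in memory and extrapolates smoothly between sample points.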
- the plurality of viewpoint images are captured by pupil division with the image sensor 16 having the imaging surface 16C on which a plurality of photoelectric conversion elements are arranged; the memory 48 stores the parallax correction information for each of different positions on the imaging surface 16C of the image sensor 16, and the parallax correction unit 40b (second parallax amount calculating means) acquires from the memory 48 the parallax correction information for each position on the imaging surface 16C of the image sensor 16 corresponding to the positions in the plurality of viewpoint images, and corrects the calculated parallax based on the acquired parallax correction information.
- FIG. 16 is a graph illustrating an example of a correspondence relationship between the calculated parallax and the parallax correction amount for each color channel. As shown in FIG. 16, the correspondence between the calculated parallax and the parallax correction amount differs for each color channel.
- the color channel is a series of pixels of the same color in the viewpoint image corresponding to each color (R, G, B) of the color filter provided in the pixel of the image sensor 16.
- among the photoelectric conversion elements with color filters arranged on the imaging surface 16C of the image sensor 16, B (blue) generally has the highest refractive index of R, G, and B, so the size of the blur shape in the B channel (the series of B pixels) of the captured image (the plurality of viewpoint images) is larger than the size of the blur shape in the R channel and the G channel. Therefore, when the difference between the calculated parallax and the peak position shift amount of the pixel value (the parallax correction amount) is calculated for each color channel (for each of R, G, and B), the absolute value of the calculated difference (the parallax correction amount) generally satisfies R < G < B.
- the memory 48 of this example stores a lookup table (or function) for outputting a parallax correction amount for each color channel.
- the parallax correction amount is calculated based on the point light source parallax calculated for each color channel (the calculated parallax) and the shift amount of the peak position of the pixel value calculated for each color channel (the shift amount of the actual object image in the parallax direction). That is, the memory 48 stores a lookup table (or function) for each pixel color of the image sensor 16 that captures the plurality of viewpoint images (i.e., for each color of the color filter provided on the photodiode PD).
- the parallax correction unit 40b of the present example acquires from the memory 48, for each color channel, the difference between the point light source parallax calculated by correlation (the calculated parallax) and the shift amount of the peak position of the pixel value (the shift amount of the actual object image in the parallax direction), i.e. the parallax correction amount, and corrects the parallax (the calculated parallax) in the plurality of viewpoint images for each color channel based on the acquired difference. Thus, even when the correspondence between the calculated parallax and the parallax correction amount differs for each color channel, the parallax can be corrected with an appropriate correction amount.
- this example is not limited to performing the parallax correction for each color channel as described above; information on the G channel, which has the largest correlation with brightness (the relationship between the calculated parallax and the amount of deviation in the parallax direction of the actual object image), may be stored in the memory 48 in advance, and the parallax correction of the other color channels may be performed based on the G channel information.
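A sketch of the per-color-channel correction, with one table per channel: the table values below are made-up placeholders, chosen only so that the absolute correction amount satisfies R < G < B as stated above; real tables would come from per-channel point-light-source measurements.

```python
import numpy as np

# One (calculated parallax -> correction amount) table per color channel.
# Placeholder values: larger corrections for B than G than R, matching
# the R < G < B tendency stated in the text.
TABLES = {
    "R": (np.array([2.0, 8.0]), np.array([0.10, 0.25])),
    "G": (np.array([2.0, 8.0]), np.array([0.12, 0.28])),
    "B": (np.array([2.0, 8.0]), np.array([0.15, 0.33])),
}


def corrected_parallax(calculated, channel):
    """Correct a calculated parallax using the table for one channel."""
    xs, ys = TABLES[channel]
    return calculated + float(np.interp(calculated, xs, ys))
```

Correcting each channel with its own table compensates for the channel-dependent blur size, so the corrected parallaxes of R, G, and B line up better than with a single shared table.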
- FIG. 17 is an explanatory diagram used for explaining the relationship between the position in the screen, the calculated parallax, and the shift amount of the peak position; it shows a left eye image and a right eye image obtained by imaging a point light source by pupil division.
- FIG. 18 is a graph illustrating an example of a correspondence relationship between the calculated parallax and the parallax correction amount for each position (center, end) in the screen.
- the position in the screen refers to a position in the screen corresponding to the imaging surface 16C on which the photoelectric conversion elements of the image sensor 16 are arranged, which corresponds to the entire area of the captured image (viewpoint image).
- the photoelectric conversion elements (photodiodes PD) arranged on the imaging surface 16C of the image sensor 16 have different light incident angles at the center and at the ends of the imaging surface. Therefore, in general, the blur shape of the point light source image at the center position differs from that at the left and right end positions in the entire area of the captured image corresponding to the imaging surface 16C of the image sensor 16.
- the memory 48 of this example stores a lookup table (or function) that outputs the parallax correction amount for each position in the screen. That is, the memory 48 stores a lookup table (or function) for each different position on the imaging surface of the imaging device 16 that captures a plurality of viewpoint images.
- the parallax correction amount is calculated based on the point light source parallax calculated for each screen position (the calculated parallax) and the peak position shift amount calculated for each screen position (the shift amount of the actual object image in the parallax direction). That is, the memory 48 stores the parallax correction amount for each predetermined position in the entire area of the captured image.
- the predetermined position may be two or more positions in the screen. For example, only the parallax correction amount at the center position of the screen and the parallax correction amounts at the left and right end positions may be stored in the memory 48.
- the parallax correction unit 40b of this example reads the parallax correction amount for each position in the screen from the memory 48, and corrects the calculated parallax in the plurality of viewpoint images for each position in the screen based on the read correction amount. Thus, even when the correspondence between the calculated parallax and the parallax correction amount differs depending on the position in the screen, the parallax can be corrected with an appropriate correction amount.
- the parallax correction amount may be managed by a lookup table or a function for each of a larger number of positions in the screen (for example, 16 positions). Alternatively, the screen may be divided into a plurality of strip-shaped areas in the left-right direction, and the parallax correction amount may be switched depending on which area the position falls in.
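The strip-area variant might be sketched as follows; the number of strips and the table values are illustrative placeholders (in practice they would come from measuring point sources at the corresponding image-height positions, with larger corrections near the left and right edges where the incident angle is larger):

```python
import numpy as np


def region_index(x, width, n_strips=4):
    """Map a horizontal pixel position to one of n_strips vertical
    strip regions (n_strips = 4 is an illustrative choice)."""
    return min(x * n_strips // width, n_strips - 1)


# One (calculated parallax -> correction amount) table per strip.
STRIP_TABLES = [
    (np.array([2.0, 8.0]), np.array([0.20, 0.40])),  # left edge
    (np.array([2.0, 8.0]), np.array([0.12, 0.28])),  # center-left
    (np.array([2.0, 8.0]), np.array([0.12, 0.28])),  # center-right
    (np.array([2.0, 8.0]), np.array([0.20, 0.40])),  # right edge
]


def corrected_parallax_at(calculated, x, width):
    """Correct a calculated parallax using the table for the strip
    containing horizontal position x."""
    xs, ys = STRIP_TABLES[region_index(x, width)]
    return calculated + float(np.interp(calculated, xs, ys))
```

Switching whole strips is cheaper than storing a table per pixel, at the cost of a small step in the correction at strip boundaries.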
- The memory 48 may store the parallax correction amount for each F-number of the photographing optical system 11. In that case, the parallax correction unit 40b reads the parallax correction amount from the memory 48 using, as a parameter, the F-number at which the plurality of viewpoint images were captured with the photographing optical system 11, and corrects the calculated parallax in the plurality of viewpoint images based on the read parallax correction amount.
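A minimal sketch of the F-number variant, assuming the memory holds one correction table per stored F-number and that the nearest stored F-number is selected at correction time (the selection rule and all numeric values here are our assumptions, not from the patent):

```python
# Sketch: per-F-number correction tables, selected by the F-number at capture.
# All numeric values are illustrative placeholders.

CORRECTION_BY_F = {
    2.0: {"slope": 0.040},   # wide aperture: larger blur, larger correction
    4.0: {"slope": 0.025},
    8.0: {"slope": 0.010},   # small aperture: blur shrinks, so does the correction
}

def correct_parallax(calculated_parallax, f_number):
    # Pick the nearest stored F-number (the memory holds discrete tables).
    nearest = min(CORRECTION_BY_F, key=lambda f: abs(f - f_number))
    correction = CORRECTION_BY_F[nearest]["slope"] * calculated_parallax
    return calculated_parallax + correction
```

For example, `correct_parallax(4.0, 2.8)` falls back to the F2.0 table, since 2.8 is closer to 2.0 than to 4.0 in this placeholder set.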
- FIG. 19 is a flowchart illustrating a flow of an image processing example when parallax enhancement is performed.
- In advance, the difference between the calculated parallax and the peak-position shift amount is stored in the memory 48 as the parallax correction amount for each parallax of the point light source.
- a plurality of viewpoint images (left eye image, right eye image) in which an arbitrary subject is imaged by pupil division are acquired (step S11).
- the plurality of viewpoint images do not include a point light source as a subject.
- The parallax correction amount corresponding to the parallax calculated by correlation between the plurality of viewpoint images is read from the memory 48, and the calculated parallax is corrected based on the read parallax correction amount (step S13).
- parallax enhancement is performed to increase the parallax of the corrected plurality of viewpoint images (step S14).
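The flow of FIG. 19 can be sketched end to end. The correlation search range, the correction lookup, and the enhancement gain below are illustrative assumptions; the excerpt only fixes the order of the steps (acquire, calculate parallax by correlation, correct, enhance):

```python
import numpy as np

def process(left, right, correction_lut, gain=1.5):
    """Sketch of the FIG. 19 flow: parallax by correlation, LUT correction,
    then parallax enhancement. `left`/`right` are 1-D arrays standing in for
    matched scanlines of the two viewpoint images; `correction_lut` maps a
    calculated parallax to its correction amount (both are our stand-ins)."""
    # Calculated parallax by correlation: the integer shift minimizing the
    # sum of squared differences (the +/-8 search range is an arbitrary choice).
    shifts = range(-8, 9)
    ssd = [np.sum((left[8:-8] - np.roll(right, s)[8:-8]) ** 2) for s in shifts]
    calculated = list(shifts)[int(np.argmin(ssd))]
    # Step S13: correct the calculated parallax toward the true peak shift.
    corrected = calculated + correction_lut(calculated)
    # Step S14: parallax enhancement increases the corrected parallax.
    return corrected * gain
```

With a right scanline that is the left one shifted by 3 pixels and a lookup that adds 5% of the calculated parallax, this returns (3 + 0.15) × 1.5.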
- FIG. 20 is a flowchart illustrating a flow of an image processing example when multi-viewpoint image generation is performed.
- Steps S11 to S14 are the same as in the image processing example of FIG. 19.
- In step S15, based on the parallax of the plurality of viewpoint images corrected in step S13, a multi-viewpoint image having a number of viewpoints different from that of the plurality of viewpoint images acquired in step S11 is generated.
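Step S15 is not spelled out in this excerpt; one plausible realization, sketched below, synthesizes intermediate views by shifting each pixel by a fraction of its corrected disparity (a simple forward-warping assumption, not necessarily the patent's method):

```python
import numpy as np

def synthesize_views(left, disparity, n_views):
    """Generate n_views images (n_views >= 2) between the left view and the
    farthest view by shifting each pixel by a fraction of its corrected
    disparity. left: 2-D grayscale image; disparity: per-pixel map in pixels."""
    h, w = left.shape
    views = []
    xs = np.arange(w)
    for k in range(n_views):
        alpha = k / (n_views - 1)          # 0 = left view, 1 = farthest view
        out = np.zeros_like(left)
        for y in range(h):
            # Forward-warp: move pixels by alpha * disparity (nearest integer).
            tx = np.clip(np.round(xs + alpha * disparity[y]).astype(int), 0, w - 1)
            out[y, tx] = left[y]
        views.append(out)
    return views
```

A production renderer would also fill the holes that forward warping leaves behind; this sketch only illustrates how the corrected parallax drives the viewpoint count.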
- FIG. 21 is a flowchart showing the flow of a conventional phase difference AF (autofocus) processing example.
- FIG. 22 is a flowchart showing a flow of a phase difference AF processing example to which the present invention is applied.
- In the conventional processing, a parallax serving as the defocus amount is obtained by correlation (step S22). For example, as the correlation, the squared differences of the pixel values between the left and right phase-difference pixels are summed within a predetermined area, and the parallax that minimizes this sum of squares is taken as the correct defocus amount.
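The correlation just described, summing the squared differences of the left and right phase-difference pixel values over a predetermined area and taking the minimizing shift, can be sketched as:

```python
import numpy as np

def defocus_parallax(left_px, right_px, max_shift=8):
    """Return the integer shift minimizing the sum of squared differences
    between left and right phase-difference pixel rows (step S22 sketch).
    max_shift is an illustrative search limit, not a value from the patent."""
    n = len(left_px)
    best_shift, best_ssd = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Compare left_px[i] with right_px[i + s] over their valid overlap.
        lo, hi = max(0, -s), min(n, n - s)
        diff = left_px[lo:hi] - right_px[lo + s:hi + s]
        ssd = float(np.sum(diff.astype(float) ** 2))
        if ssd < best_ssd:
            best_shift, best_ssd = s, ssd
    return best_shift
```

A sub-pixel refinement (for example, a parabolic fit around the minimum) is commonly added on top of this integer search, but the minimizing-shift idea is the same.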
- In the processing of FIG. 22, when the signals of the left and right phase-difference pixels are acquired (step S31), the parallax serving as the defocus amount is obtained by correlation as a tentative answer (step S32).
- The parallax correction amount is then obtained from the lookup table in the memory 48 using the tentative answer as a parameter (step S33), and the sum of the tentative answer obtained in step S32 and the parallax correction amount is taken as the correct defocus amount (step S34).
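Steps S31 to S34 thus amount to adding a table-derived correction to the tentative answer. A minimal sketch, using the point-light-source measurement pairs that appear elsewhere in the description as the stored table (the nearest-entry selection rule and the sign handling are our assumptions):

```python
# Sketch of steps S32-S34: tentative defocus by correlation, then LUT correction.
# The table pairs (calculated parallax, correction) come from the description's
# point-light-source example; a real device would store a denser table.

LUT = [(2.64, 0.14), (4.66, 0.23), (7.90, 0.29)]

def correction_for(parallax):
    # Nearest stored entry; a real implementation might interpolate instead.
    return min(LUT, key=lambda e: abs(e[0] - abs(parallax)))[1]

def corrected_defocus(tentative):
    # S33 + S34: the tentative answer plus its correction is the final answer.
    sign = 1 if tentative >= 0 else -1
    return tentative + sign * correction_for(tentative)
```

For a tentative answer of 4.66 pixels, this returns 4.66 + 0.23 = 4.89 pixels, matching the measured peak-position shift.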
- The image processing method described above thus includes a step of acquiring a plurality of viewpoint images with different viewpoints generated by pupil-division imaging, a step of calculating a first parallax amount between the viewpoint images, and a step of calculating, based on the first parallax amount and the parallax correction information stored in advance in the memory 48, a second parallax amount in which the first parallax amount is corrected to the shift amount of the actual object image in the parallax direction.
- By moving the focus lens in the optical-axis direction of the photographing optical system 11, processing for focusing on the target subject is also performed.
- the portable electronic device has been described by taking the stereoscopic imaging device (digital camera) 10 as an example, but the configuration of the portable electronic device is not limited to this.
- a built-in or external PC camera or a portable terminal device having a photographing function as described below can be used as another portable electronic device to which the present invention is applied.
- Examples of the portable terminal device that is the second embodiment of the portable electronic device according to the present invention include a mobile phone, a smartphone, a PDA (Personal Digital Assistant), and a portable game machine.
- Hereinafter, a smartphone is taken as an example and described in detail with reference to the drawings.
- FIG. 23 shows an appearance of a smartphone 500 which is another embodiment of the portable electronic device according to the present invention.
- The smartphone 500 illustrated in FIG. 23 includes a flat housing 502 and a display input unit 520 in which a display panel 521 serving as a display unit and an operation panel 522 serving as an input unit are integrated on one surface of the housing 502.
- the housing 502 includes a speaker 531, a microphone 532, an operation unit 540, and a camera unit 541.
- the configuration of the housing 502 is not limited thereto, and for example, a configuration in which the display unit and the input unit are independent, or a configuration having a folding structure or a slide mechanism can be employed.
- FIG. 24 is a block diagram showing a configuration of the smartphone 500 shown in FIG.
- The main components of the smartphone include a wireless communication unit 510, a display input unit 520, a call unit 530, an operation unit 540, a camera unit 541, a storage unit 550, an external input/output unit 560, a GPS (Global Positioning System) receiving unit 570, a motion sensor unit 580, a power supply unit 590, and a main control unit 501.
- a wireless communication function for performing mobile wireless communication via the base station device BS and the mobile communication network NW is provided as a main function of the smartphone 500.
- the wireless communication unit 510 performs wireless communication with the base station apparatus BS accommodated in the mobile communication network NW according to an instruction from the main control unit 501. Using this wireless communication, transmission and reception of various file data such as audio data and image data, e-mail data, and reception of Web data and streaming data are performed.
- The display input unit 520 visually conveys information to the user by displaying images (still images and moving images), character information, and the like under the control of the main control unit 501, and detects user operations on the displayed information.
- The display input unit 520 is a so-called touch panel and includes the display panel 521 and the operation panel 522.
- the display panel 521 uses an LCD (Liquid Crystal Display), an OELD (Organic Electro-Luminescence Display), or the like as a display device.
- the operation panel 522 is a device that is placed so that an image displayed on the display surface of the display panel 521 is visible and detects one or a plurality of coordinates operated by a user's finger or stylus. When this device is operated by a user's finger or stylus, a detection signal generated due to the operation is output to the main control unit 501. Next, the main control unit 501 detects an operation position (coordinates) on the display panel 521 based on the received detection signal.
- The display panel 521 and the operation panel 522 of the smartphone 500 integrally form the display input unit 520, and the operation panel 522 is disposed so as to completely cover the display panel 521.
- the operation panel 522 may have a function of detecting a user operation even in an area outside the display panel 521.
- In other words, the operation panel 522 may include a detection area for the portion overlapping the display panel 521 (hereinafter referred to as the display area) and a detection area for the outer edge portion not overlapping the display panel 521 (hereinafter referred to as the non-display area).
- Alternatively, the operation panel 522 may include two sensitive regions, an outer edge portion and an inner portion. The width of the outer edge portion is designed appropriately according to the size of the housing 502 and other factors.
- Position detection methods that can be employed in the operation panel 522 include a matrix switch method, a resistive film method, a surface acoustic wave method, an infrared method, an electromagnetic induction method, and a capacitance method, any of which can be adopted.
- The call unit 530 includes the speaker 531 and the microphone 532. It converts the user's voice input through the microphone 532 into audio data that can be processed by the main control unit 501 and outputs the audio data to the main control unit 501, and it decodes audio data received by the wireless communication unit 510 or the external input/output unit 560 and outputs the result from the speaker 531.
- the speaker 531 may be mounted on the same surface as the surface on which the display input unit 520 is provided.
- the microphone 532 can be mounted on the side surface of the housing 502.
- the operation unit 540 is a hardware key using a key switch or the like, and receives an instruction from the user.
- For example, the operation unit 540 is a push-button switch mounted on the lower side of the display unit of the housing 502 of the smartphone 500; it is turned on when pressed with a finger or the like and is turned off by the restoring force of a spring or the like when the finger is released.
- The storage unit 550 stores the control program and control data of the main control unit 501, application software, address data associating the names and telephone numbers of communication partners, data of transmitted and received e-mails, Web data downloaded by Web browsing, and downloaded content data, and also temporarily stores streaming data and the like.
- The storage unit 550 includes an internal storage unit 551 built into the smartphone and an external storage unit 552 having a removable external memory slot.
- Each of the internal storage unit 551 and the external storage unit 552 constituting the storage unit 550 is realized using a storage medium such as a flash-memory type, hard-disk type, or multimedia-card-micro type memory, a card-type memory (for example, a MicroSD (registered trademark) memory), a RAM (Random Access Memory), or a ROM (Read Only Memory).
- The external input/output unit 560 serves as an interface to all external devices connected to the smartphone 500, for direct or indirect connection to other external devices by communication (for example, universal serial bus (USB) or IEEE 1394) or via a network (for example, the Internet, wireless LAN, Bluetooth (registered trademark), RFID (Radio Frequency Identification), infrared communication (IrDA) (registered trademark), UWB (Ultra Wideband) (registered trademark), or ZigBee (registered trademark)).
- Examples of external devices connected to the smartphone 500 include a wired/wireless headset, a wired/wireless external charger, a wired/wireless data port, a memory card or SIM (Subscriber Identity Module) / UIM (User Identity Module) card connected via a card socket, external audio/video equipment connected via an audio/video I/O (Input/Output) terminal, and wirelessly connected external audio/video equipment.
- The external input/output unit 560 can transmit data received from such external devices to the internal components of the smartphone 500 and can transmit data inside the smartphone 500 to external devices.
- The GPS receiving unit 570 receives GPS signals transmitted from GPS satellites ST1 to STn in accordance with instructions from the main control unit 501, performs positioning calculation processing based on the received GPS signals, and detects the position of the smartphone 500 consisting of latitude, longitude, and altitude.
- When the GPS receiving unit 570 can acquire position information from the wireless communication unit 510 or the external input/output unit 560 (for example, a wireless LAN), it can also detect the position using that position information.
- The motion sensor unit 580 includes, for example, a three-axis acceleration sensor, and detects the physical movement of the smartphone 500 in accordance with instructions from the main control unit 501. By detecting the physical movement of the smartphone 500, the moving direction and acceleration of the smartphone 500 are detected, and the detection result is output to the main control unit 501.
- the power supply unit 590 supplies power stored in a battery (not shown) to each unit of the smartphone 500 in accordance with an instruction from the main control unit 501.
- the main control unit 501 includes a microprocessor, operates according to a control program and control data stored in the storage unit 550, and controls each unit of the smartphone 500 in an integrated manner. Further, the main control unit 501 includes a mobile communication control function for controlling each unit of the communication system and an application processing function in order to perform voice communication and data communication through the wireless communication unit 510.
- the application processing function is realized by the main control unit 501 operating according to the application software stored in the storage unit 550.
- Application processing functions include, for example, an infrared communication function for controlling the external input/output unit 560 to perform data communication with a counterpart device, an e-mail function for transmitting and receiving e-mails, and a Web browsing function for browsing Web pages.
- the main control unit 501 has an image processing function such as displaying video on the display input unit 520 based on image data (still image data or moving image data) such as received data or downloaded streaming data.
- the image processing function is a function in which the main control unit 501 decodes the image data, performs image processing on the decoding result, and displays an image on the display input unit 520.
- the main control unit 501 executes display control for the display panel 521 and operation detection control for detecting a user operation through the operation unit 540 and the operation panel 522.
- By executing the display control, the main control unit 501 displays icons for starting application software, software keys such as a scroll bar, and windows for creating e-mail.
- the scroll bar refers to a software key for accepting an instruction to move a display portion of an image, such as a large image that does not fit in the display area of the display panel 521.
- By executing the operation detection control, the main control unit 501 detects user operations through the operation unit 540, accepts operations on the icons and input of character strings into the input fields of the windows through the operation panel 522, and accepts requests to scroll the displayed image through the scroll bar.
- Further, by executing the operation detection control, the main control unit 501 determines whether an operation position on the operation panel 522 is in the portion overlapping the display panel 521 (the display area) or in the outer edge portion not overlapping the display panel 521 (the non-display area), and has an operation panel control function for controlling the sensitive area of the operation panel 522 and the display positions of the software keys.
- the main control unit 501 can also detect a gesture operation on the operation panel 522 and execute a preset function according to the detected gesture operation.
- A gesture operation is not a conventional simple touch operation but an operation of drawing a trajectory with a finger or the like, designating a plurality of positions simultaneously, or, as a combination of these, drawing a trajectory from at least one of a plurality of positions.
- The camera unit 541 is a digital camera that performs electronic photography using an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) sensor. Under the control of the main control unit 501, the camera unit 541 can convert image data obtained by imaging into compressed image data such as JPEG (Joint Photographic Experts Group) data, record it in the storage unit 550, and output it through the external input/output unit 560 or the wireless communication unit 510.
- The camera unit 541 is mounted on the same surface as the display input unit 520, but the mounting position of the camera unit 541 is not limited thereto; it may be mounted on the back surface of the display input unit 520, or a plurality of camera units 541 may be mounted. When a plurality of camera units 541 are mounted, shooting can be performed with a single camera unit 541 selected by switching, or with the plurality of camera units 541 simultaneously.
- the camera unit 541 can be used for various functions of the smartphone 500.
- an image acquired by the camera unit 541 can be displayed on the display panel 521, or the image of the camera unit 541 can be used as one of operation inputs of the operation panel 522.
- the GPS receiving unit 570 detects the position, the position can also be detected with reference to an image from the camera unit 541.
- The optical-axis direction of the camera unit 541 of the smartphone 500 can also be determined without using the three-axis acceleration sensor, or in combination with the three-axis acceleration sensor, and the current usage environment can likewise be determined.
- the image from the camera unit 541 can be used in the application software.
- Position information acquired by the GPS receiving unit 570, audio information acquired by the microphone 532 (which may be converted into text information through speech-to-text conversion by the main control unit or the like), posture information acquired by the motion sensor unit 580, and the like can be added to still-image or moving-image data and recorded in the storage unit 550, or output through the external input/output unit 560 or the wireless communication unit 510.
- DESCRIPTION OF SYMBOLS: 10 ... stereoscopic imaging device (image processing device), 12 ... photographing lens, 16 ... imaging element, 30 ... LCD, 40 ... CPU, 40a ... parallax calculation means, 40b ... parallax correction means, 40c ... parallax enhancement means, 42 ... AF processing unit, 48 ... memory, 500 ... smartphone
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Studio Devices (AREA)
- Color Television Image Signal Generators (AREA)
- Focusing (AREA)
- Stereoscopic And Panoramic Photography (AREA)
- Automatic Focus Adjustment (AREA)
Abstract
Description
According to the present invention, the first parallax amount calculation means calculates a first parallax amount as a tentative answer for the parallax amount between the viewpoint images, and this tentative first parallax amount is corrected, based on the first parallax amount and the parallax correction information stored in the storage means, to the shift amount of the actual object image in the parallax direction. As a result, a parallax that is robust against noise and equivalent to the parallax perceived by a person can be obtained. Here, the "parallax perceived by a person" is the "shift amount of the actual object image in the parallax direction."
FIG. 1 is a perspective view showing an embodiment of a stereoscopic imaging device to which the image processing device according to the present invention is applied, and FIG. 2 is a rear view of the stereoscopic imaging device. This stereoscopic imaging device 10 (3D digital camera) is a digital camera that receives light passing through a lens with an imaging element, converts it into a digital signal, and records the signal on a recording medium such as a memory card.
The photographing lens 12 is an imaging optical system composed of a number of lenses including a focus lens and a zoom lens. In the shooting mode, image light representing a subject is formed on the light-receiving surface of the imaging element 16 through the photographing lens 12.
FIG. 5 is a block diagram showing an embodiment of the internal configuration of the stereoscopic imaging device 10. The stereoscopic imaging device 10 records captured images on a memory card 54, and the operation of the entire device is centrally controlled by a central processing unit (CPU) 40.
Predetermined signal processing is performed, and the YC-processed image data (YC data) is stored in the memory 48 again.
Next, the principle of parallax correction in the image processing method according to the present invention will be described.
One function used for the correlation is, for example, the sum of squared differences (SSD). That is, the evaluation value of the degree of match between blocks of the two viewpoint images is calculated as the sum of the squared differences between each pixel value L(i,j) of the target block of the left-eye image and each pixel value R(i,j) of the target block of the right-eye image, Σ{L(i,j) − R(i,j)}². In this example, the smaller this sum of squares, which is the evaluation value of the degree of match between blocks, the higher the degree of match between the blocks.
FIG. 14 shows an example of plural viewpoint images (point-light-source images from plural viewpoints) obtained by placing a point light source at plural positions that are closer than the in-focus position and at different distances from the stereoscopic imaging device 10, and imaging the point light source at each position by the pupil-division method. In this example, since the point light source is placed closer than the in-focus position, the closer it is to the stereoscopic imaging device 10, the larger the calculated parallax and the larger the blur shape of the point light source. For the first plural-viewpoint images (first left-eye image 101L and first right-eye image 101R), the calculated parallax was 2.64 pixels, the shift amount of the pixel-value peak position was 2.78 pixels, and the parallax correction amount was 0.14 pixels. For the second plural-viewpoint images (second left-eye image 102L and second right-eye image 102R), the calculated parallax was 4.66 pixels, the peak-position shift amount was 4.89 pixels, and the parallax correction amount was 0.23 pixels. For the third plural-viewpoint images (third left-eye image 103L and third right-eye image 103R), the calculated parallax was 7.90 pixels, the peak-position shift amount was 8.19 pixels, and the parallax correction amount was 0.29 pixels.
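The three measurements above pair a calculated parallax with its correction amount (2.64 → 0.14, 4.66 → 0.23, 7.90 → 0.29 pixels). Storing these pairs and interpolating between them is one plausible realization of the lookup table; linear interpolation is our assumption, since the text only says "lookup table (or function)":

```python
import numpy as np

# Measured pairs from the example: calculated parallax and parallax correction,
# both in pixels.
calc = np.array([2.64, 4.66, 7.90])
corr = np.array([0.14, 0.23, 0.29])

def peak_shift(calculated_parallax):
    """Estimate the true peak-position shift for a calculated parallax by
    linearly interpolating the stored correction amounts."""
    return calculated_parallax + np.interp(calculated_parallax, calc, corr)
```

At the stored points this reproduces the measured peak shifts exactly (for example, 4.66 pixels maps back to 4.89 pixels); between them it interpolates, which is one way to cover parallaxes that were never measured.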
FIG. 16 is a graph showing an example of the correspondence between the calculated parallax and the parallax correction amount for each color channel. As shown in FIG. 16, the correspondence between the calculated parallax and the parallax correction amount differs for each color channel.
FIG. 17 is an explanatory diagram used to explain the relationship among the position in the screen, the calculated parallax, and the peak-position shift amount, showing a left-eye image and a right-eye image obtained by imaging a point light source by pupil division. FIG. 18 is a graph showing an example of the correspondence between the calculated parallax and the parallax correction amount for each position (center, edge) in the screen.
It is also preferable to change the parallax correction according to the F-number. In this case, the memory 48 stores the parallax correction amount for each F-number of the photographing optical system 11, and the parallax correction means 40b may read the parallax correction amount from the memory 48 using, as a parameter, the F-number at which the plurality of viewpoint images were captured with the photographing optical system 11, and correct the calculated parallax in the plurality of viewpoint images based on the read parallax correction amount.
FIG. 19 is a flowchart showing the flow of an image processing example in which parallax enhancement is performed.
FIG. 20 is a flowchart showing the flow of an image processing example in which multi-viewpoint image generation is performed.
FIG. 21 is a flowchart showing the flow of a conventional phase difference AF (autofocus) processing example, and FIG. 22 is a flowchart showing the flow of a phase difference AF processing example to which the present invention is applied.
FIG. 23 shows the appearance of a smartphone 500, which is another embodiment of the portable electronic device according to the present invention. The smartphone 500 shown in FIG. 23 has a flat housing 502 and includes, on one surface of the housing 502, a display input unit 520 in which a display panel 521 as a display unit and an operation panel 522 as an input unit are integrated. The housing 502 also includes a speaker 531, a microphone 532, an operation unit 540, and a camera unit 541. The configuration of the housing 502 is not limited to this; for example, a configuration in which the display unit and the input unit are independent, or a configuration having a folding structure or a slide mechanism, may also be adopted.
Claims (15)
- 1. An image processing device comprising: image acquisition means for acquiring a plurality of viewpoint images with different viewpoints generated by pupil-division imaging; first parallax amount calculation means for calculating a first parallax amount between the viewpoint images of the plurality of viewpoint images; storage means in which parallax correction information is stored, the parallax correction information representing a relationship between the first parallax amount and a shift amount, in the parallax direction, of corresponding object images between the viewpoint images of the plurality of viewpoint images caused by the pupil-division imaging; and second parallax amount calculation means for calculating, based on the first parallax amount and the parallax correction information stored in the storage means, a second parallax amount in which the first parallax amount is corrected to the shift amount of the object image in the parallax direction.
- 2. The image processing device according to claim 1, wherein the shift amount of the object image in the parallax direction is a shift amount of peak positions of pixel values of the object image between the viewpoint images of the plurality of viewpoint images, and the second parallax amount calculation means calculates the second parallax amount in which the first parallax amount is corrected to the shift amount of the peak positions of the pixel values of the object image.
- 3. The image processing device according to claim 1 or 2, wherein the first parallax amount calculation means calculates the first parallax amount by correlation.
- 4. The image processing device according to any one of claims 1 to 3, wherein the plurality of viewpoint images are captured by pupil division with an imaging element having an imaging surface on which a plurality of photoelectric conversion elements are arranged, the storage means stores the parallax correction information for each different position on the imaging surface of the imaging element, and the second parallax amount calculation means acquires from the storage means the parallax correction information for each position on the imaging surface of the imaging element corresponding to a position within the plurality of viewpoint images, and calculates the second parallax amount based on the acquired parallax correction information.
- 5. The image processing device according to any one of claims 1 to 4, wherein the storage means stores the parallax correction information for each color channel of the pixels of the viewpoint images, and the second parallax amount calculation means acquires the parallax correction information for each color channel stored in the storage means and calculates the second parallax amount for each color channel based on the acquired parallax correction information.
- 6. The image processing device according to any one of claims 1 to 4, wherein the plurality of viewpoint images are composed of pixels of a plurality of colors including green pixels, the storage means stores the parallax correction information corresponding to the green pixels, and the second parallax amount calculation means acquires the parallax correction information corresponding to the green pixels stored in the storage means and, based on the acquired parallax correction information, calculates the second parallax amount for the green pixels and for pixels other than the green pixels of the plurality of viewpoint images.
- 7. The image processing device according to any one of claims 1 to 6, wherein the plurality of viewpoint images are captured using a photographing optical system with a variable aperture value, the storage means stores the parallax correction information for each aperture value of the photographing optical system, and the second parallax amount calculation means acquires the parallax correction information using, as a parameter, the aperture value at which the plurality of viewpoint images were captured with the photographing optical system, and calculates the second parallax amount based on the acquired parallax correction information.
- 8. The image processing device according to any one of claims 1 to 7, further comprising parallax enhancement means for increasing the second parallax amounts of the plurality of viewpoint images calculated by the second parallax amount calculation means.
- 9. The image processing device according to any one of claims 1 to 8, further comprising multi-viewpoint image generation means for generating, based on the second parallax amount calculated by the second parallax amount calculation means, a multi-viewpoint image having a number of viewpoints different from that of the plurality of viewpoint images acquired by the image acquisition means.
- 10. The image processing device according to any one of claims 1 to 9, wherein the first parallax amount calculation means calculates an evaluation value indicating the degree of match in predetermined pixel units or sub-pixel units by performing block matching between the viewpoint images, and takes, as the first parallax amount, the shift amount between the positions at which the evaluation value is minimized.
- 11. The image processing device according to claim 10, wherein the first parallax amount calculation means calculates the evaluation value by obtaining the sum of squares of the differences of pixel values between the viewpoint images or the sum of the differences of pixel values between the viewpoint images.
- 12. The image processing device according to any one of claims 1 to 11, wherein the storage means stores the parallax correction information as a lookup table or a calculation formula.
- 13. An imaging device comprising: the image processing device according to any one of claims 1 to 12; and imaging means for performing pupil-division imaging.
- 14. The imaging device according to claim 13, wherein the imaging means has a photographing optical system including a focus lens, the imaging device further comprising autofocus processing means for performing control to adjust the position of the focus lens of the photographing optical system based on the second parallax amount calculated by the second parallax amount calculation means.
- 15. An image processing method comprising: a step of acquiring a plurality of viewpoint images with different viewpoints generated by pupil-division imaging; a step of calculating a first parallax amount between the viewpoint images of the plurality of viewpoint images; and a step of storing in advance, in storage means, parallax correction information representing a relationship between the first parallax amount and a shift amount, in the parallax direction, of corresponding object images between the viewpoint images of the plurality of viewpoint images caused by the pupil-division imaging, and calculating, based on the first parallax amount and the parallax correction information stored in the storage means, a second parallax amount in which the first parallax amount is corrected to the shift amount of the object image in the parallax direction.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201380018241.2A CN104221370B (zh) | 2012-03-29 | 2013-03-28 | 图像处理装置、摄像装置以及图像处理方法 |
EP13769273.7A EP2833638B1 (en) | 2012-03-29 | 2013-03-28 | Image processing device, imaging device, and image processing method |
JP2014508013A JP5655174B2 (ja) | 2012-03-29 | 2013-03-28 | 画像処理装置、撮像装置および画像処理方法 |
US14/495,174 US9167224B2 (en) | 2012-03-29 | 2014-09-24 | Image processing device, imaging device, and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012077175 | 2012-03-29 | ||
JP2012-077175 | 2012-03-29 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/495,174 Continuation US9167224B2 (en) | 2012-03-29 | 2014-09-24 | Image processing device, imaging device, and image processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013146996A1 true WO2013146996A1 (ja) | 2013-10-03 |
Family
ID=49260244
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/059206 WO2013146996A1 (ja) | 2012-03-29 | 2013-03-28 | 画像処理装置、撮像装置および画像処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9167224B2 (ja) |
EP (1) | EP2833638B1 (ja) |
JP (1) | JP5655174B2 (ja) |
CN (1) | CN104221370B (ja) |
WO (1) | WO2013146996A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105659580A (zh) * | 2014-09-30 | 2016-06-08 | 华为技术有限公司 | 一种自动对焦方法、装置及电子设备 |
JP2019213036A (ja) * | 2018-06-04 | 2019-12-12 | オリンパス株式会社 | 内視鏡プロセッサ、表示設定方法および表示設定プログラム |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014156202A1 (ja) * | 2013-03-29 | 2014-10-02 | 株式会社ニコン | 画像処理装置、撮像装置および画像処理プログラム |
DE112015005499T5 (de) * | 2015-01-20 | 2018-03-22 | Olympus Corporation | Bildverarbeitungsvorrichtung, Bildverarbeitungsverfahren und Programm |
JP6275174B2 (ja) * | 2015-03-10 | 2018-02-07 | キヤノン株式会社 | 画像処理方法、画像処理装置、および、撮像装置 |
WO2016181620A1 (ja) * | 2015-05-08 | 2016-11-17 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理方法、プログラム、および、記憶媒体 |
JP6611588B2 (ja) * | 2015-12-17 | 2019-11-27 | キヤノン株式会社 | データ記録装置、撮像装置、データ記録方法およびプログラム |
WO2018179671A1 (ja) * | 2017-03-27 | 2018-10-04 | ソニー株式会社 | 画像処理装置と画像処理方法および撮像装置 |
CN107214109B (zh) * | 2017-07-12 | 2023-05-05 | 合肥美亚光电技术股份有限公司 | 色选设备的动态校正装置及校正方法 |
CN108010538B (zh) * | 2017-12-22 | 2021-08-24 | 北京奇虎科技有限公司 | 音频数据处理方法及装置、计算设备 |
JP2020027957A (ja) * | 2018-08-09 | 2020-02-20 | オリンパス株式会社 | 画像処理装置、画像処理方法、及び画像処理プログラム |
CN112929640A (zh) * | 2019-12-05 | 2021-06-08 | 北京芯海视界三维科技有限公司 | 多视点裸眼3d显示装置、显示方法、显示屏校正方法 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09101579A (ja) | 1995-10-05 | 1997-04-15 | Fuji Photo Film Co Ltd | 顔領域抽出方法及び複写条件決定方法 |
JP2002191060A (ja) | 2000-12-22 | 2002-07-05 | Olympus Optical Co Ltd | 3次元撮像装量 |
JP2011515045A (ja) | 2008-02-29 | 2011-05-12 | イーストマン コダック カンパニー | 多視点のイメージ取得を備えたセンサー |
WO2011118077A1 (ja) | 2010-03-24 | 2011-09-29 | 富士フイルム株式会社 | 立体撮像装置および視差画像復元方法 |
WO2011118089A1 (ja) | 2010-03-24 | 2011-09-29 | 富士フイルム株式会社 | 立体撮像装置 |
WO2012002307A1 (ja) * | 2010-06-29 | 2012-01-05 | 富士フイルム株式会社 | 単眼立体撮像装置 |
WO2012036019A1 (ja) * | 2010-09-13 | 2012-03-22 | 富士フイルム株式会社 | 単眼立体撮像装置、単眼立体撮像装置用シェーディング補正方法及び単眼立体撮像装置用プログラム |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010045584A (ja) | 2008-08-12 | 2010-02-25 | Sony Corp | 立体画像補正装置、立体画像補正方法、立体画像表示装置、立体画像再生装置、立体画像提供システム、プログラム及び記録媒体 |
JP2010066712A (ja) * | 2008-09-12 | 2010-03-25 | Olympus Corp | フォーカス調整装置及び撮像装置 |
WO2012002297A1 (ja) * | 2010-06-30 | 2012-01-05 | 富士フイルム株式会社 | 撮像装置および撮像方法 |
CN104247412B (zh) * | 2012-03-30 | 2016-08-24 | 富士胶片株式会社 | 图像处理装置、摄像装置、图像处理方法、记录介质以及程序 |
-
2013
- 2013-03-28 WO PCT/JP2013/059206 patent/WO2013146996A1/ja active Application Filing
- 2013-03-28 JP JP2014508013A patent/JP5655174B2/ja not_active Expired - Fee Related
- 2013-03-28 CN CN201380018241.2A patent/CN104221370B/zh not_active Expired - Fee Related
- 2013-03-28 EP EP13769273.7A patent/EP2833638B1/en not_active Not-in-force
-
2014
- 2014-09-24 US US14/495,174 patent/US9167224B2/en not_active Expired - Fee Related
Non-Patent Citations (2)
Title |
---|
ARAI, PROCESSING SOCIETY OF JAPAN, vol. 2004, no. 40, pages 33 - 40 |
See also references of EP2833638A4 |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105659580A (zh) * | 2014-09-30 | 2016-06-08 | 华为技术有限公司 | 一种自动对焦方法、装置及电子设备 |
EP3190781A4 (en) * | 2014-09-30 | 2017-08-09 | Huawei Technologies Co. Ltd. | Autofocus method, device and electronic apparatus |
US10129455B2 (en) | 2014-09-30 | 2018-11-13 | Huawei Technologies Co., Ltd. | Auto-focus method and apparatus and electronic device |
CN108924428A (zh) * | 2014-09-30 | 2018-11-30 | 华为技术有限公司 | 一种自动对焦方法、装置及电子设备 |
US10455141B2 (en) | 2014-09-30 | 2019-10-22 | Huawei Technologies Co., Ltd. | Auto-focus method and apparatus and electronic device |
JP2019213036A (ja) * | 2018-06-04 | 2019-12-12 | オリンパス株式会社 | 内視鏡プロセッサ、表示設定方法および表示設定プログラム |
Also Published As
Publication number | Publication date |
---|---|
US20150009299A1 (en) | 2015-01-08 |
JP5655174B2 (ja) | 2015-01-14 |
CN104221370A (zh) | 2014-12-17 |
EP2833638A4 (en) | 2015-11-25 |
EP2833638A1 (en) | 2015-02-04 |
EP2833638B1 (en) | 2017-09-27 |
US9167224B2 (en) | 2015-10-20 |
JPWO2013146996A1 (ja) | 2015-12-14 |
CN104221370B (zh) | 2016-01-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5655174B2 (ja) | 画像処理装置、撮像装置および画像処理方法 | |
JP5931206B2 (ja) | 画像処理装置、撮像装置、プログラム及び画像処理方法 | |
JP5993937B2 (ja) | 画像処理装置、撮像装置、画像処理方法、及びプログラム | |
JP5960286B2 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム | |
JP5740045B2 (ja) | 画像処理装置及び方法並びに撮像装置 | |
JP5889441B2 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム | |
JP5926391B2 (ja) | 撮像装置及び合焦確認表示方法 | |
JP5923595B2 (ja) | 画像処理装置及び方法並びに撮像装置 | |
US9288472B2 (en) | Image processing device and method, and image capturing device | |
JPWO2014046039A1 (ja) | 撮像装置及び合焦確認表示方法 | |
WO2014155813A1 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム | |
WO2014077065A1 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム | |
JP5547356B2 (ja) | 撮影装置、方法、記憶媒体及びプログラム | |
JPWO2013179899A1 (ja) | 撮像装置 | |
JP5972485B2 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム | |
JP5901780B2 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13769273 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014508013 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2013769273 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013769273 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |