WO2012157407A1 - Image capture device and focus control method - Google Patents


Info

Publication number
WO2012157407A1
Authority
WO
WIPO (PCT)
Prior art keywords
correlation calculation
imaging
phase difference
correlation
difference detection
Prior art date
Application number
PCT/JP2012/060862
Other languages
French (fr)
Japanese (ja)
Inventor
史憲 入江
高嗣 青木
井上 知己
Original Assignee
FUJIFILM Corporation
Priority date
Filing date
Publication date
Application filed by FUJIFILM Corporation
Publication of WO2012157407A1 publication Critical patent/WO2012157407A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/672 Focus control based on electronic image sensor signals based on the phase difference signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2101/00 Still video cameras

Definitions

  • the present invention relates to an imaging apparatus and a focusing control method thereof, and more particularly to an imaging apparatus that performs focusing control by a phase difference detection method and a focusing control method thereof.
  • In this specification, an information device having such an imaging function is referred to as an imaging device.
  • A phase difference method is known as a focus control method for detecting the distance to the main subject. Since the phase difference method can detect the in-focus position faster than the contrast method, it is widely used in various imaging apparatuses. An imaging apparatus is also known that performs focusing control by embedding phase difference detection pixels in the imaging element that captures the subject image and performing a correlation calculation on the signals read from those pixels.
  • Japanese Patent Application Laid-Open No. 2010-103705 describes an imaging apparatus that includes a vibrating unit for vibrating an optical member and detects the defocus amount by correcting the output signals of the phase difference detection pixels with correction data. The apparatus obtains the difference between the output signals of the phase difference detection pixels before and after the optical member is vibrated by the vibrating unit, and updates the correction data when the difference is equal to or greater than a predetermined value.
  • Japanese Patent Application Laid-Open No. 2010-226395 describes an imaging apparatus that has vibration means for vibrating the optical member and controls the vibration so that an antinode of the standing wave generated in the optical member overlaps the position of the optical member through which light destined for the phase difference detection pixels (focus detection pixels) passes.
  • Techniques are also disclosed for determining the reliability of the correlation calculation result of the signals obtained from the phase difference detection pixels and preferentially using information with high reliability, and for repeating the dust removing operation until high reliability is obtained for a frame obtained from the phase difference detection pixels.
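The retry behaviour described above can be sketched as a small control loop. This is an illustrative Python sketch, not code from the patent; `capture`, `vibrate`, and `reliable` are hypothetical stand-ins for the camera's frame readout, the dust-removing vibration, and the reliability judgment on the correlation result.

```python
def remove_dust_until_reliable(capture, vibrate, reliable, max_tries=5):
    """Repeat the dust-removing vibration until the correlation result for a
    frame is judged reliable, or give up after max_tries frames."""
    for _ in range(max_tries):
        frame = capture()          # read one imaging frame
        if reliable(frame):        # correlation result trustworthy?
            return frame
        vibrate()                  # shake the optical member and retry
    return None                    # never became reliable; caller must fall back
```

The drawback this invention addresses is visible in the loop: each retry costs one full frame, so focusing is slow when the dust is stubborn.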
  • The present invention provides an imaging apparatus and a focus control method capable of calculating the in-focus position quickly and with high accuracy and performing focus control.
  • An imaging apparatus according to the present invention comprises: an imaging element including a plurality of phase difference detection pixel pairs, each pair consisting of a first phase difference detection pixel on which a light beam that has passed through one side of the principal axis of the photographing lens is incident and a second phase difference detection pixel on which a light beam that has passed through the other side is incident; an optical member provided on the light receiving surface side of the imaging element; vibration means for applying vibration to the optical member; first correlation calculation means for performing a correlation calculation combining the signals obtained from a phase difference detection pixel pair in the same imaging frame in which a subject is imaged by the imaging element; second correlation calculation means for performing correlation calculations combining the signals obtained from the phase difference detection pixel pairs in different imaging frames, the frames being captured while imaging by the imaging element and excitation by the vibration means are performed alternately; determination means for determining, for each correlation calculation, whether the correlation of the signals is high; focus position calculation means for calculating the in-focus position from the result of a correlation calculation of a combination determined by the determination means to have high correlation; and drive control means for controlling drive means that drives the photographing lens so that the photographing lens is moved to the in-focus position calculated by the focus position calculation means.
  • Since second correlation calculation means is provided that performs correlation calculations combining the signals obtained from the phase difference detection pixel pairs across frames, and the in-focus position is calculated from a correlation calculation result with high correlation, accurate focus control is possible even if a foreign object such as dust merely moves under the vibration without being completely removed: some combination handled by the second correlation calculation means still yields a highly correlated result.
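The role of the second correlation calculation means can be illustrated with a sketch (hypothetical Python, not code from the patent): the x-pixel signal of every captured frame is paired with the y-pixel signal of every frame, and the pairing with the smallest correlation value (highest correlation) wins, so one dust-free combination suffices even if the dust only moved between vibrations. `correlate(x, y)` is assumed to return `(phase_shift, correlation_value)`.

```python
def second_correlation(frames, correlate):
    """frames: list of (x_signal, y_signal) per imaging frame, with the
    cover-glass excitation applied between frames.  Try every x/y frame
    combination and keep the most highly correlated one (smallest value)."""
    best = None
    for i, (xi, _) in enumerate(frames):
        for k, (_, yk) in enumerate(frames):
            shift, value = correlate(xi, yk)
            if best is None or value < best["value"]:
                best = {"x_frame": i, "y_frame": k, "shift": shift, "value": value}
    return best
```

With n frames this evaluates n x n combinations, which is why the pruning described below matters for calculation time.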
  • The second correlation calculation means may obtain the difference between the signals obtained from the first phase difference detection pixels in two different imaging frames and, when the difference is equal to or less than a predetermined threshold, use the signal obtained from the first phase difference detection pixels in either one of the two frames for the correlation calculation. Likewise, when the difference between the signals obtained from the second phase difference detection pixels in the two frames is equal to or less than the threshold, the signal obtained from the second phase difference detection pixels in either one of the two frames may be used for the correlation calculation.
  • Such a configuration makes it possible to reduce the number of combinations of correlation calculations, thereby further reducing the calculation time.
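A minimal sketch of that pruning (hypothetical Python; the threshold and signal representation are illustrative): if the signals of two frames match within the threshold, the foreign matter did not move over those pixels, so a single representative signal can stand in for both frames.

```python
def prune(signal_a, signal_b, threshold):
    """Return the candidate signals to feed into the correlation step.  When
    the two frames' signals differ by at most `threshold` at every pixel,
    one of them is enough, halving the combinations to evaluate."""
    max_diff = max(abs(a - b) for a, b in zip(signal_a, signal_b))
    if max_diff <= threshold:
        return [signal_a]            # dust did not move: one signal suffices
    return [signal_a, signal_b]      # dust moved: keep both candidates
```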
  • The apparatus may further comprise first vibration control means for controlling the vibration means so that, when vibration is applied every time an imaging frame is acquired by the imaging element, each vibration is applied under a different vibration condition.
  • The apparatus may further comprise provisional focus position calculation means for calculating a provisional in-focus position from the result of the correlation calculation performed by the first correlation calculation means on the signals of the phase difference detection pixel pairs in the imaging frame first obtained by the imaging element, the drive control means moving the photographing lens toward the provisional in-focus position; and correction means for correcting the signal of each phase difference detection pixel for changes caused by the movement of the photographing lens, based on the position of the photographing lens when each imaging frame was acquired. The second correlation calculation means may then perform the correlation calculation using the signals corrected by the correction means.
  • The apparatus may further comprise a selection unit that selects, from among the vibration conditions of the vibration means, an excitation condition that can be executed without the power consumption of the imaging apparatus exceeding the maximum amount of power available, given the power required for driving the photographing lens and the imaging element; and second vibration control means for controlling the vibration means so that vibration is applied under the excitation condition selected by the selection unit.
  • The apparatus may further comprise third vibration control means for controlling the vibration timing of the vibration means, based on the amount of power required for driving the photographing lens and the maximum amount of power that can be used in the imaging apparatus, so that the power consumption of the apparatus does not exceed the maximum power amount.
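The power-aware selection in the two aspects above might look like the following sketch (illustrative Python with made-up wattages; the patent gives no concrete figures): the strongest excitation condition that still fits in the camera's power budget alongside the lens drive is chosen, and if none fits, excitation is deferred.

```python
def select_excitation(conditions, lens_power_w, budget_w):
    """conditions: (label, power_w) tuples ordered strongest-first.  Return the
    first condition that keeps the total draw within the budget, or None to
    defer the excitation until the lens drive has finished."""
    for label, power_w in conditions:
        if lens_power_w + power_w <= budget_w:
            return label
    return None
```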
  • The excitation of the optical member may be executed in synchronization with a drive signal for driving the imaging element.
  • A focusing control method according to the present invention is a focus control method in an imaging apparatus comprising: an imaging element including a plurality of imaging pixels and a plurality of phase difference detection pixel pairs, each pair consisting of a first phase difference detection pixel on which a light beam that has passed through one side of the principal axis of the photographing lens is incident and a second phase difference detection pixel on which a light beam that has passed through the other side is incident; an optical member provided on the light receiving surface side of the imaging element; and an excitation unit that applies vibration to the optical member, wherein correlation calculations are performed combining the signals obtained from the phase difference detection pixel pairs in the same imaging frame in which a subject is imaged by the imaging element.
  • Since the ninth aspect of the present invention operates in the same manner as the first aspect, it likewise makes it possible to calculate the in-focus position with high accuracy and perform focus control.
  • According to the present invention, it is possible to calculate the in-focus position with high accuracy and to perform focusing control.
  • FIG. 1 is a perspective view of a digital camera according to a first embodiment. FIG. 2 is a perspective view of the digital camera with the lens unit removed. FIG. 3 shows the back surface of the digital camera according to the first embodiment. FIG. 4 schematically shows the inside of the digital camera according to the first embodiment. FIG. 5 is an enlarged schematic view of the surface of the phase difference detection area of the solid-state imaging element.
  • FIG. 6 is an explanatory diagram for explaining a concept of a phase difference amount by extracting only the phase difference detection pixel of FIG. 5 and its detection signal.
  • 1 is a block diagram of a digital camera according to a first embodiment.
  • the digital camera 10 is a single-lens reflex digital still camera that mainly captures still images.
  • The digital camera 10 includes a lens mount unit 14 on the front portion of the camera body 12, and a lens unit 16 including a lens group 300 composed of a plurality of lenses (see FIG. 4) is replaceably mounted via the lens mount unit 14.
  • the shutter button 18 is disposed at the upper left portion toward the front.
  • As shown in FIG. 3, an optical viewfinder 60, a display panel 62, a liquid crystal monitor 64, a cross button 66, a menu/execution button 67, a back button 68, a function button 63, and the like are provided on the rear exterior of the digital camera 10.
  • the LCD monitor 64 can display a subject image and a reproduced image of a recorded image. In addition, information on the currently set mode, image compression rate information, date / time information, frame number, and the like are also displayed. Furthermore, it is also used as a user interface display screen when the user performs various setting operations, and menu information such as setting items is displayed as necessary.
  • The optical viewfinder 60 allows the subject image from the lens unit 16 (see FIGS. 1 and 2) to be observed directly.
  • a quick return mirror 203 is provided in the photographing optical path inside the camera body 12.
  • The quick return mirror 203 moves between a position where it guides subject light from the lens unit 16 into the optical path to the optical viewfinder 60 (an oblique position) and a position where it is retracted out of the photographing optical path (a retracted position), i.e. so-called mirror down and mirror up.
  • the quick return mirror 203 is in the oblique position where the mirror is lowered.
  • a dashed line indicates the optical axis L.
  • a focus plate 204 on which subject light guided to the optical viewfinder 60 forms an image is disposed above the quick return mirror 203.
  • a condenser lens 205 for improving the visibility of the optical viewfinder 60 is provided above the focus plate 204. Then, the pentagonal roof prism 206 guides the subject light passing through the focusing plate 204 and the condenser lens 205 to the eyepiece 208 for the optical viewfinder 60.
  • a focal plane shutter type shutter mechanism 30 in which the shutter 32 opens and closes in the vertical direction is disposed behind the quick return mirror 203.
  • the shutter 32 is in an open state.
  • Behind the shutter mechanism 30 is a CCD 22 that is an image sensor.
  • In the present embodiment the imaging element is a CCD, but it may be a CMOS sensor.
  • a large number of pixels (photodiodes) (not shown) are arranged in a square lattice pattern on the light receiving surface of the CCD 22.
  • the pixel arrangement is not limited to a square lattice arrangement, and may be a so-called honeycomb pixel arrangement in which even-numbered pixel rows are shifted by 1/2 pixel pitch with respect to odd-numbered pixel rows.
  • a cover glass 24 is disposed as an optical member on the light receiving surface side of the CCD 22.
  • The cover glass 24 is a plain glass that transmits light incident from the lens group 300 and protects the imaging surface of the CCD 22.
  • an optical member such as an optical low-pass filter that removes high-frequency components of light may be disposed in order to prevent the occurrence of moire.
  • When photographing, the shutter 32 is opened and charge is accumulated in the CCD 22, and the resulting image data is recorded on the recording medium 120 (see FIG. 7).
  • a phase difference detection area is provided at an arbitrary partial region position on the light receiving surface of the CCD 22, such as the center position. It should be noted that the phase difference detection area may be provided only at one location with respect to the light receiving surface, or may be provided at a plurality of locations so that AF can be performed anywhere on the imaging screen. The entire area of the light receiving surface may be used as a phase difference detection area.
  • FIG. 5 is a schematic enlarged view of the surface of the phase difference detection area. A large number of pixels are arranged in a square lattice on the light receiving surface of the CCD 22, and the same applies to the phase difference detection area.
  • each pixel is indicated by R (red), G (green), and B (blue).
  • R, G, and B represent the colors of the color filters stacked on each pixel.
  • the color filter array is a Bayer array, but is not limited to the Bayer array, and may be another color filter array such as a stripe.
  • the pixel array and color filter array in the phase difference detection area are the same as the pixel array and color filter array on the light receiving surface outside the phase difference detection area.
  • Diagonally adjacent same-color pixels constituting a pair are set as phase difference detection pixels 1x and 1y.
  • In the phase difference detection area, not all pixels are phase difference detection pixels; normal imaging pixels for imaging the subject and phase difference detection pixels are arranged, for example, alternately or periodically.
  • the pair pixels for phase difference detection are provided at discrete and periodic positions in the phase difference detection area, for example, checkered positions in the illustrated example. Note that only the imaging pixels are arranged outside the phase difference detection area.
  • When the color filter array is a horizontal stripe array rather than the Bayer array, pixels of the same color line up in the horizontal direction, so the two pixels constituting a pair are horizontally adjacent. The two pixels constituting a pair need not be provided in the same color filter row; each pixel of the pair may instead be provided in the nearest filter row of the same color in the vertical direction. The same applies to a vertical stripe arrangement.
  • The phase difference detection pixels 1x and 1y are provided on pixels carrying the G filter, the most numerous among R, G, and B, and are arranged every eight pixels in the horizontal direction (x direction) and every eight pixels in the vertical direction (y direction), at checkered positions overall. Accordingly, when viewed in the phase difference detection direction (left-right direction), the phase difference detection pixels 1x appear every four pixels.
  • FIG. 6 is a diagram schematically showing only the phase difference detection pixels 1x and 1y extracted in FIG.
  • The phase difference detection pixels 1x and 1y constituting a pair are formed with light shielding film openings 2x and 2y; the opening 2x of the pixel 1x is provided eccentrically toward the left, and the opening 2y of the pixel 1y eccentrically toward the right (the phase difference detection direction).
  • A light beam that has passed through one side (here, the left side) with respect to the principal axis of the lenses constituting the lens group 300 is incident on the phase difference detection pixel 1x.
  • The phase difference detection pixel 1y is arranged on a line adjacent to its paired pixel 1x, and a light beam that has passed through the other side (here, the right side) with respect to the principal axis of the lenses constituting the lens group 300 is incident on it.
  • Focus control is performed by detecting this shift amount (phase difference amount).
  • a curve X shown in the lower part of FIG. 6 is a graph in which detection signal amounts of the phase difference detection pixels 1x arranged in a horizontal row are plotted, and a curve Y indicates a detection signal of the phase difference detection pixel 1y that forms a pair with these pixels 1x. It is the graph which plotted quantity.
  • The paired pixels 1x and 1y are adjacent and very close to each other, and are therefore considered to receive light from the same subject. For this reason, the curve X and the curve Y are considered to have the same shape, the shift in the left-right direction (phase difference detection direction) being the phase difference between the image seen by the one pupil-divided pixel 1x of the pair and the image seen by the other pixel 1y.
  • the phase difference amount (lateral shift amount) can be obtained by performing a correlation calculation between the curve X and the curve Y, and the distance to the subject can be calculated from the phase difference amount.
  • For the correlation calculation, a known method may be adopted. For example, one waveform (curve) is shifted in units of pixels, the sum of differences from the other curve is taken, and the shift at which the sum is minimum is obtained as the phase difference amount. More specifically, the integrated value of the absolute differences between each point X(i) of the curve X and each point Y(i+j) of the curve Y is obtained for each shift j, and the shift giving the minimum integrated value is the phase difference amount (lateral shift amount).
  • In the following, this minimum integrated value is referred to as the correlation value; the higher the correlation between the curve X and the curve Y, the smaller the correlation value.
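The minimum-SAD search just described can be written directly (a Python sketch with NumPy; the window width and maximum shift are illustrative choices, not values from the patent):

```python
import numpy as np

def phase_difference(x, y, max_shift=8):
    """Shift curve Y against curve X in one-pixel steps and return
    (phase_shift, correlation_value), where the correlation value is the
    minimum integrated absolute difference  sum_i |X(i) - Y(i + j)|."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    best_shift, best_value = 0, float("inf")
    for j in range(-max_shift, max_shift + 1):
        window_x = x[max_shift:n - max_shift]          # fixed window of X
        window_y = y[max_shift + j:n - max_shift + j]  # same window of Y, shifted by j
        value = float(np.abs(window_x - window_y).sum())
        if value < best_value:
            best_shift, best_value = j, value
    return best_shift, best_value
```

A smaller correlation value means the two curves overlap better; a value near zero at some shift j says the pupil-divided images are displaced by j pixels.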
  • FIG. 7 is a block diagram showing the internal configuration of the digital camera 10 (camera body 12 and lens unit 16).
  • the lens unit 16 includes a diaphragm mechanism 80 and a lens group 300.
  • the lens group 300 includes a focus lens 82, a zoom lens 182 and the like.
  • Inside the lens unit 16 are incorporated: an iris motor 83 and its drive circuit (motor driver) 84 as means for driving the aperture mechanism 80; an AF motor 86 and its drive circuit (motor driver) 87 as means for driving the focus lens 82; a zoom motor 186 and its drive circuit (motor driver) 187 as means for driving the zoom lens 182; a central processing unit for control (hereinafter referred to as a lens CPU) 88; a ROM 89 storing various data; and the like.
  • the lens unit 16 is driven by power supply from a power supply circuit 97 provided in the camera body 12 described later.
  • the ROM 89 may store power information indicating power required for driving the lens unit 16 in advance.
  • the camera CPU 90 on the camera body 12 side can control the operation of the digital camera 10 according to the power information acquired via the lens CPU 88. Note that the zoom may be driven manually. Further, the diaphragm and AF may be driven from the camera body 12.
  • The ROM 89, which is a nonvolatile storage means, may be non-rewritable, or may be rewritable like an EEPROM.
  • the ROM 89 stores the model name of the lens unit 16, focal length, F-number, and other information related to lens performance (hereinafter referred to as “lens information”).
  • When the lens unit 16 is attached to the lens mount 14 of the camera body 12 (see FIGS. 1 and 2), the electrical contact 14A provided on the lens mount 14 and the electrical contact 16A of the lens unit 16 are connected.
  • the lens unit 16 and the camera body 12 are electrically connected to each other, and signals can be transferred between a CPU (hereinafter referred to as camera CPU) 90 in the digital camera 10 and the lens CPU 88.
  • the camera CPU 90 functions as a control unit that performs overall control of the camera system according to a predetermined program, and also functions as a calculation unit that performs various calculations such as an AE / AF calculation.
  • a ROM 91 connected to the camera CPU 90 stores programs executed by the camera CPU 90 and various data necessary for control.
  • the RAM 92 is used as a work area for the camera CPU 90.
  • the ROM 91 which is a nonvolatile storage means may be non-rewritable, or may be rewritable like an EEPROM.
  • the camera CPU 90 controls the operation of each circuit in the digital camera 10 based on instruction signals from the power switch 93, the mode selection switch 94, the release detection switch 95, and other operation units 96 provided in the camera body 12.
  • the operation unit 96 is a block including various operation means such as a cross button 66, a menu / execution button 67, a back button 68, and a function button 63 shown in FIG.
  • the camera CPU 90 is connected to an AF / MF changeover switch 130 for switching between an autofocus mode for automatically focusing and a manual focus mode for manually focusing.
  • the power switch 93 is an operation means for turning on / off the main power of the digital camera 10.
  • the camera CPU 90 monitors the state of the power switch 93 and controls the power circuit 97 according to the state. That is, when the closed (ON) state of the power switch 93 is detected, the camera CPU 90 gives a start command signal to the power circuit 97 to start the power circuit 97.
  • the power supply circuit 97 includes a DC / DC converter.
  • the power supplied from the battery 98 loaded in the digital camera 10 is converted into a required voltage by the DC / DC converter of the power supply circuit 97 and then supplied from the power supply circuit 97 to each circuit block in the digital camera 10.
  • the camera CPU 90 gives a stop command signal to the power circuit 97 to stop the power supply from the power circuit 97.
  • Switching the main power on and off is not limited to operation of the power switch 93; it may also be performed by an auto power ON function (turning the power on at a set time), an auto power OFF function (turning the power off automatically when no operation occurs for a certain period), or a function that turns the power off automatically at a set time.
  • The mode selection switch 94 is a means for setting the operation mode of the digital camera 10; operating this switch selects a "shooting mode", a "playback mode" (a mode for playing back recorded images), or another mode.
  • The release detection switch 95 is a detection switch disposed inside the shutter button 18 (see FIG. 1), and includes an S1 switch that is turned on when the shutter button 18 is half-pressed and an S2 switch that is turned on when it is fully pressed.
  • When "shooting mode" is selected by the mode selection switch 94, the digital camera 10 is ready for shooting.
  • the CCD exposure and readout control is started.
  • the AE function mounted on the digital camera 10 is a TTL type AE.
  • An AE sensor (light receiving element) 100 as a detection system is provided in the vicinity of the optical viewfinder 60 inside the digital camera 10.
  • The AE sensor 100 is arranged so that, with the quick return mirror 203 in the mirror-down state, subject light reflected by the mirror is guided to it by an optical system such as the pentagonal roof prism 206.
  • the AF function of the digital camera 10 is as described above, and focusing control is performed by the phase difference detection method using the phase difference detection pixels.
  • the camera CPU 90 also controls the actuator 126 that switches the quick return mirror 203 between the oblique position and the retracted position.
  • the camera CPU 90 performs AE calculation based on the detection signal from the AE sensor 100, and calculates an aperture value and a shutter speed.
  • When photographing, the camera CPU 90 controls the actuator 126 to switch the quick return mirror 203 to the retracted position, sends an aperture control signal based on the AE calculation result to the lens CPU 88, controls the shutter mechanism 30 (see FIG. 4) to open and close the shutter 32, and controls the charge accumulation time of the CCD 22.
  • the lens CPU 88 controls the motor driver 84 based on the signal from the camera CPU 90 to operate the iris motor 83, thereby opening the aperture mechanism 80 to a required opening.
  • the optical image of the subject formed on the CCD 22 via the lens unit 16 is photoelectrically converted by the CCD 22.
  • the signal charge accumulated in each pixel (photodiode) of the CCD 22 is sequentially read out as a voltage signal corresponding to the signal charge based on a pulse given from the timing generator (TG) 103.
  • the signal output from the CCD 22 is sent to the analog processing unit 104, where required processing such as correlated double sampling (CDS) processing, color separation, and gain adjustment is performed.
  • the image signal generated by the analog processing unit 104 is converted into a digital signal by the A / D converter 106 and then stored in the memory 110 via the image input controller 108.
  • The timing generator 103 supplies a timing signal (also referred to as a drive signal) to the CCD 22, the analog processing unit 104, and the A/D converter 106 in accordance with commands from the camera CPU 90, and each circuit is synchronized by this timing signal.
  • the start of signal charge accumulation (exposure) of the CCD 22 and the reading and transfer of signal charges from the CCD 22 are performed at timings in accordance with commands from the camera CPU 90.
  • the data stored in the memory 110 is sent to the image signal processing circuit 114 via the bus 112.
  • The image signal processing circuit 114 is an image processing unit including a luminance/color difference signal generation circuit, a gamma correction circuit, a sharpness correction circuit, a white balance correction circuit, a gain (sensitivity) adjustment circuit, and the like, and processes the image signal.
  • the image data input to the image signal processing circuit 114 is converted into a luminance signal (Y signal) and a color difference signal (Cr, Cb signal) and subjected to predetermined processing such as gamma correction and gain adjustment.
  • the image data generated by the image signal processing circuit 114 is sent to the compression / decompression circuit 116 and compressed according to a predetermined format such as JPEG.
  • the compressed image data is recorded on the recording medium 120 via the media controller 118.
  • the compression format is not limited to JPEG, and MPEG or other methods may be adopted.
  • the means for storing image data is not limited to a semiconductor memory represented by a memory card, and various media such as a magnetic disk, an optical disk, and a magneto-optical disk can be used. Further, the recording medium (such as an internal memory) built in the digital camera 10 is not limited to a removable medium.
  • In the playback mode, an image file is read from the recording medium 120.
  • the read image data is decompressed by the compression / decompression circuit 116 and sent to the VRAM 122.
  • the data stored in the VRAM 122 is converted into a predetermined display signal (for example, an NTSC color composite video signal) by the video encoder 124 and then supplied to the liquid crystal monitor 64. In this way, the image stored in the recording medium 120 is displayed on the liquid crystal monitor 64.
  • the digital camera 10 can be switched between an optical finder mode and an electronic finder mode as a photographing mode by operating the operation unit 96, for example.
  • This optical finder mode is a mode in which the quick return mirror 203 is set at an oblique position and photographing can be performed while the subject image is visually recognized by the optical finder 60.
  • In the electronic viewfinder mode, the quick return mirror 203 is set to the retracted position and the shutter 32 is opened so that a subject image is formed on the CCD 22; the subject image (through image) is displayed on the liquid crystal monitor 64 at a predetermined cycle, enabling photographing while viewing the image on the monitor.
  • A piezoelectric element 24a for ultrasonically vibrating the cover glass 24 provided on the light receiving surface side of the CCD 22 is provided in order to move or remove foreign matter such as dust attached to the cover glass 24.
  • a standing wave bending vibration is generated in the cover glass 24 by driving the piezoelectric element 24a.
  • the piezoelectric element 24a will be described as an example of the vibration means for applying vibration to the cover glass 24, but the vibration means is not limited to the piezoelectric element.
  • The piezoelectric element 24a is driven by a pulse signal supplied from the piezoelectric element drive circuit 160; the piezoelectric element drive circuit 160 generates the pulse signal in accordance with a control signal supplied from the camera CPU 90 and supplies it to the piezoelectric element 24a.
  • the excitation condition include an excitation frequency, a step width when changing the excitation frequency, an excitation time, an excitation timing, and an excitation amplitude. Therefore, it is possible to generate a pulse signal corresponding to the excitation condition, and the piezoelectric element driving circuit 160 may generate a pulse signal corresponding to the excitation condition set by the camera CPU 90.
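As an illustration of how the excitation conditions listed above might be grouped and used, here is a minimal sketch; the class name, function name, and numeric values are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class ExcitationCondition:
    frequency_hz: float    # excitation frequency
    step_hz: float         # step width when sweeping the excitation frequency
    duration_s: float      # excitation time
    start_offset_s: float  # excitation timing relative to the frame start
    amplitude_v: float     # excitation amplitude (pulse voltage)

def sweep_frequencies(cond: ExcitationCondition, n_steps: int) -> list:
    """Frequencies a drive circuit might step through during one excitation."""
    return [cond.frequency_hz + i * cond.step_hz for i in range(n_steps)]

cond = ExcitationCondition(30_000.0, 500.0, 0.05, 0.0, 48.0)
print(sweep_frequencies(cond, 3))  # → [30000.0, 30500.0, 31000.0]
```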
  • FIG. 9 is a diagram plotting the detection signals read from each of the phase difference detection pixels 1x (hereinafter also referred to as x pixels) and the phase difference detection pixels 1y (hereinafter also referred to as y pixels) in a state where foreign matter such as dust adheres to the cover glass 24.
  • Phase difference detection pixels are provided in the CCD 22, and AF control is performed using a phase difference amount obtained by performing a correlation calculation on the signals read from the phase difference detection pixels.
  • FIG. 10 is a flowchart showing the flow of AF control executed in the shooting mode.
  • In step 400, an in-focus position calculation process is executed.
  • The flow of the in-focus position calculation process according to the present embodiment is described below.
  • In step 500, the exposure of the CCD 22 is started, and when the read timing comes, signal charges are read from the respective pixels of the CCD 22 and a voltage signal (hereinafter referred to as a detection signal) corresponding to the read signal charges is acquired for one imaging frame.
  • Hereinafter, the imaging frame is simply referred to as a frame (the same applies to each drawing).
  • When the frames acquired after the start of AF control are to be distinguished, they are referred to as the first frame, the second frame, the third frame, and so on.
  • The frame acquired in step 500 is called the first frame because it is the first frame acquired after AF control is started.
  • In step 502, the correlation calculation of the detection signals of the x pixel group and the y pixel group in the frame acquired in step 500 is performed (see also FIG. 12 (1)).
  • In step 504, it is determined from the correlation calculation result in step 502 whether or not the correlation between the curves X and Y described above (the correlation between the detection signal of the x pixel group and the detection signal of the y pixel group) is high.
  • If the correlation value is equal to or below a predetermined threshold value, the correlation is determined to be high; if the correlation value is higher than the threshold value, the correlation is determined to be low.
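As a sketch of this threshold test — assuming, purely for illustration, a sum-of-absolute-differences correlation value, since the text does not specify the exact formula — the determination could look like this; smaller values mean higher correlation, matching the rule above.

```python
def correlation_value(x_sig, y_sig, shift):
    """Mean absolute difference between the two pixel-group signals at a
    given lateral shift; smaller value = higher correlation."""
    n = len(x_sig)
    pairs = [(x_sig[i], y_sig[i + shift]) for i in range(n) if 0 <= i + shift < n]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def correlation_is_high(x_sig, y_sig, threshold, max_shift=2):
    # The phase difference appears as a lateral shift between the two
    # curves, so take the minimum over a small shift range.
    best = min(correlation_value(x_sig, y_sig, s)
               for s in range(-max_shift, max_shift + 1))
    return best <= threshold

x = [10, 20, 40, 20, 10]
print(correlation_is_high(x, [10, 21, 39, 20, 10], threshold=5))  # similar curves → True
print(correlation_is_high(x, [90, 5, 90, 5, 90], threshold=5))    # dissimilar → False
```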
  • If an affirmative determination is made in step 504, the process proceeds to step 518, and the in-focus position is calculated based on the correlation calculation result determined to have a high correlation.
  • If a negative determination is made in step 504, the process proceeds to step 506.
  • In step 506, the piezoelectric element driving circuit 160 is controlled to drive the piezoelectric element 24a, and an excitation operation for applying vibration to the cover glass 24 is performed.
  • In step 508, signal charges are read from the phase difference detection pixels, and detection signals for one frame are acquired.
  • In step 510, the correlation calculation of the detection signals of the x pixel group and the y pixel group in the frame acquired in step 508 is performed (see also FIG. 12 (2)).
  • Hereinafter, the correlation calculation between the detection signal of the x pixel group and the detection signal of the y pixel group within the same frame is referred to as an intra-frame correlation calculation.
  • In step 512, it is determined from the correlation calculation result in step 510 whether the correlation is high, as described above. If an affirmative determination is made in step 512, the process proceeds to step 518, and the in-focus position is calculated based on the correlation calculation result determined to have a high correlation.
  • If a negative determination is made in step 512, the process proceeds to step 514, and the correlation calculation is performed using not only the latest frame but also the detection signals of frames acquired after the start of AF control and before the latest frame, combining detection signals of different frames. This correlation calculation is referred to as an inter-frame correlation calculation, in distinction from the intra-frame correlation calculation.
  • For example, the correlation calculation of the detection signal of the x pixel group of the first frame and the detection signal of the y pixel group of the second frame is performed.
  • Likewise, the correlation calculation of the detection signal of the x pixel group of the second frame and the detection signal of the y pixel group of the first frame is performed.
  • In this way, the correlation calculation is also performed using the detection signals of frames acquired before the latest frame.
  • For this purpose, the detection signal data of each frame (hereinafter also referred to as frame data) is stored in a storage unit such as the RAM 92 until the in-focus position is calculated in step 518.
  • In step 516, for each of the correlation calculation results in step 514, it is determined whether the correlation is high, as described above. If there is even one combination whose correlation calculation result is determined to have a high correlation, an affirmative determination is made; if there is no such combination, a negative determination is made.
  • If an affirmative determination is made in step 516, the process proceeds to step 518, and the in-focus position is calculated based on the correlation calculation result determined to have a high correlation.
  • If there are a plurality of such results, the correlation calculation result having the highest correlation is used.
  • If a negative determination is made in step 516, the process returns to step 506, and the above processing is repeated until a correlation calculation result with a high correlation is obtained. Note that the determination in step 516 may instead be performed every time a correlation calculation is performed in the inter-frame correlation calculation; in that case, the inter-frame correlation calculation may be terminated as soon as a high correlation is determined, and the process may proceed to step 518.
  • In other words, as shown in FIG. 12 (1), the intra-frame correlation calculation of the detection signal of the x pixel group and the detection signal of the y pixel group of the first frame is performed. If the correlation is determined to be low, then, as shown in FIG. 12 (2), the intra-frame correlation calculation of the detection signal of the x pixel group and the detection signal of the y pixel group of the second frame, obtained after performing the excitation operation, is performed. If the correlation is again determined to be low, then, as shown in FIG. 12 (3), the inter-frame correlation calculation is performed on combinations of the detection signal of the x pixel group and the detection signal of the y pixel group from different frames.
  • For example, even when an abnormality occurs in the detection signals of a part of the y pixel group of the first frame due to the presence of foreign matter such as dust, as shown in FIG. 13A, and the dust moved by the excitation operation then makes some detection signals of the x pixel group of the second frame abnormal, as shown in FIG. 13C, a high correlation is obtained for the correlation calculation of the detection signal of the x pixel group of the first frame and the detection signal of the y pixel group of the second frame, as shown in FIG. 13B; by using this result, the in-focus position can be calculated with high accuracy.
  • Note that, in the inter-frame correlation calculation performed when the third frame is acquired, the correlation calculation of the detection signal of the x pixel group of the first frame and the detection signal of the y pixel group of the second frame, and the correlation calculation of the detection signal of the x pixel group of the second frame and the detection signal of the y pixel group of the first frame, are omitted because they have already been executed in the inter-frame correlation calculation performed when the second frame was acquired.
  • That is, only the combinations that have not yet been executed are performed: the correlation calculation of the detection signal of the x pixel group of the first frame and the detection signal of the y pixel group of the third frame, that of the x pixel group of the second frame and the y pixel group of the third frame, that of the x pixel group of the third frame and the y pixel group of the first frame, and that of the x pixel group of the third frame and the y pixel group of the second frame. The same applies to the inter-frame correlation calculations performed when subsequent frames are acquired.
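The bookkeeping of which x-frame/y-frame combinations remain to be executed can be sketched as follows (the function name and the tuple representation are illustrative assumptions, not the patent's own notation):

```python
from itertools import product

def interframe_pairs(n_frames, done=()):
    """All (x-frame, y-frame) combinations for the inter-frame correlation
    calculation, excluding same-frame pairs (those are intra-frame) and
    pairs already executed in earlier rounds."""
    return [(fx, fy) for fx, fy in product(range(1, n_frames + 1), repeat=2)
            if fx != fy and (fx, fy) not in done]

# When the 2nd frame is acquired: x1/y2 and x2/y1 are tried.
print(interframe_pairs(2))                         # → [(1, 2), (2, 1)]
# When the 3rd frame is acquired, the round-2 pairs are omitted:
print(interframe_pairs(3, done={(1, 2), (2, 1)}))  # → [(1, 3), (2, 3), (3, 1), (3, 2)]
```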
  • In step 402 of FIG. 10, focus driving conditions such as the moving direction and the amount of movement of the focus lens 82 are determined based on the calculated in-focus position.
  • In step 404, the focus lens 82 is driven according to the focus drive conditions.
  • In step 406, it is determined whether or not the in-focus state has been achieved, that is, whether or not the focus lens 82 has been moved to the calculated in-focus position. If a negative determination is made in step 406, the process returns to step 404 to continue driving the focus lens. If an affirmative determination is made in step 406, the focus lens drive is stopped in step 408 and the AF control is terminated.
  • Compared with the case where the phase difference detection is performed after a long time has elapsed since the piezoelectric element 24a was last vibrated, the phase difference can be detected with high accuracy because the vibration is applied after AF control is started, that is, at the timing of detecting the phase difference.
  • The camera CPU 90 may be configured to select an excitation condition that can be driven within the maximum power available in the camera body 12, to change the setting of the piezoelectric element driving circuit 160 accordingly, and to generate vibration according to that excitation condition.
  • Specifically, power information indicating the maximum power required for driving the lens unit 16 is stored in the ROM 89 in advance, and the camera CPU 90 reads this power information from the ROM 89 before AF control. Then, considering the power required for driving the lens unit 16 indicated by the power information and the power required for driving the camera body 12, the camera CPU 90 obtains the maximum power that can be used for driving the piezoelectric element 24a within the maximum power available in the digital camera 10, selects an excitation condition that can be driven at or below this maximum power, and controls the piezoelectric element driving circuit 160. Thereby, even if there is a power limitation in the digital camera 10, vibration can be applied under an appropriate excitation condition.
  • The excitation timing may be synchronized with the drive signal for driving the CCD 22. Since the piezoelectric element 24a is driven with a pulse voltage of large amplitude, coupling noise, induction noise, power supply noise, and the like may affect the signal read from the CCD 22. For this reason, by performing drive control of the piezoelectric element 24a in synchronization with the drive signal of the CCD 22, it is possible to prevent noise from being superimposed on the CCD 22 and the resulting increase in the phase difference detection error.
  • A signal of each pixel of the CCD 22 is obtained by performing charge accumulation (exposure), charge readout, and transfer of the read-out charge; driving the piezoelectric element 24a during charge readout or transfer is therefore undesirable.
  • The camera CPU 90 may control the excitation, in synchronization with the drive signal, to start at the exposure start timing and to finish before the charge readout start timing.
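The timing rule above reduces to a simple check: the excitation window must close before charge readout opens. The following sketch is illustrative only (times in arbitrary units; the function name is an assumption).

```python
def excitation_window(exposure_start, readout_start, excitation_time):
    """Start the excitation at the exposure start and verify that it ends
    before charge readout begins, so the piezo pulse noise cannot couple
    into the read signal."""
    end = exposure_start + excitation_time
    if end >= readout_start:
        raise ValueError("excitation would overlap charge readout")
    return exposure_start, end

print(excitation_window(0.0, 33.0, 10.0))  # → (0.0, 10.0)
```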
  • When the image sensor is not the CCD 22 but a CMOS sensor, it is controlled by the well-known rolling shutter system, so the exposure and readout timings of the pixels are not the same even within one frame. Therefore, during AF control, the vibration may be applied at the timing when the phase difference detection pixels corresponding to the currently selected distance measuring point are in their exposure period.
  • In the above description, the correlation value decreases as the correlation increases.
  • Alternatively, an evaluation value that increases as the correlation increases, for example the reciprocal of the correlation value, may be used; in that case, it may be determined that the correlation is high when the evaluation value is equal to or greater than a threshold value.
  • FIG. 14 is a flowchart showing the flow of the in-focus position calculation process according to the present embodiment. In FIG. 14, steps that perform the same processing as in FIG. 11 are denoted by the same reference numerals.
  • In the present embodiment, the frame data deletion process in step 513 is performed to delete frame data that can be deleted, and the inter-frame correlation calculation in step 514 is then performed on the remaining frames.
  • FIG. 15 is a flowchart showing the flow of the frame data deletion process.
  • In step 600, the difference between the preceding and current frames is calculated for the detection signal of each phase difference detection pixel.
  • Here, the preceding and current frames refer to the frame acquired in step 508 and the frame acquired immediately before it.
  • In step 602, it is determined whether or not the difference for each pixel in the x pixel group is equal to or less than a threshold value.
  • If every difference of every pixel of the x pixel group is equal to or less than the threshold value, an affirmative determination is made; if even one difference exceeds the threshold value, a negative determination is made. If an affirmative determination is made in step 602, the process proceeds to step 604, and the previous frame data of the x pixel group is deleted. If a negative determination is made in step 602, step 604 is skipped.
  • In step 606, it is determined whether or not the difference for each pixel in the y pixel group is equal to or less than the threshold value.
  • If every difference of every pixel of the y pixel group is equal to or less than the threshold value, an affirmative determination is made; if even one difference exceeds the threshold value, a negative determination is made. If an affirmative determination is made in step 606, the process proceeds to step 608, and the previous frame data of the y pixel group is deleted. If a negative determination is made in step 606, step 608 is skipped.
  • In the above description, the previous frame data is deleted in steps 604 and 608, but the present invention is not limited to this; the latest frame data may be deleted instead.
  • In step 514, the inter-frame correlation calculation is performed as in the first embodiment, but frames whose differences are small have already been deleted by the frame data deletion process. Since the number of combinations subjected to the correlation calculation is thereby reduced, the time required for the correlation calculation can be shortened, and the recording capacity necessary for holding the frame data can be reduced.
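The deletion rule of steps 600 to 608 can be sketched as follows. This is a simplified model (per-pixel differences against a single threshold; the function and data names are hypothetical): a frame whose pixel group barely differs from the newer one contributes nothing new to the inter-frame correlation and can be dropped.

```python
def delete_redundant_frames(prev, curr, threshold):
    """Return (keep_prev_x, keep_prev_y): whether the previous frame's
    x / y pixel-group data must be kept. If every per-pixel difference
    is at or below the threshold, the older data is redundant."""
    def all_small(a, b):
        return all(abs(p - q) <= threshold for p, q in zip(a, b))
    keep_prev_x = not all_small(prev['x'], curr['x'])
    keep_prev_y = not all_small(prev['y'], curr['y'])
    return keep_prev_x, keep_prev_y

prev = {'x': [10, 20, 30], 'y': [5, 50, 5]}
curr = {'x': [10, 21, 30], 'y': [5, 12, 5]}   # y changed a lot (dust moved)
print(delete_redundant_frames(prev, curr, threshold=2))  # → (False, True)
```

Here the previous x data is deleted (nearly identical to the current frame), while the previous y data is kept because the excitation moved the dust and changed the signals.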
  • FIG. 16 is an explanatory diagram showing a specific example of the correlation calculation.
  • FIG. 17A is a diagram plotting the x pixel detection signals and the y pixel detection signals of the first frame,
  • FIG. 17B is a diagram plotting the x pixel detection signals and the y pixel detection signals of the second frame,
  • FIG. 17C is a diagram plotting the x pixel detection signals and the y pixel detection signals of the third frame, and
  • FIG. 17D is a diagram plotting the detection signals of the x pixel group of the third frame and the detection signals of the y pixel group of the first frame.
  • When it is determined that the correlation in the intra-frame correlation calculation of the first frame is low (see also FIG. 17A), the second frame is acquired after performing the excitation operation, and the intra-frame correlation calculation of the second frame is performed as shown in FIG. 16 (2). If it is determined from this correlation calculation result that the correlation between the detection signals of the x pixel group and the y pixel group is low (see also FIG. 17B), the inter-frame correlation calculation is performed; before that, however, the frame data deletion process is performed.
  • In this example, the inter-frame correlation calculation then only needs to perform the correlation calculation between the detection signal of the x pixel group of the second frame and the detection signal of the y pixel group of the first frame. If it is determined that the correlation is low in this correlation calculation as well, the third frame is acquired after performing the excitation operation again, and the intra-frame correlation calculation of the third frame is performed as shown in FIG. 16 (4).
  • If it is determined that the correlation is low, the inter-frame correlation calculation is performed; before that, however, the frame data deletion process is performed again.
  • As a result, the inter-frame correlation calculation only needs to perform the correlation calculation between the detection signal of the x pixel group of the second frame and the detection signal of the y pixel group of the third frame (FIG. 16 (5)) and the correlation calculation between the detection signal of the x pixel group of the third frame and the detection signal of the y pixel group of the first frame (FIG. 16 (6)).
  • Here, the correlation between the detection signal of the x pixel group of the second frame and the detection signal of the y pixel group of the third frame is low, while the correlation between the detection signal of the x pixel group of the third frame and the detection signal of the y pixel group of the first frame is high (see also FIG. 17D); therefore, the in-focus position is calculated using the correlation calculation result of the detection signal of the x pixel group of the third frame and the detection signal of the y pixel group of the first frame.
  • In this way, the number of combinations subjected to the correlation calculation can be reduced, the time required for the correlation calculation can be shortened, and the recording capacity required for holding the frame data can also be reduced.
  • The frame data deletion process may also be performed after the reading in step 508 and before the intra-frame correlation calculation in step 510.
  • In that case, when the frame data is deleted there, the intra-frame correlation calculation in step 510 is highly likely to yield the same result as before, so the intra-frame correlation calculation processing in step 510 may be skipped.
  • In the present embodiment, the piezoelectric element driving circuit 160 is configured to be able to change the excitation condition of the piezoelectric element 24a.
  • Examples of the excitation condition include the excitation frequency, the step width when changing the excitation frequency, the excitation time, the excitation timing, and the excitation amplitude.
  • Here, the excitation frequency is described as an example.
  • The digital camera 10 according to the present embodiment is provided with a plurality of vibration modes that generate standing wave bending vibrations with different excitation frequencies.
  • FIG. 18 is a flowchart showing the flow of the in-focus position calculation process according to the present embodiment. In FIG. 18, steps that perform the same processing as in FIG. 11 are denoted by the same reference numerals.
  • In the present embodiment, the setting of the piezoelectric element driving circuit 160 is changed so that the vibration mode for the next execution of step 506 differs from the vibration mode of the excitation executed immediately before.
  • A control signal is then output to the piezoelectric element driving circuit 160 so that the excitation operation is executed in the changed vibration mode.
  • When receiving the control signal, the piezoelectric element driving circuit 160 generates a pulse voltage corresponding to the changed vibration mode and supplies it to the piezoelectric element 24a. Since the standing wave changes when the vibration mode is changed, the adhesion state of the foreign matter changes.
  • FIG. 19 is a graph showing an example of the amount of displacement in the optical axis direction at each position in the width direction (that is, the direction intersecting the optical axis) of the optical member (the cover glass 24 in the present embodiment) when standing wave bending vibration is generated in the sixth-order vibration mode, together with an example of the amount of displacement in the optical axis direction at each position in the width direction of the optical member when standing wave bending vibration is generated in the seventh-order vibration mode.
  • Dust adhering at an antinode of the vibration is removed by the applied acceleration or moved toward a node. Further, as is apparent from FIG. 19, the antinodes and nodes occur at different positions in the sixth-order and seventh-order modes.
  • Although the sixth and seventh orders are illustrated here as the orders of the vibration mode, the orders of the vibration mode are not limited to these.
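The point that the sixth- and seventh-order modes place their antinodes at different positions can be illustrated with a simple sinusoidal standing-wave model. A real bending mode of a glass plate has a different shape, so this is only a sketch under that simplifying assumption.

```python
def antinode_positions(order, length=1.0):
    """Antinode positions for an order-n standing wave, modelling the
    displacement profile as sin(order * pi * x / length): the antinodes
    sit at x = (2k + 1) * length / (2 * order)."""
    return [length * (2 * k + 1) / (2 * order) for k in range(order)]

a6 = antinode_positions(6)  # 6 antinodes at 1/12, 3/12, ..., 11/12
a7 = antinode_positions(7)  # 7 antinodes at 1/14, 3/14, ..., 13/14
# No antinode position is shared between the two modes, so dust resting
# on a node of one mode is shaken when the other mode is excited.
print(set(a6) & set(a7))  # → set()
```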
  • the frame data deletion process described in the second embodiment may be performed before step 514.
  • The vibration mode setting change for the piezoelectric element driving circuit 160 may be performed during the charge accumulation (exposure) of the CCD 22.
  • In that case, the camera CPU 90 may perform control so that the vibration mode setting is changed at the exposure start timing, the excitation is then started, and the excitation is finished by the charge readout start timing.
  • In the present embodiment, the in-focus position is provisionally calculated by the first correlation calculation after AF control is started, and the focus lens drive is started before the in-focus position is finalized.
  • Specifically, the in-focus position calculation processing routine shown in FIG. 20 is started simultaneously with the start of AF control, and the above-described AF control shown in FIG. 10 is not performed.
  • Steps 600 and 602 perform the same processing as steps 500 and 502 described with reference to FIG. 11.
  • In step 604, the in-focus position is provisionally calculated based on the intra-frame correlation calculation result in step 602.
  • The in-focus position calculated here is referred to as the provisional in-focus position.
  • Then, the focus lens drive processing routine shown in FIG. 21 is started. Thereafter, the focus lens drive processing routine is executed in parallel with the in-focus position calculation processing routine. Details of the focus lens drive processing routine will be described later.
  • The processing from step 606 to step 620 is the same as the processing from step 504 to step 518 described with reference to FIG. 11.
  • Note that the frame data deletion process may be inserted before step 616, and the process of step 619 may be omitted when the excitation is performed under a constant excitation condition.
  • In step 622, the in-focus position calculated based on the correlation calculation result determined to have a high correlation in step 620 is compared with the provisional in-focus position calculated in step 604, and it is determined whether or not the difference exceeds a predetermined threshold value.
  • If the difference exceeds the threshold value, in step 624 the in-focus position is updated from the provisional in-focus position to the latest in-focus position calculated in step 620, and this processing routine is terminated.
  • In step 700, focus driving conditions such as the moving direction and the moving amount of the focus lens 82 are determined based on the provisional in-focus position.
  • In step 702, the focus lens 82 is driven according to the focus drive conditions, and the focus lens 82 is moved.
  • In step 704, it is determined whether or not the in-focus position has been updated. If it is determined that the in-focus position has been updated, the process returns to step 700, the focus drive conditions are determined based on the updated in-focus position, and the focus lens 82 is driven in step 702. If it is determined in step 704 that the in-focus position has not been updated, it is determined in step 706 whether or not driving of the focus lens under the current focus lens driving conditions has been completed (in focus). If a negative determination is made in step 706, the process returns to step 702 to continue driving the focus lens 82. If an affirmative determination is made in step 706, the focus drive is stopped in step 708 and the AF control is terminated.
  • In this way, the focus lens position is roughly adjusted to the provisional in-focus position and then finely adjusted to the updated in-focus position; compared with the case where the focus lens 82 is not driven until the in-focus position is finally determined, the AF control takes less time.
  • As described above, the provisional in-focus position is calculated and the focus lens drive is started; thereafter, when a correlation calculation result determined to have a high correlation is obtained as described in the first to third embodiments, the in-focus position is updated and the lens is driven to the updated in-focus position. Driving of the focus lens 82, which occupies a large proportion of the time required for AF, can thus be started at an early stage, and the time required for focusing is prevented from being prolonged.
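The provisional-then-updated targeting of FIGS. 20 and 21 can be sketched as follows. The names and values are hypothetical; the threshold comparison corresponds to the determination in steps 622 and 624.

```python
def drive_focus(provisional, updates, threshold):
    """Start moving toward the provisional in-focus position at once, and
    re-target only when a later, high-correlation result differs from the
    current target by more than the threshold (steps 622-624)."""
    target = provisional
    for pos in updates:              # in-focus positions from later frames
        if abs(pos - target) > threshold:
            target = pos             # update and re-drive toward it
    return target

# Small refinement (1.0) is ignored; large one (12.0) re-targets the drive.
print(drive_focus(100.0, [101.0, 112.0], threshold=5.0))  # → 112.0
```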
  • In the present embodiment, the focus lens 82 is driven while the second and subsequent frames are acquired. Therefore, for example, when the frame rate is low in a low-luminance environment, or when a high-speed actuator such as a voice coil motor or an ultrasonic motor is used as the focus lens actuator (that is, when the moving speed of the focus lens 82 is high), the correlation calculation results of the second and subsequent frames may be affected by the lens position change caused by lens driving. Therefore, in order to increase the accuracy of the inter-frame correlation calculation, the detection signals may be corrected by the amount of movement of the focus lens 82.
  • Specifically, the position of the focus lens 82 at the time of acquisition of each frame is stored, the data of past frames is corrected based on the amount of lens movement between frames, and the correlation calculation and the in-focus position calculation are performed.
  • For example, when the correlation calculation is performed using the data of the x pixel group of the first frame (see also FIG. 22A) and the data of the y pixel group of the second frame (see also FIG. 22B), the amount of movement of the focus lens is obtained from the difference between the focus lens position at the time of acquiring the first frame and the focus lens position at the time of acquiring the second frame, the variation of the x pixel detection signals caused by moving the focus lens by that amount is obtained as a correction amount, corrected data (x′) is obtained by correcting the data of the x pixel group of the first frame with the correction amount (see also FIG. 22C), and the correlation calculation is performed between the corrected data and the y pixel group data of the second frame.
  • Thereby, the in-focus position can be calculated with high accuracy.
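The correction described above can be sketched as follows, under the simplifying assumption that the signal variation is proportional to the lens travel via a single hypothetical sensitivity `gain`; real firmware would derive the correction amount from the optics, not a scalar.

```python
def correct_past_frame(x_old, lens_pos_old, lens_pos_new, gain):
    """Correct an older frame's x pixel-group data for the lens movement
    that occurred between the two frames, before running the inter-frame
    correlation calculation. `gain` (signal change per unit of lens
    travel) is an illustrative assumption."""
    move = lens_pos_new - lens_pos_old
    return [v + gain * move for v in x_old]

x1 = [10.0, 20.0, 30.0]                # x pixel group of the first frame
x1_corrected = correct_past_frame(x1, lens_pos_old=0.0, lens_pos_new=2.0, gain=0.5)
print(x1_corrected)  # → [11.0, 21.0, 31.0]
```

The corrected data (the x′ of FIG. 22C) would then be correlated against the y pixel group of the newer frame.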
  • When the driving process of the focus lens 82 is performed in parallel with the in-focus position calculation process and the timing of the excitation operation is not coordinated with the lens driving timing, the driving period of the focus lens 82 and the excitation period of the piezoelectric element 24a may overlap.
  • When the power that can be used by the digital camera 10 is limited, it is necessary to suppress the power consumption so as not to exceed the maximum supply power.
  • For this purpose, the power information indicating the maximum power required for driving the lens unit 16 is read from the ROM 89 of the lens unit 16 before AF control; the maximum power that can be used for driving the piezoelectric element 24a is obtained within the maximum power that can be used by the digital camera 10, considering the power required for driving the lens unit 16 indicated by the power information and the power required for driving the camera body 12; and an excitation condition that can be driven with power at or below this maximum may be selected to control the piezoelectric element driving circuit 160. Thereby, even if the digital camera 10 has a power limitation, the lens drive and the excitation operation can be performed in parallel, and both high precision and high speed of AF can be achieved.
  • In addition, the drive timing of the piezoelectric element 24a may be determined with reference to the power information of the lens unit 16 connected to the camera body 12.
  • For example, when a DC motor is used as the AF motor 86 for driving the focus lens 82, the starting current after the focus lens drive is started is large, and the power consumption then decreases according to the driving speed. The amount of power that can be used to drive the piezoelectric element 24a also varies depending on the drive timing of the aperture mechanism 80. Therefore, in a series of AF sequences, the amount of power that can be used is calculated according to the driving states of the focus lens and the diaphragm mechanism 80, and based on the calculated amount of power and the amount of power required to drive the piezoelectric element 24a, the piezoelectric element 24a is driven at a timing that does not exceed the amount of power that the digital camera 10 can use. Thereby, for example, the piezoelectric element 24a can be driven while avoiding the timing of the peak power point during lens driving, the AF speed can be increased, and the device can be downsized.
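The power budgeting described here reduces to a per-timing feasibility check: drive the piezoelectric element only when its draw fits inside what the camera's maximum supply leaves after lens drive and body consumption. The following sketch uses illustrative names and values only.

```python
def can_drive_piezo(total_budget_w, lens_drive_w, body_w, piezo_w):
    """True if driving the piezoelectric element now would stay within
    the camera's maximum available power."""
    return lens_drive_w + body_w + piezo_w <= total_budget_w

# A DC focus motor draws a large starting current, then less at speed:
print(can_drive_piezo(10.0, lens_drive_w=6.0, body_w=2.0, piezo_w=3.0))  # → False
print(can_drive_piezo(10.0, lens_drive_w=3.0, body_w=2.0, piezo_w=3.0))  # → True
```

So the piezo drive would be scheduled away from the motor's starting-current peak, as the text describes.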
  • In each of the above embodiments, the cover glass 24 has been described as the optical member provided with the piezoelectric element 24a, but the optical member is not limited to the cover glass as long as it is disposed on the light-receiving surface side of the imaging element.
  • For example, when an optical low-pass filter (optical LPF) is provided, a piezoelectric element may be provided on the optical LPF to apply vibration to the optical LPF.
  • The present invention is not limited to the above-described embodiments, and it goes without saying that the present invention can also be applied to designs modified within the scope described in the claims. It is also possible to execute processing that combines the above embodiments.


Abstract

Provided is an image capture device, wherein in common image capture frames obtained from a pair of phase difference detection pixels by capturing an image of a photographic subject with an image capture element, signals obtained from the pair of phase difference detection pixels are combined and a correlation computation carried out (first correlation computation); and in different image capture frames obtained from the pair of phase difference detection pixels by alternating the capturing of the image with the image capture element with the oscillation of an optical element which is disposed on the light receiving face side of the image capture element, signals obtained from the pair of phase difference detection pixels are combined and a correlation computation carried out (second correlation computation). On the basis of the correlation values obtained with the correlation computations, an assessment is made as to whether each correlation of the signals which are combined with each correlation computation is high or low, and a focus location computed and focus controlled from the result of the correlation computation of a combination which is assessed to have a high correlation.

Description

Imaging apparatus and focus control method
 The present invention relates to an imaging apparatus and a focusing control method thereof, and more particularly to an imaging apparatus that performs focusing control by a phase difference detection method, and a focusing control method for such an apparatus.
 In recent years, with the increasing resolution of solid-state imaging devices such as CCD (Charge Coupled Device) area sensors and CMOS (Complementary Metal Oxide Semiconductor) image sensors, demand has grown rapidly for information devices with an imaging function, such as digital still cameras, digital video cameras, mobile phones, and PDAs (Personal Digital Assistants). In this specification, an information device having such an imaging function is referred to as an imaging apparatus.
 Focusing control methods for detecting the distance to the main subject include the contrast method and the phase difference method. Because the phase difference method can detect the in-focus position faster than the contrast method, it is widely adopted in various imaging apparatuses. Imaging apparatuses are also known that embed phase difference detection pixels in the image sensor that captures the subject and perform focusing control by carrying out a correlation calculation on the signals read from those phase difference detection pixels.
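As an illustrative sketch only (not the specific method of this disclosure), the correlation calculation between the signal sequences of a phase difference detection pixel pair can be pictured as shifting one sequence against the other and minimizing a sum of absolute differences (SAD); the function names and the SAD criterion here are assumptions for illustration:

```python
def phase_shift(x_pixels, y_pixels, max_shift=4):
    """Estimate the phase difference (in pixel units) between the signal
    sequences of a phase difference detection pixel pair by finding the
    trial shift that minimizes the normalized sum of absolute differences."""
    n = len(x_pixels)
    best_shift, best_score = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Overlapping index range for comparing x[i] with y[i + s].
        lo, hi = max(0, -s), min(n, n - s)
        score = sum(abs(x_pixels[i] - y_pixels[i + s])
                    for i in range(lo, hi)) / (hi - lo)
        if score < best_score:
            best_shift, best_score = s, score
    return best_shift
```

The estimated shift then maps, through the optical geometry of the photographing lens, to a defocus amount and hence an in-focus position.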
 When foreign matter such as dust adheres to an optical member, such as a cover glass or an optical low-pass filter, provided on the imaging surface (light-receiving surface) side of such an image sensor, the adhering matter blocks part of the light flux transmitted through the photographing lens, and normal output cannot be obtained from the phase difference detection pixels located in the blocked region.
 This is a particular problem in interchangeable-lens camera systems, especially cameras without a mirror box and with a short flange back, in which foreign matter adheres easily.
 JP 2010-103705 A discloses an imaging apparatus that has vibrating means for vibrating an optical member and that detects a defocus amount by correcting the output signals of phase difference detection pixels with correction data; the apparatus obtains the difference between the output signals of the phase difference detection pixels before and after the optical member is vibrated, and changes the correction data when the difference is equal to or greater than a predetermined value.
 JP 2010-226395 A discloses an imaging apparatus that has vibrating means for vibrating an optical member and that controls the vibration so that the antinode positions of the standing wave generated in the vibrating optical member coincide with the positions of the optical member through which light passes to the phase difference detection pixels (focus detection pixels). The same publication also discloses a technique for judging the reliability of the correlation calculation results of the signals obtained from the phase difference detection pixels and preferentially using highly reliable information, and a technique for repeating the dust removal operation until high reliability is obtained for a frame acquired from the phase difference detection pixels.
Patent documents: JP 2010-103705 A; JP 2010-226395 A
 With the technique of JP 2010-103705 A, a series of processes is required before the correlation calculation can be performed: generating foreign-matter address data from the difference between the signals before and after dust removal, generating new correction data based on that address data, and correcting the signals using the correction data. Moreover, extracting and using only highly reliable signals, as described in JP 2010-226395 A, means that the number of signals used may increase or decrease for each focusing control operation, so stable accuracy cannot be obtained. Furthermore, although both publications perform the correlation calculation for each imaging frame, when foreign matter is not removed and keeps moving on the optical member, it is difficult to achieve both high accuracy and quick operation by per-frame correlation calculation alone.
 The present invention provides an imaging apparatus and a focusing control method capable of calculating the in-focus position quickly and with high accuracy, and performing focusing control accordingly.
 An imaging apparatus according to a first aspect of the present invention comprises: an image sensor comprising a plurality of phase difference detection pixel pairs, each pair consisting of a first phase difference detection pixel on which a light flux that has passed on one side of the principal axis of a photographing lens is incident and a second phase difference detection pixel on which a light flux that has passed on the other side of the principal axis is incident, together with a plurality of imaging pixels; an optical member provided on the light-receiving surface side of the image sensor; vibrating means for applying vibration to the optical member; first correlation calculation means for performing a correlation calculation by combining signals obtained from the phase difference detection pixel pairs within the same imaging frame in which a subject is imaged by the image sensor; second correlation calculation means for alternately performing imaging by the image sensor and vibration by the vibrating means, and performing a correlation calculation by combining signals obtained from the phase difference detection pixel pairs across the different imaging frames thus captured; determination means for determining, based on the correlation values obtained by the respective correlation calculations of the first correlation calculation means and the second correlation calculation means, whether the correlation of the signals combined in each correlation calculation is high; in-focus position calculation means for calculating an in-focus position from the result of a correlation calculation whose combination was determined by the determination means to be highly correlated; and drive control means for controlling driving means that drives the photographing lens so that the photographing lens is moved to the in-focus position calculated by the in-focus position calculation means.
 In this way, in addition to the first correlation calculation means, which performs a correlation calculation on signals obtained from the phase difference detection pixel pairs within the same imaging frame, second correlation calculation means is provided that performs correlation calculations on combinations of signals obtained from the pixel pairs across the different imaging frames captured by alternating imaging and vibration, and the in-focus position is calculated from a highly correlated calculation result. Therefore, even when foreign matter such as dust is not completely removed by the vibration and merely moves, the second correlation calculation means can, depending on the combination, still obtain a highly correlated result, so the in-focus position can be calculated quickly and with high accuracy and focusing can be controlled.
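A minimal sketch of this selection idea (illustrative only; the function names, the SAD measure, and the exhaustive pairing are assumptions, not the claimed implementation): the x-side and y-side signals of each captured frame are paired both within a frame and across frames, and the pairing with the lowest residual difference, i.e. the highest correlation, is the one used to compute the in-focus position.

```python
from itertools import product

def min_sad(x, y, max_shift=4):
    """Lowest normalized sum of absolute differences over trial shifts;
    a small value means the two signal sequences are highly correlated."""
    n = len(x)
    best = float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -s), min(n, n - s)
        best = min(best, sum(abs(x[i] - y[i + s])
                             for i in range(lo, hi)) / (hi - lo))
    return best

def pick_pairing(frames, max_shift=4):
    """frames: one (x_signals, y_signals) tuple per imaging frame, the frames
    having been captured alternately with vibration of the optical member.
    Evaluate every x-frame / y-frame pairing (intra-frame and inter-frame)
    and return the (x_frame_index, y_frame_index) with the best correlation."""
    return min(product(range(len(frames)), repeat=2),
               key=lambda ij: min_sad(frames[ij[0]][0],
                                      frames[ij[1]][1], max_shift))
```

For example, if dust corrupts the x pixels in the first frame and then moves over the y pixels in the second frame, the cross-frame pairing of the clean x signal with the clean y signal is the one selected.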
 According to a second aspect of the present invention, in the first aspect, when the difference between the signals obtained from the first phase difference detection pixels in two different imaging frames is equal to or less than a predetermined threshold, the second correlation calculation means may use the signal obtained from the first phase difference detection pixels in either one of the two frames for the correlation calculation; likewise, when the difference between the signals obtained from the second phase difference detection pixels in two different imaging frames is equal to or less than the threshold, it may use the signal obtained from the second phase difference detection pixels in either one of the two frames.
 With this configuration, the number of correlation calculation combinations can be reduced, so the calculation time can be shortened further.
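One way to realize this pruning, sketched under assumed names (the mean-absolute-difference measure and the threshold value are illustrative, not taken from the disclosure): if the same-side signals from two frames are essentially unchanged, only one of them needs to enter the set of correlation combinations.

```python
def nearly_equal(sig_a, sig_b, threshold=1.0):
    """True if two same-side signal sequences from different frames differ
    by no more than `threshold` (mean absolute difference used here)."""
    diff = sum(abs(a - b) for a, b in zip(sig_a, sig_b)) / len(sig_a)
    return diff <= threshold

def prune_signals(signals, threshold=1.0):
    """Keep only signals that are not duplicates (within `threshold`) of an
    already-kept signal, reducing the number of pairings to evaluate."""
    kept = []
    for sig in signals:
        if not any(nearly_equal(sig, k, threshold) for k in kept):
            kept.append(sig)
    return kept
```

Applying this to the x-side signals and to the y-side signals separately shrinks the cross product of combinations the second correlation calculation must cover.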
 According to a third aspect of the present invention, the first or second aspect may further comprise first vibration control means for controlling the vibrating means so that, when vibration is applied each time an imaging frame is acquired by the image sensor, the vibration is applied under a different vibration condition for each imaging frame.
 With this configuration, even if foreign matter such as dust cannot be removed by the vibration, the adhesion state of the foreign matter can still be changed owing to the differing vibration conditions; the imaging frames captured before and after vibration therefore yield different correlation calculation results, and a highly correlated combination can be obtained more quickly.
 According to a fourth aspect of the present invention, any of the first to third aspects may further comprise provisional in-focus position calculation means for calculating a provisional in-focus position from the result of the correlation calculation performed by the first correlation calculation means on the combined signals of the phase difference detection pixel pairs in the imaging frame first acquired by the image sensor. The drive control means may start driving the driving means so that movement of the photographing lens toward the provisional in-focus position begins when the provisional in-focus position is calculated, and may thereafter, when the in-focus position is calculated, control the driving means so that the photographing lens is moved to the in-focus position.
 With this configuration, the time-consuming lens drive can be started on the basis of the first correlation calculation result, so prolongation of the time required for focusing control can be suppressed.
 According to a fifth aspect of the present invention, the fourth aspect may further comprise correction means for correcting variations in the signals of the phase difference detection pixels caused by movement of the photographing lens, based on the position of the photographing lens at the time each imaging frame was acquired, and the second correlation calculation means may perform the correlation calculation using the signals corrected by the correction means.
 With this configuration, even if variations in lens position affect the signals obtained from the phase difference detection pixels, that influence can be reduced.
 According to a sixth aspect of the present invention, the fourth or fifth aspect may further comprise: selection means for selecting, from among the vibration conditions of the vibrating means, a vibration condition that can be executed within a range in which the power consumption of the imaging apparatus does not exceed the maximum amount of power available to the apparatus, based on the amount of power required to drive the photographing lens and that maximum amount; and second vibration control means for controlling the vibrating means so that vibration is applied under the vibration condition selected by the selection means.
 With this configuration, the lens drive, which requires a large amount of power, and the vibration applied by the vibrating means can be performed in parallel.
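A sketch of such a selection under assumptions (the condition list, power figures, and effectiveness ordering are hypothetical): from excitation conditions ordered by expected dust-removal effectiveness, pick the strongest one whose draw fits in the power budget left over while the lens motor is running.

```python
def select_excitation(conditions, lens_drive_power, max_power):
    """conditions: (name, power_draw) pairs, ordered from most to least
    effective. Return the first condition that keeps total consumption
    (lens drive + excitation) within max_power, or None if none fits."""
    budget = max_power - lens_drive_power
    for name, power in conditions:
        if power <= budget:
            return name
    return None
```

With a generous budget the strongest condition is chosen; as the lens drive consumes more, progressively weaker conditions are selected so the total never exceeds the available power.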
 According to a seventh aspect of the present invention, any of the fourth to sixth aspects may further comprise third vibration control means for controlling the timing of vibration by the vibrating means, based on the amount of power required to drive the photographing lens and the maximum amount of power available to the imaging apparatus, so that the power consumption of the imaging apparatus does not exceed that maximum amount.
 With this configuration as well, the power-intensive lens drive and the vibration by the vibrating means can be performed in parallel.
 According to an eighth aspect of the present invention, any of the first to sixth aspects may further comprise fourth vibration control means for controlling the vibrating means so that vibration of the optical member is executed in synchronization with a drive signal that drives the image sensor.
 With this configuration, compared with applying vibration while signals are being read out or transferred from the image sensor, superimposition of noise on the readout signals, and the resulting decrease in correlation calculation accuracy, can be suppressed.
 A focusing control method according to a ninth aspect of the present invention is a focusing control method for an imaging apparatus comprising: an image sensor comprising a plurality of phase difference detection pixel pairs, each pair consisting of a first phase difference detection pixel on which a light flux that has passed on one side of the principal axis of a photographing lens is incident and a second phase difference detection pixel on which a light flux that has passed on the other side of the principal axis is incident, together with a plurality of imaging pixels; an optical member provided on the light-receiving surface side of the image sensor; and vibrating means for applying vibration to the optical member. The method includes: a first correlation calculation step of performing a correlation calculation by combining signals obtained from the phase difference detection pixel pairs within the same imaging frame in which a subject is imaged by the image sensor; a second correlation calculation step of alternately performing imaging by the image sensor and vibration by the vibrating means, and performing a correlation calculation by combining signals obtained from the phase difference detection pixel pairs across the different imaging frames thus captured; a determination step of determining, based on the correlation values obtained in the respective correlation calculation steps, whether the correlation of the signals combined in each correlation calculation is high; an in-focus position calculation step of calculating an in-focus position from the result of a correlation calculation whose combination was determined to be highly correlated; and a drive control step of controlling driving means that drives the photographing lens so that the photographing lens is moved to the calculated in-focus position.
 The ninth aspect of the present invention operates in the same manner as the first aspect, so the in-focus position can be calculated quickly and with high accuracy, and focusing can be controlled.
 According to the present invention, the in-focus position can be calculated quickly and with high accuracy, and focusing control can be performed.
Brief description of the drawings:

  • Perspective view of the digital camera according to the first embodiment.
  • Perspective view of the digital camera with the lens unit removed.
  • Rear view of the digital camera according to the first embodiment.
  • Schematic view of the interior of the digital camera according to the first embodiment.
  • Enlarged schematic view of the surface of the phase difference detection area of the solid-state image sensor.
  • Explanatory diagram illustrating the concept of the phase difference amount, using only the phase difference detection pixels of FIG. 5 and their detection signals.
  • Block diagram of the digital camera according to the first embodiment.
  • Schematic view of the vibration mechanism that applies vibration to the cover glass.
  • Plot of the detection signals read from the phase difference detection pixels 1x (x pixels) and 1y (y pixels) with foreign matter such as dust adhering to the cover glass.
  • Flowchart showing the flow of AF control executed in the first embodiment.
  • Flowchart showing the flow of the in-focus position calculation processing executed in the first embodiment.
  • Explanatory diagram schematically showing the combinations of correlation calculations performed in the first embodiment.
  • Diagram showing an example of inter-frame correlation calculation.
  • Diagram showing an example of inter-frame correlation calculation.
  • Diagram showing an example of inter-frame correlation calculation.
  • Flowchart showing the flow of the in-focus position calculation processing executed in the second embodiment.
  • Flowchart showing the flow of the frame data deletion processing executed in the second embodiment.
  • Explanatory diagram illustrating a specific example of the combinations of correlation calculations performed in the second embodiment.
  • Plot of the x-pixel and y-pixel detection signals of the first frame.
  • Plot of the x-pixel and y-pixel detection signals of the second frame.
  • Plot of the x-pixel and y-pixel detection signals of the third frame.
  • Plot of the detection signals of the x pixel group of the first frame and the y pixel group of the third frame.
  • Flowchart showing the flow of the in-focus position calculation processing executed in the third embodiment.
  • Graph showing examples of the displacement in the optical axis direction at each widthwise position of the optical member when standing-wave bending vibration is generated in the sixth-order vibration mode and in the seventh-order vibration mode.
  • Flowchart showing the flow of the in-focus position calculation processing executed in the fourth embodiment.
  • Flowchart showing the flow of the focus lens drive processing executed in the fourth embodiment.
  • Explanatory diagram illustrating correction of frame data according to the amount of lens movement.
  • Explanatory diagram illustrating correction of frame data according to the amount of lens movement.
  • Explanatory diagram illustrating correction of frame data according to the amount of lens movement.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
[First Embodiment]
 A digital camera according to the first embodiment will be described with reference to FIG. 1. As shown in FIG. 1, the digital camera 10 is a single-lens reflex digital still camera that mainly captures still images.
 As shown in FIGS. 1 and 2, the digital camera 10 has a lens mount portion 14 on the front face of the camera body 12, and a lens unit 16 comprising a lens group 300 (see FIG. 4) of a plurality of lenses is interchangeably attached via the lens mount portion 14. A shutter button 18 is disposed at the upper left as viewed from the front.
 As shown in FIG. 3, an optical viewfinder 60, a display panel 62, a liquid crystal monitor 64, a cross button 66, a menu/execute button 67, a back button 68, a function button 63, and the like are provided on the rear exterior of the digital camera 10.
 The liquid crystal monitor 64 can display the subject image and playback of recorded images. It also displays information such as the currently set mode, the image compression ratio, the date and time, and the frame number. It is further used as a user interface display screen when the user performs various setting operations, displaying menu information such as setting items as necessary.
 Through the optical viewfinder 60, the subject image from the lens unit 16 (see FIGS. 1 and 2) can be viewed directly.
 As shown in FIG. 4, a quick-return mirror 203 is provided in the photographing optical path inside the camera body 12. The quick-return mirror 203 moves between a position in which it guides subject light from the lens unit 16 to the optical path of the optical viewfinder 60 (the inclined position) and a position retracted from the photographing optical path (the retracted position): so-called mirror down and mirror up. In FIG. 4, the quick-return mirror 203 is in the mirror-down inclined position. The dashed line indicates the optical axis L.
 Above the quick-return mirror 203 is a focusing screen 204 on which the subject light guided to the optical viewfinder 60 forms an image. Above the focusing screen 204 is a condenser lens 205 for improving the visibility of the optical viewfinder 60. A pentagonal roof prism 206 guides the subject light that has passed through the focusing screen 204 and the condenser lens 205 to an eyepiece 208 for the optical viewfinder 60.
 Behind the quick-return mirror 203, a focal-plane shutter mechanism 30, whose shutter 32 opens and closes vertically, is disposed. In FIG. 4, the shutter 32 is open. Behind the shutter mechanism 30 is a CCD 22, which is the image sensor. In this embodiment the image sensor is a CCD, but it may be a CMOS sensor. On the light-receiving surface of the CCD 22, a large number of pixels (photodiodes, not shown) are arranged in a square lattice. The pixel arrangement is not limited to a square lattice; a so-called honeycomb arrangement, in which even-numbered pixel rows are offset from odd-numbered pixel rows by half a pixel pitch, may also be used.
 On the light-receiving surface side of the CCD 22, a cover glass 24 is disposed as the optical member. The cover glass 24 is plain glass that transmits the light incident from the lens group 300 and protects the imaging surface of the CCD 22. Instead of the cover glass 24 (or in its vicinity), an optical member such as an optical low-pass filter, which removes high-frequency components of the light to prevent moiré, may be disposed.
 After the quick-return mirror 203 is raised to the position retracted from the photographing optical path (the retracted position), the shutter 32 opens and charge is accumulated in the CCD 22. The image data is then recorded on the recording medium 120 (see FIG. 7).
 In this embodiment, a phase difference detection area is provided at an arbitrary partial region of the light-receiving surface of the CCD 22, for example at its center. The phase difference detection area may be provided at only one location on the light-receiving surface, or at a plurality of locations so that AF is possible anywhere in the shooting screen. The entire light-receiving surface may also be used as the phase difference detection area.
 FIG. 5 is an enlarged schematic view of the surface of the phase difference detection area. A large number of pixels are arranged in a square lattice on the light-receiving surface of the CCD 22, both inside and outside the phase difference detection area.
 In the illustrated example, each pixel is labeled R (red), G (green), or B (blue), representing the color of the color filter stacked on that pixel. The color filter array in this example is a Bayer array, but it is not limited to a Bayer array and may be another array such as stripes.
 The pixel arrangement and color filter array inside the phase difference detection area are the same as those on the light-receiving surface outside it. Inside the phase difference detection area, diagonally adjacent pixels of the same color that form a pair serve as the phase difference detection pixels 1x and 1y.
 Not all pixels within the phase difference detection area are phase difference detection pixels; normal imaging pixels for capturing the subject and phase difference detection pixels are arranged, for example, alternately or periodically. In the present embodiment, the paired pixels for phase difference detection are provided at discrete, periodic positions within the phase difference detection area, for example at the checkered positions of the illustrated example. Outside the phase difference detection area, only imaging pixels are arranged.
 In the illustrated example, same-color pixels are diagonally adjacent because the color filter array is a Bayer array. With a horizontal stripe array, same-color pixels line up in the horizontal direction, so the two pixels forming a pair would instead be horizontally adjacent. Alternatively, with a horizontal stripe array, instead of placing both pixels of a pair in the same color filter row, the two pixels of a pair may be placed separately in the two vertically nearest rows of the same color. The same applies to a vertical stripe array.
 In the present embodiment, the phase difference detection pixels 1x and 1y are provided on pixels carrying the G filter, the most numerous of R, G, and B, and are arranged every eight pixels in the horizontal direction (x direction) and every eight pixels in the vertical direction (y direction), in an overall checkered pattern. Accordingly, when viewed along the phase difference detection direction (the left-right direction), the phase difference detection pixels 1x occur every four pixels.
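Under the stated layout (a period of eight pixels in each direction, with alternate rows offset by half a period to form the checkered pattern; the half-period offset is an assumption read from the figure, not stated in the text), the claimed four-pixel spacing along the detection direction can be checked with a short sketch:

```python
def pixel_1x_positions(width, height):
    """Generate (x, y) positions of the 1x phase difference detection
    pixels: one every 8 pixels horizontally and vertically, with odd
    rows offset by 4 pixels to form the checkered pattern (the offset
    value is an illustrative assumption)."""
    positions = []
    for row, y in enumerate(range(0, height, 8)):
        offset = 4 * (row % 2)  # checker offset on every other row
        positions.extend((x + offset, y) for x in range(0, width - offset, 8))
    return positions

# Projected onto the detection direction, the 1x pixels fall every 4 pixels.
xs = sorted({x for x, _ in pixel_1x_positions(32, 32)})
```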
 FIG. 6 is a diagram schematically showing only the phase difference detection pixels 1x and 1y extracted from FIG. 5. The phase difference detection pixels 1x and 1y forming a pair have light shielding film openings 2x and 2y; the opening 2x of pixel 1x is offset to the left, and the opening 2y of pixel 1y is offset to the right (the phase difference detection direction).
 With this configuration, the phase difference detection pixel 1x receives the light flux that has passed on one side (here, the left side) of the principal axis of the lenses constituting the lens group 300. The phase difference detection pixel 1y is arranged on a line adjacent to its paired pixel 1x and receives the light flux that has passed on the other side (here, the right side) of the principal axis. As described later, when the image is out of focus, the positions and phases of the images detected by the pixels 1x and 1y shift relative to each other; focus control is performed by detecting this shift amount (the phase difference amount).
 The curve X shown in the lower part of FIG. 6 is a graph plotting the detection signal amounts of the phase difference detection pixels 1x arranged in one horizontal row, and the curve Y plots the detection signal amounts of the phase difference detection pixels 1y paired with those pixels.
 Because the paired pixels 1x and 1y are adjacent and extremely close together, they can be regarded as receiving light from the same subject. The curves X and Y are therefore expected to have the same shape, and their shift in the left-right direction (the phase difference detection direction) is the phase difference between the image seen by one pixel 1x of the pupil-divided pair and the image seen by the other pixel 1y.
 The phase difference amount (lateral shift amount) can be obtained by performing a correlation calculation between the curves X and Y, and the distance to the subject can be calculated from this phase difference amount. Any known method may be used to evaluate the correlation between the curves. For example, one waveform (curve) may be shifted pixel by pixel while taking the sum of its differences from the other curve, and the shift at which the sum is minimized taken as the phase difference amount. More specifically, the sum of the absolute values of the differences between each point X(i) of the curve X and each point Y(i+j) of the curve Y is computed, and the value of j that yields the minimum sum is taken as the phase difference amount (lateral shift amount). Hereinafter, this minimum sum is referred to as the correlation value; the higher the correlation between the curves X and Y, the smaller the correlation value.
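The shift-and-sum procedure described above can be sketched as follows. Normalising each sum by the overlap length and requiring a minimum overlap are implementation choices added here to keep different shifts comparable; they are not details taken from the text.

```python
def phase_difference(x, y, max_shift=10):
    """Estimate the lateral shift between two 1-D signals (the curves X
    and Y) by minimising the sum of absolute differences |X(i) - Y(i+j)|
    over the shift j. Returns (phase difference amount, correlation value)."""
    n = len(x)
    best_j, best_val = 0, float("inf")
    for j in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -j), min(n, n - j)  # region where X(i) and Y(i+j) overlap
        if hi - lo < n // 2:
            continue  # skip shifts with too little overlap (added guard)
        val = sum(abs(x[i] - y[i + j]) for i in range(lo, hi)) / (hi - lo)
        if val < best_val:
            best_j, best_val = j, val
    return best_j, best_val
```

For a signal pair where Y is X shifted right by two pixels, the function recovers j = 2 with a correlation value of zero, matching the statement that higher correlation yields a smaller correlation value.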
 Next, the internal configuration of the digital camera 10 of the present embodiment will be described.
 FIG. 7 is a block diagram showing the internal configuration of the digital camera 10 (camera body 12 and lens unit 16). The lens unit 16 includes a diaphragm mechanism 80 and a lens group 300. In the present embodiment, the lens group 300 includes a focus lens 82, a zoom lens 182, and the like. The lens unit 16 further incorporates an iris motor 83 and its drive circuit (motor driver) 84 as means for driving the diaphragm mechanism 80, an AF motor 86 and its drive circuit (motor driver) 87 as means for driving the focus lens 82, a zoom motor 186 and its drive circuit (motor driver) 187 as means for driving the zoom lens 182, a central processing unit for control (hereinafter referred to as the lens CPU) 88, a ROM 89 storing various data, and so on. The lens unit 16 is driven by power supplied from a power supply circuit 97 provided in the camera body 12, described later. The ROM 89 may store in advance power information indicating the power required to drive the lens unit 16; the camera CPU 90 on the camera body 12 side can then control the operation of the digital camera 10 according to the power information acquired via the lens CPU 88. The zoom may alternatively be driven manually, and the diaphragm and AF may be driven from the camera body 12.
 The ROM 89, which is a nonvolatile storage means, may be non-rewritable or may be rewritable like an EEPROM. The ROM 89 stores the model name of the lens unit 16, its focal length, F-number, and various other information related to lens performance (hereinafter referred to as "lens information").
 When the lens unit 16 is attached to the lens mount 14 of the camera body 12 (see FIGS. 1 and 2), the lens unit 16 and the camera body 12 are electrically connected via the electrical contact portion 14A provided on the lens mount 14 and the electrical contact portion 16A of the lens unit 16, enabling signals to be exchanged between the CPU in the digital camera 10 (hereinafter referred to as the camera CPU) 90 and the lens CPU 88.
 The camera CPU 90 functions as a control means that performs overall control of this camera system according to a predetermined program, and also as a calculation means that performs various calculations such as AE/AF calculations. The ROM 91 connected to the camera CPU 90 stores the programs executed by the camera CPU 90 and the various data necessary for control, and the RAM 92 is used as a work area for the camera CPU 90. The ROM 91, which is a nonvolatile storage means, may be non-rewritable or may be rewritable like an EEPROM.
 The camera CPU 90 controls the operation of each circuit in the digital camera 10 based on instruction signals from the power switch 93, the mode selection switch 94, the release detection switch 95, and the other operation unit 96 provided on the camera body 12. The operation unit 96 is a block including various operation means such as the cross button 66, the menu/execution button 67, the back button 68, and the function button 63 shown in FIG. 3.
 An AF/MF changeover switch 130 for switching between an autofocus mode, in which focusing is driven automatically, and a manual focus mode, in which focusing is performed manually, is also connected to the camera CPU 90.
 The power switch 93 is an operation means for turning the main power of the digital camera 10 on and off. The camera CPU 90 monitors the state of the power switch 93 and controls the power supply circuit 97 accordingly: when it detects that the power switch 93 is closed (ON), the camera CPU 90 gives a start command signal to the power supply circuit 97 to start it.
 The power supply circuit 97 includes a DC/DC converter. Power supplied from the battery 98 loaded in the digital camera 10 is converted to the required voltages by the DC/DC converter of the power supply circuit 97 and then supplied from the power supply circuit 97 to each circuit block in the digital camera 10. When the camera CPU 90 detects that the power switch 93 is open (OFF), it gives a stop command signal to the power supply circuit 97 to stop the power supply. The main power is not switched only by operating the power switch 93; it may also be switched by an auto power ON function (which turns the power on at a set time) or an auto power OFF function (which turns the power off automatically when no operation has been performed for a certain period, or at a set time).
 The mode selection switch 94 is a means for setting the operation mode of the digital camera 10; by operating it, modes such as the "shooting mode" (for taking pictures) and the "playback mode" (for playing back recorded images) can be selected. The release detection switch 95 is a detection switch disposed inside the shutter button 18 (see FIG. 1) and consists of an S1 switch that turns on when the shutter button 18 is half-pressed and an S2 switch that turns on when it is fully pressed.
 When the "shooting mode" is selected by the mode selection switch 94, the digital camera 10 becomes ready for shooting. When the camera CPU 90 detects a half-press of the shutter button 18 (S1 = ON), it performs AE and AF processing; when it then detects a full press (S2 = ON), it starts CCD exposure and readout control to capture an image for recording.
 The AE function mounted on the digital camera 10 is a TTL type AE. As shown in FIG. 4, an AE sensor (light receiving element) 100 serving as the detection system is provided near the optical viewfinder 60 inside the digital camera 10. With the quick return mirror 203 in the mirror-down state, subject light reflected by the quick return mirror 203 is guided to the AE sensor 100 by an optical system such as the pentagonal roof prism 206.
 The AF function of the digital camera 10 is as described above: focusing control is performed by the phase difference detection method using the phase difference detection pixels.
 The camera CPU 90 also controls the actuator 126 that switches the quick return mirror 203 between the oblique position and the retracted position.
 When the shutter button 18 is half-pressed (S1 = ON), the camera CPU 90 performs a correlation calculation on the signals read from the phase difference detection pixels to obtain the in-focus position and generates a control signal for moving the focus lens 82. This control signal is sent to the lens CPU 88, which controls the motor driver 87 based on it to operate the AF motor 86 and move the focus lens 82 to the in-focus position.
 The camera CPU 90 also performs an AE calculation based on the detection signal from the AE sensor 100 and calculates the aperture value and shutter speed.
 When the shutter button 18 is fully pressed (S2 = ON), the camera CPU 90 controls the actuator 126 to switch the quick return mirror 203 to the retracted position and, based on the result of the AE calculation, sends an aperture control signal to the lens CPU 88, controls the shutter mechanism 30 (see FIG. 4) to open and close its shutter 32, and controls the charge accumulation time of the CCD 22. The lens CPU 88 controls the motor driver 84 based on the signal from the camera CPU 90 to operate the iris motor 83 and set the diaphragm mechanism 80 to the required opening. The optical image of the subject formed on the CCD 22 through the lens unit 16 is then photoelectrically converted by the CCD 22.
 The signal charge accumulated in each pixel (photodiode) of the CCD 22 is sequentially read out as a voltage signal corresponding to the signal charge, based on pulses given from the timing generator (TG) 103. The signal output from the CCD 22 is sent to the analog processing unit 104, where required processing such as correlated double sampling (CDS), color separation, and gain adjustment is performed. The image signal generated by the analog processing unit 104 is converted to a digital signal by the A/D converter 106 and then stored in the memory 110 via the image input controller 108. The timing generator 103 supplies timing signals (also called drive signals) to the CCD 22, the analog processing unit 104, and the A/D converter 106 in accordance with commands from the camera CPU 90, and these timing signals keep the circuits synchronized.
 That is, the start of signal charge accumulation (exposure) in the CCD 22 and the reading and transfer of signal charges from the CCD 22 are performed at timings according to commands from the camera CPU 90.
 The data stored in the memory 110 is sent to the image signal processing circuit 114 via the bus 112. The image signal processing circuit 114 is an image processing means that includes a luminance/color difference signal generation circuit, a gamma correction circuit, a sharpness correction circuit, a white balance correction circuit, a gain (sensitivity) adjustment circuit, and the like, and processes the image signal according to commands from the camera CPU 90.
 The image data input to the image signal processing circuit 114 is converted into a luminance signal (Y signal) and color difference signals (Cr, Cb signals) and subjected to predetermined processing such as gamma correction and gain adjustment. The image data generated by the image signal processing circuit 114 is sent to the compression/decompression circuit 116 and compressed according to a predetermined format such as JPEG. The compressed image data is recorded on the recording medium 120 via the media controller 118.
 The compression format is not limited to JPEG; MPEG or other formats may be adopted. The means for storing image data is not limited to semiconductor memory typified by a memory card; various media such as magnetic disks, optical disks, and magneto-optical disks can be used. The medium is also not limited to removable media and may be a recording medium built into the digital camera 10 (such as an internal memory).
 When the "playback mode" is selected by the mode selection switch 94, an image file is read from the recording medium 120. The read image data is decompressed by the compression/decompression circuit 116 and sent to the VRAM 122. The data stored in the VRAM 122 is converted by the video encoder 124 into a signal of a predetermined display format (for example, an NTSC color composite video signal) and then supplied to the liquid crystal monitor 64. The image stored on the recording medium 120 is thus displayed on the liquid crystal monitor 64.
 The digital camera 10 can also switch between an optical viewfinder mode and an electronic viewfinder mode as shooting modes, for example by operating the operation unit 96. The optical viewfinder mode places the quick return mirror 203 in the oblique position and allows shooting while viewing the subject image through the optical viewfinder 60. The electronic viewfinder mode places the quick return mirror 203 in the retracted position and opens the shutter 32 so that the subject image is formed on the CCD 22 and displayed on the liquid crystal monitor 64 at a predetermined cycle, allowing shooting while viewing the subject image (through image) on the liquid crystal monitor 64.
 Furthermore, as shown in FIG. 8, in the present embodiment the cover glass 24 provided on the light receiving surface side of the CCD 22 is equipped with a piezoelectric element 24a that vibrates the cover glass 24 ultrasonically in order to move or remove foreign matter such as dust adhering to the cover glass 24. Driving the piezoelectric element 24a generates standing-wave bending vibration in the cover glass 24. Here, the piezoelectric element 24a is described as an example of the vibration means for applying vibration to the cover glass 24, but the vibration means is not limited to a piezoelectric element.
 In the present embodiment, the piezoelectric element 24a is driven by a pulse signal supplied from the piezoelectric element drive circuit 160, and the piezoelectric element drive circuit 160 generates the pulse signal and supplies it to the piezoelectric element 24a in accordance with a control signal supplied from the camera CPU 90.
 The excitation conditions of the piezoelectric element drive circuit 160 may be made changeable. Examples of excitation conditions include the excitation frequency, the step width when changing the excitation frequency, the excitation time, the excitation timing, and the excitation amplitude. The circuit may therefore be configured so that it can generate a pulse signal corresponding to an excitation condition, with the piezoelectric element drive circuit 160 generating the pulse signal according to the excitation condition set by the camera CPU 90.
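As an illustration, the excitation conditions listed above could be grouped into a single parameter record passed from the camera CPU to the drive circuit; the field names, units, and example values below are assumptions for the sketch, not values given in the text.

```python
from dataclasses import dataclass

@dataclass
class ExcitationCondition:
    """One set of excitation conditions for the piezoelectric element
    drive circuit. Field names and units are illustrative assumptions."""
    frequency_hz: float       # excitation frequency
    frequency_step_hz: float  # step width when sweeping the frequency
    duration_s: float         # excitation time
    timing: str               # excitation timing (hypothetical label)
    amplitude: float          # excitation amplitude, normalised 0..1

# Hypothetical example values, only to show how the record is filled in.
condition = ExcitationCondition(70_000.0, 500.0, 0.1, "before_af_retry", 0.5)
```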
 FIG. 9 plots the detection signals read from the phase difference detection pixels 1x (hereinafter also referred to as x pixels) and the phase difference detection pixels 1y (hereinafter also referred to as y pixels) in a state where foreign matter such as dust adheres to the cover glass 24.
 In the digital camera 10, phase difference detection pixels are provided in the CCD 22, and AF control is performed using the phase difference amount obtained by performing a correlation calculation on the signals read from them. If foreign matter such as dust adheres to the cover glass 24 disposed on the light receiving surface side of the CCD 22, then as shown in FIG. 9, the detection signals from the phase difference detection pixels located in the area where the foreign matter adheres are affected by the dust, and a correct phase difference amount cannot be obtained. In the present embodiment, phase difference detection is therefore performed as described below.
 Hereinafter, as the operation of the present embodiment, the AF control executed by the digital camera 10 will be described.
 FIG. 10 is a flowchart showing the flow of the AF control executed in the shooting mode.
 In step 400, the in-focus position calculation process is executed. The flow of the in-focus position calculation process according to the present embodiment is described here with reference to FIG. 11.
 In step 500, exposure of the CCD 22 is started; when the readout timing then arrives, signal charges are read from each pixel of the CCD 22, and voltage signals (hereinafter referred to as detection signals) corresponding to the read signal charges are acquired for one imaging frame. Hereinafter, an imaging frame is simply referred to as a frame (the same applies in the drawings). When the frames acquired after the start of AF control need to be distinguished, they are referred to, in order of acquisition, as the first frame, second frame, third frame, and so on. For example, the frame acquired in step 500 is called the first frame because it is the first frame acquired after AF control starts.
 In step 502, a correlation calculation is performed on the detection signals of the x pixel group and the y pixel group in the frame acquired in step 500 (see also FIG. 12(1)).
 In step 504, it is determined from the correlation calculation result of step 502 whether the correlation between the curves X and Y described above (the correlation between the detection signals of the x pixel group and the y pixel group) is high. Here, the correlation is judged to be high if the correlation value described above is at or below a predetermined threshold, and low if the correlation value exceeds the threshold.
 If the determination in step 504 is affirmative, the process proceeds to step 518, where the in-focus position is calculated based on the correlation calculation result judged to have high correlation.
 If the determination in step 504 is negative, the process proceeds to step 506, where the piezoelectric element drive circuit 160 is controlled to drive the piezoelectric element 24a and perform an excitation operation that applies vibration to the cover glass 24.
 After the excitation operation, exposure is started again, and in step 508 signal charges are read from the phase difference detection pixels to acquire detection signals for one frame.
 In step 510, a correlation calculation is performed on the detection signals of the x pixel group and the y pixel group in the frame acquired in step 508 (see also FIG. 12(2)). Hereinafter, a correlation calculation between the detection signals of the x pixel group and the y pixel group of the same frame is referred to as an intra-frame correlation calculation.
 In step 512, it is determined from the correlation calculation result of step 510 whether the correlation is high, in the same manner as above. If the determination is affirmative, the process proceeds to step 518, where the in-focus position is calculated based on the correlation calculation result judged to have high correlation.
 If the determination in step 512 is negative, the process proceeds to step 514, where a correlation calculation is performed using combinations of detection signals from different frames: not only the latest frame but also the frames acquired after the start of AF control and before the latest frame. This correlation calculation is referred to as an inter-frame correlation calculation, to distinguish it from the intra-frame correlation calculation.
 Specifically, in the inter-frame correlation calculation, as shown in FIG. 12(3), a correlation calculation is performed between the detection signals of the x pixel group of the first frame and the y pixel group of the second frame, and also between the detection signals of the x pixel group of the second frame and the y pixel group of the first frame.
 このようにフレーム間相関演算では、最新のフレームより前に取得されたフレームの検出信号も用いて相関演算を行うため、本実施の形態では、ステップ518で合焦位置が算出されるまで各フレームの検出信号のデータ(以下では、フレームデータという場合もある)はRAM92等の記憶手段に保持しておくものとする。 As described above, in the inter-frame correlation calculation, the correlation calculation is also performed using the detection signal of the frame acquired before the latest frame. In this embodiment, each frame is calculated until the in-focus position is calculated in step 518. The detection signal data (hereinafter also referred to as frame data) is stored in a storage unit such as the RAM 92.
 In step 516, it is determined, in the same manner as described above, whether the correlation is high for each of the correlation calculation results of step 514. If even one combination yields a correlation calculation result determined to have a high correlation, an affirmative determination is made; if no combination does, a negative determination is made.
 If an affirmative determination is made in step 516, the process proceeds to step 518, where the in-focus position is calculated based on the correlation calculation result determined to have a high correlation. If a plurality of correlation calculation results are determined to have a high correlation, the one with the highest correlation (the smallest correlation value) among them is used.
 If a negative determination is made in step 516, the process returns to step 506, and the above processing is repeated until a correlation calculation result with a high correlation is obtained. Alternatively, the determination of step 516 may be performed each time a correlation calculation is completed within the inter-frame correlation calculation; once a high correlation is determined, the inter-frame correlation calculation may be terminated and the process may proceed to step 518.
 That is, in the in-focus position calculation processing of this embodiment, as shown in FIG. 12(1), the intra-frame correlation calculation between the detection signals of the x pixel group and the y pixel group of the first frame is performed first. If that correlation is determined to be low, then, as shown in FIG. 12(2), the intra-frame correlation calculation between the detection signals of the x pixel group and the y pixel group of the second frame, acquired after the vibration operation, is performed. If that correlation is also determined to be low, then, as shown in FIG. 12(3), inter-frame correlation calculations are performed between signals of different frames, for the combinations of x-pixel-group and y-pixel-group detection signals.
 For example, as shown in FIG. 13A, suppose that a foreign object such as dust causes an abnormality in part of the detection signals of the y pixel group of the first frame, and that the subsequent vibration operation moves the dust so that, as shown in FIG. 13B, an abnormality occurs in part of the detection signals of the x pixel group of the second frame. Even in this case, as shown in FIG. 13C, the correlation calculation between the x pixel group of the first frame and the y pixel group of the second frame, neither of which was affected by the dust, yields a high correlation, and by using this result the in-focus position can be calculated with high accuracy.
 If, after the second frame is acquired and the intra-frame and inter-frame correlation calculations are performed, all of the correlation calculation results are determined to have a low correlation, a further vibration operation is performed, a third frame is acquired, and this time the intra-frame correlation calculation of the third frame is performed. If that correlation is determined to be low, inter-frame correlation calculations are performed between the x pixel groups and y pixel groups of the first to third frames, for combinations whose frames differ. In these inter-frame correlation calculations, combinations for which a correlation calculation has already been executed are omitted. For example, the correlation calculation between the detection signals of the x pixel group of the first frame and the y pixel group of the second frame, and that between the x pixel group of the second frame and the y pixel group of the first frame, were already executed in the inter-frame correlation calculation performed when the second frame was acquired, and are therefore omitted. Accordingly, only the combinations not yet executed are calculated here, namely: the x pixel group of the first frame with the y pixel group of the third frame; the x pixel group of the second frame with the y pixel group of the third frame; the x pixel group of the third frame with the y pixel group of the first frame; and the x pixel group of the third frame with the y pixel group of the second frame. The same applies to the inter-frame correlation calculations when the third and subsequent frames are acquired.
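The bookkeeping that skips already-executed combinations could be sketched as below; the frame-index representation and the `executed` set are illustrative assumptions, not the patent's implementation:

```python
from itertools import product

def new_cross_frame_pairs(num_frames, executed):
    """Enumerate the (x-frame, y-frame) index pairs for the inter-frame
    correlation calculation, skipping same-frame pairs (those are the
    intra-frame calculations) and pairs already recorded in `executed`.
    The pairs returned are the ones that still need to be calculated,
    and they are added to `executed` as a side effect."""
    pairs = []
    for xi, yi in product(range(num_frames), repeat=2):
        if xi == yi or (xi, yi) in executed:
            continue  # intra-frame, or already calculated in an earlier round
        pairs.append((xi, yi))
        executed.add((xi, yi))
    return pairs
```

With three frames, the second call below yields exactly the four combinations listed in the text: (frame 1 x, frame 3 y), (frame 2 x, frame 3 y), (frame 3 x, frame 1 y), and (frame 3 x, frame 2 y), using 0-based indices.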
 After the in-focus position is calculated by the in-focus position calculation processing of FIG. 11, in step 402 of FIG. 10, focus drive conditions such as the moving direction and moving amount of the focus lens 82 are determined based on the calculated in-focus position.
 In step 404, the focus lens 82 is driven according to the focus drive conditions.
 In step 406, it is determined whether focusing has been achieved, that is, whether the focus lens 82 has been moved to the calculated in-focus position. If a negative determination is made in step 406, the process returns to step 404 and driving of the focus lens continues; if an affirmative determination is made in step 406, driving of the focus lens is stopped in step 408 and the AF control ends.
 As described above, the inter-frame correlation calculation is performed using combinations of signals from different frames obtained before and after a vibration operation, in addition to the intra-frame correlation calculation. Therefore, even when the dust removal vibration fails to remove the dust and the dust merely moves across the imaging surface, a correlation calculation can be performed by combining, across frames, x-pixel data and y-pixel data that are unaffected by the foreign object, so the phase difference can be detected with high accuracy.
 In addition, the adhesion state of foreign objects such as dust changes from moment to moment due to lens driving of the lens unit 16, operation of the shutter mechanism 30, and the like, so performing phase difference detection long after vibrating the piezoelectric element 24a is not effective: the dust may have moved or re-adhered before the phase difference detection is performed. In this embodiment, the vibration is applied after AF control is started (at the timing of performing the phase difference detection), so the phase difference can be detected with high accuracy.
 As described above, when the excitation conditions of the piezoelectric element 24a are made changeable, the electric power required to drive the piezoelectric element 24a differs depending on the excitation conditions. The camera CPU 90 may therefore be configured, at the time of the vibration operation, to select an excitation condition that can operate within the maximum power available in the camera body 12 and to change the setting of the piezoelectric element drive circuit 160 accordingly, so that vibration corresponding to the selected excitation condition is generated.
 As described above, power information indicating the maximum power required to drive the lens unit 16 is stored in the ROM 89 in advance, and the camera CPU 90 reads this power information from the ROM 89 before AF control. Then, taking into account the power required to drive the lens unit 16 indicated by the power information and the power required to drive the camera body 12, the camera CPU 90 obtains the maximum power available for driving the piezoelectric element 24a within the maximum power available to the digital camera 10, selects an excitation condition that can be driven at or below this maximum power, and controls the piezoelectric element drive circuit 160 accordingly. As a result, even if the digital camera 10 has a power limitation, the vibration operation can be performed under appropriate excitation conditions.
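A minimal sketch of this selection, assuming each excitation condition is described by its drive power and amplitude (the field names and the largest-amplitude preference are hypothetical; the specification does not define a data structure or a tie-breaking rule):

```python
def select_excitation_condition(conditions, camera_max_power, lens_power, body_power):
    """Pick an excitation condition whose drive power fits within the power
    budget left after the lens unit and camera body requirements are subtracted
    from the camera's maximum available power. Among the feasible conditions,
    the one with the largest amplitude is preferred here (an assumption).
    Returns None when no condition fits the budget."""
    budget = camera_max_power - lens_power - body_power
    feasible = [c for c in conditions if c['power'] <= budget]
    return max(feasible, key=lambda c: c['amplitude']) if feasible else None
```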
 The excitation timing may also be synchronized with the drive signal that drives the CCD 22. Because a pulse voltage with a large amplitude is used to drive the piezoelectric element 24a, coupling noise, induced noise, power supply noise, and the like may affect the signals read out from the CCD 22. By performing drive control of the piezoelectric element 24a in synchronization with the drive signal of the CCD 22, superposition of noise on the CCD 22, and the resulting increase in phase difference detection error, can be prevented.
 More specifically, the signal of each pixel of the CCD 22 is obtained through charge accumulation (exposure), charge readout, and transfer of the read charge; if the piezoelectric element 24a is driven and vibrated during charge readout or transfer, the above noise is likely to be superimposed. Therefore, the influence of noise is suppressed by driving the piezoelectric element 24a to vibrate the cover glass 24 during exposure, and by not driving the piezoelectric element 24a during readout or transfer. For example, the camera CPU 90 may perform control such that excitation starts at the exposure start timing in synchronization with the drive signal, and ends by the charge readout start timing.
 When the image sensor is a CMOS sensor rather than the CCD 22, it is controlled by the well-known rolling shutter method, so the exposure and readout timings of the individual pixels are not the same even within one frame. Therefore, during AF control, the vibration may be applied at the timing when the phase difference detection pixels corresponding to the currently selected ranging point are in their exposure period.
 In this embodiment, an example has been described in which the correlation value becomes smaller as the correlation becomes higher. Alternatively, an evaluation value that becomes larger as the correlation becomes higher (for example, the reciprocal of the above correlation value) may be used, and the correlation may be determined to be high when the evaluation value exceeds a threshold.
[Second Embodiment]
 Next, a second embodiment will be described. Parts identical to those of the first embodiment are given the same reference numerals, and their detailed description is omitted.
 FIG. 14 is a flowchart showing the flow of the in-focus position calculation processing according to this embodiment. In FIG. 14, steps that perform the same processing as in FIG. 11 are given the same reference numerals. In the in-focus position calculation processing shown in FIG. 14, after a negative determination is made in step 512, the frame data deletion processing of step 513 is performed to delete the data of deletable frames, and the inter-frame correlation calculation of step 514 is performed on the remaining frames.
 FIG. 15 is a flowchart showing the flow of the frame data deletion processing.
 In step 600, for each phase difference detection pixel, the difference of its detection signal between the preceding and current frames is calculated. Here, the preceding and current frames refer to the frame obtained in step 508 and the frame obtained immediately before it.
 In step 602, it is determined whether the difference for each pixel of the x pixel group is equal to or less than a threshold. Here, if every per-pixel difference of the x pixel group is equal to or less than the threshold, an affirmative determination is made; if even one difference exceeds the threshold, a negative determination is made. If an affirmative determination is made in step 602, the process proceeds to step 604, where the immediately preceding frame data of the x pixel group is deleted. If a negative determination is made in step 602, step 604 is skipped.
 In step 606, it is determined whether the difference for each pixel of the y pixel group is equal to or less than a threshold. Here, if every per-pixel difference of the y pixel group is equal to or less than the threshold, an affirmative determination is made; if even one difference exceeds the threshold, a negative determination is made. If an affirmative determination is made in step 606, the process proceeds to step 608, where the immediately preceding frame data of the y pixel group is deleted. If a negative determination is made in step 606, step 608 is skipped.
 In the above, an example in which each individual difference is compared with the threshold has been described, but the determination may instead be made by comparing the sum of the differences with a threshold. Also, in steps 602 and 606 above, a negative determination is made if even one difference exceeds the threshold; alternatively, a negative determination may be made only when the number of differences exceeding the threshold is equal to or greater than a predetermined number.
 Also, in the above, the immediately preceding frame data is deleted in steps 604 and 608, but this is not limiting; the latest frame data may be deleted instead.
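A sketch of the deletion test of steps 600 to 608, assuming each frame is held as a dict of per-group signal lists (a hypothetical layout; a deleted group is marked `None` here rather than physically freed):

```python
def deletable(prev_signals, curr_signals, threshold):
    """Steps 602/606: a pixel group's preceding frame data is deletable when
    every per-pixel difference between the preceding and current frames is
    within the threshold. (The text also allows comparing the summed
    difference, or tolerating a predetermined number of outliers.)"""
    return all(abs(p - c) <= threshold for p, c in zip(prev_signals, curr_signals))

def prune_frames(frames, threshold):
    """Drop the preceding frame's x (or y) group when it is nearly identical
    to the latest frame's, so the inter-frame correlation calculation has
    fewer combinations to evaluate. Each frame is {'x': [...], 'y': [...]}."""
    if len(frames) < 2:
        return frames
    prev, curr = frames[-2], frames[-1]
    for group in ('x', 'y'):
        if prev[group] is not None and deletable(prev[group], curr[group], threshold):
            prev[group] = None  # steps 604/608: delete the older, redundant data
    return frames
```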
 After the frame data deletion processing ends, the process proceeds to step 514 in FIG. 14.
 In step 514, the inter-frame correlation calculation is performed as in the first embodiment, but frames whose differences are small have already been deleted by the frame data deletion processing. Accordingly, the number of combinations for which the correlation calculation must be performed is reduced, so the time required for the correlation calculation can be shortened, and the storage capacity required to hold the frame data can also be reduced.
 This effect will be described with reference to FIGS. 16 and 17.
 FIG. 16 is an explanatory diagram illustrating a specific example of the correlation calculations. FIG. 17A is a plot of the x-pixel and y-pixel detection signals of the first frame, FIG. 17B is a plot of the x-pixel and y-pixel detection signals of the second frame, FIG. 17C is a plot of the x-pixel and y-pixel detection signals of the third frame, and FIG. 17D is a plot of the detection signals of the x pixel group of the third frame and the y pixel group of the first frame.
 First, as shown in FIG. 16(1), the intra-frame correlation calculation of the first frame is performed. If it is determined from the result that the correlation between the detection signals of the x pixel group and the y pixel group is low (see also FIG. 17A), the second frame is acquired after a vibration operation, and, as shown in FIG. 16(2), the intra-frame correlation calculation of the second frame is performed. If it is determined from this result that the correlation between the detection signals of the x pixel group and the y pixel group is low (see also FIG. 17B), the inter-frame correlation calculation is performed, preceded by the frame data deletion processing. In this example, the difference between the detection signals of the x pixel groups of the first and second frames is equal to or less than the threshold, so the frame data of the x pixel group of the first frame is deleted. Accordingly, as shown in FIG. 16(3), the inter-frame correlation calculation only needs to be performed between the detection signals of the x pixel group of the second frame and the y pixel group of the first frame. If this correlation calculation also determines that the correlation is low, a vibration operation is performed again, the third frame is acquired, and, as shown in FIG. 16(4), the intra-frame correlation calculation of the third frame is performed. If it is determined from this result that the correlation between the detection signals of the x pixel group and the y pixel group is low (see also FIG. 17C), the inter-frame correlation calculation is performed, again preceded by the frame data deletion processing. In this example, the difference between the detection signals of the y pixel groups of the second and third frames is equal to or less than the threshold, so the frame data of the y pixel group of the second frame is deleted. Accordingly, the inter-frame correlation calculations that need to be performed are the correlation calculation between the detection signals of the x pixel group of the second frame and the y pixel group of the third frame (FIG. 16(5)), and that between the x pixel group of the third frame and the y pixel group of the first frame (FIG. 16(6)).
 Here, if the correlation between the detection signals of the x pixel group of the second frame and the y pixel group of the third frame is low, and the correlation between the detection signals of the x pixel group of the third frame and the y pixel group of the first frame is high (see also FIG. 17D), the in-focus position is calculated using the correlation calculation result between the x pixel group of the third frame and the y pixel group of the first frame.
 As described above, by performing the frame data deletion processing before the inter-frame correlation processing, the number of combinations for which the correlation calculation must be performed is reduced; the time required for the correlation calculation can thus be shortened, and the storage capacity required to hold the frame data can also be reduced.
 In this embodiment, an example has been described in which the frame data deletion processing of step 513 is performed before the inter-frame correlation calculation of step 514, but this is not limiting; for example, the frame data deletion processing may be performed after the readout of step 508 and before the intra-frame correlation calculation of step 510. In this case, when the difference is small for both the x pixel group and the y pixel group and the immediately preceding frame data of both groups has been deleted, the intra-frame correlation calculation of step 510 is likely to produce the same result as before, so the intra-frame correlation calculation processing of step 510 may be skipped.
[Third Embodiment]
 Next, a third embodiment will be described. Parts identical to those of the first embodiment are given the same reference numerals, and their detailed description is omitted.
 In this embodiment, the piezoelectric element drive circuit 160 is configured to be able to change the excitation conditions of the piezoelectric element 24a. As described above, the excitation conditions include the excitation frequency, the step width when changing the excitation frequency, the excitation time, the excitation timing, the excitation amplitude, and so on; below, the excitation frequency is taken as an example of an excitation condition. The digital camera 10 according to this embodiment is provided with a plurality of vibration modes that generate standing-wave bending vibrations of mutually different excitation frequencies.
 FIG. 18 is a flowchart showing the flow of the in-focus position calculation processing according to this embodiment. In FIG. 18, steps that perform the same processing as in FIG. 11 are given the same reference numerals. In the in-focus position calculation processing shown in FIG. 18, after a negative determination is made in step 516, in step 517 the setting of the piezoelectric element drive circuit 160 is changed so that the vibration mode used in the next execution of step 506 differs from the vibration mode of the previously executed excitation. Subsequently, in step 506, a control signal is output to the piezoelectric element drive circuit 160 so that the vibration operation is executed in the changed vibration mode. On receiving the control signal, the piezoelectric element drive circuit 160 generates a pulse voltage corresponding to the changed vibration mode and supplies it to the piezoelectric element 24a. Changing the vibration mode changes the standing wave that is generated, and therefore changes the adhesion state of foreign objects.
 FIG. 19 is a graph showing an example of the displacement in the optical axis direction at each position in the width direction (that is, the direction intersecting the optical axis) of the optical member (the cover glass 24 in this embodiment) when a standing-wave bending vibration is generated in the sixth-order vibration mode, and an example of the same displacement when a standing-wave bending vibration is generated in the seventh-order vibration mode. Dust adhering to an antinode of the vibration is either removed by the applied acceleration or moves toward a node. As is clear from FIG. 19, the antinodes and nodes occur at different positions in the sixth and seventh orders. Therefore, by changing the vibration mode when applying vibration, the adhesion state of a foreign object can be changed: the object is removed or moved. Compared with continuing to apply the same vibration mode, the adhesion state of foreign objects changes more readily, so a run of low-correlation calculation results can be suppressed, and a high-correlation calculation result can be obtained sooner.
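As an idealized illustration (a simply supported beam model assumed here for clarity, not taken from the specification; the actual mode shapes of the cover glass depend on its mounting and boundary conditions), the n-th order standing-wave bending mode of a member of width $L$ can be written as:

```latex
% n-th order standing-wave bending mode of a member of width L (idealized)
u_n(x, t) = A_n \sin\!\left(\frac{n\pi x}{L}\right)\cos(\omega_n t)
% nodes at     x = kL/n,            k = 0, 1, \dots, n
% antinodes at x = (2k+1)L/(2n),    k = 0, 1, \dots, n-1
```

For $n = 6$ and $n = 7$, the node sets $\{kL/6\}$ and $\{kL/7\}$ share only the endpoints, so dust that settles at a node of one mode lies away from the nodes of the other mode; this is why alternating between the two modes changes the adhesion state more readily than repeating a single mode.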
 Although the sixth and seventh orders are given here as examples of vibration mode orders, the orders of the vibration modes are not limited to these.
 Furthermore, in this embodiment as well, the frame data deletion processing described in the second embodiment may be performed before step 514.
 The change of the vibration mode setting for the piezoelectric element drive circuit 160 may also be performed during charge accumulation (exposure) of the CCD 22. For example, the camera CPU 90 may perform control such that the vibration mode setting is changed at the exposure start timing, excitation is then started, and the excitation ends by the charge readout start timing.
[Fourth Embodiment]
 In the first to third embodiments, the case where the focus lens is driven after the in-focus position calculation processing ends and the in-focus position is finalized has been described, but this is not limiting. For example, a provisional in-focus position may be calculated by the first correlation calculation after AF control starts, and driving of the focus lens may be started before the in-focus position is finalized.
 Since the configuration of the digital camera 10 of this embodiment is the same as that of the first embodiment, its description is omitted. In this embodiment, the in-focus position calculation processing routine shown in FIG. 20 is started simultaneously with the start of AF control, and the AF control shown in FIG. 10 described above is not performed.
 Steps 600 and 602 perform the same processing as steps 500 and 502 described with reference to FIG. 11.
 In step 604, an in-focus position is provisionally calculated based on the intra-frame correlation calculation result of step 602. The in-focus position calculated here is called the provisional in-focus position. When the provisional in-focus position is calculated in step 604, the focus lens drive processing routine shown in FIG. 21 is started. Thereafter, the focus lens drive processing routine is executed in parallel with the in-focus position calculation processing routine. Details of the focus lens drive processing routine will be described later.
 In FIG. 20, the processing of steps 606 to 620 is the same as that of steps 504 to 518 described with reference to FIG. 18, so its description is omitted. In this embodiment as well, a frame data deletion process may be inserted before step 616, and when vibration is applied under a fixed excitation condition, the processing of step 619 may be omitted.
 In step 622, the in-focus position calculated from the correlation calculation result judged in step 620 to have high correlation is compared with the provisional in-focus position calculated in step 604, and it is determined whether the difference is equal to or greater than a predetermined threshold.
 If the determination in step 622 is affirmative, the in-focus position is updated in step 624 from the provisional in-focus position to the latest in-focus position calculated in step 620, and this processing routine ends.
 Meanwhile, the focus lens drive processing routine started after step 604 performs the following processing.
 In step 700, focus drive conditions such as the moving direction and moving amount of the focus lens 82 are determined based on the provisional in-focus position.
 In step 702, the focus lens 82 is driven and moved according to the focus drive conditions.
 In step 704, it is determined whether the in-focus position has been updated. If it has, the process returns to step 700, where the focus drive conditions are determined based on the updated in-focus position, and the focus lens 82 is driven in step 702. If it is determined in step 704 that the in-focus position has not been updated, it is determined in step 706 whether driving of the focus lens under the current focus drive conditions has completed (i.e., focus has been achieved). If the determination in step 706 is negative, the process returns to step 702 and driving of the focus lens 82 continues; if affirmative, focus driving is stopped in step 708 and AF control ends.
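The drive routine of steps 700 to 708 can be sketched as a loop that keeps driving toward the current target and restarts whenever the in-focus position is updated. The following is a minimal single-threaded sketch; the `Lens` stand-in, the function names, and the fixed step size are illustrative assumptions, not the patent's implementation:

```python
class Lens:
    """Minimal stand-in for the focus lens 82 (illustrative)."""
    def __init__(self, position=0.0):
        self.position = position


def drive_focus_lens(lens, get_target, step=1.0):
    """Drive `lens` toward the target returned by get_target().

    get_target() models steps 700/704: it returns the latest in-focus
    position (the provisional one at first, an updated one later).
    """
    target = get_target()                 # step 700: decide drive conditions
    while True:
        updated = get_target()
        if updated != target:             # step 704: in-focus position updated
            target = updated              # back to step 700 with the new target
        if abs(lens.position - target) <= step:
            lens.position = target        # step 706: drive finished (in focus)
            break                         # step 708: stop driving, AF ends
        # step 702: move one increment toward the target
        lens.position += step if target > lens.position else -step
    return lens.position
```

A target that changes mid-drive, as when step 624 updates the provisional position, simply redirects the loop without stopping it.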
 Even when the provisional in-focus position differs from the finally determined in-focus position, it rarely differs greatly. Therefore, by coarsely adjusting the focus lens position to the provisional in-focus position and then finely adjusting it to the updated in-focus position, AF control takes less time than when the focus lens 82 is not driven until the in-focus position is finally determined.
 As described above, the provisional in-focus position is calculated based on the first correlation calculation result and focus lens driving is started; thereafter, when the in-focus position calculated from a result judged to have high correlation, as described in the first to third embodiments, differs from the provisional in-focus position, the in-focus position is updated and the lens is driven to match it. Driving of the focus lens 82, which accounts for a large share of the time taken by AF control, can therefore be started early, preventing the time required for focusing from being prolonged.
 In this embodiment, the second and subsequent frames are acquired after the first frame has been acquired and focus lens driving has started. Therefore, for example, when the frame rate is low in a low-luminance environment, or when a high-speed actuator such as a voice coil motor or ultrasonic motor is used as the focus lens actuator (so that the focus lens 82 moves quickly), the correlation calculation results for the second and subsequent frames may be affected by the lens position change caused by lens driving. To raise the accuracy of the inter-frame correlation calculation, the detection signals may therefore be corrected by the amount of movement of the focus lens 82.
 For example, the position of the focus lens 82 at the time each frame is acquired is stored, the data of past frames is corrected based on the amount of lens movement between frames, and the correlation calculation and in-focus position calculation are then performed.
 More specifically, after the intra-frame correlation calculation for the second frame and before the inter-frame correlation calculation, a correction amount for the signal variation caused by lens movement is calculated from the amount of lens movement due to lens driving between frames, and past frame data is corrected with that correction amount.
 As shown in FIG. 22, when the correlation calculation is performed using the x-pixel-group data of the first frame (see also FIG. 22A) and the y-pixel-group data of the second frame (see also FIG. 22B), the amount of focus lens movement is obtained from the difference between the focus lens position when the first frame was acquired and the focus lens position when the second frame was acquired. The variation in the x-pixel detection signals caused by the focus lens moving by that amount is obtained as a correction amount, the x-pixel-group data of the first frame is corrected with that correction amount to obtain corrected data (x′) (see also FIG. 22C), and the correlation calculation is performed using the corrected data x′ and the y-pixel-group data of the second frame.
 As a result, even when the inter-frame correlation calculation is performed from the second frame onward after driving of the focus lens 82 has started based on the provisional in-focus position calculated from the first phase difference detection result, the variation due to the lens position change can be corrected and the in-focus position can be calculated with high accuracy.
 Although the case of correcting previously acquired frame data rather than the latest frame data in the inter-frame correlation calculation has been described here as an example, the invention is not limited to this; the latest frame data may be corrected, or both the latest and past frame data may be corrected.
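The correction of FIG. 22 can be sketched as follows: derive a correction amount from the lens movement between the two frames, apply it to the first frame's x-pixel data, and then correlate the corrected data x′ with the second frame's y-pixel data. The uniform-gain model of the movement-induced signal change and all names below are assumptions for illustration; the patent only specifies that a correction amount is derived from the amount of lens movement:

```python
def correct_past_frame(x_data, lens_pos_frame1, lens_pos_frame2, gain_per_unit=0.01):
    """Correct the first frame's x-pixel data for lens movement between frames.

    Models the movement-induced signal change as a uniform gain
    proportional to the lens movement (an illustrative assumption).
    """
    movement = lens_pos_frame2 - lens_pos_frame1
    correction = 1.0 + gain_per_unit * movement
    return [v * correction for v in x_data]


def correlation(a, b):
    """Sum of absolute differences: smaller means higher correlation."""
    return sum(abs(x - y) for x, y in zip(a, b))


# Inter-frame correlation using corrected first-frame data (x') and the
# second frame's y-pixel data, as in FIG. 22C (values are illustrative).
x_frame1 = [100.0, 120.0, 110.0]
y_frame2 = [101.0, 121.2, 111.1]
x_corrected = correct_past_frame(x_frame1, lens_pos_frame1=0.0, lens_pos_frame2=1.0)
score = correlation(x_corrected, y_frame2)
```

With the correction applied, the residual mismatch between x′ and y reflects only the phase difference, not the lens movement.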
 In this embodiment, because the drive processing of the focus lens 82 is performed in parallel with the in-focus position calculation processing, the drive period of the focus lens 82 and the excitation period of the piezoelectric element 24a may overlap unless the timing of the excitation operation and the drive timing of the focus lens are adjusted. On the other hand, since the power available to the digital camera 10 is limited, power consumption must be kept from exceeding the maximum supply power. Therefore, power information indicating the maximum power required to drive the lens unit 16 may be read from the ROM 89 of the lens unit 16 before AF control; then, taking into account the power required to drive the lens unit 16 indicated by this power information and the power required to drive the camera body 12, the maximum power available for driving the piezoelectric element 24a within the maximum power usable by the digital camera 10 is obtained, an excitation condition executable with power at or below this maximum is selected, and the piezoelectric element drive circuit 160 is controlled accordingly. In this way, even when the digital camera 10 is subject to a power limit, an excitation condition drivable within the maximum power can be selected and executed, so that lens driving and the excitation operation proceed in parallel, achieving both high-precision and high-speed AF.
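The selection described above can be sketched as: compute the power headroom remaining after the lens unit's budget (from its ROM power information) and the camera body's budget, then pick the most effective excitation condition that fits. The tuple representation, names, and numbers below are illustrative assumptions:

```python
def select_excitation_condition(conditions, max_camera_power,
                                lens_unit_power, body_power):
    """Pick the most effective excitation condition within the power budget.

    `conditions` is a list of (power_required, effectiveness) tuples.
    The headroom is what remains of the camera's maximum supply power
    after the lens unit and the camera body are accounted for.
    """
    headroom = max_camera_power - lens_unit_power - body_power
    feasible = [c for c in conditions if c[0] <= headroom]
    if not feasible:
        return None  # no excitation possible in parallel with lens drive
    return max(feasible, key=lambda c: c[1])
```

Returning `None` models the case where excitation must instead be deferred rather than run in parallel.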
 In an interchangeable-lens imaging apparatus, the drive timing of the piezoelectric element 24a may also be determined with reference to the power information of the lens unit 16 attached to the camera body 12.
 For example, when a DC motor is used as the AF motor 86 that drives the focus lens 82, the starting current after focus lens driving begins is large and then falls to a power level corresponding to the drive speed. The amount of power available for driving the piezoelectric element 24a also varies with the drive timing of the aperture mechanism 80. Therefore, within a series of AF sequences, the available power is calculated according to the drive states of the focus lens and the aperture mechanism 80, and based on the calculated power and the power required to drive the piezoelectric element 24a, the piezoelectric element 24a is driven at a timing at which the power usable by the digital camera 10 is not exceeded. This makes it possible, for example, to drive the piezoelectric element 24a while avoiding the timing of the maximum power point during lens driving, speeding up AF and allowing the device to be made smaller.
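Scheduling the piezoelectric drive around the lens-drive power peak can be sketched as scanning the AF sequence's power profile for the first instant with enough headroom. The discrete time grid, profile values, and names below are illustrative assumptions:

```python
def first_feasible_timing(power_profile, max_power, piezo_power):
    """Return the first time index at which driving the piezoelectric
    element would not push total consumption over max_power.

    power_profile[t] is the power already drawn at time t by the focus
    lens drive (large starting current first) and the aperture mechanism.
    """
    for t, used in enumerate(power_profile):
        if used + piezo_power <= max_power:
            return t
    return None


# DC-motor-like profile: large starting current, then settling.
profile = [9.0, 7.5, 5.0, 4.0, 4.0]
t = first_feasible_timing(profile, max_power=10.0, piezo_power=3.0)  # t == 2
```

The scan skips the startup peak (indices 0 and 1 here) and drives the piezoelectric element once the motor current has settled.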
 In the first to fourth embodiments, the example in which the piezoelectric element 24a is provided on the cover glass 24 and vibration is applied to the cover glass has been described; however, the optical member on which the piezoelectric element 24a is provided may be any optical member disposed on the light-receiving-surface side of the imaging element and is not limited to the cover glass. For example, when an optical low-pass filter (optical LPF) is provided in place of the cover glass 24, or when an optical LPF is provided further forward (toward the lens group 300) than the cover glass 24, the piezoelectric element may be provided on the optical LPF and vibration applied to the optical LPF.
 The present invention is not limited to the embodiments described above and can of course also be applied to designs modified within the scope of the claims. It is also possible to execute processing that combines the above embodiments.
 The disclosure of Japanese Patent Application No. 2011-108037 is incorporated herein by reference in its entirety. All documents, patent applications, and technical standards mentioned in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, or technical standard were specifically and individually indicated to be incorporated by reference.
DESCRIPTION OF SYMBOLS
10 Digital camera
14 Lens mount
16 Lens unit
18 Shutter button
24 Cover glass
80 Aperture mechanism
88 Lens CPU
90 Camera CPU
93 Power switch
95 Release detection switch
96 Operation unit
160 Piezoelectric element drive circuit
300 Lens group

Claims (9)

  1.  An imaging device comprising:
     an imaging element comprising a plurality of phase difference detection pixel pairs, each pair consisting of a first phase difference detection pixel on which a light beam that has passed through one side of the principal axis of a photographing lens is incident and a second phase difference detection pixel on which a light beam that has passed through the other side of the principal axis of the photographing lens is incident, and further comprising a plurality of imaging pixels;
     an optical member provided on the light-receiving-surface side of the imaging element;
     excitation means for applying vibration to the optical member;
     first correlation calculation means for performing a correlation calculation by combining signals obtained from the phase difference detection pixel pairs within a same imaging frame in which a subject is imaged by the imaging element;
     second correlation calculation means for alternately performing imaging by the imaging element and excitation by the excitation means, and performing a correlation calculation by combining signals obtained from the phase difference detection pixel pairs across different imaging frames captured by the imaging element;
     determination means for determining, based on the correlation values obtained in the respective correlation calculations of the first correlation calculation means and the second correlation calculation means, whether the correlation of the signals combined in each correlation calculation is high;
     in-focus position calculation means for calculating an in-focus position from the result of the correlation calculation of a combination determined by the determination means to have high correlation; and
     drive control means for controlling drive means that drives the photographing lens so that the photographing lens is moved to the in-focus position calculated by the in-focus position calculation means.
  2.  The imaging device according to claim 1, wherein, when the difference between signals obtained from the first phase difference detection pixel in two different imaging frames is equal to or less than a predetermined threshold, the second correlation calculation means uses the signal obtained from the first phase difference detection pixel in either one of the two different imaging frames for the correlation calculation, and when the difference between signals obtained from the second phase difference detection pixel in two different imaging frames is equal to or less than the threshold, the second correlation calculation means uses the signal obtained from the second phase difference detection pixel in either one of the two different imaging frames for the correlation calculation.
  3.  The imaging device according to claim 1 or 2, further comprising first excitation control means for controlling the excitation means so that, when excitation by the excitation means is performed each time an imaging frame is acquired by the imaging element, vibration is applied under a different excitation condition for each imaging frame.
  4.  The imaging device according to any one of claims 1 to 3, further comprising provisional in-focus position calculation means for calculating a provisional in-focus position from the result of the correlation calculation performed by the first correlation calculation means by combining the signals of the phase difference detection pixel pairs in the imaging frame first acquired by the imaging element,
     wherein the drive control means starts driving the drive means so that movement of the photographing lens to the provisional in-focus position begins when the provisional in-focus position is calculated, and thereafter, when the in-focus position is calculated, controls the drive means so that the photographing lens is moved to the in-focus position.
  5.  The imaging device according to claim 4, further comprising correction means for correcting, based on each position of the photographing lens when each imaging frame was acquired, variation in the signal of each phase difference detection pixel caused by movement of the photographing lens,
     wherein the second correlation calculation means performs the correlation calculation using the signals corrected by the correction means.
  6.  The imaging device according to claim 4 or 5, further comprising:
     selection means for selecting, from among the excitation conditions of the excitation means, an excitation condition executable within a range in which the power consumption of the imaging device does not exceed the maximum power usable by the imaging device, based on the power required to drive the photographing lens and the maximum power usable by the imaging device; and
     second excitation control means for controlling the excitation means so that vibration is applied under the excitation condition selected by the selection means.
  7.  The imaging device according to any one of claims 4 to 6, further comprising third excitation control means for controlling the timing of excitation by the excitation means so that the power consumption of the imaging device does not exceed the maximum power usable by the imaging device, based on the power required to drive the photographing lens and the maximum power usable by the imaging device.
  8.  The imaging device according to any one of claims 1 to 6, further comprising fourth excitation control means for controlling the excitation means so that excitation of the optical member is performed in synchronization with a drive signal that drives the imaging element.
  9.  A focus control method for an imaging device comprising: an imaging element comprising a plurality of phase difference detection pixel pairs, each pair consisting of a first phase difference detection pixel on which a light beam that has passed through one side of the principal axis of a photographing lens is incident and a second phase difference detection pixel on which a light beam that has passed through the other side of the principal axis of the photographing lens is incident, and further comprising a plurality of imaging pixels; an optical member provided on the light-receiving-surface side of the imaging element; and excitation means for applying vibration to the optical member, the method comprising:
     a first correlation calculation step of performing a correlation calculation by combining signals obtained from the phase difference detection pixel pairs within a same imaging frame in which a subject is imaged by the imaging element;
     a second correlation calculation step of alternately performing imaging by the imaging element and excitation by the excitation means, and performing a correlation calculation by combining signals obtained from the phase difference detection pixel pairs across different imaging frames captured by the imaging element;
     a determination step of determining, based on the correlation values obtained in the respective correlation calculations of the first correlation calculation step and the second correlation calculation step, whether the correlation of the signals combined in each correlation calculation is high;
     an in-focus position calculation step of calculating an in-focus position from the result of the correlation calculation of a combination determined in the determination step to have high correlation; and
     a drive control step of controlling drive means that drives the photographing lens so that the photographing lens is moved to the in-focus position calculated in the in-focus position calculation step.
PCT/JP2012/060862 2011-05-13 2012-04-23 Image capture device and focus control method WO2012157407A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011108037 2011-05-13
JP2011-108037 2011-05-13

Publications (1)

Publication Number Publication Date
WO2012157407A1 true WO2012157407A1 (en) 2012-11-22

Family

ID=47176743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/060862 WO2012157407A1 (en) 2011-05-13 2012-04-23 Image capture device and focus control method

Country Status (1)

Country Link
WO (1) WO2012157407A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811623A (en) * 2015-04-30 2015-07-29 华为技术有限公司 Interference-reducing photographing device and method thereof
JP2020519433A (en) * 2017-05-12 2020-07-02 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Ultrasonic self-cleaning system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007047323A (en) * 2005-08-08 2007-02-22 Canon Inc Optical equipment and control method therefor
JP2010074480A (en) * 2008-09-18 2010-04-02 Nikon Corp Image pickup device and camera
JP2010103705A (en) * 2008-10-22 2010-05-06 Canon Inc Imaging apparatus and control method and program thereof
JP2010118962A (en) * 2008-11-13 2010-05-27 Canon Inc Imaging apparatus, control method thereof and program




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12786334

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12786334

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP