WO2017077775A1 - Solid-state imaging device, imaging apparatus, and electronic apparatus - Google Patents
Solid-state imaging device, imaging apparatus, and electronic apparatus
- Publication number: WO2017077775A1
- Application number: PCT/JP2016/077665 (JP2016077665W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- exposure
- unit
- image
- pixel value
Classifications
- H01L27/146 — Imager structures (devices consisting of a plurality of semiconductor components sensitive to radiation, formed in or on a common substrate)
- H04N23/6811 — Motion detection based on the image signal
- H04N23/689 — Motion occurring during a rolling shutter mode
- H04N23/73 — Circuitry for compensating brightness variation in the scene by influencing the exposure time
- H04N23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
- H04N25/53 — Control of the integration time
- H04N25/531 — Control of the integration time by controlling rolling shutters in CMOS SSIS
- H04N25/58 — Control of the dynamic range involving two or more exposures
- H04N25/581 — Control of the dynamic range involving two or more exposures acquired simultaneously
- H04N25/583 — Control of the dynamic range involving two or more exposures acquired simultaneously with different integration times
- H04N25/587 — Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields
- H04N25/589 — Control of the dynamic range involving two or more exposures acquired sequentially with different integration times, e.g. short and long exposures
- H04N25/75 — Circuitry for providing, modifying or processing image signals from the pixel array
- H04N25/78 — Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
- G03B7/093 — Digital circuits for control of exposure time
Definitions
- the present disclosure relates to a solid-state imaging device, an imaging device, and an electronic device.
- As solid-state image sensors, for example, CCD (Charge Coupled Device) image sensors and CMOS (Complementary Metal Oxide Semiconductor) image sensors are known.
- A solid-state image sensor performs photoelectric conversion, accumulating electric charge according to the amount of incident light and outputting an electrical signal corresponding to the accumulated charge.
- Because the charge accumulation amount has an upper limit level, the accumulated charge reaches the saturation level when the amount of light received within a given exposure time exceeds that upper limit. Gradations above the saturation level can then no longer be expressed, resulting in a so-called whiteout state.
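As a non-limiting illustration (not part of the disclosure), the whiteout behavior described above can be modeled as charge accumulation clipped at a full-well capacity; the capacity value and units here are assumptions:

```python
# Illustrative model: a pixel accumulates charge in proportion to light
# intensity and exposure time, clipped at a hypothetical saturation level.
FULL_WELL = 10000  # assumed full-well capacity in electrons

def accumulated_charge(intensity_e_per_ms: float, exposure_ms: float) -> float:
    """Charge accumulated during the exposure, clipped at the saturation level."""
    return min(intensity_e_per_ms * exposure_ms, FULL_WELL)

# Two scene intensities that differ by 2x become indistinguishable ("whiteout")
# once both exceed the full-well capacity within the exposure time.
a = accumulated_charge(1500, 10)  # 15000 electrons -> clipped to 10000
b = accumulated_charge(3000, 10)  # 30000 electrons -> clipped to 10000
assert a == b == FULL_WELL
```

Shortening the exposure avoids the clipping but sacrifices gradation in dark regions, which is the trade-off the multiple-exposure scheme below addresses.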
- An object of the present disclosure is to provide a solid-state imaging device capable of imaging a wide-dynamic-range scene without reducing spatial resolution, an imaging apparatus having the solid-state imaging device, and an electronic apparatus having the imaging apparatus.
- A solid-state imaging device of the present disclosure includes:
- a pixel array unit in which unit pixels including photoelectric conversion elements are arranged in a matrix, the unit pixels being grouped into a plurality of pixel groups; and
- a timing control unit that independently sets an exposure start timing and an exposure end timing for each of the plurality of pixel groups so that at least one of the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period.
- An image pickup apparatus for achieving the above object is an image pickup apparatus having a solid-state image pickup device having the above-described configuration.
- an electronic device for achieving the above object is an electronic device including the solid-state imaging device having the above-described configuration.
- With the solid-state imaging device, imaging apparatus, or electronic apparatus of the above configuration, when a wide-dynamic-range scene is imaged by performing multiple exposures within one vertical synchronization period, the exposure time ratios between the exposure images produced by the multiple exposures can be set close to one another.
- According to the present disclosure, because the exposure time ratios between the exposure images can be kept close by performing multiple exposures within one vertical synchronization period, a wide-dynamic-range scene can be imaged without reducing spatial resolution.
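As a hedged sketch of why close exposure time ratios matter (function names, the 10-bit saturation code, and the simple selection rule are illustrative assumptions, not the patent's exact method), a long/short exposure pair is typically merged by scaling the short exposure up by the time ratio wherever the long exposure saturates:

```python
# Hypothetical merge of two exposure images captured within one vertical
# synchronization period. A smaller t_long/t_short ratio means less noise
# amplification when the short exposure is scaled up.
SAT = 1023  # assumed 10-bit saturation code

def combine(long_px: int, short_px: int, t_long: float, t_short: float) -> float:
    """Return a radiance estimate normalized to the long exposure time."""
    ratio = t_long / t_short
    if long_px < SAT:          # long exposure not saturated: use it directly
        return float(long_px)
    return short_px * ratio    # otherwise scale up the short exposure

print(combine(1023, 300, 8.0, 2.0))  # saturated long pixel -> 300 * 4 = 1200.0
```

Keeping the ratio at, say, 4x rather than 64x limits how much the short-exposure noise is multiplied in the bright regions of the composite.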
- FIG. 1 is a system configuration diagram illustrating an outline of a configuration of a CMOS image sensor which is an example of the solid-state imaging device of the present disclosure.
- FIG. 2 is a circuit diagram illustrating an example of a circuit configuration of a unit pixel.
- FIG. 3A is a diagram illustrating a first example of pixel grouping, and
- FIG. 3B is a diagram illustrating a second example of pixel grouping.
- FIG. 4 is a view showing an example of an exposure sequence (exposure sequence 1) according to the present embodiment, which is executed under the control of the timing control unit.
- FIG. 5 is a diagram showing a reading sequence of pixel value data of the pixel group A in the present embodiment.
- FIG. 6 is a schematic diagram of a configuration of a CMOS image sensor in which a plurality of A / D converters are arranged for one pixel column and these A / D converters are used in parallel.
- FIG. 7 is a view showing a portion corresponding to one vertical synchronization period (one display frame) in the exposure sequence according to the present embodiment.
- FIG. 8 is a diagram in which two pixel columns in the pixel array unit, column signal lines and A / D converters corresponding thereto are extracted and drawn.
- FIG. 15A is a diagram showing the correspondence between unit pixels and A/D converters when applied to a CMOS image sensor having a stacked structure, and
- FIG. 15B is a diagram showing the electrical connection between each pixel in one pixel group and the corresponding A/D converter.
- FIG. 22 is a flowchart showing an example of the flow of processing in which the image processing unit according to the second embodiment determines the combination ratio of two images according to the moving-subject estimation result for each pixel position and combines the two images.
- FIG. 23 is a block diagram illustrating a configuration example of an image processing unit according to the third embodiment.
- FIG. 24 is a diagram showing an exposure sequence in the case of the fourth embodiment.
- FIG. 25 is a system configuration diagram illustrating an outline of the configuration of the electronic device of the present disclosure.
- 2-6. Example of exposure sequence
- 2-6-1. Example of arranging a plurality of A/D converters for one pixel column
- 2-7. Modified example
- 3. Imaging apparatus of the present disclosure
- 3-1. System configuration
- 3-2. Configuration of image processing unit
- 3-2-1.
- 4. Example of countermeasure against blackout due to flicker phenomenon
- The timing control unit can set the exposure start timing and the exposure end timing independently for each of the plurality of pixel groups so that, for at least two of the plurality of exposures, the exposure end timing of the unit pixels in the first readout row of an exposure image whose exposure starts later comes earlier than the exposure end timing of the unit pixels in the final readout row of the exposure image that was exposed first.
- The timing control unit can perform control to output, in a time-division format row by row, the image data of a plurality of exposure images whose exposure end periods overlap in time.
- A configuration can also be employed that includes an image processing unit that generates a composite image from the plurality of exposure images.
- For exposure images whose exposure order is not final within one vertical synchronization period, the pixel value data can be held in a storage unit row by row.
- A row sequence conversion unit can be provided that, in accordance with the row-by-row output timing of the pixel value data of the exposure image whose exposure order is final within one vertical synchronization period, reads the pixel value data of the same rows from the storage unit and outputs the data with the rows of the plurality of exposure images aligned.
- The image processing unit can include a first image synthesis unit that synthesizes and outputs the pixel value data of the plurality of exposure images output from the row sequence conversion unit with their rows aligned.
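The row sequence conversion idea above can be sketched as follows; the data structures, function names, and the averaging step are assumptions for illustration, not the patent's implementation:

```python
# Sketch: rows of earlier (non-final) exposure images are buffered in a
# storage unit, then replayed in step with the rows of the final exposure
# image so a synthesis stage sees aligned rows.
from collections import defaultdict

row_store = defaultdict(list)  # storage unit: row index -> earlier-exposure rows

def hold_row(row_idx, pixels):
    """Hold one row of a non-final exposure image in the storage unit."""
    row_store[row_idx].append(pixels)

def emit_aligned(row_idx, final_pixels):
    """On output of a final-exposure row, return all exposures of that row aligned."""
    return row_store[row_idx] + [final_pixels]

hold_row(0, [10, 20])                # first exposure, row 0
aligned = emit_aligned(0, [40, 80])  # final exposure, row 0
# A first image synthesis unit could now combine the aligned rows,
# e.g. (purely for illustration) by averaging each column:
combined = [sum(col) / len(col) for col in zip(*aligned)]
print(combined)  # [25.0, 50.0]
```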
- The image processing unit can hold, in the storage unit, the pixel value data of the exposure image whose exposure order is first within one vertical synchronization period. For an exposure image whose exposure order is other than first within one vertical synchronization period, a combined pixel value can be calculated for each pixel by referring to the pixel value data corresponding to its own pixel position and peripheral pixel positions already held in the storage unit, and held in the storage unit. Further, a second image synthesis unit can be provided that outputs the final synthesis result when synthesis of the pixel value data for the exposure image whose exposure order is final within one vertical synchronization period is completed.
- In the second image synthesis unit, the pixel value data of the exposure image whose exposure order is first within one vertical synchronization period can be held in the storage unit for each pixel group. For the subsequent exposure images, a combined pixel value can be calculated for each pixel by referring to the pixel value data corresponding to its own pixel position and peripheral pixel positions already held in the storage unit, and held in the storage unit.
- A configuration can also be employed that has an inter-pixel-group combining unit that calculates a combined value between the pixel groups by referring to the final combining result for each pixel group, obtained by combining in the second image synthesis unit the exposure images whose exposure order is final within one vertical synchronization period.
- Among the exposure images of all the pixel groups, the image processing unit can hold in the storage unit the pixel value data of the exposure image whose exposure order is first within one vertical synchronization period. For an exposure image whose exposure order is other than first within one vertical synchronization period, a combined pixel value can be calculated for each pixel by referring to the pixel value data corresponding to its own pixel position and peripheral pixel positions already held in the storage unit, and held in the storage unit. Further, a third image composition unit can be employed that, for the exposure image whose exposure order is final within one vertical synchronization period, outputs the final composition result when composition of the pixel value data is completed.
- The image processing unit can have a pixel value interpolation unit that interpolates the pixel values at all pixel positions based on the pixel values read out for each pixel group.
- Based on the output of the pixel value interpolation unit, the pixel value data of the exposure image whose exposure order is first within one vertical synchronization period, among the exposure images of all the pixel groups, can be held in the storage unit. For the subsequent exposure images, a combined pixel value can be calculated for each pixel by referring to the pixel value data corresponding to its own pixel position and peripheral pixel positions already held in the storage unit, and held in the storage unit.
- The image processing unit can include a moving subject determination unit that determines whether or not the subject is moving at each pixel position, and a composition ratio determination unit that determines the composition ratio used when calculating the combined pixel value based on the determination result of the moving subject determination unit.
- The moving subject determination unit can estimate whether or not the subject is moving at each pixel position by referring to the pixel value of the exposure image and the pixel value held in the storage unit.
- The image processing unit can have a composition ratio history storage unit that stores the history of the composition ratios determined by the composition ratio determination unit.
- The composition ratio determination unit can determine the composition ratio with reference to the composition ratio history held in the composition ratio history storage unit.
- The composition ratio determination unit can also determine the composition ratio with reference to the exposure time of each exposure image, or with reference to the pixel value level of each exposure image.
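One way such a moving-subject-aware composition ratio could work is sketched below; the difference threshold, the linear ramp, and the 50% cap are illustrative assumptions, not the patent's definition:

```python
# Hedged sketch: the weight given to the stored (previous-exposure) value
# is reduced when the new value differs strongly from it, since a large
# difference suggests a moving subject at that pixel position.
def composition_ratio(px_new: float, px_stored: float,
                      thresh: float = 32.0) -> float:
    """Weight of the stored value in the blend (0.0 for clearly moving pixels)."""
    diff = abs(px_new - px_stored)
    if diff >= thresh:
        return 0.0                      # moving: trust the new exposure only
    return 0.5 * (1.0 - diff / thresh)  # static: blend in up to 50% history

def blend(px_new: float, px_stored: float) -> float:
    w = composition_ratio(px_new, px_stored)
    return w * px_stored + (1.0 - w) * px_new

print(blend(100.0, 100.0))  # static pixel: 100.0
print(blend(100.0, 200.0))  # large difference: ratio 0, output 100.0
```

A composition ratio history, as described above, could additionally smooth `w` over time to avoid flickering decisions at pixels near the threshold.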
- A configuration can be employed that includes an A/D converter that digitizes the analog pixel signal output for each pixel column from each unit pixel of the pixel array unit.
- When a light-emitting object is imaged, the timing control unit can set the exposure start time and the exposure time length of each of the plurality of exposure images produced by the multiple exposures in consideration of the blinking cycle of the light-emitting object.
- Solid-state imaging device of the present disclosure: First, an outline of the configuration of the solid-state imaging device of the present disclosure will be described. Here, a CMOS image sensor will be described as an example of the solid-state imaging device of the present disclosure.
- FIG. 1 is a system configuration diagram illustrating an outline of a configuration of a CMOS image sensor according to the present disclosure.
- the CMOS image sensor 10 includes a pixel array unit 11, a peripheral driving system, and a signal processing system.
- a row scanning unit 12, a column processing unit 13, a column scanning unit 14, a horizontal output line 15, and a timing control unit 16 are provided as peripheral drive systems and signal processing systems.
- This peripheral drive system and signal processing system are integrated on the same semiconductor substrate (chip) 30 as the pixel array unit 11.
- Based on the vertical synchronization signal VD, the horizontal synchronization signal HD, the master clock MCK, and the like that are input from the outside, the timing control unit 16 generates clock signals, control signals, and the like that serve as references for the operation of the row scanning unit 12, the column processing unit 13, the column scanning unit 14, and so on.
- Clock signals, control signals, and the like generated by the timing control unit 16 are given as drive signals to the row scanning unit 12, the column processing unit 13, the column scanning unit 14, and the like.
- The pixel array unit 11 has a configuration in which unit pixels (hereinafter also simply referred to as "pixels") 20, each having a photoelectric conversion element that generates and accumulates photoelectric charge according to the amount of received light, are two-dimensionally arranged in a matrix along the row direction and the column direction.
- the row direction refers to the pixel arrangement direction in the pixel row
- the column direction refers to the pixel arrangement direction in the pixel column.
- For the pixel array of m rows and n columns, row control lines 31 (31_1 to 31_m) are wired along the row direction for each pixel row, and column signal lines 32 (32_1 to 32_n) are wired along the column direction for each pixel column.
- the row control line 31 transmits a control signal for performing control when a signal is read from the unit pixel 20.
- the row control line 31 is illustrated as one wiring, but the number is not limited to one.
- One end of each of the row control lines 31_1 to 31_m is connected to each output end corresponding to each row of the row scanning unit 12.
- the row scanning unit 12 includes a shift register, an address decoder, and the like, and drives each pixel 20 of the pixel array unit 11 at the same time or in units of rows.
- the row scanning unit 12 generally has two scanning systems, a reading scanning system and a sweeping scanning system.
- In order to read out signals from the unit pixels 20, the readout scanning system selectively scans the unit pixels 20 of the pixel array unit 11 in units of rows.
- a signal read from the unit pixel 20 is an analog signal.
- the sweep-out scanning system performs sweep-out scanning with respect to the readout row on which readout scanning is performed by the readout scanning system, preceding the readout scanning by a time corresponding to the shutter speed.
- a so-called electronic shutter operation is performed by sweeping (resetting) unnecessary charges by the sweep scanning system.
- the electronic shutter operation refers to an operation in which the photoelectric charge of the photoelectric conversion element is discarded and a new exposure is started (photocharge accumulation is started).
- the signal read out by the readout operation by the readout scanning system corresponds to the amount of light received after the immediately preceding readout operation or electronic shutter operation.
- a period from the signal readout timing by the immediately preceding readout operation or the sweep timing by the electronic shutter operation to the signal readout timing by the current readout operation is an exposure period of the photocharge in the unit pixel 20. In this exposure period, the signal read timing by the previous read operation or the sweep timing by the electronic shutter operation becomes the exposure start timing, and the signal read timing by the current read operation becomes the exposure end timing.
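The per-row timing relationship described above (exposure runs from the electronic-shutter sweep of a row to that row's readout, with rows offset by one line time) can be sketched as follows; the units and numeric values are assumptions for illustration:

```python
# Sketch of per-row exposure timing under a rolling (electronic) shutter.
def row_exposure_window(row: int, line_time_us: float,
                        exposure_us: float, readout_of_row0_us: float):
    """Return (shutter/sweep time, readout time) for a given row.

    The readout time is the exposure end timing; the sweep precedes it by
    the exposure length, and successive rows are offset by one line time.
    """
    t_read = readout_of_row0_us + row * line_time_us   # exposure end timing
    t_shutter = t_read - exposure_us                   # exposure start timing
    return t_shutter, t_read

start, end = row_exposure_window(row=2, line_time_us=10.0,
                                 exposure_us=100.0, readout_of_row0_us=500.0)
assert end - start == 100.0   # exposure period is the same for every row
print(start, end)  # 420.0 520.0
```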
- The column processing unit 13 is a signal processing unit having A/D (analog/digital) converters 40 (40_1 to 40_n) provided in one-to-one correspondence with the pixel columns of the pixel array unit 11, that is, with the column signal lines 32 (32_1 to 32_n).
- the A / D converter 40 ( 40_1 to 40_n ) digitizes an analog pixel signal output from each unit pixel 20 of the pixel array unit 11 for each pixel column.
- As the A/D converter 40, for example, a configuration including a comparator and a counter can be used.
- The comparator uses as its reference input a so-called slope-shaped reference signal whose voltage value changes stepwise with the passage of time, uses as its comparison input the analog pixel signal output from each unit pixel 20 of the pixel array unit 11 in units of pixel columns, and compares the two.
- the counter converts an analog pixel signal into digital data by performing a count operation over a period from the start of the comparison operation in the comparator to the end of the comparison operation in synchronization with a predetermined clock.
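This single-slope conversion can be sketched in a few lines; the integer millivolt units, step size, and 10-bit count limit are illustrative assumptions:

```python
# Minimal simulation of the single-slope A/D conversion described above:
# the counter runs while the ramp reference is still below the pixel
# signal, so the count at the comparator flip is the digital value.
def single_slope_adc(pixel_mv: int, step_mv: int = 1,
                     max_count: int = 1023) -> int:
    ramp_mv = 0
    count = 0
    while ramp_mv < pixel_mv and count < max_count:
        ramp_mv += step_mv   # reference rises one step per clock
        count += 1           # counter increments in sync with the clock
    return count

print(single_slope_adc(512))  # 512
```

The conversion time grows linearly with the signal level, which is why real column ADCs bound it with a fixed maximum count (full scale).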
- In the A/D converter 40 of the above configuration, by using an up/down counter as the counter, it is possible to perform processing that removes the noise generated during the reset operation of the unit pixel 20, specifically, noise removal processing by correlated double sampling (CDS).
- Here, the reset component corresponds to the pixel signal when the unit pixel 20 is reset, and the signal component corresponds to the pixel signal obtained by photoelectric conversion in the unit pixel 20.
- The random noise generated at reset is held in the charge detection unit 26 (see FIG. 2), so the signal component read out after the signal charge is added retains the same amount of noise as the reset component. Therefore, in the up/down counter, the CDS process of subtracting the reset component from the signal component can be performed by, for example, down-counting during the reset component and up-counting during the signal component.
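A small sketch of this up/down-counter CDS (with counts standing in for conversion durations; an assumption for illustration only):

```python
# Sketch: the same slope conversion is run twice on one counter,
# down-counting during the reset-level conversion and up-counting during
# the signal-level conversion, leaving (signal - reset) in the counter.
def cds_convert(reset_counts: int, signal_counts: int) -> int:
    counter = 0
    for _ in range(reset_counts):    # phase 1: down-count the reset level
        counter -= 1
    for _ in range(signal_counts):   # phase 2: up-count the signal level
        counter += 1
    return counter                   # = signal_counts - reset_counts

# The reset noise (here 37 counts) is common to both phases and cancels:
assert cds_convert(37, 37 + 400) == 400
```

No separate subtraction circuit is needed; the cancellation falls out of the counter direction, which is the appeal of this digital CDS scheme.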
- The column scanning unit 14 includes a shift register, an address decoder, and the like, and controls the column addressing and column scanning of the A/D converters 40_1 to 40_n in the column processing unit 13. Under the control of the column scanning unit 14, the A/D-converted digital data in each of the A/D converters 40_1 to 40_n is sequentially read out to the horizontal output line 15 and output as image data from the output terminal 33 to the outside of the chip (semiconductor substrate) 30.
- FIG. 2 is a circuit diagram illustrating an example of a circuit configuration of the unit pixel 20.
- the unit pixel 20 includes, for example, a photodiode (PD) 21 as a photoelectric conversion element.
- the unit pixel 20 includes, for example, a transfer transistor 22, a reset transistor 23, an amplification transistor 24, and a selection transistor 25.
- N-type MOSFETs are used as the four transistors of the transfer transistor 22, the reset transistor 23, the amplification transistor 24, and the selection transistor 25.
- the combination of the conductivity types of the four transistors 22 to 25 illustrated here is merely an example, and is not limited to these combinations.
- For the unit pixel 20, a plurality of control lines 311, 312, and 313 are wired as the above-described row control lines 31 (31_1 to 31_m), common to the pixels in the same pixel row.
- the plurality of control lines 311, 312, 313 are connected to the output end corresponding to each pixel row of the row scanning unit 12 in units of pixel rows.
- the row scanning unit 12 appropriately outputs a transfer signal TRG, a reset signal RST, and a selection signal SEL to the plurality of control lines 311, 312, and 313.
- The photodiode 21 has its anode electrode connected to a low-potential-side power source (for example, ground), photoelectrically converts received light into photocharge (here, photoelectrons) having a charge amount corresponding to the amount of light, and accumulates that photocharge.
- the cathode electrode of the photodiode 21 is electrically connected to the gate electrode of the amplification transistor 24 through the transfer transistor 22.
- a region electrically connected to the gate electrode of the amplification transistor 24 is a charge detection unit 26 that converts a charge into a voltage.
- the charge detection unit 26 is referred to as an FD (floating diffusion / floating diffusion region / impurity diffusion region) unit 26.
- the transfer transistor 22 is connected between the cathode electrode of the photodiode 21 and the FD portion 26.
- a transfer signal TRG that activates a high level (for example, V dd level) is applied from the row scanning unit 12 through the control line 311 to the gate electrode of the transfer transistor 22.
- The transfer transistor 22 becomes conductive in response to the transfer signal TRG, and transfers to the FD unit 26 the photocharge photoelectrically converted and accumulated by the photodiode 21.
- the reset transistor 23 has a drain electrode connected to the power supply line 34 having the voltage V dd and a source electrode connected to the FD unit 26.
- a reset signal RST whose active level is high is applied to the gate electrode of the reset transistor 23 from the row scanning unit 12 through the control line 312.
- the reset transistor 23 becomes conductive in response to the reset signal RST, and resets the FD unit 26 by discarding the electric charge of the FD unit 26 to the power supply line 34.
- the amplification transistor 24 has a gate electrode connected to the FD section 26 and a drain electrode connected to the power supply line 34.
- the amplification transistor 24 serves as an input section of a source follower that is a readout circuit that reads a signal obtained by photoelectric conversion at the photodiode 21. That is, the amplifying transistor 24 forms a source follower with a current source 35 connected to one end of the column signal line 32 by connecting the source electrode to the column signal line 32 via the selection transistor 25.
- the selection transistor 25 has a drain electrode connected to the source electrode of the amplification transistor 24 and a source electrode connected to the column signal line 32.
- a selection signal SEL whose active level is high is supplied from the row scanning unit 12 through the control line 313 to the gate electrode of the selection transistor 25.
- the selection transistor 25 becomes conductive in response to the selection signal SEL, and transmits the signal output from the amplification transistor 24 to the column signal line 32 with the unit pixel 20 selected.
- the selection transistor 25 may have a circuit configuration connected between the power supply line 34 and the drain electrode of the amplification transistor 24.
- here, a 4Tr configuration including the transfer transistor 22, the reset transistor 23, the amplification transistor 24, and the selection transistor 25, that is, a configuration including four transistors (Tr), is given as an example.
- however, the selection transistor 25 may be omitted to give a 3Tr configuration in which the amplification transistor 24 also serves the function of the selection transistor 25, or the number of transistors may be increased as necessary.
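As a rough illustration of the 4Tr readout described above, the following sketch models the reset (RST), transfer (TRG), and selection/readout (SEL) operations of the unit pixel 20 in simplified behavioral form. The class and method names are hypothetical, and charge-to-voltage conversion by the FD unit and source follower is omitted.

```python
class UnitPixel4Tr:
    """Simplified behavioral model of the 4Tr unit pixel 20 (illustrative only)."""

    def __init__(self):
        self.pd_charge = 0.0   # photocharge accumulated in the photodiode 21
        self.fd_charge = 0.0   # charge held by the FD (floating diffusion) unit 26

    def expose(self, light, time):
        # Photoelectric conversion: charge proportional to light level and exposure time
        self.pd_charge += light * time

    def reset(self):
        # RST active: discard the FD charge to the power supply line (Vdd)
        self.fd_charge = 0.0

    def transfer(self):
        # TRG active: move the accumulated photocharge from the PD to the FD unit
        self.fd_charge += self.pd_charge
        self.pd_charge = 0.0

    def read(self):
        # SEL active: the source follower outputs a signal tracking the FD charge
        return self.fd_charge


pixel = UnitPixel4Tr()
pixel.reset()
pixel.expose(light=3.0, time=2.0)  # accumulate 3.0 x 2.0 = 6.0 units of photocharge
pixel.transfer()
print(pixel.read())
```

The exposure start timing corresponds to `reset()` followed by new accumulation, and the exposure end timing to `transfer()` plus `read()`, matching the description of the two timings in the text.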
- each pixel 20 of the pixel array unit 11 is grouped into two (two types) pixel groups A and B.
- the pixel group setting method is arbitrary. A first example of pixel grouping is shown in FIG. 3A, and a second example is shown in FIG. 3B. In FIGS. 3A and 3B, pixels marked with the same character (A, B) are assumed to form the same pixel group.
- in the pixel grouping of the first example shown in FIG. 3A, pixel group A and pixel group B are grouped alternately every two rows.
- in the pixel grouping of the second example shown in FIG. 3B, which corresponds to the R (red), G (green), B (blue) Bayer arrangement, four vertically and horizontally adjacent pixels are taken as a unit, and pixel group A and pixel group B are grouped so that RGB is isotropically distributed. In both the first and second examples, pixel group A and pixel group B are grouped so as to contain equal numbers of pixels.
- row control lines 31, which transmit the control signals used when reading signals from the unit pixels 20, are provided in two sets corresponding to pixel group A and pixel group B.
- the timing control unit 16 (see FIG. 1) is characterized by performing the following control. That is, the timing control unit 16 sets the exposure start timing and the exposure end timing independently for each of the plurality of pixel groups so that at least one pixel group among the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period.
- the exposure start timing is a timing at which the photoelectric charge of the photodiode 21 is discarded through the FD unit 26 (reset) and exposure is newly started.
- the exposure end timing is the readout timing (signal readout timing) at which the transfer transistor 22 becomes conductive, the photocharge photoelectrically converted by the photodiode 21 is transferred to the FD unit 26, and the result is transmitted to the column signal line 32 through the amplification transistor 24 as a pixel signal.
- the pixel group A is exposed three times within one vertical synchronization period, and the pixel group B is exposed once within one vertical synchronization period.
- the three exposures in pixel group A are referred to, in order of earliest exposure start timing, as exposure A_a, exposure A_b, and exposure A_c, and the single exposure in pixel group B is referred to as exposure B.
- exposures A_a to A_c are each performed by the rolling shutter method, with exposure start (RESET) and exposure end / signal readout (READ) proceeding sequentially from the smallest line (row) number. In this case, before the readout of exposure A_a reaches the last line, the exposure end / signal readout of the first lines of exposures A_b and A_c begins. That is, the three exposures A_a, A_b, and A_c are set so close together that their exposure end timings overlap in time.
- FIG. 5 shows a readout sequence of pixel value data of the pixel group A in the case of the exposure sequence 1.
- a_0, a_1, a_2, ..., b_0, b_1, b_2, ..., and c_0, c_1, c_2, ... represent the pixel value data of each row read from the exposure images of exposure A_a, exposure A_b, and exposure A_c, respectively, and the subscript numerals correspond to the line numbers.
- the pixel value data b_0 of the first readout row of the exposure image of exposure A_b is read in a time-division manner, following the readout of row a_3 of the exposure image of exposure A_a.
- the reading of the pixel value data of the exposure image of exposure A_c is also performed in a time-sharing manner in the same manner as the reading of the pixel value data of each exposure image of exposure A_a and exposure A_b .
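The time-division readout order described above (rows of exposure A_a interleaved with rows of A_b and A_c once their readouts begin) can be sketched as follows. The row offsets at which each exposure's readout starts are assumed values for illustration, not figures from the disclosure.

```python
def interleaved_readout_order(num_rows, offsets):
    """Return the time-division readout order for several exposures of the
    same pixel group. offsets[i] is the row slot at which exposure i begins
    its readout (illustrative model of the FIG. 5-style sequence)."""
    events = []
    for exp_idx, offset in enumerate(offsets):
        for row in range(num_rows):
            # Each exposure reads one row per slot, starting at its offset.
            events.append((offset + row, exp_idx, row))
    events.sort()  # order by time slot; ties resolved in exposure order
    return [f"{'abc'[e]}{r}" for _, e, r in events]


# Assumed offsets: A_a starts at slot 0, A_b at slot 4, A_c at slot 8.
order = interleaved_readout_order(num_rows=6, offsets=[0, 4, 8])
print(order)
```

With these assumed offsets the sequence begins a0, a1, a2, a3, a4, b0, ..., showing how rows of a later exposure slot in between rows of an earlier one once their readout periods overlap.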
- the exposure sequence 1 described above realizes three exposures at close time intervals for the pixel group A, and four types of exposure can be performed together with the long-time exposure of the pixel group B.
- in the CMOS image sensor 10 of the present disclosure, exposure is performed a plurality of times within one vertical synchronization period for at least one pixel group among the grouped pixel groups, and the exposure time ratio between the exposure images can be set close to each other without dividing the pixel array spatially.
- since the exposure time ratio between the plurality of exposure images can be set close to each other, a scene with a wide dynamic range (hereinafter referred to as a "wide dynamic range scene") can be captured without lowering the spatial resolution. That is, a wide dynamic range scene can be shot without saturation (without causing whiteout) and without causing blackout.
- reading of a plurality of exposure images whose readout periods overlap in time is performed in a time-sharing manner (time-sharing reading); by making the number of A/D converters 40 larger than in normal reading and using these A/D converters 40 in a distributed manner, a plurality of exposure images can be read out within one vertical synchronization period.
- a plurality of A / D converters 40 are arranged for one pixel column, but the present invention is not limited to this.
- alternatively, a plurality of exposure images whose readout periods overlap in time may be read out in a time-sharing manner with the same number of A/D converters 40 as in normal reading; even in that case, by driving these A/D converters 40 at high speed, a plurality of exposure images can be read out within one vertical synchronization period.
- a configuration example of the CMOS image sensor 10 that realizes a plurality of exposures in a plurality of pixel groups by arranging a plurality of A/D converters 40 for one pixel column and using these A/D converters 40 in parallel will now be described. Here, the pixel grouping is the first example shown in FIG. 3A.
- FIG. 6 is a schematic diagram of a configuration of the CMOS image sensor 10 in which a plurality of A / D converters 40 are arranged for one pixel column and these A / D converters 40 are used in parallel.
- here, a configuration in which column processing units 13 (13A, 13B) including the A/D converters 40 are disposed above and below the pixel array unit 11 is illustrated.
- four column signal lines 32 are wired for each pixel column formed with a width of two pixels, and four A/D converters 40 in total are arranged, two above and two below the pixel array unit 11.
- the A/D converter 40 has a comparator 41 and a counter 42.
- the comparator 41 uses a reference signal having a sloped waveform as a reference input, uses an analog pixel signal output from each unit pixel 20 of the pixel array unit 11 as a comparison input, and compares the two.
- the reference signal serving as the reference input of the comparator 41 is generated by a reference signal generation unit 50 configured by a D/A converter or the like.
- the counter 42 performs a counting operation in synchronization with a predetermined clock over the period from the start to the end of the comparison operation in the comparator 41, and thereby converts the analog pixel signal into digital data (pixel value data).
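The comparator-plus-counter operation described above is a single-slope A/D conversion: the counter runs until the sloped reference signal crosses the pixel signal, and the count becomes the digital value. A minimal numerical sketch, with assumed ramp step and count range, is:

```python
def single_slope_adc(pixel_voltage, ramp_step=1, max_counts=1023):
    """Model of the A/D converter 40: the counter 42 counts clock cycles
    until the ramp reference signal crosses the analog pixel signal.
    Units and step size are illustrative assumptions."""
    ramp = 0
    count = 0
    while ramp < pixel_voltage and count < max_counts:
        ramp += ramp_step   # reference signal with a sloped (ramp) waveform
        count += 1          # counter 42 runs for the duration of the comparison
    return count            # digital pixel value data


# Example: a 250 mV pixel signal with a 10 mV-per-clock ramp (assumed values).
print(single_slope_adc(250, ramp_step=10))
```

The conversion time grows with the signal level, which is why driving the A/D converters at higher clock rates (as mentioned for the same-converter-count variant) shortens the readout of each exposure image.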
- the connection switch 60 can arbitrarily change the connection relationship between the four column signal lines 32 and the four A/D converters 40 based on a register setting value.
- each pixel 20 of the pixel array unit 11 shares a connection to the column signal line 32 in units of 4 rows ⁇ 2 columns.
- the unit pixel 20 is shown by being surrounded by a rectangle every four rows and two columns, and this represents a unit sharing the connection of four column signal lines 32 provided corresponding to the two pixel columns.
- the row scanning unit 12 includes an address decoder 12A and a pixel drive timing drive circuit 12B.
- the address decoder 12A operates based on the address signal given from the timing control unit 16.
- the pixel drive timing drive circuit 12B operates based on the drive timing signal given from the timing control unit 16 and, based on the control signal given from the address decoder 12A, sends to each pixel 20 a signal that gives the shutter timing and a signal that gives the readout timing.
- FIG. 7 is a view showing a portion corresponding to one vertical synchronization period (one display frame) in the exposure sequence 1 (see FIG. 4).
- the operation of the A/D converters 40 necessary for realizing the exposure sequence shown in FIG. 7 will be described.
- in FIG. 8, two pixel columns of the pixel array unit 11 are extracted and drawn, together with the corresponding column signal lines 32 (VSL_0, VSL_1, VSL_2, VSL_3) and A/D converters 40 (ADC_0, ADC_1, ADC_2, ADC_3).
- two pixel rows are collectively designated as a line L, and numbers (L_0, L_1, ...) are assigned to the lines. The non-shaded lines (L_0, L_2, ...) correspond to pixel group A, and the shaded lines (L_1, L_3, ...) correspond to pixel group B.
- it is assumed that ADC_0 reads the pixel values of exposure A_a, ADC_1 reads the pixel values of exposure A_b, ADC_2 reads the pixel values of exposure A_c, and ADC_3 reads the pixel values of exposure B.
- the connection switch 60 is provided to switch the connections between the column signal lines 32 and the A/D converters 40.
- a desired exposure sequence is realized by sequentially switching the connection switch 60 in the process of reading the pixel values of each row.
- Table 1 shows a process of changing how to connect the connection switches 60 when the pixel values of each row are sequentially read out.
- the row VSL_SW_ADC_0 describes the number of the column signal line 32 connected to ADC_0 at each timing; the same applies to VSL_SW_ADC_1, VSL_SW_ADC_2, and VSL_SW_ADC_3.
- for example, the line L_1 belonging to pixel group B is connected to the column signal line VSL_0 at the corresponding readout timing.
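Although Table 1 itself is not reproduced here, the idea of the switch schedule can be sketched roughly: at each readout timing the mapping from A/D converters to column signal lines changes so that each ADC stays attached to its assigned exposure. The rotating pattern below is purely illustrative and is not the actual contents of Table 1.

```python
def make_switch_schedule(num_slots):
    """Illustrative rotation of connections between the four column signal
    lines (VSL_0..VSL_3) and the four A/D converters (ADC_0..ADC_3).
    Each slot maps ADC index -> VSL index, rotating by one line per
    readout timing (assumed pattern, not the patented Table 1)."""
    schedule = []
    for slot in range(num_slots):
        mapping = {adc: (adc + slot) % 4 for adc in range(4)}
        schedule.append(mapping)
    return schedule


for slot, mapping in enumerate(make_switch_schedule(3)):
    print(slot, mapping)
```

In an actual device the register setting value of the connection switch 60 would encode such a per-timing mapping; only the principle (a time-varying ADC-to-VSL assignment) is illustrated here.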
- as described above, a plurality of exposure images whose readout periods overlap in time are read out simultaneously (simultaneous split reading); by making the number of A/D converters 40 larger than in normal reading and using these A/D converters 40 in a distributed manner, a plurality of exposure images can be read out within one vertical synchronization period.
- in this configuration example, the pixel units sharing the column signal lines 32, the number of column signal lines 32, the number of A/D converters 40 installed in parallel, and the like are shown specifically, but it is not necessary to set them exactly as described.
- the sharing unit of the column signal line 32 may be every two rows and two columns or every one pixel.
- the connection between each pixel 20 and the column signal line 32 is not limited to that described in this configuration example, and each pixel row is connected to an appropriate A / D converter 40 in order to realize a desired exposure sequence. Any configuration that is connected may be used.
- in this configuration example, the pixel value data read by each A/D converter 40 corresponds to one exposure, but this is not always necessary. It is also possible to keep the connection switch 60 fixed during the pixel value reading process and to realize readout of each exposure's data by appropriately performing processing such as rearrangement of the pixel value data after it has been read out from the A/D converters 40.
- the CMOS image sensor 10 has the flat structure shown in FIG. 1, but may have a laminated structure.
- the “flat structure” refers to a structure in which the peripheral circuit sections of the pixel array unit 11, that is, the driving section that drives each pixel 20 of the pixel array unit 11 and the signal processing section including the A/D converters 40 and the like, are arranged on the same semiconductor substrate 30 as the pixel array unit 11.
- the "laminated structure”, as shown in FIG. 15A, the pixel array portion 11 and its peripheral circuit section and a separate semiconductor substrate 30 - 1, mounted on 30 _2, these semiconductor substrates 30 - 1, 30 _2 This refers to the structure of stacking.
- in the laminated structure, the unit pixels 20 on the semiconductor substrate 30_1 on the pixel array unit 11 side can be grouped into pixel groups 27 for each region of a predetermined size, and an A/D converter 40 can be arranged on the semiconductor substrate 30_2 for each pixel group 27.
- one A / D converter 40 is shared by a plurality of unit pixels 20 that are adjacent vertically and horizontally in the pixel group 27.
- each pixel 20 in one pixel group 27 and the corresponding A / D converter 40 are connected by a signal line 34.
- FIG. 15B illustrates a case in which the pixel group 27 is 4 vertical pixels ⁇ 4 horizontal pixels.
- the pixel structure of the CMOS image sensor 10 may be a front-illuminated pixel structure that captures incident light from the front side (the side on which the wiring layer is disposed), or a back-illuminated pixel structure that captures incident light from the back side (the side opposite to the side on which the wiring layer is disposed).
- CMOS image sensor has been described as an example of the solid-state imaging device of the present disclosure.
- the present invention is not limited to application to a CMOS image sensor. That is, the technique of the present disclosure in which the unit pixels 20 are grouped and at least one pixel group among the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period is similarly applied to the CCD image sensor. Applicable.
- the CMOS image sensor 10 according to the above-described embodiment can be used as an imaging unit in an imaging apparatus such as a digital still camera, a video camera, a surveillance camera, or an in-vehicle camera. Since the CMOS image sensor 10 according to the present embodiment can shoot a wide dynamic range scene without saturation and without blackout, it is particularly suitable for imaging apparatuses such as surveillance cameras and in-vehicle cameras.
- FIG. 16 is a block diagram illustrating a configuration example of an imaging apparatus according to the present disclosure.
- the imaging apparatus 100 includes an optical lens 101, an imaging unit 102, an image processing unit 103, a camera signal processing unit 104, a control unit 105, and a timing control unit 106.
- the optical lens 101 condenses image light (incident light) from a subject (not shown) on the imaging surface of the imaging unit 102.
- as the imaging unit 102, the above-described CMOS image sensor 10 is used. In the CMOS image sensor 10, the unit pixels are grouped into a plurality of pixel groups, and the exposure start timing and the exposure end timing are set independently for each of the plurality of pixel groups so that at least one pixel group is exposed a plurality of times within one vertical synchronization period.
- from the CMOS image sensor 10, for example under exposure sequence 1, the image data of the three exposure images of exposures A_a, A_b, and A_c in pixel group A and the image data of the single exposure image of exposure B in pixel group B are output.
- the timing control unit 16 is provided on the semiconductor substrate (chip) 30 of the CMOS image sensor 10.
- a configuration in which the timing control unit 16 is provided outside the semiconductor substrate 30 may be employed.
- in the imaging apparatus 100, the timing control unit 16 is provided outside the semiconductor substrate 30. That is, the timing control unit 106 in FIG. 16 corresponds to the timing control unit 16 in FIG. 1, and performs control so that, for a plurality of exposure images whose exposure end timings overlap in time, each image data is output in a time-division format for each row.
- the image processing unit 103 generates wide dynamic range image data based on the image data of the exposure images of exposures A_a, A_b, and A_c and the image data of the exposure image of exposure B output from the CMOS image sensor 10.
- the camera signal processing unit 104 performs signal processing in a general camera such as white balance adjustment and gamma correction, and generates an output image (HDR image).
- the control unit 105 is configured by, for example, a CPU (Central Processing Unit) or the like, and supplies control signals that control the image processing unit 103, the camera signal processing unit 104, and the timing control unit 106 in accordance with a program stored in a memory (not shown), thereby controlling various kinds of processing.
- in the CMOS image sensor 10, the exposure start timing and the exposure end timing are set independently for each of the plurality of pixel groups so that at least one pixel group is exposed a plurality of times within one vertical synchronization period. In the image processing unit 103, a wide dynamic range image is generated based on, for example, the image data of the three exposure images of exposures A_a, A_b, and A_c in pixel group A and the image data of the single exposure image of exposure B in pixel group B output from the CMOS image sensor 10.
- at this time, since the exposure times of the respective exposure images can be set close to each other, the resolution is hardly lowered even in a wide dynamic range scene. That is, according to the imaging apparatus 100, it is possible to suppress a decrease in resolution while capturing a scene with a wide dynamic range.
- the case where the image processing unit 103 (see FIG. 16) that generates an HDR image is provided outside the CMOS image sensor 10 that is the imaging unit 102 has been described as an example.
- however, the CMOS image sensor 10 may itself be a solid-state imaging device having the function of the image processing unit 103. In particular, in the case of the laminated structure, the image processing unit 103 can be mounted on the semiconductor substrate 30_2 on which the A/D converters 40 are arranged, which has the advantage that the image processing unit 103 can be incorporated without increasing the chip size.
- the configuration of the image processing unit 103 is different for each embodiment.
- the image processing unit 103 according to the first and fourth embodiments performs image processing under exposure sequence 1, and the image processing unit 103 according to the second and third embodiments performs image processing under exposure sequence 2, described later.
- FIG. 17 is a block diagram illustrating a configuration example of the image processing unit 103A according to the first embodiment.
- the image processing unit 103A according to the first embodiment includes a row sequence conversion unit 1031, a storage unit 1032, and an HDR composition unit 1033 as a first image composition unit.
- the row sequence conversion unit 1031 holds pixel value data in the storage unit 1032, row by row, for the exposure images whose exposure order is not final within one vertical synchronization period in the same pixel group. Then, in accordance with the output timing of the pixel value data of each row of the exposure image whose exposure order is final within one vertical synchronization period, the row sequence conversion unit 1031 reads out the pixel value data of the same row from the storage unit 1032. By this processing, the pixel value data of the plurality of exposure images are output with their rows aligned.
- the HDR synthesizing unit 1033 as the first image synthesizing unit synthesizes the HDR image data by synthesizing the pixel value data of the plurality of exposure pixels output by aligning the rows by the row sequence conversion unit 1031 between the plurality of exposure pixels. Generate.
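A common way to combine row-aligned pixel values from exposures of differing lengths into one HDR value is to pool the unsaturated counts and divide by the total exposure time, so that highlights come from the short exposures and shadows from the long ones. The sketch below illustrates that idea; the actual synthesis method of the HDR synthesizing unit 1033 is not specified in this text, and the saturation level is an assumed value.

```python
def hdr_merge(pixel_values, exposure_times, saturation=1023):
    """Merge aligned pixel values from several exposures into one radiance
    estimate (illustrative sketch, not necessarily the method of the HDR
    synthesizing unit 1033). Unsaturated counts are pooled and divided by
    the total contributing exposure time."""
    total_counts = 0.0
    total_time = 0.0
    for value, t in zip(pixel_values, exposure_times):
        if value < saturation:            # skip blown-out (whiteout) samples
            total_counts += value
            total_time += t
    if total_time == 0.0:                 # every sample saturated: clamp
        return saturation / min(exposure_times)
    return total_counts / total_time


# One pixel position after row alignment: three exposures of pixel group A.
print(hdr_merge([800, 400, 100], exposure_times=[8.0, 4.0, 1.0]))
```

For a static scene the three samples are consistent (counts proportional to exposure time), and the merge returns the common radiance estimate; a saturated long exposure is simply dropped from the pool.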
- FIG. 18 is a diagram illustrating a flow of image processing executed by the image processing unit 103A according to the first embodiment.
- in the first embodiment, the pixel group A is exposed three times within one vertical synchronization period, and the pixel group B is exposed once within one vertical synchronization period (see exposure sequence 1 in FIG. 4). Then, from the CMOS image sensor 10, the pixel value data of the three types of exposure images in pixel group A are read out in a time-division manner.
- first, pixel value data of each row a 0 , a 1 , a 2 ,... Of the exposure image of exposure A_a is sequentially read.
- the pixel value data b_0 of the first readout row of the exposure image of exposure A_b is read in a time-division manner, following the readout of row a_3 of the exposure image of exposure A_a.
- the reading of the pixel value data of the exposure image of exposure A_c is also performed in a time-sharing manner, similar to the reading of the pixel value data of each exposure image of exposure A_a and exposure A_b .
- Pixel value data read from the CMOS image sensor 10 is supplied to the row sequence conversion unit 1031.
- the row sequence conversion unit 1031 once stores the pixel value data of each row of the exposure images of exposure A_a and exposure A_b read from the CMOS image sensor 10 in the storage unit 1032. Then, at the timing when each row of pixel value data of the exposure image of exposure A_c is read, the row sequence conversion unit 1031 reads out the pixel value data of the corresponding rows of the exposure images of exposure A_a and exposure A_b stored in the storage unit 1032.
- the HDR synthesizing unit 1033 synthesizes the pixel value data of the three types of exposure images, thereby generating wide dynamic range image data (HDR image data).
- FIG. 18 shows the flow of processing after the pixel value data of each row of the exposure images of exposure A_a, exposure A_b, and exposure A_c are read from the CMOS image sensor 10.
- the pixel value data of each row is stored in the storage unit 1032 according to the read order.
- the pixel value data c_0 of the first row of the exposure image of exposure A_c is read from the CMOS image sensor 10; at that readout timing, the pixel value data a_0 and b_0 of the corresponding row are read from the storage unit 1032, and their combined result A_0 is output.
- in this way, the synchronization between the exposure images is absorbed by the storing into and reading from the storage unit 1032.
- the necessary capacity of the storage unit 1032 corresponds to the number of rows read before the pixel value data c_0 of the first readout row of exposure A_c. This is because the data of the exposure images of exposure A_a and exposure A_b read out at timings after the pixel value data c_0 may simply overwrite the memory area of the storage unit 1032 holding the already-read data of the exposure images of exposure A_a and exposure A_b.
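This memory bound can be checked with a small row-by-row simulation (the exposure start offsets and row count below are assumed values, not figures from the disclosure): rows of A_a and A_b are buffered as they arrive and freed when the matching A_c row is read, so the occupancy never exceeds the number of rows read before c_0.

```python
def peak_buffer_rows(offsets, num_rows):
    """Simulate row-by-row buffering for three closely spaced exposures
    (A_a, A_b, A_c starting at the given readout slots) and return the
    peak number of rows held in the storage unit 1032. A_a/A_b rows are
    buffered on readout and their slots reused once the matching A_c row
    arrives (assumed offsets, illustrative model)."""
    start_a, start_b, start_c = offsets
    held = 0
    peak = 0
    for slot in range(start_c + num_rows):
        if start_a <= slot < start_a + num_rows:
            held += 1                 # buffer one row of exposure A_a
        if start_b <= slot < start_b + num_rows:
            held += 1                 # buffer one row of exposure A_b
        if start_c <= slot < start_c + num_rows:
            held -= 2                 # matching a_i and b_i rows consumed
        peak = max(peak, held)
    return peak


# Assumed: A_b starts 4 row-slots after A_a, A_c starts 4 slots after A_b.
print(peak_buffer_rows(offsets=(0, 4, 8), num_rows=20))
```

With these offsets, 8 rows of A_a and 4 rows of A_b precede c_0, so the buffer never holds more than 12 rows regardless of the total frame height.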
- the HDR image data A_0, A_1, A_2, ... from pixel group A and the output data B_0, B_1, B_2, ... from pixel group B can be further combined, and the combined result can be output as HDR image data C_0, C_1, C_2, ....
- as described above, pixel value data is held in the storage unit 1032 row by row for the exposure images whose exposure order is not final within one vertical synchronization period, and a synchronization process is performed in which the pixel value data of the same row is read out from the storage unit 1032.
- the memory capacity of the storage unit 1032 for waiting (synchronization) required when performing multiple exposures within one vertical synchronization period can be greatly reduced. The same applies to the following embodiments.
- Example 2: In the image processing according to the first embodiment, image processing is performed under exposure sequence 1, in which pixel group A is exposed three times within one vertical synchronization period and pixel group B is exposed once within one vertical synchronization period (see FIG. 4).
- in contrast, in the image processing according to the second embodiment, both pixel group A and pixel group B are exposed a plurality of times (for example, three times) within one vertical synchronization period; this exposure sequence is referred to as "exposure sequence 2".
- FIG. 20 is a view showing another example (exposure sequence 2) of the exposure sequence according to the present embodiment.
- in exposure sequence 2, the exposure images of pixel group A and pixel group B are A_a, A_b, A_c and B_a, B_b, B_c, respectively, and the exposure times are set so as to become progressively shorter in the order B_a, A_a, B_b, A_b, B_c, A_c.
- FIG. 21 is a block diagram illustrating a configuration example of the image processing unit 103B according to the second embodiment.
- the image processing unit 103B according to the second embodiment includes a sequential HDR synthesis unit 1034 as a second image synthesis unit instead of the row sequence conversion unit 1031 and the HDR synthesis unit 1033 of the first embodiment.
- the sequential HDR synthesizing unit 1034 holds pixel value data in the storage unit 1032 for the exposure image whose exposure order is first within one vertical synchronization period in the same pixel group. For an exposure image whose exposure order is not first within one vertical synchronization period, the sequential HDR synthesizing unit 1034 calculates, for each pixel, a composite value with reference to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit 1032, and stores the composite value in the storage unit 1032. Then, when the synthesis of the pixel value data for the exposure image whose exposure order is final within one vertical synchronization period is completed, the sequential HDR synthesizing unit 1034 outputs the synthesis result.
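The incremental behavior of this sequential synthesis can be sketched as follows. The actual composite-value calculation (which also consults peripheral pixels and the composition ratio determination unit 1037) is simplified here to a running exposure-time-weighted blend of normalized values; the class and parameter names are hypothetical.

```python
class SequentialHDR:
    """Sketch of sequential HDR synthesis: the first exposure image is
    stored; each later exposure is blended into the stored value pixel by
    pixel; the blend after the final exposure is output (simplified model,
    not the patented composite-value calculation)."""

    def __init__(self):
        self.storage = {}  # stands in for the storage unit 1032

    def feed(self, pixel_pos, value, exposure_time, first, last):
        normalized = value / exposure_time          # radiance-like estimate
        if first:
            self.storage[pixel_pos] = (normalized, exposure_time)
            return None
        acc, t_acc = self.storage[pixel_pos]
        # Composition ratio: weight of the new sample by its exposure time.
        ratio = exposure_time / (t_acc + exposure_time)
        acc = acc * (1.0 - ratio) + normalized * ratio
        self.storage[pixel_pos] = (acc, t_acc + exposure_time)
        return acc if last else None                # output only at the end


hdr = SequentialHDR()
hdr.feed((0, 0), 80.0, exposure_time=8.0, first=True, last=False)
hdr.feed((0, 0), 40.0, exposure_time=4.0, first=False, last=False)
result = hdr.feed((0, 0), 10.0, exposure_time=1.0, first=False, last=True)
print(result)
```

Because each exposure is folded into the stored value as it arrives, only one composite value per pixel ever resides in storage, which is the memory advantage of the sequential scheme over buffering all exposure images.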
- in the case of exposure sequence 2, the sequential HDR synthesizing unit 1034 sequentially synthesizes the six alternately performed exposure images and outputs the final result row by row.
- the synchronization processing is performed by using the storage unit 1032 as in the first embodiment.
- the image processing unit 103B further includes an inter-pixel group HDR synthesis unit 1035, a moving subject determination unit 1036, a composition ratio determination unit 1037, and a composition ratio history storage unit 1038.
- the inter-pixel group HDR synthesizing unit 1035 refers to the final synthesis result for each pixel group obtained by synthesizing the exposure images whose exposure order is final within one vertical synchronization period in the sequential HDR synthesizing unit 1034. A composite value is calculated and output as HDR image data.
- the moving subject determination unit 1036 determines (estimates) whether or not the subject is moving at each pixel position, that is, whether or not the subject is a moving subject. More specifically, the moving subject determination unit 1036 determines whether or not the subject is moving at each pixel position by referring to the pixel value of the exposure image and the pixel value on the storage unit 1032, and the determination Output the result.
- the composition ratio determination unit 1037 determines the composition ratio used when calculating a composite pixel value, that is, the composition ratio in the sequential HDR synthesis unit 1034 and the inter-pixel group HDR synthesis unit 1035.
- the composition ratio history storage unit 1038 stores the composition ratio of the sequential HDR composition unit 1034 up to that point, that is, the composition ratio history determined by the composition ratio determination unit 1037, and outputs the history.
- the composition ratio determination unit 1037 determines the composition ratio with reference to the composition ratio history output from the composition ratio history storage unit 1038 when determining the composition ratio based on the estimation result of the moving subject determination unit 1036.
- the composition ratio determination unit 1037 further determines the composition ratio with reference to the exposure time of each exposure image. Information on the exposure time of each exposure image can be acquired from the timing control unit 16 (see FIG. 1).
- the composition ratio determining unit 1037 further determines the composition ratio with reference to the pixel value level of each exposure image.
- an example of the flow of processing in which the moving subject determination unit 1036 estimates, for each pixel position, whether or not the subject is moving, and the composition ratio determination unit 1037 determines the composition ratio of the two images according to the estimation result and the two images are combined, will be described with reference to the flowchart. This series of processing is executed under the control of the CPU constituting the control unit 105 (see FIG. 16).
- an image M and an image N represent two images that are sequentially combined.
- the CPU selects target positions in the image M and the image N (step S11), generates an average value aveM of the 5 × 5 pixels around the target position in the image M (step S12), and then generates an average value aveN of the 5 × 5 pixels around the target position in the image N (step S13).
- the CPU calculates an absolute difference value d between the average value aveM and the average value aveN (step S14).
- the CPU determines the composition ratio α according to the magnitude relationship between the difference absolute value d and a determination threshold (step S15). Specifically, in step S15, when the difference absolute value d between the two images M and N is larger than the determination threshold, the pixel position is determined to be a moving subject region, and the composition ratio α is set so that the image with the shorter exposure time is used.
- the CPU synthesizes the pixel values at the target positions of the images M and N with the composition ratio α set in step S15 (step S16), and then determines whether or not the above-described processing has been completed at all pixel positions on the two images M and N (step S17).
- step S17 If the CPU is not finished at all pixel positions (NO in S17), the CPU returns to step S11 and repeats the series of processes described above. If completed (YES in S17), the CPU outputs the synthesis result (step S18), determines the synthesis ratio ⁇ of the two images M and N, and determines the two images M and M at the synthesis ratio ⁇ . A series of processes for synthesizing one N sequential HDR synthesis is completed.
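The per-pixel loop of steps S11 to S18 can be sketched in vectorized form, operating on whole images at once. The function names, the default threshold, and the static-region ratio of 0.5 are illustrative assumptions, not values fixed by the disclosure; the disclosure only specifies that the shorter exposure is favored where the 5 × 5 local averages differ by more than the threshold.

```python
import numpy as np

def box_mean(img, win=5):
    """Mean of the win x win neighborhood around each pixel (steps S12/S13)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (win * win)

def blend_pair(img_m, img_n, thresh=8.0, alpha_static=0.5):
    """Blend image M with the shorter-exposure image N (steps S14-S16).

    Where the local means differ by more than `thresh`, the pixel is
    treated as part of a moving-subject region and the shorter
    exposure N is weighted fully (alpha = 1).
    """
    d = np.abs(box_mean(img_m) - box_mean(img_n))    # step S14
    alpha = np.where(d > thresh, 1.0, alpha_static)  # step S15
    return alpha * img_n + (1.0 - alpha) * img_m     # step S16
```

Vectorizing replaces the explicit per-pixel iteration of steps S11 and S17 with whole-array operations, so the "all pixel positions processed" test becomes implicit.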
- In this way, the HDR synthesizing unit 1034 sequentially performs the HDR synthesis processing, for each pixel group, on the exposure images whose rows have been aligned within the same pixel group.
- The final HDR synthesis result obtained within one vertical synchronization period is sequentially output from the HDR synthesis unit 1034.
- The inter-pixel-group HDR synthesizing unit 1035 then calculates a composite value between the pixel groups and outputs it as HDR image data.
- FIG. 23 is a block diagram illustrating a configuration example of an image processing unit according to the third embodiment.
- The image processing unit 103C according to the third embodiment includes a sequential HDR synthesis unit 1039 as a third image synthesis unit in place of the sequential HDR synthesis unit 1034 of the second embodiment.
- The image processing unit 103C further includes a pixel value interpolation unit 1040; the remaining configuration is the same as that of the image processing unit 103B according to the second embodiment.
- Among the exposure images of all the pixel groups, the sequential HDR synthesizing unit 1039 holds the pixel value data in the storage unit 1032 for the exposure image that is first in exposure order within one vertical synchronization period. For the exposure images other than the first, it calculates, for each pixel, a composite pixel value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit 1032, and stores the result back in the storage unit 1032. Then, when the synthesis of the pixel value data is completed for the exposure image that is last in exposure order within one vertical synchronization period, the sequential HDR synthesizing unit 1039 outputs the final synthesis result.
- Here, the pixel group A and the pixel group B are sampled at different spatial phases.
- The pixel value interpolating unit 1040 therefore interpolates the pixel values at all pixel positions before the synthesis processing is performed.
- Specifically, the pixel value interpolation unit 1040 interpolates the pixel values at all pixel positions based on the pixel values read from the CMOS image sensor 10 for each pixel group.
- Receiving the output of the pixel value interpolation unit 1040, the sequential HDR synthesizing unit 1039 holds the pixel value data in the storage unit 1032 when the image is first in exposure order within one vertical synchronization period among all the exposure images of all the pixel groups.
- When the exposure order is not first within one vertical synchronization period, the sequential HDR synthesizing unit 1039 calculates, for each pixel, a composite value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit 1032, and holds the result in the storage unit 1032.
- As described above, the sequential HDR synthesis unit 1039 uses the storage unit 1032 to sequentially perform the HDR synthesis processing across the pixel groups, and the pixel value interpolation unit 1040 interpolates the pixel values read from each pixel group to all pixel positions.
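The running composition performed by the sequential HDR synthesizing unit — seed the store with the first exposure, fold each later exposure into the stored value, and emit the store after the last exposure — can be sketched as follows. The class name, the fixed blend weight, and the omission of peripheral-pixel references are simplifying assumptions for illustration, not details fixed by the disclosure.

```python
import numpy as np

class SequentialHdr:
    """Minimal model of sequential HDR accumulation into a storage unit.

    The first exposure within a vertical synchronization period seeds
    the store; each subsequent exposure is merged with the value
    already held at the same pixel position (references to peripheral
    pixel positions are omitted here). The store after the last
    exposure is the final synthesis result.
    """

    def __init__(self):
        self.store = None  # plays the role of storage unit 1032

    def feed(self, exposure, weight=0.5):
        exposure = np.asarray(exposure, dtype=float)
        if self.store is None:
            # first in exposure order: just hold the pixel value data
            self.store = exposure.copy()
        else:
            # merge with the composite already held for this position
            self.store = weight * exposure + (1.0 - weight) * self.store
        return self.store
```

Because only one stored frame is kept and updated in place, the memory footprint stays constant no matter how many exposures occur within the vertical synchronization period.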
- Example 4: In a traffic light that uses LEDs (light-emitting diodes) as its light source, or an electric signboard that uses fluorescent lamps, the light source blinks with the fluctuation of the AC power supply. Depending on the relationship between this blinking cycle and the frame rate of the camera, a lit traffic light or signboard may appear to be extinguished or to flicker. This is called the flicker phenomenon; under it, a luminous object is perceived as having disappeared (so-called blackout) or as flickering. Such a phenomenon is a problem particularly for in-vehicle cameras used for recognition.
- Example 4 was devised to solve the above problem caused by the flicker phenomenon; it is an exposure drive scheme that suppresses the phenomenon in which a luminous object appears extinguished or flickering.
- FIG. 24 shows an exposure sequence in the case of the fourth embodiment.
- In the fourth embodiment, the pixel group A is exposed five times within one vertical synchronization period, and the pixel group B is exposed once within one vertical synchronization period.
- The five exposures of the pixel group A are denoted, in order of exposure start timing, exposure A_a, exposure A_b, exposure A_c, exposure A_d, and exposure A_e.
- Each exposure period t_i is set so that a desired dynamic range can be realized using the five exposure images of the pixel group A and the one exposure image of the pixel group B. At the same time, the exposure timing of each exposure image is set so as to capture a period during which the light source is lit.
- If each exposure timing is set based on the exposure start times S_i that maximize the range of M_all, an exposure that readily captures the flicker lighting period can be realized.
- In Example 4, when capturing a luminous object that blinks at a high frequency, such as a traffic light using LEDs as its light source, the exposure start time and the exposure time length of each of the plurality of exposure images are set in consideration of the blinking cycle of the object. The exposure sequence can thus be determined so as not to miss the emission timing of the luminous object, making it possible to capture the emission of light sources that, with existing imaging methods, appeared to flicker or to be extinguished.
- The CMOS image sensor 10 can be used as the imaging unit of imaging apparatuses such as digital still cameras, video cameras, surveillance cameras, and in-vehicle cameras, and of electronic devices in general that have an imaging function, such as mobile phones and smartphones.
- FIG. 25 is a system configuration diagram illustrating an outline of the configuration of the electronic device of the present disclosure.
- An electronic apparatus 200 includes an optical system 201 including a lens group, an imaging unit 202, a DSP circuit 203 serving as a camera signal processing unit, a frame memory 204, a display device 205, a recording device 206, an operation system 207, a power supply system 208, and the like.
- the DSP circuit 203, the frame memory 204, the display device 205, the recording device 206, the operation system 207, and the power supply system 208 are connected to each other via a bus line 209.
- the optical system 201 takes in incident light (image light) from a subject and forms an image on the imaging surface of the imaging unit 202.
- the imaging unit 202 converts the amount of incident light imaged on the imaging surface by the optical system 201 into an electrical signal for each pixel and outputs the electrical signal as a pixel signal.
- As the imaging unit 202, the CMOS image sensor 10 according to any of the above-described embodiments can be used.
- The DSP circuit 203 performs general camera signal processing such as white balance processing, demosaic processing, and gamma correction processing.
- In the present disclosure, the DSP circuit 203 performs the processing of the image processing unit 103 prior to the general camera signal processing described above.
- the frame memory 204 is used for storing data as appropriate during the signal processing in the DSP circuit 203.
- the display device 205 includes a panel display device such as a liquid crystal display device or an organic EL (electroluminescence) display device, and displays a moving image or a still image captured by the imaging unit 202.
- The recording device 206 records the moving images or still images captured by the imaging unit 202 on a recording medium such as a portable semiconductor memory, an optical disc, or an HDD (Hard Disk Drive).
- The operation system 207 issues operation commands for the various functions of the electronic device 200 in response to user operations.
- The power supply system 208 supplies, as appropriate, the operating power for the DSP circuit 203, the frame memory 204, the display device 205, the recording device 206, and the operation system 207.
- By using the CMOS image sensor 10 or the imaging device 100 according to the above embodiments as the imaging unit 202, it is possible to capture a scene with a wide dynamic range without reducing the spatial resolution.
- [1] A solid-state imaging device comprising: a pixel array unit in which unit pixels each including a photoelectric conversion element are arranged in a matrix, the unit pixels being grouped into a plurality of pixel groups; and a timing control unit that independently sets an exposure start timing and an exposure end timing for each of the plurality of pixel groups so that at least one of the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period.
- [2] The solid-state imaging device according to [1] above, in which, for at least two of the plurality of exposures, the timing control unit independently sets the exposure start timing and the exposure end timing for each of the plurality of pixel groups so that the exposure end timing of the unit pixels in the first-read row of the exposure image whose exposure started later comes earlier than the exposure end timing of the unit pixels in the last-read row of the exposure image whose exposure started first.
- [3] The solid-state imaging device according to [2] above, in which the timing control unit performs control to output the image data of a plurality of exposure images whose exposure ends overlap in time, row by row in a time-division format.
- [4] The solid-state imaging device according to any one of [1] to [3] above, comprising an image processing unit that generates an image with a wide dynamic range based on the image data of the exposure images of the plurality of pixel groups.
- [5] The solid-state imaging device according to [4] above, in which the image processing unit includes a row sequence conversion unit that holds, row by row in a storage unit, the pixel value data of the exposure images that are not last in exposure order within one vertical synchronization period and, in accordance with the output timing of the pixel value data of each row of the exposure image that is last in exposure order within one vertical synchronization period, reads the pixel value data of the same row from the storage unit and outputs the rows of the plurality of exposure images aligned with one another.
- [6] The solid-state imaging device according to [5] above, in which the image processing unit includes a first image synthesis unit that synthesizes, across the plurality of exposure images, the row-aligned pixel value data output from the row sequence conversion unit and outputs the result.
- [7] The solid-state imaging device according to [4] above, in which the image processing unit includes a second image synthesis unit that holds, in a storage unit, the pixel value data of the exposure image that is first in exposure order within one vertical synchronization period; for the exposure images other than the first, calculates, for each pixel, a composite pixel value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit and holds the result in the storage unit; and outputs the final synthesis result when the synthesis of the pixel value data is completed for the exposure image that is last in exposure order within one vertical synchronization period.
- [8] The solid-state imaging device according to [7] above, in which, in each pixel group, the second image synthesis unit holds in the storage unit the pixel value data of the exposure image that is first in exposure order within one vertical synchronization period and, for the exposure images other than the first, calculates for each pixel a composite pixel value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit, and holds the result in the storage unit.
- [9] The solid-state imaging device according to [7] or [8] above, in which the image processing unit includes an inter-pixel-group synthesis unit that calculates a composite value between the pixel groups by referring to the final synthesis result of each pixel group obtained by the second image synthesis unit synthesizing the exposure images up to the last in exposure order within one vertical synchronization period.
- [10] The solid-state imaging device according to [7] above, in which the image processing unit includes a third image synthesis unit that, among the exposure images of all the pixel groups, holds in the storage unit the pixel value data of the exposure image that is first in exposure order within one vertical synchronization period; for the exposure images other than the first, calculates for each pixel a composite pixel value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit and holds the result in the storage unit; and outputs the final synthesis result when the synthesis of the pixel value data is completed for the exposure image that is last in exposure order within one vertical synchronization period.
- [11] The solid-state imaging device according to [10] above, in which the image processing unit includes a pixel value interpolation unit that interpolates pixel values at all pixel positions based on the pixel values read out for each pixel group, and the third image synthesis unit, based on the output of the pixel value interpolation unit, holds in the storage unit the pixel value data of the exposure image that is first in exposure order within one vertical synchronization period among all the exposure images of all the pixel groups and, for the exposure images other than the first, calculates for each pixel a composite pixel value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit, and holds the result in the storage unit.
- [12] The solid-state imaging device according to any one of [7] to [10] above, in which the image processing unit includes: a moving subject determination unit that determines whether or not the subject is moving at each pixel position; and a composition ratio determination unit that determines, based on the determination result of the moving subject determination unit, the composition ratio used when calculating a composite pixel value.
- [13] The solid-state imaging device according to [12] above, in which the moving subject determination unit estimates whether or not the subject is moving at each pixel position by referring to the pixel values of the exposure image and the pixel values in the storage unit.
- [14] The solid-state imaging device according to [12] or [13] above, in which the image processing unit includes a composition ratio history storage unit that stores the history of the composition ratios determined by the composition ratio determination unit, and the composition ratio determination unit determines the composition ratio by referring to the composition ratio history stored in the composition ratio history storage unit.
- [15] The solid-state imaging device according to any one of [12] to [14] above, in which the composition ratio determination unit determines the composition ratio by referring to the exposure time of each exposure image.
- [16] The solid-state imaging device according to any one of [12] to [15] above, in which the composition ratio determination unit determines the composition ratio by referring to the pixel value level of each exposure image.
- [17] The solid-state imaging device according to any one of [1] to [16] above, comprising an A/D converter that digitizes, for each pixel column, the analog pixel signal output from each unit pixel of the pixel array unit.
- [18] An imaging apparatus having a solid-state imaging device comprising: a pixel array unit in which unit pixels each including a photoelectric conversion element are arranged in a matrix, the unit pixels being grouped into a plurality of pixel groups; and a timing control unit that independently sets an exposure start timing and an exposure end timing for each of the plurality of pixel groups so that at least one of the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period.
- [19] The imaging apparatus according to [18] above, in which, when capturing a luminous object, the timing control unit sets the exposure start time and the exposure time length of each of a plurality of exposure images obtained by a plurality of exposures in consideration of the blinking cycle of the luminous object.
- [20] An electronic apparatus having a solid-state imaging device comprising: a pixel array unit in which unit pixels each including a photoelectric conversion element are arranged in a matrix, the unit pixels being grouped into a plurality of pixel groups; and a timing control unit that independently sets an exposure start timing and an exposure end timing for each of the plurality of pixel groups so that at least one of the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period.
- Reference signs: 10 … CMOS image sensor, 11 … pixel array unit, 12 … row scanning unit, 13 … column processing unit, 14 … column scanning unit, 15 … horizontal output line, 16 … timing control unit, 20 … unit pixel, 21 … photodiode (PD), 22 … transfer transistor, 23 … reset transistor, 24 … amplification transistor, 25 … select transistor, 26 … FD unit (charge detection unit), 30 … semiconductor substrate, 31 (31_1 to 31_m) … row control line, 32 (32_1 to 32_n).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Power Engineering (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Condensed Matter Physics & Semiconductors (AREA)
- Computer Hardware Design (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
Abstract
Description
The solid-state imaging device comprises: a pixel array unit in which unit pixels each including a photoelectric conversion element are arranged in a matrix, the unit pixels being grouped into a plurality of pixel groups; and a timing control unit that independently sets an exposure start timing and an exposure end timing for each of the plurality of pixel groups so that at least one of the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period.
1. General description of the solid-state imaging device, imaging apparatus, and electronic apparatus of the present disclosure
2. Solid-state imaging device of the present disclosure (example of a CMOS image sensor)
2-1. System configuration
2-2. Circuit configuration of the unit pixel
2-3. Grouping of the unit pixels
2-4. Multiple exposures within one vertical synchronization period
2-5. Examples of exposure sequences
2-6. Example in which a plurality of A/D converters are arranged for one pixel column
2-7. Modifications
3. Imaging apparatus of the present disclosure
3-1. System configuration
3-2. Configuration of the image processing unit
3-2-1. Example 1 (example of synchronization (demosaicing) processing)
3-2-2. Example 2 (example of sequential HDR synthesis per pixel group)
3-2-3. Example 3 (example of sequential HDR synthesis across pixel groups)
3-2-4. Example 4 (example of a countermeasure against blackout due to the flicker phenomenon)
4. Electronic apparatus of the present disclosure
In the solid-state imaging device, imaging apparatus, and electronic apparatus of the present disclosure, the timing control unit can be configured so that, for at least two of the plurality of exposures, it independently sets the exposure start timing and the exposure end timing for each of the plurality of pixel groups such that the exposure end timing of the unit pixels in the first-read row of the exposure image whose exposure started later comes earlier than the exposure end timing of the unit pixels in the last-read row of the exposure image whose exposure started first.
First, an outline of the configuration of the solid-state imaging device of the present disclosure will be described. Here, a CMOS image sensor is taken as an example of the solid-state imaging device of the present disclosure.

FIG. 1 is a system configuration diagram showing an outline of the configuration of the CMOS image sensor of the present disclosure. As shown in FIG. 1, the CMOS image sensor 10 has a pixel array unit 11 together with a peripheral drive system and signal processing system. In this example, a row scanning unit 12, a column processing unit 13, a column scanning unit 14, a horizontal output line 15, and a timing control unit 16 are provided as the peripheral drive system and signal processing system. These drive and signal processing systems are integrated on the same semiconductor substrate (chip) 30 as the pixel array unit 11.

FIG. 2 is a circuit diagram showing an example of the circuit configuration of the unit pixel 20. As shown in FIG. 2, the unit pixel 20 according to this example has, for example, a photodiode (PD) 21 as a photoelectric conversion element. In addition to the photodiode 21, the unit pixel 20 has, for example, a transfer transistor 22, a reset transistor 23, an amplification transistor 24, and a select transistor 25.

In the CMOS image sensor 10 in which unit pixels 20 of the above configuration are arranged in a matrix, the present embodiment groups the unit pixels 20 into a plurality of pixel groups in order to make it possible to capture a scene with a wide dynamic range without saturation (blown-out highlights) and without crushed shadows. Here, as an example, the pixels 20 of the pixel array unit 11 are divided into two (two kinds of) pixel groups A and B. The method of setting the pixel groups is arbitrary. A first example of the pixel grouping is shown in FIG. 3A, and a second example in FIG. 3B. In FIGS. 3A and 3B, pixels bearing the same letter (A, B) belong to the same pixel group.

As described above, in the CMOS image sensor 10 in which the pixels 20 of the pixel array unit 11 are grouped into a plurality of pixel groups (in this example, the two pixel groups A and B), the present embodiment is characterized in that the timing control unit 16 (see FIG. 1) performs the following control: the timing control unit 16 independently sets an exposure start timing and an exposure end timing for each of the plurality of pixel groups so that at least one of the pixel groups is exposed a plurality of times within one vertical synchronization period.

Here, an example of the exposure sequence according to the present embodiment executed under the control of the timing control unit 16 (hereinafter referred to as "exposure sequence 1") will be described with reference to FIG. 4. FIG. 4 is a diagram showing this exposure sequence 1.
Next, a CMOS image sensor 10 will be described in which a plurality of A/D converters 40 are arranged for one pixel column and used in parallel to realize a plurality of exposures in a plurality of pixel groups. Here, as described above, the two kinds of pixel groups A and B are set for the unit pixels 20 of the pixel array unit 11, and the case where the pixel group A is exposed three times within one vertical synchronization period is taken as an example. The pixel grouping of the first example shown in FIG. 3A is assumed.

In the above embodiment, the CMOS image sensor 10 has been exemplified as having the planar structure shown in FIG. 1, but it may instead have a stacked structure. Here, "planar structure" refers to a structure in which, as shown in FIG. 1, the peripheral circuitry of the pixel array unit 11 — that is, the drive units that drive the pixels 20 of the pixel array unit 11 and the signal processing units including the A/D converters 40 and the like — is arranged on the same semiconductor substrate 30 as the pixel array unit 11.

The CMOS image sensor 10 according to the above embodiment can be used as the imaging unit of imaging apparatuses such as digital still cameras, video cameras, surveillance cameras, and in-vehicle cameras. Since the CMOS image sensor 10 according to the present embodiment makes it possible to capture a scene with a wide dynamic range without saturation and without crushed shadows, it is particularly suitable for imaging apparatuses such as surveillance cameras and in-vehicle cameras.

FIG. 16 is a block diagram showing a configuration example of the imaging apparatus of the present disclosure. As shown in FIG. 16, the imaging apparatus 100 of the present disclosure has an optical lens 101, an imaging unit 102, an image processing unit 103, a camera signal processing unit 104, a control unit 105, and a timing control unit 106.
FIG. 17 is a block diagram showing a configuration example of the image processing unit 103A according to Example 1. As shown in FIG. 17, the image processing unit 103A according to Example 1 has a row sequence conversion unit 1031, a storage unit 1032, and an HDR synthesis unit 1033 as a first image synthesis unit.

The image processing in Example 1 corresponds to exposure sequence 1 (see FIG. 4), in which the pixel group A is exposed three times and the pixel group B once within one vertical synchronization period. By contrast, the image processing in Example 2 corresponds to an exposure sequence (hereinafter "exposure sequence 2"), shown in FIG. 20, in which both the pixel group A and the pixel group B are exposed a plurality of times, for example three times each, within one vertical synchronization period.

FIG. 23 is a block diagram showing a configuration example of the image processing unit according to Example 3. As shown in FIG. 23, the image processing unit 103C according to Example 3 includes a sequential HDR synthesis unit 1039 as a third image synthesis unit in place of the sequential HDR synthesis unit 1034 of Example 2. The image processing unit 103C according to Example 3 further includes a pixel value interpolation unit 1040; the remaining configuration is the same as that of the image processing unit 103B according to Example 2.

In a traffic light that uses LEDs (light-emitting diodes) as its light source, or an electric signboard that uses fluorescent lamps, the light source blinks with the fluctuation of the AC power supply; depending on the relationship between the blinking cycle and the frame rate of the camera, a lit traffic light or signboard may appear to be extinguished or to flicker. This is called the flicker phenomenon. Under the flicker phenomenon, the luminous object is perceived as having disappeared (so-called blackout) or as flickering. Such a phenomenon is a problem particularly for in-vehicle cameras used for recognition.
The phase xS_i of the exposure start time S_i within the blink period T is

xS_i = S_i - floor(S_i / T) * T,

where floor() is the floor function. Consider which phase within the blink period the exposure period of each exposure image corresponds to. Denoting by M_i the set of phases occupied by the i-th exposure period (of length t_i) within the blink period:

if (t_i > T): M_i = {p | 0 < p <= T}
else if (xS_i + t_i > T): M_i = {p | 0 < p <= xS_i + t_i - T, or xS_i <= p <= T}
else: M_i = {p | xS_i < p <= xS_i + t_i},

where p is an element representing a time contained in the set M_i. Defining

M_all = M_1 ∪ M_2 ∪ … ∪ M_N,

if each exposure timing is set based on exposure start times S_i that maximize the range of M_all, an exposure that readily captures the lighting period of the flickering source can be realized.
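The union M_all can be evaluated numerically: reduce each exposure start time to its phase xS_i within the blink period, mark the portion of the period each exposure covers, and measure the covered fraction. The function names and the grid resolution below are illustrative assumptions; maximizing the returned fraction over candidate start times S_i corresponds to maximizing the range of M_all.

```python
import math

def phase(s_i, t_period):
    """xS_i = S_i - floor(S_i / T) * T: phase of the exposure start within T."""
    return s_i - math.floor(s_i / t_period) * t_period

def covered_fraction(starts, lengths, t_period, steps=1000):
    """Approximate the fraction of the blink period T covered by M_all."""
    covered = [False] * steps
    for s_i, t_i in zip(starts, lengths):
        if t_i >= t_period:  # the exposure spans the whole blink period
            return 1.0
        x = phase(s_i, t_period)
        for k in range(steps):
            p = (k + 0.5) * t_period / steps
            # p lies in M_i when it falls within (xS_i, xS_i + t_i],
            # taken modulo T (this covers both the wrap-around and the
            # non-wrapping branches of the definition of M_i)
            if 0.0 < (p - x) % t_period <= t_i:
                covered[k] = True
    return sum(covered) / steps
```

For example, two exposures of length T/2 starting at phases 0 and T/2 cover the whole period, while a single exposure of length T/2 covers only half of it.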
The CMOS image sensor 10 according to the above-described embodiments can be used not only in imaging apparatuses such as digital still cameras, video cameras, surveillance cameras, and in-vehicle cameras, but also as the imaging unit of electronic devices in general that have an imaging function, such as mobile phones and smartphones.
Claims (20)
- A solid-state imaging device comprising: a pixel array unit in which unit pixels each including a photoelectric conversion element are arranged in a matrix, the unit pixels being grouped into a plurality of pixel groups; and a timing control unit that independently sets an exposure start timing and an exposure end timing for each of the plurality of pixel groups so that at least one of the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period.
- The solid-state imaging device according to claim 1, wherein, for at least two of the plurality of exposures, the timing control unit independently sets the exposure start timing and the exposure end timing for each of the plurality of pixel groups so that the exposure end timing of the unit pixels in the first-read row of the exposure image whose exposure started later comes earlier than the exposure end timing of the unit pixels in the last-read row of the exposure image whose exposure started first.
- The solid-state imaging device according to claim 2, wherein the timing control unit performs control to output the image data of a plurality of exposure images whose exposure ends overlap in time, row by row in a time-division format.
- The solid-state imaging device according to claim 1, comprising an image processing unit that generates an image with a wide dynamic range based on the image data of the exposure images of the plurality of pixel groups.
- The solid-state imaging device according to claim 4, wherein the image processing unit includes a row sequence conversion unit that holds, row by row in a storage unit, the pixel value data of the exposure images that are not last in exposure order within one vertical synchronization period and, in accordance with the output timing of the pixel value data of each row of the exposure image that is last in exposure order within one vertical synchronization period, reads the pixel value data of the same row from the storage unit and outputs the rows of the plurality of exposure images aligned with one another.
- The solid-state imaging device according to claim 5, wherein the image processing unit includes a first image synthesis unit that synthesizes, across the plurality of exposure images, the row-aligned pixel value data output from the row sequence conversion unit and outputs the result.
- The solid-state imaging device according to claim 4, wherein the image processing unit includes a second image synthesis unit that holds, in a storage unit, the pixel value data of the exposure image that is first in exposure order within one vertical synchronization period; for the exposure images other than the first, calculates, for each pixel, a composite pixel value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit and holds the result in the storage unit; and outputs the final synthesis result when the synthesis of the pixel value data is completed for the exposure image that is last in exposure order within one vertical synchronization period.
- The solid-state imaging device according to claim 7, wherein, in each pixel group, the second image synthesis unit holds in the storage unit the pixel value data of the exposure image that is first in exposure order within one vertical synchronization period and, for the exposure images other than the first, calculates for each pixel a composite pixel value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit, and holds the result in the storage unit.
- The solid-state imaging device according to claim 7, wherein the image processing unit includes an inter-pixel-group synthesis unit that calculates a composite value between the pixel groups by referring to the final synthesis result of each pixel group obtained by the second image synthesis unit synthesizing the exposure images up to the last in exposure order within one vertical synchronization period.
- The solid-state imaging device according to claim 7, wherein the image processing unit includes a third image synthesis unit that, among the exposure images of all the pixel groups, holds in the storage unit the pixel value data of the exposure image that is first in exposure order within one vertical synchronization period; for the exposure images other than the first, calculates for each pixel a composite pixel value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit and holds the result in the storage unit; and outputs the final synthesis result when the synthesis of the pixel value data is completed for the exposure image that is last in exposure order within one vertical synchronization period.
- The solid-state imaging device according to claim 10, wherein the image processing unit includes a pixel value interpolation unit that interpolates pixel values at all pixel positions based on the pixel values read out for each pixel group, and the third image synthesis unit, based on the output of the pixel value interpolation unit, holds in the storage unit the pixel value data of the exposure image that is first in exposure order within one vertical synchronization period among all the exposure images of all the pixel groups and, for the exposure images other than the first, calculates for each pixel a composite pixel value by referring to the pixel value data corresponding to its own pixel position and the peripheral pixel positions already held in the storage unit, and holds the result in the storage unit.
- The solid-state imaging device according to claim 7, wherein the image processing unit includes: a moving subject determination unit that determines whether or not the subject is moving at each pixel position; and a composition ratio determination unit that determines, based on the determination result of the moving subject determination unit, the composition ratio used when calculating a composite pixel value.
- The solid-state imaging device according to claim 12, wherein the moving subject determination unit estimates whether or not the subject is moving at each pixel position by referring to the pixel values of the exposure image and the pixel values in the storage unit.
- The solid-state imaging device according to claim 12, wherein the image processing unit includes a composition ratio history storage unit that stores the history of the composition ratios determined by the composition ratio determination unit, and the composition ratio determination unit determines the composition ratio by referring to the composition ratio history stored in the composition ratio history storage unit.
- The solid-state imaging device according to claim 12, wherein the composition ratio determination unit determines the composition ratio by referring to the exposure time of each exposure image.
- The solid-state imaging device according to claim 12, wherein the composition ratio determination unit determines the composition ratio by referring to the pixel value level of each exposure image.
- The solid-state imaging device according to claim 1, comprising an A/D converter that digitizes, for each pixel column, the analog pixel signal output from each unit pixel of the pixel array unit.
- An imaging apparatus having a solid-state imaging device comprising: a pixel array unit in which unit pixels each including a photoelectric conversion element are arranged in a matrix, the unit pixels being grouped into a plurality of pixel groups; and a timing control unit that independently sets an exposure start timing and an exposure end timing for each of the plurality of pixel groups so that at least one of the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period.
- The imaging apparatus according to claim 18, wherein, when capturing a luminous object, the timing control unit sets the exposure start time and the exposure time length of each of a plurality of exposure images obtained by a plurality of exposures in consideration of the blinking cycle of the luminous object.
- An electronic apparatus having a solid-state imaging device comprising: a pixel array unit in which unit pixels each including a photoelectric conversion element are arranged in a matrix, the unit pixels being grouped into a plurality of pixel groups; and a timing control unit that independently sets an exposure start timing and an exposure end timing for each of the plurality of pixel groups so that at least one of the plurality of pixel groups is exposed a plurality of times within one vertical synchronization period.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201680063615.6A CN108293098B (zh) | 2015-11-05 | 2016-09-20 | 固态成像器件、成像装置和电子设备 |
US15/771,792 US10541259B2 (en) | 2015-11-05 | 2016-09-20 | Solid-state imaging device, imaging device, and electronic apparatus |
JP2017548667A JPWO2017077775A1 (ja) | 2015-11-05 | 2016-09-20 | 固体撮像素子、撮像装置、及び、電子機器 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-217243 | 2015-11-05 | ||
JP2015217243 | 2015-11-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017077775A1 true WO2017077775A1 (ja) | 2017-05-11 |
Family
ID=58661813
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/077665 WO2017077775A1 (ja) | 2015-11-05 | 2016-09-20 | 固体撮像素子、撮像装置、及び、電子機器 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10541259B2 (ja) |
JP (1) | JPWO2017077775A1 (ja) |
CN (1) | CN108293098B (ja) |
WO (1) | WO2017077775A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110603456A (zh) * | 2017-07-11 | 2019-12-20 | 索尼半导体解决方案公司 | 测距装置和移动设备 |
EP3734964A4 (en) * | 2017-12-25 | 2021-08-18 | Canon Kabushiki Kaisha | IMAGING DEVICE, IMAGING SYSTEM, AND METHOD OF CONTROLLING THE IMAGING DEVICE |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016117034A1 (ja) * | 2015-01-20 | 2016-07-28 | オリンパス株式会社 | 画像処理装置、画像処理方法およびプログラム |
JP6604297B2 (ja) * | 2016-10-03 | 2019-11-13 | 株式会社デンソー | 撮影装置 |
CN111193882A (zh) * | 2018-11-15 | 2020-05-22 | 格科微电子(上海)有限公司 | 单帧高动态范围图像传感器的实现方法 |
KR20210002966A (ko) * | 2019-07-01 | 2021-01-11 | 삼성전자주식회사 | 이미지 센서 및 그것의 구동 방법 |
CN112543261A (zh) * | 2020-12-08 | 2021-03-23 | 浙江大华技术股份有限公司 | 一种图像质量提升方法、装置以及计算机可读存储介质 |
CN114363524B (zh) * | 2022-01-24 | 2022-09-20 | 北京显芯科技有限公司 | 背光控制方法、装置、设备及存储介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011244309A (ja) * | 2010-05-20 | 2011-12-01 | Sony Corp | 画像処理装置、画像処理方法及びプログラム |
WO2012042967A1 (ja) * | 2010-09-27 | 2012-04-05 | 富士フイルム株式会社 | 撮像装置及び撮像方法 |
JP2014154982A (ja) * | 2013-02-06 | 2014-08-25 | Canon Inc | 撮像装置およびその制御方法 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4482781B2 (ja) | 2000-10-06 | 2010-06-16 | ソニー株式会社 | 撮像装置および方法 |
CN101057493B (zh) | 2004-11-02 | 2011-05-25 | 松下电器产业株式会社 | 图像传感器 |
JP2007124174A (ja) * | 2005-10-27 | 2007-05-17 | Fujifilm Corp | 固体撮像装置および固体撮像素子の駆動制御方法 |
JP4661922B2 (ja) * | 2008-09-03 | 2011-03-30 | ソニー株式会社 | 画像処理装置、撮像装置、固体撮像素子、画像処理方法およびプログラム |
JP5156148B2 (ja) * | 2010-07-28 | 2013-03-06 | 富士フイルム株式会社 | 撮像装置及び撮像方法 |
-
2016
- 2016-09-20 WO PCT/JP2016/077665 patent/WO2017077775A1/ja active Application Filing
- 2016-09-20 JP JP2017548667A patent/JPWO2017077775A1/ja not_active Abandoned
- 2016-09-20 US US15/771,792 patent/US10541259B2/en active Active
- 2016-09-20 CN CN201680063615.6A patent/CN108293098B/zh active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011244309A (ja) * | 2010-05-20 | 2011-12-01 | Sony Corp | 画像処理装置、画像処理方法及びプログラム |
WO2012042967A1 (ja) * | 2010-09-27 | 2012-04-05 | 富士フイルム株式会社 | 撮像装置及び撮像方法 |
JP2014154982A (ja) * | 2013-02-06 | 2014-08-25 | Canon Inc | 撮像装置およびその制御方法 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110603456A (zh) * | 2017-07-11 | 2019-12-20 | 索尼半导体解决方案公司 | 测距装置和移动设备 |
EP3654055A4 (en) * | 2017-07-11 | 2020-06-24 | Sony Semiconductor Solutions Corporation | DISTANCE MEASURING DEVICE AND MOBILE BODY APPARATUS |
CN110603456B (zh) * | 2017-07-11 | 2023-12-01 | 索尼半导体解决方案公司 | 测距装置和包括测距装置的移动装置 |
EP3734964A4 (en) * | 2017-12-25 | 2021-08-18 | Canon Kabushiki Kaisha | IMAGING DEVICE, IMAGING SYSTEM, AND METHOD OF CONTROLLING THE IMAGING DEVICE |
US11284023B2 (en) | 2017-12-25 | 2022-03-22 | Canon Kabushiki Kaisha | Imaging apparatus, imaging system, and drive method of imaging apparatus |
Also Published As
Publication number | Publication date |
---|---|
JPWO2017077775A1 (ja) | 2018-08-23 |
CN108293098B (zh) | 2021-01-19 |
US20180323225A1 (en) | 2018-11-08 |
CN108293098A (zh) | 2018-07-17 |
US10541259B2 (en) | 2020-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017077775A1 (ja) | Solid-state imaging element, imaging apparatus, and electronic device | |
US9615033B2 (en) | Image sensor with transfer gate control signal lines | |
US10110827B2 (en) | Imaging apparatus, signal processing method, and program | |
CN102984472B (zh) | Solid-state imaging device and camera system | |
US8520103B2 (en) | Solid-state imaging device, signal processing method thereof and image capturing apparatus | |
JP4747781B2 (ja) | Imaging apparatus | |
US9288399B2 (en) | Image processing apparatus, image processing method, and program | |
KR100361945B1 (ko) | 고체 이미징 장치 | |
CN106847845B (zh) | Imaging apparatus, imaging system, and driving method of imaging apparatus | |
JP5645505B2 (ja) | Imaging apparatus and control method therefor | |
US9674469B2 (en) | Solid-state imaging device, method of driving the same, and electronic apparatus | |
JP2011244309A (ja) | Image processing apparatus, image processing method, and program | |
JP2008054136A (ja) | Imaging apparatus and drive control method | |
WO2019208412A1 (ja) | Imaging apparatus and method for driving imaging apparatus | |
JP2016119652A (ja) | Imaging apparatus and method for driving imaging element | |
JP2004282552A (ja) | Solid-state imaging element and solid-state imaging apparatus | |
US20220321759A1 (en) | Solid-state imaging device, method for driving solid-state imaging device, and electronic apparatus | |
US20150036033A1 (en) | Solid-state imaging device | |
US11412168B2 (en) | Imaging element and method of controlling the same, and imaging device | |
JP5884847B2 (ja) | Solid-state imaging device, signal processing method for solid-state imaging device, and imaging apparatus | |
WO2019176309A1 (ja) | Image processing apparatus, image processing method, and image processing system | |
JP3970069B2 (ja) | Imaging apparatus | |
JP2006033885A (ja) | Imaging apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16861844; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2017548667; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 15771792; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16861844; Country of ref document: EP; Kind code of ref document: A1 |