WO2018142707A1 - Imaging system and imaging device - Google Patents

Imaging system and imaging device

Info

Publication number
WO2018142707A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
period
pixels
test
imaging device
Prior art date
Application number
PCT/JP2017/040155
Other languages
French (fr)
Japanese (ja)
Inventor
直樹 河津
敦史 鈴木
純一郎 薊
裕一 本橋
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2017206335A (JP6953274B2)
Application filed by Sony Semiconductor Solutions Corporation
Priority to DE112017006977.7T (DE112017006977T5)
Priority to CN201780084589.XA (CN110226325B)
Priority to US16/471,406 (US10819928B2)
Publication of WO2018142707A1

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
            • H04N 17/002 Diagnosis, testing or measuring for television systems or their details, for television cameras
          • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
            • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
            • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
              • H04N 25/67 Noise processing applied to fixed-pattern noise, e.g. non-uniformity of response
              • H04N 25/68 Noise processing applied to defects
            • H04N 25/70 SSIS architectures; Circuits associated therewith
              • H04N 25/71 Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
                • H04N 25/74 Circuitry for scanning or addressing the pixel array
                • H04N 25/75 Circuitry for providing, modifying or processing image signals from the pixel array
              • H04N 25/76 Addressed sensors, e.g. MOS or CMOS sensors
                • H04N 25/77 Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
                  • H04N 25/772 Pixel circuitry comprising A/D, V/T, V/F, I/T or I/F converters
                • H04N 25/78 Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
              • H04N 25/79 Arrangements of circuitry being divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
    • B PERFORMING OPERATIONS; TRANSPORTING
      • B60 VEHICLES IN GENERAL
        • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
          • B60R 2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
            • B60R 2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing

Definitions

  • the present disclosure relates to an imaging system and an imaging apparatus.
  • Amplification-type solid-state imaging devices, represented by MOS image sensors such as CMOS (Complementary Metal Oxide Semiconductor) sensors, and CCD (Charge Coupled Device) image sensors are known.
  • In such devices, a unit pixel is formed by a photoelectric conversion element (for example, a photodiode) and a plurality of pixel transistors, and the device has a pixel array (pixel region) in which the plurality of unit pixels are arranged in a two-dimensional array, and a peripheral circuit region.
  • The plurality of pixel transistors are MOS transistors, and consist of three transistors (a transfer transistor, a reset transistor, and an amplification transistor) or of four transistors that additionally include a selection transistor.
  • Patent Document 1 discloses an example of a mechanism for detecting a failure of a solid-state imaging device using a failure detection circuit.
  • In Patent Document 1, however, the various tests using the failure detection circuit are performed when the image detection chip is powered on or when a signal is received from an external inspection device, so it is difficult, for example, to detect at run time a failure that occurs during imaging.
  • the present disclosure proposes an imaging system and an imaging apparatus capable of more efficiently executing various tests for detecting an abnormality.
  • an imaging device that is mounted on a vehicle and images a peripheral region of the vehicle to generate an image
  • a processing device that is mounted on the vehicle and executes processing related to a function of controlling the vehicle
  • The imaging device includes a plurality of pixels, a control unit that controls exposure by each of the plurality of pixels, and a processing unit that executes a predetermined test.
  • The control unit controls the exposure so that readout of pixel signals is started in a second period, in which one or more exposures are executed, after readout of pixel signals is completed in a first period, in which one or more exposures are performed on at least some of the plurality of pixels, and the processing unit executes the predetermined test in a third period between the readout of the pixel signals in the first period and the readout of the pixel signals in the second period.
  • The processing device limits the function of controlling the vehicle based on the result of the predetermined test; such an imaging system is provided.
  • According to the present disclosure, there is also provided an imaging device including a plurality of pixels, a control unit that controls exposure by each of the plurality of pixels, and a processing unit that executes a predetermined test.
  • In this imaging device, the control unit controls the exposure so that readout of pixel signals is started in a second period, in which one or more exposures are executed, after readout of pixel signals is completed in a first period, in which one or more exposures are performed on at least some of the plurality of pixels, and the processing unit executes the predetermined test in a third period between the readout of the pixel signals in the first period and the readout of the pixel signals in the second period.
  • According to the present disclosure, there is further provided an imaging apparatus including a plurality of pixels, a control unit that controls exposure by each of the plurality of pixels, and a processing unit that executes a predetermined test in a third period after readout of pixel signals is completed in a first period, in which one or more exposures by at least some of the plurality of pixels are executed, and before readout of pixel signals is started in a subsequent second period.
  • As described above, the present disclosure provides an imaging system and an imaging apparatus capable of more efficiently executing various tests for detecting an abnormality.
  • The accompanying drawings include block diagrams illustrating examples of the functional configuration of a solid-state imaging device according to an embodiment of the present disclosure, a diagram illustrating another example of the configuration of the solid-state imaging device, and a diagram illustrating an example of the circuit configuration of a unit pixel according to the embodiment.
  • They further include schematic timing charts illustrating examples of drive control of the solid-state imaging device according to the embodiment.
  • For the first embodiment of the present disclosure, the drawings include block diagrams illustrating examples of the schematic configuration of the solid-state imaging device, schematic timing charts and explanatory diagrams illustrating examples of its drive control, and explanatory diagrams for describing examples of operations related to pixel signal correction.
  • For a modification of the embodiment, the drawings include a diagram illustrating an example of the circuit configuration of the unit pixel, a schematic timing chart illustrating an example of drive control, and corresponding explanatory diagrams.
  • The drawings also include explanatory diagrams for describing an example of schematic control related to readout of pixel signals from each pixel, a timing chart for explaining the relationship between an exposure time constraint and a vertical blank period, and explanatory diagrams for describing the hardware configuration of a front camera ECU and an image pickup element.
  • FIG. 1 illustrates a schematic configuration of a CMOS solid-state imaging device as an example of a configuration of a solid-state imaging device according to an embodiment of the present disclosure. This CMOS solid-state imaging device is applied to the solid-state imaging device of each embodiment.
  • The solid-state imaging device 1 of the present example includes a pixel array unit 3, an address decoder 4, a pixel timing drive circuit 5, a column signal processing circuit 6, a sensor controller 7, and an analog potential generation circuit 8.
  • Each pixel 2 is connected to the pixel timing drive circuit 5 via a horizontal signal line, and to the column signal processing circuit 6 via a vertical signal line VSL.
  • The plurality of pixels 2 each output a pixel signal corresponding to the amount of light incident through an optical system (not shown), and an image of the subject formed on the pixel array unit 3 is constructed from these pixel signals.
  • the pixel 2 includes, for example, a photodiode serving as a photoelectric conversion unit and a plurality of pixel transistors (so-called MOS transistors).
  • the plurality of pixel transistors can be constituted by three transistors, for example, a transfer transistor, a reset transistor, and an amplification transistor.
  • A selection transistor may be added so that the pixel is configured with four transistors.
  • An example of an equivalent circuit of the unit pixel will be described later separately.
  • The pixel 2 can be configured as one unit pixel. Alternatively, the pixel 2 may have a shared pixel structure, which includes a plurality of photodiodes, a plurality of transfer transistors, one shared floating diffusion, and one shared set of the other pixel transistors. That is, in a shared pixel, the photodiodes and transfer transistors that constitute a plurality of unit pixels share the other pixel transistors.
  • a dummy pixel 2a that does not contribute to display may be arranged in a part of the pixel array unit 3 (for example, a non-display area).
  • The dummy pixel 2a is used for acquiring various information related to the solid-state imaging device 1. For example, a voltage corresponding to white luminance is applied to the dummy pixel 2a during a period in which the pixels 2 contributing to display are driven. At this time, for example, the current flowing in the dummy pixel 2a is converted into a voltage, and the voltage obtained by this conversion is measured, so that deterioration of the pixels 2 contributing to display can be predicted. That is, the dummy pixel 2a can serve as a sensor that detects the electrical characteristics of the solid-state imaging device 1.
  • The address decoder 4 controls vertical access to the pixel array unit 3, and the pixel timing drive circuit 5 drives the pixels 2 according to the logical sum of the control signal from the address decoder 4 and the pixel drive pulse.
  • The column signal processing circuit 6 performs CDS (Correlated Double Sampling) processing on the pixel signals output from the plurality of pixels 2 through the vertical signal lines VSL, thereby performing AD conversion of the pixel signals and removing reset noise.
  • the column signal processing circuit 6 includes a plurality of AD converters corresponding to the number of columns of the pixels 2, and can perform CDS processing in parallel for each column of the pixels 2.
  • The column signal processing circuit 6 also includes a constant current circuit forming the load MOS portion of the source follower circuit, and a single-slope DA converter used for analog-to-digital conversion of the potential of the vertical signal line VSL.
  • the sensor controller 7 controls the overall driving of the solid-state imaging device 1. For example, the sensor controller 7 generates a clock signal according to the driving cycle of each block constituting the solid-state imaging device 1 and supplies the clock signal to each block.
  • The analog potential generation circuit 8 generates an analog potential for driving the dummy pixel 2a in a desired manner in order to acquire various information related to the solid-state imaging device 1. For example, when the pixel timing drive circuit 5 drives the dummy pixel 2a based on the analog potential generated by the analog potential generation circuit 8, various information regarding the solid-state imaging device 1 is acquired based on the output signal from the dummy pixel 2a.
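  • As a rough, hypothetical sketch of the kind of dummy-pixel measurement described above (the resistor value, reference reading, and drift limit below are assumptions, not values from the disclosure), the current drawn by the dummy pixel 2a can be converted to a voltage and compared with a reference recorded for a fresh device:

```python
# Hypothetical illustration of tracking pixel degradation via the dummy pixel 2a:
# the current it draws under a known drive potential is converted to a voltage
# and compared with the value recorded when the device was new. All constants
# are assumed example values.

SENSE_RESISTOR_OHM = 10_000      # assumed current-to-voltage conversion resistor
REFERENCE_VOLTAGE_V = 0.50       # assumed reading of a fresh device
DEGRADATION_LIMIT = 0.10         # assumed allowed relative drift (10%)

def dummy_pixel_drift(measured_current_a: float):
    """Convert the dummy-pixel current to a voltage and flag excessive drift."""
    measured_voltage = measured_current_a * SENSE_RESISTOR_OHM
    drift = abs(measured_voltage - REFERENCE_VOLTAGE_V) / REFERENCE_VOLTAGE_V
    return drift, drift > DEGRADATION_LIMIT

# Example: 43 uA now flows where about 50 uA flowed when new -> 14% drift, flagged.
print(dummy_pixel_drift(43e-6))
```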
  • The solid-state imaging device 330 illustrated in the upper part of FIG. 2 is configured by mounting a pixel region 332, a control circuit 333, and a logic circuit 334 including the above-described signal processing circuit on one semiconductor chip 331.
  • the solid-state imaging device 340 shown in the middle of FIG. 2 includes a first semiconductor chip unit 341 and a second semiconductor chip unit 342.
  • a pixel region 343 and a control circuit 344 are mounted on the first semiconductor chip portion 341, and a logic circuit 345 including the signal processing circuit described above is mounted on the second semiconductor chip portion 342.
  • the first semiconductor chip unit 341 and the second semiconductor chip unit 342 are electrically connected to each other, so that a solid-state imaging device 340 as one semiconductor chip is configured.
  • the solid-state imaging device 350 shown in the lower part of FIG. 2 includes a first semiconductor chip part 351 and a second semiconductor chip part 352.
  • a pixel region 353 is mounted on the first semiconductor chip portion 351, and a control circuit 354 and a logic circuit 355 including the signal processing circuit described above are mounted on the second semiconductor chip portion 352.
  • the first semiconductor chip unit 351 and the second semiconductor chip unit 352 are electrically connected to each other, so that a solid-state imaging device 350 as one semiconductor chip is configured.
  • FIG. 3 is a block diagram illustrating an example of a partial functional configuration of the solid-state imaging device according to an embodiment of the present disclosure.
  • The solid-state imaging device 1 shown in FIG. 3 is an imaging element that captures a subject and obtains digital data of the captured image, such as a complementary metal oxide semiconductor (CMOS) image sensor or a charge coupled device (CCD) image sensor.
  • The solid-state imaging device 1 includes a control unit 101, a pixel array unit 111, a selection unit 112, an A/D conversion unit (ADC (Analog Digital Converter)) 113, and a constant current circuit unit 114.
  • the control unit 101 controls each unit of the solid-state imaging device 1 to execute processing related to reading of image data (pixel signal).
  • the pixel array unit 111 is a pixel region in which pixel configurations having photoelectric conversion elements such as photodiodes are arranged in a matrix (array).
  • the pixel array unit 111 is controlled by the control unit 101 to receive the light of the subject at each pixel, photoelectrically convert the incident light to accumulate charges, and store the charges accumulated in each pixel at a predetermined timing. Output as a pixel signal.
  • the pixel 121 and the pixel 122 indicate two pixels that are adjacent in the vertical direction in the pixel group arranged in the pixel array unit 111.
  • the pixel 121 and the pixel 122 are pixels in consecutive rows in the same column.
  • a photoelectric conversion element and four transistors are used in the circuit of each pixel. Note that the circuit configuration of each pixel is arbitrary and may be other than the example shown in FIG.
  • output lines for pixel signals are provided for each column.
  • two (two systems) output lines are provided for each column.
  • the circuit of the pixel in one column is alternately connected to these two output lines every other row.
  • the circuit of the pixel 121 is connected to the first output line (VSL1)
  • the circuit of the pixel 122 is connected to the second output line (VSL2).
  • In FIG. 3, for convenience of explanation, only the output lines for one column are shown; in practice, two output lines are similarly provided for each column, and each output line is connected to every other row of pixel circuits in that column.
  • the selection unit 112 includes a switch that connects each output line of the pixel array unit 111 to the input of the ADC 113, and is controlled by the control unit 101 to control connection between the pixel array unit 111 and the ADC 113. That is, the pixel signal read from the pixel array unit 111 is supplied to the ADC 113 via the selection unit 112.
  • the selection unit 112 includes a switch 131, a switch 132, and a switch 133.
  • the switch 131 (selection SW) controls connection of two output lines corresponding to the same column. For example, the first output line (VSL1) and the second output line (VSL2) are connected when the switch 131 is turned on (ON), and disconnected when the switch 131 is turned off (OFF).
  • One ADC (column ADC) is provided for each output line. Therefore, if both the switch 132 and the switch 133 are in the on state, turning on the switch 131 connects the two output lines of the same column, so that the circuit of one pixel is connected to two ADCs. Conversely, when the switch 131 is turned off, the two output lines in the same column are disconnected, and the circuit of one pixel is connected to one ADC. That is, the switch 131 selects the number of ADCs (column ADCs) that serve as output destinations of the signal of one pixel.
  • Because the switch 131 controls the number of ADCs to which the pixel signal is output in this way, the solid-state imaging device 1 can output a wider variety of pixel signals according to the number of ADCs, that is, it can realize a wider variety of data outputs.
  • the switch 132 controls the connection between the first output line (VSL1) corresponding to the pixel 121 and the ADC corresponding to the output line.
  • the switch 133 controls the connection between the second output line (VSL2) corresponding to the pixel 122 and the ADC corresponding to the output line.
  • the selection unit 112 can control the number of ADCs (column ADCs) that are output destinations of signals of one pixel by switching the states of the switches 131 to 133 according to the control of the control unit 101.
  • each output line may be always connected to the ADC corresponding to the output line.
  • By enabling these switches to control connection and disconnection, however, the range of choices for the number of ADCs (column ADCs) serving as output destinations of the signal of one pixel is expanded. That is, by providing these switches, the solid-state imaging device 1 can output a wider variety of pixel signals.
  • For the other columns as well, the selection unit 112 has the same configuration as that shown in FIG. 3 (switches 131 to 133). That is, the selection unit 112 performs connection control similar to that described above for each column according to the control of the control unit 101.
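  • The connection logic described above can be summarized with a small model; the sketch below is hypothetical (the function name, the even-row-to-VSL1 mapping, and the ADC labels are ours, not the patent's) and only restates which column ADCs receive a pixel's signal for given states of the switches 131 to 133:

```python
# Minimal sketch of the selection unit's connection logic. sel_sw corresponds to
# switch 131, sw_vsl1/sw_vsl2 to switches 132/133. The assignment of even rows
# to VSL1 and odd rows to VSL2 is an assumption for illustration.

def adcs_connected_to_pixel(row: int, sel_sw: bool, sw_vsl1: bool, sw_vsl2: bool):
    """Return which column ADCs receive the signal of a pixel in the given row."""
    own_line = "VSL1" if row % 2 == 0 else "VSL2"
    # When switch 131 is on, VSL1 and VSL2 of the same column are connected,
    # so the pixel drives both output lines.
    lines = {"VSL1", "VSL2"} if sel_sw else {own_line}
    adcs = set()
    if sw_vsl1 and "VSL1" in lines:
        adcs.add("ADC1")   # column ADC attached to VSL1
    if sw_vsl2 and "VSL2" in lines:
        adcs.add("ADC2")   # column ADC attached to VSL2
    return adcs

# Even-row pixel, switch 131 on, switches 132/133 on -> read by both column ADCs.
print(adcs_connected_to_pixel(row=0, sel_sw=True, sw_vsl1=True, sw_vsl2=True))
# Odd-row pixel, switch 131 off -> read only by the ADC of its own output line.
print(adcs_connected_to_pixel(row=1, sel_sw=False, sw_vsl1=True, sw_vsl2=True))
```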
  • the ADC 113 A / D converts each pixel signal supplied from the pixel array unit 111 via each output line, and outputs it as digital data.
  • the ADC 113 includes an ADC (column ADC) for each output line from the pixel array unit 111. That is, the ADC 113 has a plurality of column ADCs.
  • a column ADC corresponding to one output line is a single slope type ADC having a comparator, a D / A converter (DAC), and a counter.
  • The comparator compares the signal value (potential) of the pixel signal supplied via the vertical signal line VSL with the potential of the ramp wave supplied from the DAC, and outputs an inversion pulse that inverts at the timing when these potentials intersect.
  • the counter counts an AD period corresponding to the timing at which the potential of the pixel signal and the potential of the ramp wave intersect in order to convert the analog value into a digital value.
  • the counter increments the count value (digital value) until the signal value of the pixel signal is equal to the ramp wave potential supplied from the DAC.
  • the comparator stops the counter when the DAC output reaches the signal value. Thereafter, the signals digitized by the counters 1 and 2 are output to the outside of the solid-state imaging device 1 from DATA1 and DATA2.
  • the counter returns the count value to the initial value (for example, 0) after outputting the data for the next A / D conversion.
  • The ADC 113 has two column ADCs for each column. For example, a comparator 141 (COMP1), a DAC 142 (DAC1), and a counter 143 (counter 1) are provided for the first output line (VSL1), and a comparator 151 (COMP2), a DAC 152 (DAC2), and a counter 153 (counter 2) are provided for the second output line (VSL2). Although not shown, the ADC 113 has the same configuration for the output lines of the other columns.
  • the DAC can be shared among these configurations. DAC sharing is performed for each system; that is, DACs of the same system in each column are shared. In the example of FIG. 3, the DAC corresponding to the first output line (VSL1) of each column is shared as the DAC 142, and the DAC corresponding to the second output line (VSL2) of each column is shared as the DAC 152. Note that a comparator and a counter are provided for each output line system.
  • the constant current circuit unit 114 is a constant current circuit connected to each output line, and is driven by being controlled by the control unit 101.
  • the circuit of the constant current circuit unit 114 includes, for example, a MOS (Metal Oxide Semiconductor) transistor or the like.
  • In FIG. 3, for convenience of explanation, a MOS transistor 161 (LOAD1) is provided for the first output line (VSL1), and a MOS transistor 162 (LOAD2) is provided for the second output line (VSL2).
  • The control unit 101 receives a request from the outside, for example from a user, selects a read mode, and controls the selection unit 112 so as to control the connections to the output lines. The control unit 101 also controls driving of the column ADCs according to the selected read mode and, in addition to the column ADCs, controls driving of the constant current circuit unit 114 as necessary, as well as driving of the pixel array unit 111, such as the readout rate and timing.
  • In this way, the control unit 101 can operate not only the selection unit 112 but also the other units in a wider variety of modes, so that the solid-state imaging device 1 can output a wider variety of pixel signals.
  • the pixels 121 and 122 shown in FIG. 3 correspond to the pixel 2 in FIG.
  • the selection unit 112, the ADC 113, and the constant current circuit unit 114 correspond to the column signal processing circuit 6 described with reference to FIG.
  • the control unit 101 shown in FIG. 3 corresponds to the sensor controller 7 described with reference to FIG.
  • The number of each component shown in FIG. 3 is arbitrary as long as it is not insufficient.
  • three or more output lines may be provided for each column.
  • In that case, the number of pixel signals output in parallel may be increased by correspondingly increasing the number of switches such as the switch 132 shown in FIG. 3.
  • FIG. 4 is a block diagram illustrating another example of the functional configuration of the solid-state imaging device according to an embodiment of the present disclosure.
  • Reference numerals 6a and 6b respectively indicate configurations corresponding to the column signal processing circuit 6 described with reference to FIG. 1. That is, in the example shown in FIG. 4, a plurality of systems corresponding to the column signal processing circuit 6 (for example, the comparators 141 and 151, the counters 143 and 153, and the constant current circuit unit 114) are provided. Further, as shown in FIG. 4, the DACs 142 and 152 may be shared between the column signal processing circuits 6a and 6b.
  • FIG. 5 is a diagram illustrating another example of the configuration of the solid-state imaging device according to an embodiment of the present disclosure.
  • In this example, a configuration is shown in which a pixel array unit 111 in which a plurality of pixels 2 are arranged is provided on an upper semiconductor chip and an ADC 113 is provided on a lower semiconductor chip.
  • the pixel array unit 111 is divided into a plurality of areas 1111 each including a plurality of pixels 2, and an ADC 1131 is provided for each area 1111.
  • the pixel array unit 111 is divided into a plurality of areas 1111 using 10 pixels ⁇ 16 pixels as a unit of the area 1111.
  • each pixel 2 included in the area 1111 and the ADC 1131 provided corresponding to the area 1111 are electrically connected by stacking semiconductor chips.
  • The wiring connected to each pixel 2 included in the area 1111 and the wiring connected to the ADC 1131 provided corresponding to that area may be connected directly by so-called Cu-Cu bonding, or may be connected by a so-called TSV (Through-Silicon Via).
  • As described above, by providing an ADC 1131 for each area 1111, it becomes possible to increase, compared with the case where an ADC 113 is provided for each column, the number of pixel signals from the pixels 2 that are A/D converted and output as digital data in parallel. Therefore, for example, it is possible to further reduce the time required for reading out the pixel signals from the pixels 2.
  • Moreover, the ADC 1131 of each area 1111 can be driven independently. Therefore, pixel signals can be read from the pixels 2 more flexibly; for example, pixel signals from the pixels 2 included in only some of the areas 1111 can be read individually at a desired timing.
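  • The following back-of-the-envelope sketch (the array dimensions are assumed example values; only the 10 x 16 area unit comes from the text above) illustrates why per-area ADCs increase A/D conversion parallelism compared with per-column ADCs:

```python
# Rough, hypothetical illustration (not from the patent) of how per-area ADCs
# increase A/D conversion parallelism.

ROWS, COLS = 1080, 1920          # assumed pixel array size
AREA_ROWS, AREA_COLS = 10, 16    # area unit mentioned in the text (10 x 16 pixels)

# Column-parallel ADCs: one ADC per column, so each ADC converts row by row.
conversions_per_adc_column = ROWS                      # 1080 sequential conversions

# Area-parallel ADCs: one ADC per 10 x 16 area, so each ADC only serves
# the pixels of its own area.
conversions_per_adc_area = AREA_ROWS * AREA_COLS       # 160 sequential conversions

print(f"column ADC: {conversions_per_adc_column} conversions in sequence")
print(f"area ADC:   {conversions_per_adc_area} conversions in sequence")
print(f"parallelism gain: {conversions_per_adc_column / conversions_per_adc_area:.2f}x")
```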
  • some configurations may be provided outside the solid-state imaging device 1.
  • For example, a configuration that bears at least part of the function of the control unit 101 illustrated in FIG. 3 may be provided outside the solid-state imaging device 1 and may control the operation of each component in the solid-state imaging device 1 by transmitting control signals from the outside.
  • the configuration corresponding to the control unit 101 corresponds to an example of a “control device”.
  • FIG. 6 is a diagram illustrating an example of a circuit configuration of a unit pixel according to an embodiment of the present disclosure.
  • the unit pixel 2 includes a photoelectric conversion element (for example, a photodiode) PD and four pixel transistors.
  • the four pixel transistors are, for example, a transfer transistor Tr11, a reset transistor Tr12, an amplification transistor Tr13, and a selection transistor Tr14.
  • These pixel transistors can be composed of, for example, n-channel MOS transistors.
  • The transfer transistor Tr11 is connected between the cathode of the photoelectric conversion element PD and the floating diffusion portion FD. Signal charges (here, electrons) accumulated in the photoelectric conversion element PD are transferred to the floating diffusion portion FD by applying a transfer pulse TRG to the gate of the transfer transistor Tr11.
  • the reset transistor Tr12 has a drain connected to the power supply VDD and a source connected to the floating diffusion portion FD. Prior to the transfer of signal charges from the photoelectric conversion element PD to the floating diffusion portion FD, the potential of the floating diffusion portion FD is reset by applying a reset pulse RST to the gate.
  • the amplification transistor Tr13 has a gate connected to the floating diffusion portion FD, a drain connected to the power supply VDD, and a source connected to the drain of the selection transistor Tr14.
  • the amplification transistor Tr13 outputs the potential of the floating diffusion portion FD after being reset by the reset transistor Tr12 to the selection transistor Tr14 as a reset level. Further, the amplification transistor Tr13 outputs the potential of the floating diffusion portion FD after the signal charge is transferred by the transfer transistor Tr11 as a signal level to the selection transistor Tr14.
  • the selection transistor Tr14 has a drain connected to the source of the amplification transistor Tr13 and a source connected to the vertical signal line VSL.
  • the selection pulse SEL is applied to the gate of the selection transistor Tr14, the selection transistor Tr14 is turned on, and the signal output from the amplification transistor Tr13 is output to the vertical signal line VSL.
  • the selection transistor Tr14 may be configured to be connected between the power supply VDD and the drain of the amplification transistor Tr13.
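  • As a minimal behavioral sketch of the four-transistor unit pixel described above (the supply voltage and conversion gain are assumed values, and source-follower gain and offsets are ignored), the readout sequence of reset level and signal level can be modeled as follows:

```python
# Minimal behavioral sketch (not the patent's implementation) of the 4-transistor
# unit pixel of FIG. 6: transfer (TRG), reset (RST), selection (SEL) pulses and
# the resulting reset level / signal level seen on the vertical signal line VSL.

from dataclasses import dataclass

VDD = 2.8            # assumed supply voltage [V]
CONV_GAIN = 1e-4     # assumed floating-diffusion conversion gain [V per electron]

@dataclass
class UnitPixel4T:
    pd_charge_e: float = 0.0   # electrons accumulated in the photodiode
    fd_voltage: float = 0.0    # floating diffusion potential

    def expose(self, photo_electrons: float):
        """Accumulate photo-generated charge in the photodiode."""
        self.pd_charge_e += photo_electrons

    def pulse_rst(self):
        """RST on: reset the floating diffusion to the supply level."""
        self.fd_voltage = VDD

    def pulse_trg(self):
        """TRG on: transfer the photodiode charge to the floating diffusion."""
        self.fd_voltage -= self.pd_charge_e * CONV_GAIN
        self.pd_charge_e = 0.0

    def read_vsl(self, sel: bool = True):
        """SEL on: the source follower drives VSL with the FD potential."""
        return self.fd_voltage if sel else None

# Readout sequence described in the text: reset -> read reset level (P phase),
# transfer -> read signal level (D phase); CDS is the difference of the two.
px = UnitPixel4T()
px.expose(5000)                 # 5000 electrons of signal charge (example value)
px.pulse_rst()
p_level = px.read_vsl()
px.pulse_trg()
d_level = px.read_vsl()
print(f"P phase = {p_level:.3f} V, D phase = {d_level:.3f} V, "
      f"CDS signal = {p_level - d_level:.3f} V")
```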
  • When the solid-state imaging device 1 is configured as a stacked solid-state imaging device, elements such as the photodiode and the plurality of MOS transistors are formed in the first semiconductor chip unit (341 in the middle part of FIG. 2, or 351 in the lower part).
  • The transfer pulse, the reset pulse, the selection pulse, and the power supply voltage are supplied from the second semiconductor chip unit (342 in the middle part of FIG. 2, or 352 in the lower part).
  • The elements downstream of the vertical signal line VSL connected to the selection transistor are configured in the logic circuit 345, that is, they are formed in the second semiconductor chip unit 342.
  • FIG. 7 is a schematic timing chart illustrating an example of drive control of the solid-state imaging device 1 according to an embodiment of the present disclosure, and illustrates an example of drive control of the pixel 2.
  • FIG. 7 shows a horizontal synchronization signal (XHS) indicating one horizontal synchronization period, a TRG drive pulse for driving the transfer transistor Tr11 (a readout transfer pulse and an electronic-shutter transfer pulse), an RST drive pulse for driving the reset transistor Tr12 (an electronic-shutter reset pulse and a readout reset pulse), and a SEL drive pulse (a readout selection pulse) for driving the selection transistor Tr14.
  • the potential of the photoelectric conversion element PD is reset by turning on the electronic shutter transfer pulse and the electronic shutter reset pulse. Thereafter, charges are accumulated in the photoelectric conversion element PD during the accumulation time, and a read pulse is issued from the sensor controller 7.
  • the potential of the floating diffusion part FD is reset by turning on a reset pulse at the time of reading, and then the potential of the pre-data phase (P phase) is AD converted. Thereafter, the charge of the photoelectric conversion element PD is transferred to the floating diffusion portion FD by a transfer pulse at the time of reading, and the data phase (D phase) is AD converted.
  • the selection pulse at the time of reading is in an on state.
  • the above is merely an example, and at least a part of the drive timing may be changed according to the electronic shutter and the reading operation.
  • For example, the potential of the photoelectric conversion element PD may be reset by turning on the electronic-shutter transfer pulse and the electronic-shutter reset pulse at a changed timing.
  • FIG. 8 is a schematic timing chart showing an example of drive control of the solid-state imaging device 1 according to an embodiment of the present disclosure, and shows an example of drive control of the ADC 113.
  • the driving of the ADC 113 will be described by focusing on the operations of the DAC 142, the comparator 141, and the counter 143 in the ADC 113 illustrated in FIG.
  • FIG. 8 shows the horizontal synchronization signal (XHS) indicating one horizontal synchronization period, the potential of the ramp signal output from the DAC 142 (solid line), the potential of the pixel signal output from the vertical signal line VSL (broken line), the inversion pulse output from the comparator 141, and an image of the operation of the counter 143.
  • The DAC 142 generates a ramp wave having a first slope, in which the potential drops sequentially at a constant gradient in the P phase for reading the reset level of the pixel signal, and a second slope, in which the potential drops sequentially at a constant gradient in the D phase for reading the data level of the pixel signal.
  • the comparator 141 compares the potential of the pixel signal with the potential of the ramp wave, and outputs an inversion pulse that is inverted at the timing when the potential of the pixel signal and the potential of the ramp wave intersect.
  • The counter 143 counts from the timing when the ramp wave starts to drop in the P phase to the timing when the potential of the ramp wave becomes equal to or lower than the potential of the pixel signal (P-phase count value), and then counts from the timing when the ramp wave starts to drop in the D phase to the timing when the potential of the ramp wave becomes equal to or lower than the potential of the pixel signal (D-phase count value). The difference between the P-phase count value and the D-phase count value is thereby acquired as a pixel signal from which reset noise has been removed. In this way, AD conversion of the pixel signal is performed using the ramp wave.
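  • A minimal numerical sketch of this single-slope conversion and digital CDS (potentials are expressed in integer millivolts, and all values are assumed examples rather than the device's actual operating points) looks like the following:

```python
# Sketch of single-slope A/D conversion with digital CDS: the counter counts
# ramp steps until the falling ramp reaches the pixel-signal potential, once in
# the P phase (reset level) and once in the D phase (data level); the difference
# of the two counts is the reset-noise-free pixel value.

def single_slope_count(signal_mv: int, ramp_start_mv: int,
                       lsb_mv: int, max_count: int) -> int:
    """Count clock cycles until the falling ramp reaches or crosses the signal."""
    count = 0
    ramp_mv = ramp_start_mv
    while ramp_mv > signal_mv and count < max_count:
        ramp_mv -= lsb_mv        # ramp falls by one LSB per clock
        count += 1
    return count

# Example values (assumed): 1 mV per LSB, reset level 1500 mV, data level 1100 mV.
p_count = single_slope_count(signal_mv=1500, ramp_start_mv=1600, lsb_mv=1, max_count=1023)
d_count = single_slope_count(signal_mv=1100, ramp_start_mv=1600, lsb_mv=1, max_count=4095)

pixel_value = d_count - p_count        # digital CDS removes the reset component
print(p_count, d_count, pixel_value)   # 100 500 400
```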
  • FIGS. 9 and 10 are block diagrams illustrating an example of a schematic configuration of the solid-state imaging device 1a according to the present embodiment.
  • The configuration of the solid-state imaging device 1a will be described by focusing on its differences from the solid-state imaging device 1 described with reference to FIGS. 1 to 8; a detailed description of the parts that are substantially the same as the solid-state imaging device 1 is omitted.
  • FIG. 9 shows an example of the power supply configuration of the solid-state imaging device 1 according to this embodiment.
  • the configuration of the portion where the pixel timing drive circuit 5 supplies the drive signal to the pixel 2 is mainly shown, and the other configurations are not shown.
  • In the solid-state imaging device 1a, a power supply that supplies a power supply voltage to the pixels 2 and a power supply that supplies a power supply voltage to the pixel timing drive circuit 5 (which supplies the drive signals to the pixels 2) are provided individually. Therefore, hereinafter, the power supply that supplies the power supply voltage to the pixels 2 is also referred to as the “power supply VDDHPX”, and the power supply that supplies the power supply voltage to the pixel timing drive circuit 5 (that is, the power supply for supplying the drive signals to the pixels 2) is also referred to as the “power supply VDDHVS”.
  • the power supplies VDDHPX and VDDHVS may be provided on different semiconductor chips.
  • the power supply VDDHPX may be provided in a semiconductor chip in which the pixels 2 are arranged (for example, the first semiconductor chip unit 341 shown in FIG. 2).
  • the power supply VDDHVS may be provided in a semiconductor chip (for example, the second semiconductor chip unit 342 shown in FIG. 2) provided with the pixel timing driving circuit 5.
  • the semiconductor chip on which the pixels 2 are arranged and the semiconductor chip on which the pixel timing driving circuit 5 is provided are connected via a connection part (for example, TSV (Through-Silicon Via)).
  • FIG. 10 shows an example of the configuration of a part related to reading of a pixel signal from the pixel 2 in the configuration of the solid-state imaging device 1a according to the present embodiment. That is, in the example shown in FIG. 10, the parts corresponding to the constant current circuit unit 114 and the ADC 113 are mainly shown, and the other components are not shown.
  • The MOS transistor 161, the comparator 141, the DAC 142, and the counter 143 are substantially the same as those shown in FIG. 3, and a detailed description thereof is omitted.
  • the comparator 141, the DAC 142, and the counter 143 correspond to the ADC 113 shown in FIG.
  • the MOS transistor 161 corresponds to the constant current circuit portion 114 shown in FIG.
  • the solid-state imaging device 1 a includes a sensor data unit 211.
  • The sensor data unit 211 recognizes the state of the pixel 2 based on the signal output from the counter 143, that is, the digital signal obtained by converting the pixel signal supplied from the pixel 2, and executes various processing using the recognition result.
  • the sensor data unit 211 may perform various processes related to so-called failure detection by using the recognition result of the state of the pixel 2.
  • With this configuration, a failure of the photoelectric conversion element PD can be recognized individually for each pixel 2. Details of the mechanism for detecting, for each pixel 2, a failure of the photoelectric conversion element PD included in that pixel 2 will be described later together with an example of drive control for recognizing the state of the pixel 2.
  • a part related to recognition of the pixel 2 corresponds to an example of a “recognition unit”.
  • the sensor data unit 211 may notify the detection result of the abnormality to the outside of the solid-state imaging device 1a.
  • the sensor data unit 211 may output a predetermined signal indicating that an abnormality has been detected to the outside of the solid-state imaging device 1a via a predetermined output terminal (that is, an Error pin).
  • a predetermined DSP (Digital Signal Processor) 401 provided outside the solid-state imaging device 1a may be notified that an abnormality has been detected.
  • the DSP 401 can notify the user that an abnormality has occurred in the solid-state imaging device 1a, for example, via a predetermined output unit.
  • the DSP 401 may perform control so as to limit all or a part of the vehicle safety function (ADAS function).
  • the DSP 401 can correct the output of the pixel 2 in which an abnormality is detected using the output of another pixel 2 (for example, an adjacent pixel) different from the pixel 2.
  • the sensor data unit 211 itself may correct the output of the pixel 2 in which an abnormality is detected by using the result of failure detection.
  • the correction method is the same as that when the DSP 401 performs correction.
  • a portion of the sensor data unit 211 that corrects the output of the pixel 2 in which an abnormality has been detected corresponds to an example of a “correction processing unit”.
  • FIG. 11 is a schematic timing chart showing an example of drive control of the solid-state imaging device 1a according to the present embodiment, and shows an example of control for recognizing the state of the photoelectric conversion element PD included in the pixel 2.
  • VDDHPX indicates a power supply voltage applied to the pixel 2 from the power supply VDDHPX.
  • INCK indicates a synchronization signal
  • one pulse of the synchronization signal is a minimum unit of various processing periods executed in the solid-state imaging device 1a.
  • XVS and XHS indicate a vertical synchronization signal and a horizontal synchronization signal. That is, 1XVS corresponds to one frame period.
  • TRG, RST, and SEL indicate drive signals (that is, TRG drive pulse, RST drive pulse, and SEL drive pulse) supplied to the transfer transistor Tr11, the reset transistor Tr12, and the selection transistor Tr14, respectively.
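  • As a purely illustrative calculation (the line time and line count below are assumed, not taken from the disclosure), the relationship between the synchronization signals defined above is simply that one frame period (1XVS) spans an integer number of horizontal periods (XHS):

```python
# Hypothetical numbers illustrating the XHS/XVS relationship: one XVS period is
# one frame, made up of an integer number of XHS periods.

XHS_PERIOD_US = 15.0      # assumed length of one horizontal period [us]
LINES_PER_FRAME = 1125    # assumed number of XHS periods per XVS period

frame_period_ms = LINES_PER_FRAME * XHS_PERIOD_US / 1000
print(f"frame period = {frame_period_ms:.3f} ms, "
      f"frame rate = {1000 / frame_period_ms:.1f} fps")
# frame period = 16.875 ms, frame rate = 59.3 fps
```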
  • The control related to recognition of the state of the photoelectric conversion element PD mainly includes first control for accumulating charges in the photoelectric conversion element PD of the target pixel 2, and second control for reading out the electric charge accumulated in the photoelectric conversion element PD.
  • One frame period is assigned to each of the first control and the second control. Therefore, in this description, as shown in FIG. 11, the frame period to which the first control is assigned is also referred to as the “accumulation frame”, and the frame period to which the second control is assigned is also referred to as the “readout frame”.
  • the accumulation frame will be described. As shown in FIG. 11, in the accumulation frame, first, the power supply voltage applied to the pixel 2 from the power supply VDDHPX is controlled to 0 V, and then the power supply voltage is controlled to the predetermined voltage VDD. The voltage VDD is applied to the pixel 2.
  • FIG. 12 is an explanatory diagram for explaining an example of drive control of the solid-state imaging device 1a according to the present embodiment, and schematically shows the state of the pixel 2 in the period T11 in FIG. 11.
  • In the period T11, the TRG drive pulse and the RST drive pulse are controlled to the on state, the SEL drive pulse is controlled to the off state, and the voltage applied to the pixel 2 from the power supply VDDHPX is controlled to 0 V.
  • the potential of the floating diffusion portion FD is controlled to 0 V, a potential difference is generated between the anode and the cathode of the photoelectric conversion element PD, and charges are injected into the photoelectric conversion element PD.
  • the amount of charge held in the photoelectric conversion element PD as a result of the control shown in FIG. 12 is determined by the saturation characteristics of the photoelectric conversion element PD regardless of the light receiving state of the photoelectric conversion element PD.
  • The control for injecting charges into the photoelectric conversion element PD may be executed for all the pixels 2 at a predetermined timing (so-called global reset), or may be executed individually for each pixel 2 in a time-division manner.
  • FIG. 13 is an explanatory diagram for explaining an example of drive control of the solid-state imaging device 1a according to the present embodiment, and schematically shows the state of the pixel 2 in the period T13 in FIG. 11.
  • the RST drive pulse is kept in the on state, and the TRG drive pulse is controlled in the off state.
  • the SEL drive pulse is kept off.
  • the voltage applied to the pixel 2 from the power supply VDDHPX is controlled to VDD.
  • the floating diffusion portion FD and the photoelectric conversion element PD are brought into a non-conductive state, and the potential of the floating diffusion portion FD is controlled to VDD.
  • the readout frame will be described.
  • the target pixel 2 is driven at a predetermined timing, and a pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD of the pixel 2 is read out.
  • the pixel 2 is driven in the period indicated by the reference symbol T15, and a pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD of the pixel 2 is read.
  • FIG. 14 is an explanatory diagram for explaining an example of drive control of the solid-state imaging device 1a according to the present embodiment, and schematically shows the state of the pixel 2 in the period T15 in FIG. 11.
  • each of the TRG drive pulse, the RST drive pulse, and the SEL drive pulse is controlled to be in an off state.
  • the state where the voltage VDD is applied to the pixel 2 is maintained.
  • Subsequently, each of the TRG drive pulse, the RST drive pulse, and the SEL drive pulse is controlled to the on state.
  • As a result, the transfer transistor Tr11 and the reset transistor Tr12 are turned on, and the charge accumulated in the photoelectric conversion element PD is transferred to and accumulated in the floating diffusion portion FD.
  • the selection transistor Tr14 is controlled to be conductive.
  • a voltage corresponding to the charge accumulated in the floating diffusion portion FD (in other words, charge leaked from the photoelectric conversion element PD) is applied to the gate of the amplification transistor Tr13, and the amplification transistor Tr13 is controlled to be in a conductive state.
  • Then, a pixel signal corresponding to the voltage applied to the gate of the amplification transistor Tr13 is output from the pixel 2 via the vertical signal line VSL. That is, a charge corresponding to the saturation characteristic of the photoelectric conversion element PD is read from the photoelectric conversion element PD, and a pixel signal corresponding to the read charge is output from the pixel 2 via the vertical signal line VSL.
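  • The drive-signal states described for the accumulation frame and the readout frame can be summarized as below; this is only a condensed restatement of the periods T11, T13, and T15 above, in a hypothetical tabular form:

```python
# Condensed restatement (not an executable model of the hardware) of the
# drive-signal states used for the per-pixel photodiode test described above.

TEST_DRIVE_SEQUENCE = [
    # (period,        frame,                TRG,   RST,   SEL,   VDDHPX)
    ("T11",           "accumulation frame", "on",  "on",  "off", "0 V"),  # charge injected into the PD
    ("T13",           "accumulation frame", "off", "on",  "off", "VDD"),  # PD isolated, FD held at VDD
    ("T15 (start)",   "readout frame",      "off", "off", "off", "VDD"),  # drive pulses initially off
    ("T15 (readout)", "readout frame",      "on",  "on",  "on",  "VDD"),  # charge transferred to FD and read via VSL
]

for period, frame, trg, rst, sel, vddhpx in TEST_DRIVE_SEQUENCE:
    print(f"{period:>14} | {frame:<18} | TRG={trg:<3} RST={rst:<3} SEL={sel:<3} VDDHPX={vddhpx}")
```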
  • the pixel signal output from the pixel 2 via the vertical signal line VSL is converted into a digital signal by the ADC 113 and output to, for example, the sensor data unit 211 described with reference to FIG.
  • the digital signal output to the sensor data unit 211 indicates a potential corresponding to the saturation characteristic of the photoelectric conversion element PD included in the pixel 2. That is, the sensor data unit 211 can individually recognize the state of the pixel 2 (and thus the state of the photoelectric conversion element PD included in the pixel 2) for each pixel 2 based on the digital signal. Therefore, for example, when an abnormality occurs in the pixel 2, the sensor data unit 211 can detect the abnormality for each pixel 2 individually. Based on such a configuration, for example, the sensor data unit 211 can output information regarding the pixel 2 in which an abnormality has occurred to a predetermined output destination.
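  • A hedged sketch of the per-pixel check this enables is shown below; the expected code and the tolerance are assumed values, not criteria from the disclosure:

```python
import numpy as np

# Sketch of a per-pixel check: after charge is injected into every photodiode
# and read back, each pixel's digital value should reflect the photodiode's
# saturation characteristic, so values far outside an expected window flag
# that pixel as abnormal. Thresholds below are assumptions.

EXPECTED_CODE = 3600       # assumed nominal readout code for an injected photodiode
TOLERANCE = 400            # assumed allowed deviation

def detect_abnormal_pixels(test_frame: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels whose test readout deviates too much."""
    return np.abs(test_frame.astype(int) - EXPECTED_CODE) > TOLERANCE

# Example: a 3x3 test readout where one pixel reads near zero (e.g. an open
# photodiode or transfer path) and is therefore flagged.
test_frame = np.array([[3610, 3550, 3700],
                       [3590,   12, 3650],
                       [3630, 3580, 3605]], dtype=np.uint16)
print(detect_abnormal_pixels(test_frame))
```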
  • the sensor data unit 211 may correct the pixel signal output from the pixel 2 in which an abnormality has occurred based on the pixel signal output from the other pixel 2.
  • FIG. 15 is an explanatory diagram for explaining an example of an operation related to pixel signal correction in the solid-state imaging device 1a according to the present embodiment. In the example illustrated in FIG. 15, an example in which the pixel signal output from the pixel 2 in which an abnormality has occurred is corrected based on the pixel signal output from another pixel 2 adjacent to the pixel 2 is illustrated.
  • For example, the sensor data unit 211 only needs to recognize, based on the timing at which the pixel signal from the pixel 2 in which an abnormality has occurred is read, the position of that pixel 2 and the positions of the other pixels 2 adjacent to it.
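  • The kind of correction described above can be sketched as follows; this is a simple neighbour-average substitution under assumed conventions, not the patent's specific algorithm:

```python
import numpy as np

# Sketch of defective-pixel correction: the pixel signal of a pixel flagged as
# abnormal is replaced using the signals of adjacent, non-defective pixels of
# the same frame.

def correct_defective_pixels(frame: np.ndarray, defect_mask: np.ndarray) -> np.ndarray:
    """Replace each flagged pixel by the mean of its valid 4-neighbours."""
    corrected = frame.astype(float)
    rows, cols = frame.shape
    for r, c in zip(*np.nonzero(defect_mask)):
        neighbours = []
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and not defect_mask[rr, cc]:
                neighbours.append(frame[rr, cc])
        if neighbours:                       # leave pixels with no valid neighbour unchanged
            corrected[r, c] = np.mean(neighbours)
    return corrected

# Example: one stuck-high pixel in a small synthetic frame.
frame = np.full((4, 4), 100, dtype=np.uint16)
frame[2, 2] = 4095
mask = np.zeros_like(frame, dtype=bool)
mask[2, 2] = True
print(correct_defective_pixels(frame, mask)[2, 2])   # 100.0
```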
  • The control related to recognition of the state of the photoelectric conversion element PD included in each pixel 2 described above (for example, the control for detecting an abnormality of the photoelectric conversion element PD) may be executed at a timing when normal driving of the target pixel 2 is not performed.
  • the above control may be executed when the solid-state imaging device 1 is activated.
  • the above control may be executed for other pixels 2 that are not used for capturing the image.
  • FIG. 16 is a diagram illustrating an example of a circuit configuration of a unit pixel in a solid-state imaging device according to a modification of the present embodiment.
  • FIG. 16 shows an example of a seven-transistor configuration in which a high-sensitivity photodiode (PD1), a low-sensitivity photodiode (PD2), and an in-pixel capacitance (FC) are arranged for one pixel.
  • the solid-state imaging device according to the modification of the present embodiment may be referred to as “solid-state imaging device 1c” in order to distinguish it from the solid-state imaging device 1a according to the above-described embodiment.
  • Further, the pixel of the solid-state imaging device 1c according to the modification of the present embodiment may be referred to as the “pixel 2c” or the “unit pixel 2c” in order to distinguish it from the pixel 2 of the solid-state imaging device 1a according to the above-described embodiment.
  • The unit pixel 2c includes a photoelectric conversion element PD1, a first transfer gate unit Tr21, a photoelectric conversion element PD2, a second transfer gate unit Tr22, a third transfer gate unit Tr23, a fourth transfer gate unit Tr25, a charge storage unit FC, a reset gate unit Tr24, a floating diffusion unit FD, an amplification transistor Tr26, and a selection transistor Tr27.
  • a plurality of drive lines for supplying various drive signals to the unit pixel 2c are wired for each pixel row, for example.
  • Various drive signals TG1, TG2, FCG, RST, and SEL are supplied from the pixel timing drive circuit 5 shown in FIG. 1 via a plurality of drive lines.
  • These drive signals are pulse signals that become active at a high level (for example, the power supply voltage VDD) and inactive at a low level (for example, a negative potential).
  • the photoelectric conversion element PD1 is composed of, for example, a PN junction photodiode.
  • the photoelectric conversion element PD1 generates and accumulates charges corresponding to the received light quantity.
  • the first transfer gate portion Tr21 is connected between the photoelectric conversion element PD1 and the floating diffusion portion FD.
  • a drive signal TG1 is applied to the gate electrode of the first transfer gate portion Tr21.
  • the drive signal TG1 becomes active, the first transfer gate portion Tr21 becomes conductive, and the charges accumulated in the photoelectric conversion element PD1 are transferred to the floating diffusion portion FD via the first transfer gate portion Tr21. .
  • the photoelectric conversion element PD2 is composed of, for example, a PN junction photodiode, similarly to the photoelectric conversion element PD1.
  • the photoelectric conversion element PD2 generates and accumulates charges corresponding to the received light quantity.
  • the photoelectric conversion element PD1 has a larger light receiving surface area and higher sensitivity
  • the photoelectric conversion element PD2 has a smaller light receiving surface area and lower sensitivity.
  • the second transfer gate portion Tr22 is connected between the charge storage portion FC and the floating diffusion portion FD.
  • a drive signal FCG is applied to the gate electrode of the second transfer gate portion Tr22.
  • the drive signal FCG becomes active, the second transfer gate portion Tr22 becomes conductive, and the potentials of the charge storage portion FC and the floating diffusion portion FD are coupled.
  • the third transfer gate portion Tr23 is connected between the photoelectric conversion element PD2 and the charge storage portion FC.
  • the drive signal TG2 is applied to the gate electrode of the third transfer gate portion Tr23.
  • When the drive signal TG2 becomes active, the third transfer gate portion Tr23 becomes conductive, and the charge accumulated in the photoelectric conversion element PD2 is transferred via the third transfer gate portion Tr23 to the charge storage portion FC, or to the region where the potentials of the charge storage portion FC and the floating diffusion portion FD are coupled.
  • The potential below the gate electrode of the third transfer gate portion Tr23 is made slightly deep, so that an overflow path is formed which transfers to the charge storage portion FC the charges that exceed the saturation charge amount of the photoelectric conversion element PD2 and overflow from it.
  • the overflow path formed below the gate electrode of the third transfer gate portion Tr23 is simply referred to as the overflow path of the third transfer gate portion Tr23.
  • The fourth transfer gate portion Tr25 is connected between the node joining the second transfer gate portion Tr22 and the reset gate portion Tr24, and the floating diffusion portion FD.
  • a drive signal FDG is applied to the gate electrode of the fourth transfer gate portion Tr25.
• when the drive signal FDG becomes active, the fourth transfer gate portion Tr25 becomes conductive, and the potential of the node 152 between the second transfer gate portion Tr22, the reset gate portion Tr24, and the fourth transfer gate portion Tr25 is coupled with the potential of the floating diffusion portion FD.
  • the charge storage unit FC includes, for example, a capacitor, and is connected between the second transfer gate unit Tr22 and the third transfer gate unit Tr23.
• the counter electrode of the charge storage unit FC is connected to the power supply VDD that supplies the power supply voltage VDD.
  • the charge storage unit FC stores the charge transferred from the photoelectric conversion element PD2.
  • the reset gate portion Tr24 is connected between the power supply VDD and the floating diffusion portion FD.
  • a drive signal RST is applied to the gate electrode of the reset gate portion Tr24.
• when the drive signal RST becomes active, the reset gate portion Tr24 becomes conductive and the potential of the floating diffusion portion FD is reset to the level of the power supply voltage VDD.
  • the floating diffusion unit FD converts the charge into a voltage signal and outputs it.
• the amplification transistor Tr26 has its gate electrode connected to the floating diffusion portion FD and its drain electrode connected to the power supply VDD, and serves as the input portion of the readout circuit, a so-called source follower circuit, that reads out the electric charge held in the floating diffusion portion FD. That is, by connecting its source electrode to the vertical signal line VSL via the selection transistor Tr27, the amplification transistor Tr26 forms a source follower circuit together with the constant current source connected to one end of the vertical signal line VSL.
  • the selection transistor Tr27 is connected between the source electrode of the amplification transistor Tr26 and the vertical signal line VSL.
  • a drive signal SEL is applied to the gate electrode of the selection transistor Tr27.
• when the drive signal SEL becomes active, the selection transistor Tr27 becomes conductive and the unit pixel 2c enters the selected state.
  • the pixel signal output from the amplification transistor Tr26 is output to the vertical signal line VSL via the selection transistor Tr27.
• in the following description, putting a drive signal into the active state is also referred to as turning the drive signal on, or controlling the drive signal to the on state, and putting a drive signal into the inactive state is also referred to as turning the drive signal off, or controlling the drive signal to the off state.
• similarly, making each gate portion or each transistor conductive is also referred to as turning the gate portion or transistor on, and making each gate portion or each transistor non-conductive is also referred to as turning the gate portion or transistor off.
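• As a purely illustrative aid (not part of the disclosure), the role of each drive signal described above can be summarized in a small model. The following Python sketch, under the assumption that a dictionary of signal-to-path strings is an adequate abstraction, maps a set of active drive signals to the charge paths that become conductive in the unit pixel 2c; the signal and node names mirror the reference signs in the text.

```python
# Illustrative mapping from active (high-level) drive signals to the charge
# paths that become conductive in the unit pixel 2c. The names are the
# reference signs used in the description; the dictionary-based model itself
# is only an assumption made for this sketch, not a circuit simulation.

PATHS_WHEN_ACTIVE = {
    "TG1": "PD1 -> FD (via first transfer gate Tr21)",
    "TG2": "PD2 -> FC (via third transfer gate Tr23)",
    "FCG": "FC <-> FD side (via second transfer gate Tr22)",
    "FDG": "node 152 <-> FD (via fourth transfer gate Tr25)",
    "RST": "power supply VDD -> reset node (via reset gate Tr24)",
    "SEL": "amplification transistor Tr26 -> vertical signal line VSL (via Tr27)",
}

def conductive_paths(active_signals):
    """Return the paths that conduct for a given set of active drive signals."""
    return [PATHS_WHEN_ACTIVE[s] for s in sorted(active_signals)]

# Example: the combination used for charge injection in the accumulation frame
# (TG1, FCG, TG2, FDG and RST on, SEL off).
for path in conductive_paths({"TG1", "FCG", "TG2", "FDG", "RST"}):
    print(path)
```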
• FIG. 17 is a schematic timing chart showing an example of drive control of the solid-state imaging device 1c according to the modification of the present embodiment, and shows an example of control for recognizing the states of the photoelectric conversion elements PD1 and PD2 included in the pixel 2c.
  • VDDHPX indicates a power supply voltage applied to the pixel 2c from the power supply VDDHPX.
  • INCK indicates a synchronization signal
  • one pulse of the synchronization signal is a minimum unit of various processing periods executed in the solid-state imaging device 1c.
• XVS and XHS indicate a vertical synchronization signal and a horizontal synchronization signal, respectively. That is, one period of XVS corresponds to one frame period.
• TG1, FCG, TG2, and FDG indicate the drive signals (that is, the TG1 drive pulse, the FCG drive pulse, the TG2 drive pulse, and the FDG drive pulse) supplied to the first transfer gate unit Tr21, the second transfer gate unit Tr22, the third transfer gate unit Tr23, and the fourth transfer gate unit Tr25, respectively.
• RST and SEL indicate the drive signals (that is, the RST drive pulse and the SEL drive pulse) supplied to the reset gate unit Tr24 and the selection transistor Tr27, respectively.
• the control related to the recognition of the states of the photoelectric conversion elements PD1 and PD2 is roughly divided into first control for accumulating charges in the photoelectric conversion elements PD1 and PD2 of the target pixel 2c, and second control for reading out the charges accumulated in the photoelectric conversion elements PD1 and PD2.
• one frame period is assigned to each of the first control and the second control. That is, the frame period to which the first control is assigned corresponds to an "accumulation frame", and the frame period to which the second control is assigned corresponds to a "readout frame".
• the accumulation frame will be described. As shown in FIG. 17, in the accumulation frame, the power supply voltage applied to the pixel 2c from the power supply VDDHPX is first controlled to 0 V, and then controlled to the predetermined voltage VDD, so that the voltage VDD is applied to the pixel 2c.
  • FIG. 18 is an explanatory diagram for explaining an example of drive control of the solid-state imaging device 1c according to the modification of the present embodiment, and schematically shows the state of the pixel 2c in the period T21 in FIG.
  • the TG1 drive pulse, the FCG drive pulse, the TG2 drive pulse, the FDG drive pulse, and the RST drive pulse are controlled to be in the on state, and the SEL drive pulse is controlled to be in the off state.
  • the voltage applied to the pixel 2 from the power supply VDDHPX is controlled to 0V.
• the potentials of the floating diffusion portion FD and the charge storage portion FC are controlled to 0 V, a potential difference is generated between the anode and the cathode of each of the photoelectric conversion elements PD1 and PD2, and charges are injected into the photoelectric conversion elements PD1 and PD2.
• note that, as a result of the control shown in FIG. 18, the amount of charge held in each of the photoelectric conversion elements PD1 and PD2 is determined by the saturation characteristics of the photoelectric conversion elements PD1 and PD2, regardless of their light receiving state. That is, when some abnormality occurs in the photoelectric conversion element PD1, the amount of charge held in the photoelectric conversion element PD1 changes (for example, decreases) compared to the normal state. The same applies to the photoelectric conversion element PD2. The control for injecting charges into each of the photoelectric conversion elements PD1 and PD2, shown in FIG. 18, may be executed for all the pixels 2c at a predetermined timing (that is, a global reset), or may be executed individually for each pixel 2c in a time division manner.
  • FIG. 19 is an explanatory diagram for explaining an example of drive control of the solid-state imaging device 1c according to the application example of the present embodiment, and schematically shows the state of the pixel 2c in the period T23 of FIG.
• each of the FDG drive pulse and the RST drive pulse is kept in the on state, and each of the TG1 drive pulse, the FCG drive pulse, and the TG2 drive pulse is controlled to be in the off state.
  • the SEL drive pulse is kept off.
  • the voltage applied to the pixel 2c from the power supply VDDHPX is controlled to VDD.
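• The accumulation-frame control just described (charge injection in period T21 and the subsequent hold in period T23) can be written down as a short, purely illustrative sketch. The representation as Python dictionaries and the printing helper are assumptions made only for this sketch; the phase contents follow the description of FIGS. 18 and 19.

```python
# Illustrative sketch of the accumulation-frame control (periods T21 and T23).
# Each phase lists the pixel supply voltage VDDHPX and which drive pulses are on.
# The phase order and signal states follow the description above; representing
# them as dicts is only an assumption for illustration.

ACCUMULATION_FRAME = [
    {   # Period T21: charge injection.
        "vddhpx": 0.0,                       # supply to the pixel driven to 0 V
        "on":  {"TG1", "FCG", "TG2", "FDG", "RST"},
        "off": {"SEL"},
        "effect": "FD/FC pulled to 0 V; charges injected into PD1 and PD2 "
                  "up to their saturation charge amounts",
    },
    {   # Period T23: hold state until the readout frame.
        "vddhpx": "VDD",                     # supply restored to VDD
        "on":  {"FDG", "RST"},
        "off": {"TG1", "FCG", "TG2", "SEL"},
        "effect": "PD1/PD2 isolated; injected charges are held",
    },
]

for i, phase in enumerate(ACCUMULATION_FRAME, start=1):
    print(f"phase {i}: VDDHPX={phase['vddhpx']}, on={sorted(phase['on'])}")
```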
• FIG. 20 is a schematic timing chart showing an example of drive control of the solid-state imaging device 1c according to the present embodiment, and shows an example of control related to readout of the charges accumulated in the photoelectric conversion elements PD1 and PD2 of the pixel 2c.
  • VSL indicates the potential of a signal output via the vertical signal line (that is, a pixel signal output from the pixel 2c).
  • the signal shown as VSL is individually shown for each of the dark state and the bright state.
  • RAMP indicates the potential of the ramp wave output from the DAC in the ADC to the comparator.
  • a pulse indicating a change in potential of a signal output via the vertical signal line is superimposed on a pulse indicating a change in potential of the ramp wave.
  • VCO represents a voltage signal output from a counter in the ADC.
  • the P phase indicates a pre-data phase for reading the reset level of the pixel signal output from the pixel 2c.
  • the D phase indicates a data phase for reading the data level of the pixel signal.
• as shown in FIG. 20, in the solid-state imaging device 1c according to the modification of the present embodiment, first, the first pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD1 is read out, and then the second pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD2 is read out. At this time, for the readout of the first pixel signal, the P phase is read first and then the D phase is read. On the other hand, for the readout of the second pixel signal, the charge accumulated in the charge storage unit FC is reset together with the P-phase readout, so the D phase is read first and then the P phase is read. In the following, the operation of the solid-state imaging device 1c related to the readout of each of the first pixel signal and the second pixel signal will be described separately for the operation related to the P-phase readout and the operation related to the D-phase readout.
  • the FDG drive pulse and the RST drive pulse are controlled to be in an off state. That is, at the start of the readout frame, each of the TG1 drive pulse, the FCG drive pulse, the TG2 drive pulse, the FDG drive pulse, the RST drive pulse, and the SEL drive pulse is in an OFF state. Thereafter, readout of the pixel signal from the target pixel 2c is started at a predetermined timing (a predetermined horizontal synchronization period) in the readout frame.
  • P-phase readout is performed on the first pixel signal corresponding to the electric charge accumulated in the photoelectric conversion element PD1.
• the potential of the floating diffusion portion FD is reset to the level of the power supply voltage VDD by temporarily controlling the RST drive pulse to the on state.
  • the TG1 drive pulse, the FCG drive pulse, and the TG2 drive pulse are kept off. That is, between the photoelectric conversion element PD1 and the floating diffusion portion FD, and between the charge storage portion FC (and thus the photoelectric conversion element PD2) and the floating diffusion portion FD are in a non-conductive state. Therefore, the pixel signal read from the pixel 2c via the vertical signal line VSL at this time indicates the reset level of the pixel signal output from the pixel 2c.
  • D-phase reading is performed on the first pixel signal corresponding to the electric charge accumulated in the photoelectric conversion element PD1.
  • the TG1 drive pulse is temporarily controlled to be in an on state, and the photoelectric conversion element PD1 and the floating diffusion portion FD are in a conductive state during the period in which the TG1 drive pulse indicates the on state.
  • the electric charge accumulated in the photoelectric conversion element PD1 is transferred to the floating diffusion portion FD and accumulated in the floating diffusion portion FD.
• a voltage corresponding to the charge accumulated in the floating diffusion portion FD (in other words, the charge transferred from the photoelectric conversion element PD1) is applied to the gate of the amplification transistor Tr26, and the amplification transistor Tr26 is controlled to be in a conductive state.
• a pixel signal (that is, the first pixel signal) corresponding to the voltage applied to the gate of the amplification transistor Tr26 is output from the pixel 2c via the vertical signal line VSL. That is, the charge corresponding to the saturation characteristic of the photoelectric conversion element PD1 is read from the photoelectric conversion element PD1, and the first pixel signal corresponding to the readout result of the charge is output from the pixel 2c via the vertical signal line VSL.
• next, the SEL drive signal is controlled to the off state; the FDG drive signal is first temporarily controlled to the off state, and then the RST drive signal is temporarily controlled to the on state.
  • the potential of the floating diffusion portion FD is reset to the level of the power supply voltage VDD.
  • the FCG drive signal is controlled to be in an on state, and the floating diffusion unit FD and the charge storage unit FC are brought into conduction.
  • the SEL drive signal is controlled to be on, and reading of the second pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD2 is started.
• as described above, for the second pixel signal, the D-phase readout is performed first. Specifically, the TG2 drive pulse is temporarily controlled to be in an on state, and the photoelectric conversion element PD2 and the charge storage unit FC are in a conductive state during the period in which the TG2 drive pulse is in the on state. That is, during this period, the photoelectric conversion element PD2, the charge storage unit FC, and the floating diffusion unit FD are in a conductive state. As a result, the potentials of the charge storage unit FC and the floating diffusion unit FD are combined, and the charges stored in the photoelectric conversion element PD2 are transferred to the combined region and stored in that region.
• a voltage corresponding to the charge accumulated in the region (in other words, the charge transferred from the photoelectric conversion element PD2) is applied to the gate of the amplification transistor Tr26, and the amplification transistor Tr26 is controlled to be conductive. Accordingly, a pixel signal (that is, the second pixel signal) corresponding to the voltage applied to the gate of the amplification transistor Tr26 is output from the pixel 2c via the vertical signal line VSL. That is, the charge corresponding to the saturation characteristic of the photoelectric conversion element PD2 is read from the photoelectric conversion element PD2, and the second pixel signal corresponding to the readout result of the charge is output from the pixel 2c via the vertical signal line VSL.
  • P-phase readout is performed on the second pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD2.
  • the SEL drive signal is controlled to the off state, and then the RST drive signal is temporarily controlled to the on state.
  • the potential of the region where the potentials of the charge storage unit FC and the floating diffusion unit FD are combined is reset to the level of the power supply voltage VDD.
• the SEL drive signal is controlled to be in an on state, a voltage corresponding to the potential of the region is applied to the gate of the amplification transistor Tr26, and a pixel signal corresponding to the voltage (that is, the second pixel signal) is output via the vertical signal line VSL.
  • the TG1 drive pulse, the FCG drive pulse, and the TG2 drive pulse are kept off.
• the path between the photoelectric conversion element PD1 and the floating diffusion part FD and the path between the charge storage part FC and the floating diffusion part FD (and thus between the photoelectric conversion element PD2 and the floating diffusion part FD) are each in a non-conductive state. Therefore, the pixel signal read from the pixel 2c via the vertical signal line VSL at this time indicates the reset level of the pixel signal output from the pixel 2c.
• the first pixel signal and the second pixel signal sequentially output from the pixel 2c via the vertical signal line VSL are converted into digital signals by the ADC 113 and output to, for example, the sensor data unit 211 described above.
• the digital signals sequentially output to the sensor data unit 211 indicate potentials corresponding to the saturation characteristics of the photoelectric conversion elements PD1 and PD2 included in the pixel 2c. That is, the sensor data unit 211 can individually recognize the state of the pixel 2c (and thus the state of each of the photoelectric conversion elements PD1 and PD2 included in the pixel 2c) based on the digital signals.
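• Since both the reset level (P phase) and the data level (D phase) are digitized, the state of the pixel can be evaluated from their difference. The sketch below is a hypothetical illustration of such an evaluation; the expected codes, the tolerance, and the function names are assumptions introduced only for this sketch and are not taken from the disclosure.

```python
# Illustrative correlated-double-sampling style evaluation of the two pixel
# signals read in the readout frame. The expected codes and the tolerance are
# hypothetical values chosen only to show the comparison; the disclosure does
# not specify them.

def signal_level(p_phase_code: int, d_phase_code: int) -> int:
    """Net signal = data level minus reset level, independent of readout order."""
    return d_phase_code - p_phase_code

def pixel_state(first_p, first_d, second_p, second_d,
                expected_pd1=3000, expected_pd2=800, tolerance=0.2):
    """Classify a pixel 2c as normal/abnormal from the injected-charge readout.

    expected_pd1/expected_pd2 stand for the digital codes that the saturation
    charge amounts of PD1 and PD2 would normally produce (hypothetical values).
    """
    pd1 = signal_level(first_p, first_d)    # first pixel signal  -> PD1
    pd2 = signal_level(second_p, second_d)  # second pixel signal -> PD2
    ok_pd1 = abs(pd1 - expected_pd1) <= tolerance * expected_pd1
    ok_pd2 = abs(pd2 - expected_pd2) <= tolerance * expected_pd2
    return {"PD1": "normal" if ok_pd1 else "abnormal",
            "PD2": "normal" if ok_pd2 else "abnormal"}

# Example: PD1 returns far less charge than its saturation characteristic implies.
print(pixel_state(first_p=100, first_d=1600, second_p=110, second_d=930))
```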
• the application of the power supply voltage to the pixels is controlled so that charges are injected into the photoelectric conversion elements of at least some of the plurality of pixels. After that, the supply of drive signals to those pixels is controlled so that pixel signals corresponding to the injected charges are read out from the photoelectric conversion elements.
  • the solid-state imaging device according to the present embodiment recognizes the state of the pixel according to the readout result of the pixel signal corresponding to the charge from the photoelectric conversion element of the at least some pixels.
• it is possible to individually recognize the state of each pixel (and thus the photoelectric conversion element included in the pixel) based on the pixel signal output from that pixel. Therefore, in the solid-state imaging device, for example, when a failure occurs in some pixels, the abnormality can be detected for each pixel. Further, by using such a mechanism, for example, when an abnormality occurs in some pixels, it is possible to output information about those pixels to a predetermined output destination. As another example, since it is possible to specify the position of a pixel in which a failure has occurred, it is also possible to correct the pixel signal output from that pixel when an image is captured, based on the pixel signal output from another pixel (for example, an adjacent pixel).
• the application of the power supply voltage to each pixel is controlled to inject charges into the photoelectric conversion element of the pixel. That is, the amount of charge held in the photoelectric conversion element as a result of this control is determined by the saturation characteristics of the photoelectric conversion elements PD1 and PD2, regardless of the light receiving state of the photoelectric conversion element. Owing to this characteristic, according to the solid-state imaging device of the present embodiment, control related to recognition of the state of each pixel (for example, a test for detecting a defective pixel) can be executed regardless of the amount of light in the external environment. That is, according to the solid-state imaging device of the present embodiment, a test for detecting a failure of each pixel 2 can be performed even in an environment where the amount of light is small.
• Solid-state imaging device 1d: an example of a mechanism by which the solid-state imaging device 1 more efficiently executes various tests such as failure detection during an image (particularly, moving image) imaging period will be described below.
  • the solid-state imaging device 1 may be referred to as “solid-state imaging device 1d”.
  • FIG. 21 is a block diagram illustrating an example of a schematic configuration of the solid-state imaging device 1d according to the present embodiment.
• the configuration of the solid-state imaging device 1d will be described by focusing on the differences from the solid-state imaging device 1 described with reference to FIGS. 1 to 8, and detailed description of the parts substantially the same as in the solid-state imaging device 1 is omitted.
  • FIG. 21 illustrates an example of a configuration of a portion related to reading of a pixel signal from the pixel 2 in the configuration of the solid-state imaging device 1d according to the present embodiment. That is, in the example shown in FIG. 21, the parts corresponding to the constant current circuit unit 114 and the ADC 113 are mainly shown, and the other components are not shown.
• the MOS transistor 161, the comparator 141, the DAC 142, and the counter 143 are substantially the same as the corresponding components described earlier, and detailed description thereof is omitted.
  • the comparator 141, the DAC 142, and the counter 143 correspond to the ADC 113 shown in FIG.
  • the MOS transistor 161 corresponds to the constant current circuit unit 114 shown in FIG.
  • the solid-state imaging device 1d includes a sensor data unit 221.
  • the sensor data unit 221 corresponds to the sensor data unit 211 in the solid-state imaging device 1a according to the first embodiment described with reference to FIG.
• the control unit 101 illustrated in FIG. 3 controls the timing of exposure by each pixel 2 and the timing of reading a pixel signal based on the exposure result from the pixel 2. Further, the control unit 101 controls the operation of a predetermined configuration (for example, the sensor data unit 221) in the solid-state imaging device 1d so that a predetermined test such as failure detection is executed using a period, within the unit frame period corresponding to a predetermined frame rate, in which neither exposure by at least some of the pixels 2 nor reading of pixel signals based on the exposure results is performed.
  • the timing at which the control unit 101 causes the predetermined configuration such as the sensor data unit 221 to execute the predetermined test will be described later in detail along with an example of drive control of the solid-state imaging device 1d.
• the sensor data unit 221 executes a predetermined test such as failure detection based on control from the control unit 101. Specifically, the sensor data unit 221 recognizes the state of a predetermined configuration in the solid-state imaging device 1d based on the signal output from the counter 143, that is, the digital signal obtained by converting the pixel signal supplied from the pixel 2, and thereby detects an abnormality when it occurs in that configuration.
• specifically, based on the digital signal output from the counter 143, the sensor data unit 221 can detect an abnormality occurring in at least one of: at least some of the pixels 2, the configuration that supplies drive signals to each pixel 2 (for example, the pixel timing drive circuit 5, the address decoder 4, and the like), and the ADC 113.
  • the sensor data unit 221 may specify the ADC 113 that is the output source of the digital signal and the pixel 2 in which an abnormality has occurred according to the output timing of the digital signal.
• it is possible to recognize that an abnormality has occurred in the configuration related to the output of the pixel signal from each of the plurality of pixels 2 (for example, the address decoder 4, the pixel timing drive circuit 5, the ADC 113, and the like).
  • the sensor data unit 221 includes a wiring connected to at least some of the pixels 2, a configuration for supplying a driving signal to each pixel 2, and the ADC 113 according to the output state of the digital signal from the counter 143. It is possible to detect an abnormality occurring in at least one of them.
• for example, when an abnormality occurs in the output state of the digital signals for some columns, it can be recognized that an abnormality has occurred in the vertical signal line corresponding to that column or in the ADC 113 corresponding to that column.
• similarly, when an abnormality occurs in the output state of the digital signals for some rows, it can be recognized that an abnormality has occurred in the horizontal signal line corresponding to that row.
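• As a purely illustrative sketch of this localization idea (the statistic and the zero-variance criterion below are assumptions, not taken from the disclosure), a frame of digital codes can be scanned per column and per row; a column whose codes never vary points to the vertical signal line or the ADC 113 of that column, and a similarly frozen row points to the corresponding horizontal drive path.

```python
# Illustrative localization of a failure to a column or a row from the digital
# codes of one frame. Treating "stuck" output as zero variance is an assumption
# used only for illustration; the disclosure does not prescribe a statistic.

from statistics import pvariance

def find_stuck_lines(frame):
    """frame: list of rows, each a list of digital codes (ints).
    Returns (bad_columns, bad_rows) whose codes show no variation at all."""
    rows = len(frame)
    cols = len(frame[0])
    bad_cols = [c for c in range(cols)
                if pvariance([frame[r][c] for r in range(rows)]) == 0]
    bad_rows = [r for r in range(rows)
                if pvariance(frame[r]) == 0]
    return bad_cols, bad_rows

# Example: column 2 is stuck at one code, suggesting a fault in the vertical
# signal line of that column or in the ADC 113 assigned to it.
frame = [[10, 12, 7, 11],
         [ 9, 13, 7, 10],
         [11, 12, 7, 12]]
print(find_stuck_lines(frame))   # -> ([2], [])
```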
• the subject that performs the detection is not limited to the sensor data unit 221, and the detection method is not limited either.
  • a unit for detecting an abnormality occurring in the configuration may be additionally provided separately from the sensor data unit 221 depending on the configuration to be tested.
• as such an additionally provided unit, for example, a predetermined filter (for example, an LPF) may be used.
  • the sensor data unit 221 may execute a predetermined process according to the detection result.
  • the sensor data unit 221 may notify the outside of the solid-state imaging device 1d of a detection result of an abnormality that has occurred in at least a part of the configuration.
• as a specific example, the sensor data unit 221 may output a predetermined signal indicating that an abnormality has been detected to the outside of the solid-state imaging device 1d via a predetermined output terminal (that is, an Error pin).
  • a predetermined DSP (Digital Signal Processor) 401 provided outside the solid-state imaging device 1d may be notified that an abnormality has been detected.
• a part of the sensor data unit 221 that controls the output of the detection result of an abnormality occurring in at least a part of the configuration to a predetermined output destination corresponds to an example of the "output control unit".
• further, the output from a pixel 2 may be corrected based on the outputs from other pixels 2.
  • FIGS. 22 and 23 are explanatory diagrams for explaining an example of an operation related to pixel signal correction in the solid-state imaging device 1d according to the present embodiment.
• FIG. 22 shows an example in which an abnormality occurs in the output of the pixel signals corresponding to some columns.
  • an example in which the pixel signal corresponding to the column in which the abnormality has occurred is corrected based on the pixel signal corresponding to another column adjacent to the column.
• in this case, the sensor data unit 221 may specify the column in which the abnormality has occurred and another column adjacent to that column by specifying the ADC 113 in which the abnormality is detected in the output of the digital signal.
  • FIG. 23 shows an example where an abnormality occurs in the output of pixel signals corresponding to some rows.
  • an example in which the pixel signal corresponding to the row in which the abnormality has occurred is corrected based on the pixel signal corresponding to another row adjacent to the row.
  • the sensor data unit 221 may identify a row in which an abnormality has occurred and another row adjacent to the row based on the timing at which the pixel signal in which the abnormality has occurred is read.
• it is also possible to correct the pixel signal output from a pixel 2 in which an abnormality has occurred based on the pixel signal output from another pixel 2 adjacent to that pixel 2.
  • a portion of the sensor data unit 221 that corrects an output from at least some of the pixels 2 corresponds to an example of a “correction processing unit”.
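• The correction described with reference to FIGS. 22 and 23 can be pictured as simple interpolation from neighboring lines. The code below is a minimal, hypothetical illustration (clamping at the array edges and averaging two neighbors are assumptions made here); it is not presented as the actual correction algorithm of the device.

```python
# Illustrative correction of the pixel signals of a faulty column by averaging
# the two adjacent columns (the same idea applies to a faulty row or a single
# faulty pixel). Edge handling and the averaging rule are assumptions.

def correct_column(frame, bad_col):
    """Replace every code in column bad_col with the mean of its horizontal
    neighbors; frame is a list of rows of digital codes."""
    cols = len(frame[0])
    left = bad_col - 1 if bad_col > 0 else bad_col + 1
    right = bad_col + 1 if bad_col < cols - 1 else bad_col - 1
    corrected = [row[:] for row in frame]          # do not modify the input
    for row in corrected:
        row[bad_col] = (row[left] + row[right]) // 2
    return corrected

frame = [[10, 12,  0, 11],
         [ 9, 13,  0, 10],
         [11, 12,  0, 12]]
print(correct_column(frame, bad_col=2))
# column 2 is rebuilt from columns 1 and 3: [[10, 12, 11, 11], ...]
```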
• FIG. 24 is a schematic timing chart showing an example of drive control of the solid-state imaging device 1d according to the present embodiment, and shows an example of the timing control under which a predetermined test of the solid-state imaging device 1d is executed.
  • the horizontal axis indicates the time direction
  • the vertical axis indicates the position in the row direction of the two-dimensionally arranged pixels 2.
• here, the drive control of the solid-state imaging device 1d will be described focusing on the case where each pixel 2 executes exposure and readout of the exposure result a plurality of times during the unit frame period (that is, one vertical synchronization period).
• specifically, the solid-state imaging device 1d sequentially executes, in a time division manner, a first exposure (Long exposure), a second exposure (Middle exposure), and a third exposure (Short exposure) having different exposure times during the unit frame period.
  • reference symbols T111 and T112 indicate an exposure period (Long Shutter) in the first exposure
• reference symbols T121 and T122 indicate a readout period (Long Read) of the pixel signals based on the result of the first exposure.
  • Reference symbols T131 and T132 indicate an exposure period (Middle Shutter) in the second exposure
  • reference symbols T141 and T142 indicate a pixel signal readout period (Middle Read) based on the result of the second exposure.
  • Reference symbols T151 and T152 indicate an exposure period (Short Shutter) in the third exposure
• reference symbols T161 and T162 indicate a readout period (Short Read) of the pixel signals based on the result of the third exposure.
  • reference symbol VBLK indicates a vertical blank (V blank) period.
• in the vertical blank period, a predetermined test such as column signal line failure detection or TSV failure detection is performed, and no pixel signals are read from any of the pixels 2 during this period.
• the vertical blank period VBLK corresponds to the period from when the readout of pixel signals from a series of pixels 2 in a certain frame period is completed until the readout of pixel signals from the series of pixels 2 in the next frame period is started.
• reference numerals T171 and T172 correspond to periods in which the pixels 2 in each row perform neither exposure (for example, the first to third exposures) nor readout of pixel signals based on the exposure results.
  • the solid-state imaging device 1d according to the present embodiment performs a predetermined test (for example, BIST: Built-In Self-Test) using the periods T171 and T172. Specific examples of the predetermined test include failure detection for each pixel.
  • the periods indicated by reference numerals T171 and T172 are also referred to as “BIST periods”.
  • the BIST periods T171 and T172 are also referred to as “BIST period T170” unless otherwise distinguished.
• specifically, the BIST period T170 starts after the readout of the pixel signal based on the result of the last exposure (for example, the third exposure) in the unit frame period, in which one or more exposures (for example, the first to third exposures) are performed by the pixels in a certain row, is completed. Further, the BIST period T170 ends before the first exposure (for example, the first exposure) in the next frame period following the unit frame period is started. As a more specific example, the BIST period T171 shown in FIG. 24 is the period from the end of the pixel signal readout period T161 based on the third exposure result until the start of the exposure period T112 of the first exposure in the next unit frame period. The BIST period T170 may also be set between the first exposure and the second exposure, or between the second exposure and the third exposure. Although details will be described later, the BIST period T170 is generated by setting the vertical blank period VBLK.
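• To make the placement of the BIST period concrete, the sketch below computes, for one row, the idle interval between the end of the last readout in a unit frame and the start of the first exposure of the next frame. All timing values and the helper function are illustrative assumptions; the actual timings are the ones shown in FIG. 24.

```python
# Illustrative computation of the per-row BIST period T170: it starts when the
# last readout (Short Read) of the current unit frame ends and must end before
# the first exposure (Long Shutter) of the next unit frame starts.
# All numbers are made up for illustration (milliseconds within a 25 ms frame).

FRAME_MS = 25.0

def bist_window(short_read_end_ms, next_long_shutter_start_ms):
    """Return (start, end, duration) of the BIST window for one row."""
    start = short_read_end_ms
    end = FRAME_MS + next_long_shutter_start_ms   # event occurs in the next frame
    return start, end, end - start

# Example: Short Read of a row ends 22.0 ms into the frame, and the Long
# Shutter of the same row starts 1.5 ms into the next frame.
start, end, dur = bist_window(short_read_end_ms=22.0,
                              next_long_shutter_start_ms=1.5)
print(f"BIST period T170 for this row: {start:.1f} ms .. {end:.1f} ms "
      f"({dur:.1f} ms available for per-row tests such as pixel failure detection)")
```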
  • FIG. 25 and FIG. 26 are explanatory diagrams for explaining an example of schematic control related to readout of pixel signals from each pixel 2 in the solid-state imaging device 1d according to the present embodiment.
  • the vertical axis schematically shows the vertical synchronization period XVS
  • the horizontal axis schematically shows the horizontal synchronization period XHS.
• square regions indicated by reference characters L, M, and S schematically show the readout timing of the exposure results from each of the two-dimensionally arranged plurality of pixels 2, and correspond to the first exposure, the second exposure, and the third exposure, respectively.
  • the horizontal direction corresponds to the column direction of the plurality of pixels 2 that are two-dimensionally arranged
• the vertical direction corresponds to the row direction of the plurality of pixels 2.
  • pixel signals are read from the pixels 2 included in the row for each horizontal synchronization period.
  • pixel signals are sequentially read out based on each exposure result in the order of the first exposure, the second exposure, and the third exposure every horizontal synchronization period.
• reference numeral R111 in FIG. 25 schematically illustrates a part of the vertical synchronization period. That is, in the example shown in FIG. 25, in the period R111, the pixel signals based on the results of the first exposure, the second exposure, and the third exposure are read from the pixel 2 in the ⁇-th row, the pixel 2 in the ⁇-th row, and the pixel 2 in the ⁇-th row, respectively.
  • FIG. 26 shows a schematic timing chart relating to readout of the pixel signal from each pixel 2 in the example shown in FIG.
• specifically, the readout of the pixel signal based on the first exposure result from the pixel 2 in the ⁇-th row, the readout of the pixel signal based on the second exposure result from the pixel 2 in the ⁇-th row, and the readout of the pixel signal based on the third exposure result from the pixel 2 in the ⁇-th row are sequentially executed.
• subsequently, the readout of the pixel signal based on the first exposure result from the pixel 2 in the ⁇+1-th row, the readout of the pixel signal based on the second exposure result from the pixel 2 in the ⁇+1-th row, and the readout of the pixel signal based on the third exposure result from the pixel 2 in the ⁇+1-th row are sequentially executed.
• note that the drive control described above is merely an example; as long as at least the BIST period T170 is provided and a predetermined test can be performed during the BIST period T170, the drive control of the solid-state imaging device 1d according to the present embodiment is not necessarily limited to the example described with reference to FIGS. 24 to 26.
• the solid-state imaging device 1d according to the present embodiment may be configured such that each pixel 2 executes exposure and readout of the exposure result only once during the unit frame period.
• in this case, the BIST period T170 is started after the readout of the pixel signal based on the exposure result in a certain unit frame period is completed, and ends before the exposure in the next unit frame period is started.
  • FIG. 27 is a timing chart for explaining the relationship between the exposure time constraint and the vertical blank period in the solid-state imaging device 1d according to the present embodiment.
• FIG. 27 shows an example in which a first exposure (Long exposure), a second exposure (Middle exposure), and a third exposure (Short exposure) having different exposure times are sequentially performed during the unit frame period.
  • the horizontal axis and the vertical axis in FIG. 27 are the same as the horizontal axis and the vertical axis in FIG.
  • the unit frame period (that is, one vertical synchronization period) is 25 ms.
• the ratio of the exposure periods (in other words, the charge accumulation periods in the pixel 2) between the first to third exposures (hereinafter also referred to as the "exposure ratio") is assumed to be 16. That is, assuming that the first exposure period (Long Shutter) is A, the second exposure period (Middle Shutter) is A/16, and the third exposure period (Short Shutter) is A/256.
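• For reference, under the simplifying assumption (made here only for illustration, and ignoring readout and blanking overheads) that the three exposure periods must fit entirely within the 25 ms unit frame period, the longest exposure A is bounded as computed below.

```python
# Illustrative budget for the three exposures with an exposure ratio of 16:
# Long = A, Middle = A/16, Short = A/256. Assuming, purely for illustration,
# that A + A/16 + A/256 has to fit within the 25 ms unit frame period, the
# maximum Long Shutter duration A is:

frame_ms = 25.0
ratio_sum = 1 + 1/16 + 1/256             # = 273/256
a_max = frame_ms / ratio_sum             # longest A under this assumption

print(f"A (Long)    <= {a_max:.2f} ms")      # about 23.44 ms
print(f"A/16 (Mid)   = {a_max/16:.3f} ms")   # about 1.465 ms
print(f"A/256 (Short)= {a_max/256:.4f} ms")  # about 0.0916 ms
```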
• as described above, in the solid-state imaging device according to the present embodiment, a predetermined test is executed during a BIST period in which neither exposure by at least some pixels nor readout of pixel signals based on the exposure results is performed within the unit frame period corresponding to a predetermined frame rate.
  • the BIST period is started after the reading of the pixel signal based on the result of the last exposure in the unit frame period in which one or more exposures are performed by at least some pixels (for example, pixels in a certain row) is completed. . Further, the BIST period ends before the first exposure in the next frame period after the unit frame period is started.
• with such a configuration, it becomes possible to execute a test for detecting a failure of the pixels 2 included in each row in the BIST period defined corresponding to that row.
• in a conventional solid-state imaging device, when failure detection is executed for all rows, a period of at least one frame is required to execute the test, and it was necessary to provide a dedicated frame in which no image is captured for the test.
• in contrast, in the solid-state imaging device according to the present embodiment, a test for failure detection can be executed for each row in parallel with the imaging of an image. Therefore, unlike the conventional solid-state imaging device, there is no need to provide a dedicated frame in which no image is captured for testing.
• further, according to the solid-state imaging device of the present embodiment, at least a part of the tests performed in the vertical blank period can instead be performed in the BIST period. With such a configuration, the vertical blank period can be further shortened, and as a result, the frame rate can be further improved.
• of course, failure detection for the TSVs, column signal line failure detection, and the like may still be executed in the vertical blank period. With such a configuration, it is possible to execute each failure detection while securing a sufficient exposure time and maintaining the frame rate.
• as described above, in the solid-state imaging device according to the present embodiment, by performing a predetermined test using the BIST period, it becomes possible to efficiently execute various tests such as failure detection during the imaging period.
  • the hardware of the front camera ECU and the imaging device has a configuration in which a lower chip 1091 and an upper chip 1092 are stacked.
  • the right part of FIG. 28 represents a floor plan that is the hardware configuration of the lower chip 1091, and the left part of FIG. 28 represents the floor plan that is the hardware structure of the upper chip 1092.
• the lower chip 1091 and the upper chip 1092 are provided with TCVs (Through Chip Vias) 1093-1 and 1093-2 at the left and right ends in the respective drawings, which penetrate the chips and electrically connect the lower chip 1091 and the upper chip 1092.
  • a row driving unit 1102 (FIG. 29) is arranged on the right side of the TCV 1093-1 in the drawing and is electrically connected.
  • a control line gate 1143 (FIG. 29) of the front camera ECU 73 is arranged on the left side of the TCV 1093-2 in the drawing and is electrically connected. Details of the row driver 1102 and the control line gate 1143 will be described later with reference to FIG. In this specification, TCV and TSV are treated as synonymous.
• the lower chip 1091 and the upper chip 1092 are also provided with TCVs 1093-11 and 1093-12 at the upper and lower ends in the respective drawings, which penetrate the chips and electrically connect the lower chip 1091 and the upper chip 1092.
• a column ADC (Analog to Digital Converter) 1111-1 is arranged below the TCV 1093-11 in the figure and electrically connected to it, and a column ADC (Analog to Digital Converter) 1111-2 is arranged above the TCV 1093-12 in the figure and electrically connected to it.
• a DAC (Digital to Analog Converter) 1112 is provided between the right end portions of the column ADCs 1111-1 and 1111-2 in the drawing and on the left side of the control line gate 1143, and outputs a ramp voltage to the column ADCs 1111-1 and 1111-2 as indicated by arrows C1 and C2 in the drawing. Note that the column ADCs 1111-1 and 1111-2 and the DAC 1112 have a configuration corresponding to the image signal output unit 1103 in FIG. Since the DAC 1112 preferably outputs ramp voltages having the same characteristics to the column ADCs 1111-1 and 1111-2, it is desirable that the DAC 1112 be equidistant from both of them. Further, although only one DAC 1112 is shown in the example of FIG. 28, two DACs 1112 having the same characteristics may be provided, one for each of the column ADCs 1111-1 and 1111-2. Details of the image signal output unit 1103 will be described later with reference to FIG.
• a signal processing circuit 1113 is provided between the upper and lower column ADCs 1111-1 and 1111-2, and between the row driving unit 1102 and the DAC 1112.
• the signal processing circuit 1113 realizes functions corresponding to the control unit 1121, the image processing unit 1122, the output unit 1123, and the failure detection unit 1124 in FIG.
• in the upper chip 1092, the pixel array 1101 occupies substantially the entire surface of the rectangular area surrounded by the TCVs 1093-1, 1093-2, 1093-11, and 1093-12 provided at the top, bottom, left, and right ends.
• based on the control signal supplied from the row driving unit 1102 through the TCV 1093-1 via the pixel control line L (FIG. 29), the pixel array 1101 outputs, among the pixel signals, the pixel signals of the upper half pixels in the figure to the lower chip 1091 via the TCV 1093-11, and outputs the pixel signals of the lower half pixels in the figure to the lower chip 1091 via the TCV 1093-12.
• as indicated by an arrow B1 in the figure, the control signal is output from the signal processing circuit 1113, which implements the row driving unit 1102, through the TCV 1093-1 and via the pixel control line L of the pixel array of the upper chip 1092 to the control line gate 1143 (FIG. 29).
• for the row address given as command information from the control unit 1121 (FIG. 29), the control line gate 1143 receives the control signal supplied from the row driving unit 1102 (FIG. 29) via the pixel control line L.
• the control line gate 1143 detects the presence or absence of a failure due to disconnection of the pixel control line L or the TCVs 1093-1 and 1093-2 by comparing the signal output from the gate 1143 with the detection pulse of the control signal corresponding to the row address supplied from the control unit 1121. Then, the control line gate 1143 outputs information on the presence or absence of a failure to the failure detection unit 1124 realized by the signal processing circuit 1113, as indicated by an arrow B2 in the figure.
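• The comparison performed by the control line gate 1143 can be pictured as checking, for the addressed row, whether the waveform that actually arrived through the pixel control line L and the TCVs matches the expected detection pulse. The following sketch is a hypothetical bit-level illustration of that check; the sample counts and the exact-match rule are assumptions and not taken from the disclosure.

```python
# Illustrative check of one pixel control line: the signal observed after
# passing through TCV 1093-1, the pixel control line L of the upper chip and
# TCV 1093-2 is compared, sample by sample, with the detection pulse expected
# for the addressed row. A mismatch (e.g. an all-zero trace caused by a
# disconnection) is reported as a failure. Sample counts are assumptions.

def control_line_ok(observed, expected):
    """Return True if the observed samples equal the expected detection pulse."""
    return list(observed) == list(expected)

expected_pulse = [0, 1, 1, 0, 0, 1, 0, 0]   # hypothetical detection pulse
healthy_trace  = [0, 1, 1, 0, 0, 1, 0, 0]   # line intact
broken_trace   = [0, 0, 0, 0, 0, 0, 0, 0]   # disconnection in L or a TCV

print(control_line_ok(healthy_trace, expected_pulse))  # True  -> no failure
print(control_line_ok(broken_trace, expected_pulse))   # False -> reported to
                                                       # failure detection unit 1124
```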
• the column ADC 1111-1 converts the pixel signals of the upper half pixels in the figure of the pixel array 1101, supplied via the TCV 1093-11, into digital signals in units of columns, as indicated by an arrow A1 in the figure, and outputs them to the signal processing circuit 1113. Further, the column ADC 1111-2, as indicated by an arrow A2 in the figure, converts the pixel signals of the lower half pixels in the figure of the pixel array 1101, supplied via the TCV 1093-12, into digital signals in units of columns and outputs them to the signal processing circuit 1113.
• with this configuration, the upper chip 1092 contains only the pixel array 1101, so that a semiconductor process specialized for pixels can be introduced. For example, since there are no circuit transistors in the upper chip 1092, there is no need to pay attention to characteristic fluctuations caused by an annealing process at 1000 °C or the like, and the pixel characteristics can accordingly be improved.
• further, since the failure detection unit 1124 is provided in the lower chip 1091, it is possible to detect signals that have passed from the lower chip 1091 through the upper chip 1092 and back to the lower chip 1091 via the TCVs 1093-1 and 1093-2, so that a failure can be detected appropriately.
  • the upper chip 1092 corresponds to an example of a “first substrate”
  • the lower chip 1091 corresponds to an example of a “second substrate”.
  • the technology according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure is realized as a device that is mounted on any type of mobile body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, personal mobility, an airplane, a drone, a ship, and a robot. May be.
  • FIG. 30 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050.
• as a functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
• for example, the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a headlamp, a back lamp, a brake lamp, a blinker, or a fog lamp.
• radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
  • the body system control unit 12020 receives input of these radio waves or signals, and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
  • the vehicle outside information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted.
  • the imaging unit 12031 is connected to the vehicle exterior information detection unit 12030.
  • the vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle and receives the captured image.
• based on the received image, the vehicle exterior information detection unit 12030 may perform detection processing for objects such as a person, a car, an obstacle, a sign, or characters on a road surface, or distance detection processing.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light.
  • the imaging unit 12031 can output an electrical signal as an image, or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
  • the vehicle interior information detection unit 12040 detects vehicle interior information.
  • a driver state detection unit 12041 that detects a driver's state is connected to the in-vehicle information detection unit 12040.
  • the driver state detection unit 12041 includes, for example, a camera that images the driver, and the vehicle interior information detection unit 12040 determines the degree of fatigue or concentration of the driver based on the detection information input from the driver state detection unit 12041. It may be calculated or it may be determined whether the driver is asleep.
  • the microcomputer 12051 calculates a control target value of the driving force generator, the steering mechanism, or the braking device based on the information inside / outside the vehicle acquired by the vehicle outside information detection unit 12030 or the vehicle interior information detection unit 12040, and the drive system control unit A control command can be output to 12010.
  • the microcomputer 12051 realizes an ADAS (Advanced Driver Assistance System) function including vehicle collision avoidance or impact mitigation, following traveling based on inter-vehicle distance, vehicle speed maintaining traveling, vehicle collision warning, or vehicle lane departure warning, etc. It is possible to perform cooperative control for the purpose.
• for example, the microcomputer 12051 can perform cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on information outside the vehicle acquired by the vehicle outside information detection unit 12030.
• for example, the microcomputer 12051 can perform cooperative control for the purpose of preventing glare, such as switching from high beam to low beam, by controlling the headlamp according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030.
  • the sound image output unit 12052 transmits an output signal of at least one of sound and image to an output device capable of visually or audibly notifying information to a vehicle occupant or the outside of the vehicle.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
  • the display unit 12062 may include at least one of an on-board display and a head-up display, for example.
  • FIG. 31 is a diagram illustrating an example of an installation position of the imaging unit 12031.
  • the vehicle 12100 includes imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as a front nose, a side mirror, a rear bumper, a back door, and an upper part of a windshield in the vehicle interior of the vehicle 12100.
  • the imaging unit 12101 provided in the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirror mainly acquire an image of the side of the vehicle 12100.
  • the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100.
  • the forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 31 shows an example of the shooting range of the imaging units 12101 to 12104.
  • the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided in the front nose
  • the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided in the side mirrors, respectively
• the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided in the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, an overhead image of the vehicle 12100 viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
• for example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, a three-dimensional object traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100.
• further, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance in front of the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like.
• in this way, cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, can be performed.
• for example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data related to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract the data, and use it for automatic avoidance of obstacles.
• for example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see.
• then, the microcomputer 12051 determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, the microcomputer 12051 can perform driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
• for example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure for extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure for performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether the object is a pedestrian.
• when the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasizing the recognized pedestrian is superimposed and displayed.
• the audio image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating a pedestrian is displayed at a desired position.
  • the technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above.
  • the solid-state imaging device 1 illustrated in FIG. 1 can be applied to the imaging unit 12031.
• by applying the technology according to the present disclosure to the imaging unit 12031, for example, when an abnormality occurs in at least some of the pixels of the solid-state imaging device constituting the imaging unit 12031, it becomes possible to detect the abnormality.
• as a result, for example, it becomes possible to notify the user of information indicating that the abnormality has occurred via a predetermined output unit.
• as another example, the function of controlling the vehicle can be restricted or prohibited.
  • Specific examples of the function of controlling the vehicle include a collision avoidance or impact mitigation function of the vehicle, a following traveling function based on the inter-vehicle distance, a vehicle speed maintaining traveling function, a vehicle collision warning function, or a vehicle lane departure warning function. .
• when the abnormality described above is detected, such a function of controlling the vehicle can be restricted or prohibited. This makes it possible to prevent an accident caused by erroneous detection due to a malfunction of the imaging unit 12031.
  • FIG. 32 is a block diagram showing an example of a schematic configuration of an imaging apparatus applied to a moving body.
  • the imaging apparatus 800 illustrated in FIG. 32 corresponds to, for example, the imaging unit 12031 illustrated in FIG.
  • the imaging apparatus 800 includes an optical system 801, a solid-state imaging device 803, a control unit 805, and a communication unit 807.
• the solid-state imaging device 803 may correspond to, for example, the imaging unit 12031 illustrated in FIG. That is, light that has entered the imaging device 800 via the optical system 801 such as a lens is photoelectrically converted into an electric signal by the solid-state imaging device 803, and an image corresponding to the electric signal or ranging information corresponding to the electric signal is output to the control unit 805.
• the control unit 805 is configured as, for example, an ECU (Electronic Control Unit), and executes various processes based on the image and the ranging information output from the solid-state imaging device 803. As a specific example, the control unit 805 performs various kinds of analysis processing on the image output from the solid-state imaging device 803, and based on the analysis result, recognizes external objects such as a person, a vehicle, an obstacle, a sign, or characters on the road surface, and measures the distance to the object.
  • control unit 805 is connected to a vehicle-mounted network (CAN: Controller Area Network) via the communication unit 807.
  • the communication unit 807 corresponds to an interface with so-called CAN communication. Based on such a configuration, for example, the control unit 805 transmits / receives various information to / from other control units (for example, the integrated control unit 12050 shown in FIG. 30) connected to the in-vehicle network.
  • the control unit 805 can provide various functions by using, for example, the recognition results for objects and the measurement results of the distances to those objects as described above. Specific examples of such functions include the following:
  • Pedestrian Detection for FCW (Forward Collision Warning) / AEB (Automatic Emergency Braking)
  • Vehicle Detection for FCW / AEB
  • LDW (Lane Departure Warning)
  • TJP (Traffic Jam Pilot)
  • LKA (Lane Keeping Aid)
  • VO ACC (Vision Only Adaptive Cruise Control)
  • VO TSR (Traffic Sign Recognition)
  • IHC (Intelligent Head Lamp Control)
  • the control unit 805 can calculate the time until the vehicle collides with an external object, such as a person or another vehicle, in a situation where the vehicle is likely to collide with that object. Therefore, for example, when the calculated time is notified to the integrated control unit 12050, the integrated control unit 12050 can use the notified information for realizing the FCW.
  • the control unit 805 can detect the brake lamp of a preceding vehicle based on the analysis result of the image ahead of the vehicle. Therefore, when the detection result is notified to the integrated control unit 12050, the integrated control unit 12050 can use the notified information for realizing the AEB.
  • control unit 805 can recognize a lane in which the vehicle is traveling, an edge of the lane, a curb, and the like based on an analysis result of an image in front of the vehicle. Therefore, when the recognition result is notified to the integrated control unit 12050, the integrated control unit 12050 can use the notified information for realizing the LDW.
  • control unit 805 may recognize the presence or absence of a preceding vehicle based on the analysis result of the image ahead of the vehicle, and notify the integrated control unit 12050 of the recognition result.
  • the integrated control unit 12050 can control the vehicle speed according to the presence or absence of a preceding vehicle, for example, when the TJP is executed.
  • the control unit 805 may recognize the sign based on the analysis result of the image ahead of the vehicle, and notify the integrated control unit 12050 of the recognition result.
  • the integrated control unit 12050 can recognize the speed limit according to the recognition result of the sign and control the vehicle speed according to the speed limit, for example, when the TJP is executed.
  • the control unit 805 can also recognize the entrance and exit of an expressway and whether or not the traveling vehicle has reached a curve, and these recognition results can be used by the integrated control unit 12050 for vehicle control.
  • the control unit 805 can also recognize the light source located in front of the vehicle based on the analysis result of the image in front of the vehicle. That is, the integrated control unit 12050 is notified of the recognition result of the light source, so that the integrated control unit 12050 can use the notified information for realizing the IHC. As a specific example, the integrated control unit 12050 can control the light amount of the headlamp according to the recognized light amount of the light source. As another example, the integrated control unit 12050 can limit the amount of light of either the left or right headlamp according to the recognized position of the light source.
  • when an abnormality occurs in the solid-state imaging device 803, the control unit 805 can detect the abnormality based on the information output from the solid-state imaging device 803. Therefore, for example, by notifying the integrated control unit 12050 of the result of detecting an abnormality in the solid-state imaging device 803, the control unit 805 makes it possible for the integrated control unit 12050 to execute various controls for ensuring safety.
  • the integrated control unit 12050 may notify the user that an abnormality has occurred in the solid-state imaging device 803 via various output devices.
  • examples of the output device include the audio speaker 12061, the display unit 12062, the instrument panel 12063, and the like shown in FIG.
  • the integrated control unit 12050 may control the operation of the vehicle according to the recognition result.
  • the integrated control unit 12050 may limit a so-called automatic control function such as TJP or LKA described above. Further, the integrated control unit 12050 may execute control for ensuring safety, such as limiting the vehicle speed.
  • even in a situation where an abnormality occurs in the solid-state imaging device 803 and it is difficult for the various recognition processes to operate normally, the abnormality can be detected. Therefore, for example, in accordance with the detection result of the abnormality, various measures for ensuring safety can be realized, such as notifying the user of information regarding the abnormality or controlling the operation of the components related to the various recognition processes.
  • (1) An imaging apparatus including: a plurality of pixels; a control unit that controls exposure by each of the plurality of pixels; and a processing unit that executes a predetermined test in a third period from the completion of readout of a pixel signal based on the last exposure result in a first period, in which one or more exposures are performed by at least some of the plurality of pixels, until the first exposure in a second period, later than the first period, in which the one or more exposures are performed.
  • (2) The imaging apparatus according to (1), wherein the first period and the second period are unit frame periods corresponding to a predetermined frame rate.
  • (3) The imaging apparatus according to (2), wherein the third period is set according to a vertical blanking period in the unit frame period.
  • The imaging apparatus according to any one of (1) to (5), wherein the control unit controls the exposure start timing of the plurality of pixels, which are arranged in a two-dimensional matrix, for each row, and wherein, for each row, the processing unit executes the test in the third period from the completion of readout of the pixel signal based on the last exposure result in the first period by the pixels included in that row until the first exposure in the second period by those pixels.
  • The imaging apparatus according to any one of (1) to (6), wherein the processing unit executes, as the test, a test on the at least some of the pixels.
  • The imaging apparatus according to any one of (1) to (7), further including a drive circuit that supplies a drive signal to each of the plurality of pixels, wherein the processing unit executes, as the test, a test on the drive circuit.
  • The imaging apparatus may further include an AD converter that converts the analog pixel signal read from the pixels into a digital signal.
  • The imaging apparatus according to any one of (1) to (9), wherein the processing unit executes, as the test, a test on wiring connected to the at least some of the pixels.
  • The imaging apparatus according to any one of (1) to (10), further including an output control unit that performs control so that information according to a result of the test is output to a predetermined output destination.
  • the imaging apparatus according to any one of (1) to (11), further including a correction processing unit that corrects the pixel signals output from at least some of the pixels in accordance with a result of the test.
  • (13) A control device including: a control unit that controls exposure by each of a plurality of pixels; and a processing unit that performs a test on at least some of the plurality of pixels in a third period from the completion of readout of a pixel signal based on the last exposure result in a first period, in which one or more exposures are performed by the at least some of the pixels, until the first exposure in a second period, later than the first period, in which the one or more exposures are performed.
  • (14) The control device according to (13), further including an output control unit that performs control so that information according to the result of the test is presented to a predetermined output unit.
  • (15) The control device according to (13) or (14), further including a correction processing unit that corrects an image based on a readout result of the pixel signals from the plurality of pixels according to the result of the test.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Solid State Image Pick-Up Elements (AREA)

Abstract

[Problem] To make it possible to more efficiently perform various tests for detecting abnormality. [Solution] An imaging system provided with: an imaging device that is mounted on a vehicle and that generates images by capturing images of the area around the vehicle; and a processing device that is mounted on the vehicle and that performs processing relating to a function for controlling the vehicle. The imaging device comprises a plurality of pixels, a control unit for controlling exposure by each of the plurality of pixels, and a processing unit for performing a predetermined test. The control unit controls exposure so that after the reading of pixel signals is completed in a first period in which exposure is performed at least once by at least part of the pixels among the plurality of pixels, the reading of pixel signals is started in a second period in which exposure is performed at least once. The processing unit performs a predetermined test in a third period that is between the reading of pixel signals in the first period and the reading of pixel signals in the second period. The processing device restricts the function for controlling the vehicle on the basis of the results of the predetermined test.

Description

Imaging system and imaging apparatus
The present disclosure relates to an imaging system and an imaging apparatus.
As solid-state imaging devices, amplification-type solid-state imaging devices represented by MOS image sensors such as CMOS (Complementary Metal Oxide Semiconductor) sensors are known. Charge-transfer-type solid-state imaging devices represented by CCD (Charge Coupled Device) image sensors are also known. These solid-state imaging devices are widely used in digital still cameras, digital video cameras, and the like. In recent years, MOS image sensors have often been used as the solid-state imaging devices mounted on mobile devices such as camera-equipped mobile phones and PDAs (Personal Digital Assistants), from the viewpoint of their low power supply voltage and low power consumption.
In a MOS solid-state imaging device, a unit pixel is formed by a photoelectric conversion element (for example, a photodiode) and a plurality of pixel transistors, and the device includes a pixel array (pixel region) in which the unit pixels are arranged in a two-dimensional array, and a peripheral circuit region. The plurality of pixel transistors are formed of MOS transistors, and are constituted by three transistors, namely a transfer transistor, a reset transistor, and an amplification transistor, or by four transistors further including a selection transistor.
In recent years, the applications of solid-state imaging devices have also diversified. For example, with the development of image analysis technology and various recognition technologies, applications are not limited to simply capturing images; application to various recognition systems that recognize a predetermined object, such as a person or a thing, based on the captured image has also been studied.
US Patent Application Publication No. 2008/0158363
In a situation where a solid-state imaging device is applied to various recognition systems, a mechanism for detecting an abnormality in the solid-state imaging device, when one occurs, becomes important. For example, Patent Document 1 discloses an example of a mechanism for detecting a failure of a solid-state imaging device using a failure detection circuit.
On the other hand, in Patent Document 1, the various tests using the failure detection circuit are executed when the image detection chip is powered on or when a signal is received from external inspection equipment; it is therefore difficult to detect at run time a failure that occurs, for example, during imaging.
Therefore, the present disclosure proposes an imaging system and an imaging apparatus capable of executing various tests for detecting an abnormality more efficiently.
According to the present disclosure, there is provided an imaging system including: an imaging device that is mounted on a vehicle and images a peripheral region of the vehicle to generate an image; and a processing device that is mounted on the vehicle and executes processing related to a function of controlling the vehicle. The imaging device includes a plurality of pixels, a control unit that controls exposure by each of the plurality of pixels, and a processing unit that executes a predetermined test. The control unit controls exposure so that, after readout of pixel signals is completed in a first period in which one or more exposures are performed by at least some of the plurality of pixels, readout of pixel signals is started in a second period in which one or more exposures are performed. The processing unit executes the predetermined test in a third period between the readout of the pixel signals in the first period and the readout of the pixel signals in the second period. The processing device restricts the function of controlling the vehicle based on a result of the predetermined test.
According to the present disclosure, there is also provided an imaging apparatus including: a plurality of pixels; a control unit that controls exposure by each of the plurality of pixels; and a processing unit that executes a predetermined test. The control unit controls exposure so that, after readout of pixel signals is completed in a first period in which one or more exposures are performed by at least some of the plurality of pixels, readout of pixel signals is started in a second period in which one or more exposures are performed. The processing unit executes the predetermined test in a third period between the readout of the pixel signals in the first period and the readout of the pixel signals in the second period.
According to the present disclosure, there is also provided an imaging apparatus including: a plurality of pixels; a control unit that controls exposure by each of the plurality of pixels; and a processing unit that executes a predetermined test in a third period from the completion of readout of a pixel signal based on the last exposure result in a first period, in which one or more exposures are performed by at least some of the plurality of pixels, until the start of the first exposure in a second period, later than the first period, in which the one or more exposures are performed.
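To make the relationship between the first, second, and third periods concrete, the following Python sketch models a single, simplified frame in which exposure, readout, and blanking do not overlap, and schedules the predetermined test only in the interval between the completion of readout in one frame and the first exposure of the next. It is an illustrative sketch, not part of the disclosure; all durations and names are assumptions.

# Illustrative sketch only: models the first/second/third periods described above.
# Durations and function names are assumptions, not taken from the disclosure.

FRAME_PERIOD_MS = 33.3      # unit frame period for roughly 30 fps
READOUT_MS = 25.0           # time to read out all pixel signals in a frame
EXPOSURE_MS = 5.0           # exposure performed before the next readout starts

def third_period_ms(frame_period=FRAME_PERIOD_MS,
                    readout=READOUT_MS, exposure=EXPOSURE_MS):
    """Time available for the test: from the end of readout in the first
    period until the first exposure of the second period starts."""
    return frame_period - readout - exposure

def run_frame(frame_index, run_test):
    print(f"frame {frame_index}: pixel-signal readout for {READOUT_MS} ms (first period)")
    budget = third_period_ms()
    if budget > 0:
        run_test(budget)    # predetermined test executed in the third period
    print(f"frame {frame_index + 1}: first exposure starts, {EXPOSURE_MS} ms (second period)")

def self_test(budget_ms):
    print(f"  self-test executed within the {budget_ms:.1f} ms blanking interval")

run_frame(0, self_test)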
As described above, according to the present disclosure, an imaging system and an imaging apparatus capable of executing various tests for detecting an abnormality more efficiently are provided.
Note that the above effects are not necessarily limiting; together with or in place of the above effects, any of the effects shown in this specification, or other effects that can be grasped from this specification, may be achieved.
FIG. 1 is a diagram showing a schematic configuration of a CMOS solid-state imaging device as an example of the configuration of a solid-state imaging device according to an embodiment of the present disclosure.
FIG. 2 is a diagram showing an outline of a configuration example of a stacked solid-state imaging device to which the technology according to the present disclosure can be applied.
FIG. 3 is a block diagram showing an example of a partial functional configuration of the solid-state imaging device according to an embodiment of the present disclosure.
FIG. 4 is a block diagram showing another example of the functional configuration of the solid-state imaging device according to an embodiment of the present disclosure.
FIG. 5 is a diagram showing another example of the configuration of the solid-state imaging device according to an embodiment of the present disclosure.
FIG. 6 is a diagram showing an example of the circuit configuration of a unit pixel according to an embodiment of the present disclosure.
FIG. 7 is a schematic timing chart showing an example of drive control of the solid-state imaging device according to an embodiment of the present disclosure.
FIG. 8 is a schematic timing chart showing an example of drive control of the solid-state imaging device according to an embodiment of the present disclosure.
FIG. 9 is a block diagram showing an example of a schematic configuration of a solid-state imaging device according to a first embodiment of the present disclosure.
FIG. 10 is a block diagram showing an example of a schematic configuration of the solid-state imaging device according to the embodiment.
FIG. 11 is a schematic timing chart showing an example of drive control of the solid-state imaging device according to the embodiment.
FIG. 12 is an explanatory diagram for describing an example of drive control of the solid-state imaging device according to the embodiment.
FIG. 13 is an explanatory diagram for describing an example of drive control of the solid-state imaging device according to the embodiment.
FIG. 14 is an explanatory diagram for describing an example of drive control of the solid-state imaging device according to the embodiment.
FIG. 15 is an explanatory diagram for describing an example of an operation related to pixel signal correction in the solid-state imaging device according to the embodiment.
FIG. 16 is a diagram showing an example of the circuit configuration of a unit pixel in a solid-state imaging device according to a modification of the embodiment.
FIG. 17 is a schematic timing chart showing an example of drive control of the solid-state imaging device according to the modification of the embodiment.
FIG. 18 is an explanatory diagram for describing an example of drive control of the solid-state imaging device according to the modification of the embodiment.
FIG. 19 is an explanatory diagram for describing an example of drive control of a solid-state imaging device according to an application example of the embodiment.
FIG. 20 is a schematic timing chart showing an example of drive control of the solid-state imaging device according to the embodiment.
FIG. 21 is a block diagram showing an example of a schematic configuration of a solid-state imaging device according to a second embodiment of the present disclosure.
FIG. 22 is an explanatory diagram for describing an example of an operation related to pixel signal correction in the solid-state imaging device according to the embodiment.
FIG. 23 is an explanatory diagram for describing an example of an operation related to pixel signal correction in the solid-state imaging device according to the embodiment.
FIG. 24 is a schematic timing chart showing an example of drive control of the solid-state imaging device according to the embodiment.
FIG. 25 is an explanatory diagram for describing an example of schematic control related to readout of pixel signals from each pixel in the solid-state imaging device according to the embodiment.
FIG. 26 is an explanatory diagram for describing an example of schematic control related to readout of pixel signals from each pixel in the solid-state imaging device according to the embodiment.
FIG. 27 is a timing chart for explaining the relationship between exposure time constraints and the vertical blanking period in the solid-state imaging device according to the embodiment.
FIG. 28 is an explanatory diagram for describing the hardware configuration of a front camera ECU and an imaging element.
FIG. 29 is an explanatory diagram for describing the hardware configuration of a front camera ECU and an imaging element.
FIG. 30 is a block diagram showing an example of a schematic configuration of a vehicle control system.
FIG. 31 is an explanatory diagram showing an example of installation positions of a vehicle exterior information detection unit and imaging units.
FIG. 32 is a block diagram showing an example of a schematic configuration of an imaging device applied to a moving body.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
The description will be given in the following order.
1. Configuration example of solid-state imaging device
 1.1. Schematic configuration
 1.2. Functional configuration
 1.3. Circuit configuration of unit pixel
 1.4. Drive control
2. First embodiment
 2.1. Configuration
 2.2. Drive control
 2.3. Modification
 2.4. Evaluation
3. Second embodiment
 3.1. Configuration
 3.2. Drive control
 3.3. Relationship between exposure time constraints and the vertical blanking period
 3.4. Evaluation
4. Application examples
 4.1. Application example 1 to a moving body
 4.2. Application example 2 to a moving body
5. Conclusion
<<1. Configuration example of solid-state imaging device>>
A configuration example of the solid-state imaging device according to the present embodiment is described below.
<1.1. Schematic configuration>
FIG. 1 shows a schematic configuration of a CMOS solid-state imaging device as an example of the configuration of a solid-state imaging device according to an embodiment of the present disclosure. This CMOS solid-state imaging device is applied to the solid-state imaging device of each embodiment.
As shown in FIG. 1, the solid-state imaging device 1 of this example includes a pixel array unit 3, an address recorder 4, a pixel timing drive circuit 5, a column signal processing circuit 6, a sensor controller 7, and an analog potential generation circuit 8.
In the pixel array unit 3, a plurality of pixels 2 are arranged in an array. Each pixel 2 is connected to the pixel timing drive circuit 5 via a horizontal signal line and to the column signal processing circuit 6 via a vertical signal line VSL. The plurality of pixels 2 each output a pixel signal corresponding to the amount of light irradiated via an optical system (not shown), and an image of the subject formed on the pixel array unit 3 is constructed from these pixel signals.
The pixel 2 includes, for example, a photodiode serving as a photoelectric conversion unit and a plurality of pixel transistors (so-called MOS transistors). The plurality of pixel transistors can be constituted by three transistors, for example a transfer transistor, a reset transistor, and an amplification transistor, or by four transistors further including a selection transistor. An example of an equivalent circuit of the unit pixel will be described separately later. The pixel 2 can be configured as one unit pixel. The pixel 2 may also have a shared pixel structure. The shared pixel structure includes a plurality of photodiodes, a plurality of transfer transistors, one shared floating diffusion, and one each of the other shared pixel transistors. That is, in a shared pixel, the photodiodes and transfer transistors constituting a plurality of unit pixels share the other pixel transistors.
In addition, dummy pixels 2a that do not contribute to display may be arranged in a part of the pixel array unit 3 (for example, a non-display area). The dummy pixels 2a are used for acquiring various information on the solid-state imaging device 1. For example, a voltage corresponding to white luminance is applied to the dummy pixel 2a during a period in which the pixels 2 contributing to display are driven. At this time, for example, by converting the current flowing through the dummy pixel 2a into a voltage and measuring the voltage obtained by this conversion, it is also possible to predict the deterioration of the pixels 2 contributing to display. That is, the dummy pixel 2a can correspond to a sensor capable of detecting the electrical characteristics of the solid-state imaging device 1.
The address recorder 4 controls access in the vertical direction of the pixel array unit 3, and the pixel timing drive circuit 5 drives the pixels 2 according to the logical sum of the control signal from the address recorder 4 and the pixel drive pulse.
The column signal processing circuit 6 performs CDS (Correlated Double Sampling) processing on the pixel signals output from the plurality of pixels 2 via the vertical signal lines VSL, thereby performing AD conversion of the pixel signals and removing reset noise. For example, the column signal processing circuit 6 includes a plurality of AD converters corresponding to the number of columns of the pixels 2, and can perform CDS processing in parallel for each column of the pixels 2. The column signal processing circuit 6 also includes a constant current circuit that forms the load MOS portion of a source follower circuit, and a single-slope DA converter for analog-to-digital conversion of the potential of the vertical signal line VSL.
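As a numerical illustration of the CDS operation just described, the following minimal Python sketch samples a reset level and a signal level that share the same reset noise and subtracts them digitally, so that the common noise cancels. The voltage values and the helper name ad_convert are assumptions for illustration, not taken from the disclosure.

# Minimal numerical sketch of correlated double sampling (CDS); values are illustrative.
import random

def ad_convert(voltage, lsb=0.001):
    """Idealized AD conversion: quantize a voltage into integer codes of size lsb."""
    return round(voltage / lsb)

reset_noise = random.gauss(0.0, 0.002)     # noise frozen on the floating diffusion at reset
reset_level = 1.000 + reset_noise          # level sampled right after reset
signal_swing = 0.250                       # drop caused by the transferred photo-charge
signal_level = reset_level - signal_swing  # level sampled after charge transfer

# Both samples are converted and subtracted digitally; the shared reset noise cancels.
pixel_code = ad_convert(reset_level) - ad_convert(signal_level)
print(pixel_code)   # about 250 codes, independent of the particular reset_noise sample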
The sensor controller 7 controls the overall driving of the solid-state imaging device 1. For example, the sensor controller 7 generates a clock signal according to the drive cycle of each block constituting the solid-state imaging device 1 and supplies it to each block.
The analog potential generation circuit 8 generates an analog potential for driving the dummy pixels 2a in a desired manner in order to acquire various information on the solid-state imaging device 1. For example, the pixel timing drive circuit 5 drives the dummy pixel 2a based on the analog potential generated by the analog potential generation circuit 8, and various information on the solid-state imaging device 1 is acquired based on the output signal from the dummy pixel 2a.
Here, a basic schematic configuration of the solid-state imaging device 1 of the present technology will be described with reference to FIG. 2.
As a first example, the solid-state imaging device 330 shown in the upper part of FIG. 2 is configured by mounting a pixel region 332, a control circuit 333, and a logic circuit 334 including the above-described signal processing circuit in one semiconductor chip 331.
As a second example, the solid-state imaging device 340 shown in the middle part of FIG. 2 includes a first semiconductor chip unit 341 and a second semiconductor chip unit 342. A pixel region 343 and a control circuit 344 are mounted on the first semiconductor chip unit 341, and a logic circuit 345 including the above-described signal processing circuit is mounted on the second semiconductor chip unit 342. The first semiconductor chip unit 341 and the second semiconductor chip unit 342 are electrically connected to each other, whereby the solid-state imaging device 340 is configured as one semiconductor chip.
As a third example, the solid-state imaging device 350 shown in the lower part of FIG. 2 includes a first semiconductor chip unit 351 and a second semiconductor chip unit 352. A pixel region 353 is mounted on the first semiconductor chip unit 351, and a control circuit 354 and a logic circuit 355 including the above-described signal processing circuit are mounted on the second semiconductor chip unit 352. The first semiconductor chip unit 351 and the second semiconductor chip unit 352 are electrically connected to each other, whereby the solid-state imaging device 350 is configured as one semiconductor chip.
<1.2. Functional configuration>
Next, an example of the functional configuration of the solid-state imaging device according to an embodiment of the present disclosure will be described with reference to FIG. 3. FIG. 3 is a block diagram showing an example of a partial functional configuration of the solid-state imaging device according to an embodiment of the present disclosure. The solid-state imaging device 1 shown in FIG. 3 is an imaging element, such as a CMOS (Complementary Metal Oxide Semiconductor) image sensor or a CCD (Charge Coupled Device) image sensor, that captures a subject and obtains digital data of the captured image.
As shown in FIG. 3, the solid-state imaging device 1 includes a control unit 101, a pixel array unit 111, a selection unit 112, an A/D conversion unit (ADC (Analog Digital Converter)) 113, and a constant current circuit unit 114.
The control unit 101 controls each unit of the solid-state imaging device 1 and causes it to execute processing related to reading of image data (pixel signals) and the like.
The pixel array unit 111 is a pixel region in which pixel configurations having photoelectric conversion elements such as photodiodes are arranged in a matrix (array). Under the control of the control unit 101, the pixel array unit 111 receives the light of the subject at each pixel, photoelectrically converts the incident light to accumulate charges, and outputs the charges accumulated in each pixel as pixel signals at a predetermined timing.
The pixel 121 and the pixel 122 indicate two vertically adjacent pixels in the pixel group arranged in the pixel array unit 111. The pixel 121 and the pixel 122 are pixels in consecutive rows of the same column. In the example of FIG. 3, as shown in the pixel 121 and the pixel 122, a photoelectric conversion element and four transistors are used in the circuit of each pixel. Note that the circuit configuration of each pixel is arbitrary and may be other than the example shown in FIG. 3.
In a general pixel array, an output line for pixel signals is provided for each column. In the case of the pixel array unit 111, two (two systems of) output lines are provided for each column. The pixel circuits of one column are alternately connected to these two output lines every other row. For example, the pixel circuits of the odd-numbered rows from the top are connected to one output line, and the pixel circuits of the even-numbered rows are connected to the other output line. In the example of FIG. 3, the circuit of the pixel 121 is connected to the first output line (VSL1), and the circuit of the pixel 122 is connected to the second output line (VSL2).
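The alternating row-to-output-line assignment described above can be summarized by the following small Python sketch. The helper name and the 1-based row counting are assumptions used only for illustration.

# Sketch of the alternating connection of one column's pixels to the two output lines.
def output_line_for_row(row_index_from_top):
    """Rows are counted from 1 at the top; odd rows use VSL1, even rows use VSL2."""
    return "VSL1" if row_index_from_top % 2 == 1 else "VSL2"

for row in range(1, 5):
    print(row, output_line_for_row(row))
# 1 VSL1 / 2 VSL2 / 3 VSL1 / 4 VSL2 -- e.g. pixel 121 (odd row) on VSL1, pixel 122 on VSL2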
Note that, in FIG. 3, for convenience of explanation, only the output lines for one column are shown, but in practice two output lines are similarly provided for each column, and every other row of the pixel circuits in that column is connected to each output line.
The selection unit 112 has switches that connect each output line of the pixel array unit 111 to the input of the ADC 113, and controls the connection between the pixel array unit 111 and the ADC 113 under the control of the control unit 101. That is, the pixel signal read from the pixel array unit 111 is supplied to the ADC 113 via the selection unit 112.
The selection unit 112 includes a switch 131, a switch 132, and a switch 133. The switch 131 (selection SW) controls the connection of the two output lines corresponding to the same column. For example, when the switch 131 is turned on, the first output line (VSL1) and the second output line (VSL2) are connected, and when it is turned off, they are disconnected.
In the solid-state imaging device 1, one ADC is provided for each output line (column ADC). Therefore, assuming that both the switch 132 and the switch 133 are in the on state, when the switch 131 is turned on, the two output lines of the same column are connected, so that the circuit of one pixel is connected to two ADCs. Conversely, when the switch 131 is turned off, the two output lines of the same column are disconnected, and the circuit of one pixel is connected to one ADC. That is, the switch 131 selects the number of ADCs (column ADCs) to which the signal of one pixel is output.
With the switch 131 controlling the number of ADCs to which a pixel signal is output in this way, the solid-state imaging device 1 can output more diverse pixel signals according to the number of ADCs. That is, the solid-state imaging device 1 can realize more diverse data outputs.
The switch 132 controls the connection between the first output line (VSL1) corresponding to the pixel 121 and the ADC corresponding to that output line. When the switch 132 is turned on, the first output line is connected to one input of the comparator of the corresponding ADC; when it is turned off, they are disconnected.
The switch 133 controls the connection between the second output line (VSL2) corresponding to the pixel 122 and the ADC corresponding to that output line. When the switch 133 is turned on, the second output line is connected to one input of the comparator of the corresponding ADC; when it is turned off, they are disconnected.
By switching the states of the switches 131 to 133 in accordance with the control of the control unit 101, the selection unit 112 can control the number of ADCs (column ADCs) to which the signal of one pixel is output.
Note that the switch 132 and/or the switch 133 may be omitted, and each output line may be always connected to the ADC corresponding to that output line. However, allowing these switches to control the connection and disconnection widens the range of choices for the number of ADCs (column ADCs) to which the signal of one pixel is output. That is, by providing these switches, the solid-state imaging device 1 can output more diverse pixel signals.
Note that, although FIG. 3 shows only the configuration for the output lines of one column, in practice the selection unit 112 has a similar configuration (switches 131 to 133) for each column. That is, the selection unit 112 performs connection control similar to that described above for each column in accordance with the control of the control unit 101.
The ADC 113 A/D-converts each pixel signal supplied from the pixel array unit 111 via each output line and outputs it as digital data. The ADC 113 has an ADC (column ADC) for each output line from the pixel array unit 111. That is, the ADC 113 has a plurality of column ADCs. The column ADC corresponding to one output line is a single-slope ADC having a comparator, a D/A converter (DAC), and a counter.
The comparator compares the signal value (potential) of the pixel signal supplied via the vertical signal line VSL with the potential of the ramp wave supplied from the DAC, and outputs an inversion pulse that inverts at the timing when these potentials cross. In order to convert the analog value into a digital value, the counter counts an AD period corresponding to the timing at which the potential of the pixel signal and the potential of the ramp wave cross. The counter increments the count value (digital value) until the signal value of the pixel signal becomes equal to the potential of the ramp wave supplied from the DAC. When the DAC output reaches the signal value, the comparator stops the counter. Thereafter, the signals digitized by the counters 1 and 2 are output to the outside of the solid-state imaging device 1 from DATA1 and DATA2.
After outputting the data, the counter returns the count value to an initial value (for example, 0) for the next A/D conversion.
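The following behavioral Python sketch illustrates the single-slope conversion described above: the counter increments while the DAC ramp is below the pixel potential and stops when the comparator detects the crossing. The step size, full-scale count, and function name are illustrative assumptions.

# Behavioral sketch of the single-slope column ADC described above; all values are illustrative.
def single_slope_adc(pixel_potential, ramp_start=0.0, lsb=0.001, max_count=4095):
    """Increment the counter while the DAC ramp is below the pixel potential;
    the comparator inverts and stops the counter when the ramp crosses it."""
    count = 0
    ramp = ramp_start
    while ramp < pixel_potential and count < max_count:
        count += 1          # the counter counts the AD period
        ramp += lsb         # the DAC advances the ramp by one step per count
    return count            # digital value, e.g. output on DATA1 or DATA2

code = single_slope_adc(0.750)
print(code)                 # 750 counts for a 0.750 V pixel potential with a 1 mV step
# Before the next conversion, the counter would be returned to its initial value (0),
# as described above.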
The ADC 113 has two systems of column ADCs for each column. For example, a comparator 141 (COMP1), a DAC 142 (DAC1), and a counter 143 (counter 1) are provided for the first output line (VSL1), and a comparator 151 (COMP2), a DAC 152 (DAC2), and a counter 153 (counter 2) are provided for the second output line (VSL2). Although not shown, the ADC 113 has a similar configuration for the output lines of the other columns.
However, among these components, the DAC can be shared. The DAC is shared for each system; that is, the DACs of the same system in each column are shared. In the example of FIG. 3, the DAC corresponding to the first output line (VSL1) of each column is shared as the DAC 142, and the DAC corresponding to the second output line (VSL2) of each column is shared as the DAC 152. Note that a comparator and a counter are provided for each output line system.
The constant current circuit unit 114 is a constant current circuit connected to each output line, and is driven under the control of the control unit 101. The circuit of the constant current circuit unit 114 is configured by, for example, MOS (Metal Oxide Semiconductor) transistors. Although this circuit configuration is arbitrary, in FIG. 3, for convenience of explanation, a MOS transistor 161 (LOAD1) is provided for the first output line (VSL1), and a MOS transistor 162 (LOAD2) is provided for the second output line (VSL2).
The control unit 101 receives a request from the outside, for example from a user, selects a readout mode, and controls the selection unit 112 to control the connections to the output lines. The control unit 101 also controls the driving of the column ADCs according to the selected readout mode. Furthermore, in addition to the column ADCs, the control unit 101 controls the driving of the constant current circuit unit 114 as necessary, and controls the driving of the pixel array unit 111, for example the readout rate and timing.
That is, the control unit 101 can operate not only the selection unit 112 but also the units other than the selection unit 112 in more diverse modes. Therefore, the solid-state imaging device 1 can output more diverse pixel signals.
Here, the pixels 121 and 122 shown in FIG. 3 correspond to the pixel 2 in FIG. 1. The selection unit 112, the ADC 113, and the constant current circuit unit 114 correspond to the column signal processing circuit 6 described with reference to FIG. 1. The control unit 101 shown in FIG. 3 corresponds to the sensor controller 7 described with reference to FIG. 1.
Note that the number of each component shown in FIG. 3 is arbitrary as long as it is not insufficient. For example, three or more systems of output lines may be provided for each column. In addition, the number of pixel signals output in parallel to the outside may be increased by increasing the number of parallel pixel signals output from the switches 132 shown in FIG. 3, or the number of the switches 132 themselves.
For example, FIG. 4 is a block diagram showing another example of the functional configuration of the solid-state imaging device according to an embodiment of the present disclosure. In FIG. 4, reference numerals 6a and 6b respectively indicate configurations corresponding to the column signal processing circuit 6 described with reference to FIG. 1. That is, in the example shown in FIG. 4, a plurality of systems of the configuration corresponding to the column signal processing circuit 6 (for example, the comparators 141 and 151, the counters 143 and 153, and the constant current circuit unit 114) are provided. In addition, as shown in FIG. 4, the DACs 142 and 152 may be shared between the column signal processing circuits 6a and 6b.
FIG. 5 is a diagram showing another example of the configuration of the solid-state imaging device according to an embodiment of the present disclosure. The example shown in FIG. 5 illustrates a case where, in a stacked solid-state imaging device, a pixel array unit 111 in which a plurality of pixels 2 are arranged is provided on the upper semiconductor chip, and the ADC 113 is provided on the lower chip. In the example shown in FIG. 5, the pixel array unit 111 is divided into a plurality of areas 1111 each including a plurality of pixels 2, and an ADC 1131 is provided for each area 1111. As a more specific example, in FIG. 5, the pixel array unit 111 is divided into a plurality of areas 1111 with 10 pixels x 16 pixels as the unit of one area 1111.
Each pixel 2 included in an area 1111 and the ADC 1131 provided corresponding to that area 1111 are electrically connected by stacking the semiconductor chips. As a specific example, the wiring connected to each pixel 2 included in the area 1111 and the wiring connected to the ADC 1131 provided corresponding to that area may be directly bonded based on so-called Cu-Cu bonding, or may be connected by so-called TSVs (Through-Silicon Vias).
As described above, by providing an ADC 1131 for each area 1111, the number of parallel processes for A/D-converting the pixel signals from the pixels 2 and outputting them as digital data can be increased compared with, for example, the case where an ADC 113 is provided for each column. Therefore, for example, the time required for reading out the pixel signal from each pixel 2 can be further shortened. In addition, the ADC 1131 of each area 1111 can be driven individually and independently. This allows readout of the pixel signals from the pixels 2 to be controlled more flexibly, for example by individually reading out the pixel signals from the pixels 2 included in only some of the areas 1111 at a desired timing.
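To illustrate the area-by-area assignment described above (10 x 16 pixels per area 1111, one ADC 1131 per area), the following short Python sketch maps a pixel coordinate to the index of the area that serves it. The array size in the comment, the row-major numbering, and the function name are assumptions made only for this illustration.

# Sketch of mapping a pixel coordinate to the area 1111 (and hence the ADC 1131) serving it.
AREA_WIDTH = 10    # pixels per area, horizontal (per the 10 x 16 example above)
AREA_HEIGHT = 16   # pixels per area, vertical

def area_index(x, y, areas_per_row):
    """Return the index of the area containing pixel (x, y), numbering areas row-major."""
    return (y // AREA_HEIGHT) * areas_per_row + (x // AREA_WIDTH)

# e.g. a hypothetical 1920-pixel-wide array would have 192 areas per row of areas
print(area_index(x=25, y=40, areas_per_row=192))   # pixel (25, 40) -> area 386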
Further, among the configurations described with reference to FIG. 3, some configurations may be provided outside the solid-state imaging device 1. As a specific example, a configuration that bears at least part of the functions of the control unit 101 shown in FIG. 3 may control the operation of each configuration in the solid-state imaging device 1 by transmitting control signals to those configurations from outside the solid-state imaging device 1. In this case, the configuration corresponding to the control unit 101 corresponds to an example of a "control device".
The example of the functional configuration of the solid-state imaging device according to an embodiment of the present disclosure has been described above with reference to FIGS. 3 to 5.
<1.3. Circuit configuration of unit pixel>
Next, an example of the circuit configuration of a unit pixel will be described with reference to FIG. 6. FIG. 6 is a diagram showing an example of the circuit configuration of a unit pixel according to an embodiment of the present disclosure. As shown in FIG. 6, the unit pixel 2 according to an embodiment of the present disclosure includes a photoelectric conversion element (for example, a photodiode) PD and four pixel transistors. The four pixel transistors are, for example, a transfer transistor Tr11, a reset transistor Tr12, an amplification transistor Tr13, and a selection transistor Tr14. These pixel transistors can be configured by, for example, n-channel MOS transistors.
The transfer transistor Tr11 is connected between the cathode of the photoelectric conversion element PD and the floating diffusion portion FD. The signal charges (here, electrons) photoelectrically converted by the photoelectric conversion element PD and accumulated therein are transferred to the floating diffusion portion FD when a transfer pulse TRG is applied to the gate.
The reset transistor Tr12 has its drain connected to the power supply VDD and its source connected to the floating diffusion portion FD. Prior to the transfer of the signal charges from the photoelectric conversion element PD to the floating diffusion portion FD, the potential of the floating diffusion portion FD is reset by applying a reset pulse RST to the gate.
The amplification transistor Tr13 has its gate connected to the floating diffusion portion FD, its drain connected to the power supply VDD, and its source connected to the drain of the selection transistor Tr14. The amplification transistor Tr13 outputs the potential of the floating diffusion portion FD after it has been reset by the reset transistor Tr12 to the selection transistor Tr14 as a reset level. The amplification transistor Tr13 further outputs the potential of the floating diffusion portion FD after the signal charges have been transferred by the transfer transistor Tr11 to the selection transistor Tr14 as a signal level.
 選択トランジスタTr14は、例えば、増幅トランジスタTr13のソースにドレインが、垂直信号線VSLにソースがそれぞれ接続される。そして選択トランジスタTr14のゲートに選択パルスSELが与えられることによってオン状態となり、増幅トランジスタTr13から出力される信号を垂直信号線VSLに出力する。なお、この選択トランジスタTr14については、電源VDDと増幅トランジスタTr13のドレインとの間に接続した構成を採ることも可能である。 For example, the selection transistor Tr14 has a drain connected to the source of the amplification transistor Tr13 and a source connected to the vertical signal line VSL. When the selection pulse SEL is applied to the gate of the selection transistor Tr14, the selection transistor Tr14 is turned on, and the signal output from the amplification transistor Tr13 is output to the vertical signal line VSL. The selection transistor Tr14 may be configured to be connected between the power supply VDD and the drain of the amplification transistor Tr13.
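As a reference for the circuit description above, the following is a minimal behavioral sketch of the four-transistor unit pixel 2 in Python. The class name, the charge units, and the idealized switch behavior are assumptions introduced only for illustration and are not part of the disclosure.

```python
class FourTransistorPixel:
    """Idealized behavioral model of the unit pixel 2 (PD, TRG, RST, AMP, SEL)."""

    def __init__(self, full_well=10000):
        self.pd_charge = 0          # electrons accumulated in the photodiode PD
        self.fd_charge = 0          # electrons held in the floating diffusion FD
        self.full_well = full_well  # saturation charge of PD (assumed value)
        self.selected = False       # state of the selection transistor Tr14

    def expose(self, photons):
        """Accumulate photo-generated charge in PD, clipped at the full-well capacity."""
        self.pd_charge = min(self.pd_charge + photons, self.full_well)

    def reset(self):
        """RST pulse: reset the FD potential (modeled here as clearing its charge)."""
        self.fd_charge = 0

    def transfer(self):
        """TRG pulse: move the PD charge onto the FD."""
        self.fd_charge += self.pd_charge
        self.pd_charge = 0

    def select(self, on=True):
        """SEL pulse: connect the source follower output to the vertical signal line VSL."""
        self.selected = on

    def read_vsl(self, gain=1.0):
        """Voltage-like value on VSL, proportional to the FD charge (source follower)."""
        return gain * self.fd_charge if self.selected else None
```

A readout in this model would call reset(), then transfer(), then select() and read_vsl(), mirroring the roles of the four transistors described above.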
When the solid-state imaging device 1 according to the present embodiment is configured as a stacked solid-state imaging device, elements such as the photodiode and the plurality of MOS transistors are, for example, formed in the first semiconductor chip portion 341 in the middle or lower stage of FIG. 2. The transfer pulse, the reset pulse, the selection pulse, and the power supply voltage are supplied from the second semiconductor chip portion 342 in the middle or lower stage of FIG. 2. The elements downstream of the vertical signal line VSL connected to the drain of the selection transistor are configured in the logic circuit 345 and are formed in the second semiconductor chip portion 342.
An example of the circuit configuration of the unit pixel has been described above with reference to FIG. 6.
<1.4. Drive control>
Next, as an example of the drive control of the solid-state imaging device 1 according to an embodiment of the present disclosure, the driving of the pixels and the driving of the ADC that converts the pixel signals supplied from the pixels into digital signals will each be described.
(Pixel drive)
First, the driving of the pixel 2 will be described with reference to FIG. 7. FIG. 7 is a schematic timing chart illustrating an example of the drive control of the solid-state imaging device 1 according to an embodiment of the present disclosure, and illustrates an example of the drive control of the pixel 2.
FIG. 7 shows a horizontal synchronization signal (XHS) indicating one horizontal synchronization period, the TRG drive pulses for driving the transfer transistor Tr11 (a readout transfer pulse and an electronic shutter transfer pulse), the RST drive pulses for driving the reset transistor Tr12 (an electronic shutter reset pulse and a readout reset pulse), and the SEL drive pulse for driving the selection transistor Tr14 (a readout selection pulse).
During the electronic shutter operation, the potential of the photoelectric conversion element PD is reset by turning on the electronic shutter transfer pulse and the electronic shutter reset pulse. Thereafter, charges are accumulated in the photoelectric conversion element PD during the accumulation time, and a readout pulse is issued from the sensor controller 7.
At readout, the potential of the floating diffusion portion FD is reset by turning on the readout reset pulse, and the potential of the pre-data phase (P phase) is then AD-converted. Thereafter, the charge of the photoelectric conversion element PD is transferred to the floating diffusion portion FD by the readout transfer pulse, and the data phase (D phase) is AD-converted. Note that the readout selection pulse is in the on state during readout.
Note that the above is merely an example, and at least some of the drive timings may be changed in accordance with the electronic shutter and readout operations. As a specific example, as indicated by the broken lines in FIG. 7, the potential of the photoelectric conversion element PD may be reset by turning on the electronic shutter transfer pulse and the electronic shutter reset pulse after the charge of the photoelectric conversion element PD has been transferred to the floating diffusion portion FD by the readout transfer pulse.
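The pulse ordering described above can be summarized as a simple event list. The following sketch is only an illustration of the sequence in FIG. 7; the tick values and signal names are arbitrary assumptions, and only the ordering follows the description.

```python
# One shutter/readout cycle for a four-transistor pixel, as (tick, signal, level) events.
# Tick values are arbitrary; only the ordering follows the timing chart described above.
DRIVE_SEQUENCE = [
    (0,  "RST_shutter", 1),   # electronic shutter reset pulse on
    (0,  "TRG_shutter", 1),   # electronic shutter transfer pulse on -> PD is reset
    (1,  "TRG_shutter", 0),
    (1,  "RST_shutter", 0),   # accumulation time starts here
    (90, "SEL_read",    1),   # readout selection pulse on for the read row
    (91, "RST_read",    1),   # readout reset pulse: FD is reset
    (92, "RST_read",    0),   # P phase (reset level) is AD-converted after this
    (95, "TRG_read",    1),   # readout transfer pulse: PD charge -> FD
    (96, "TRG_read",    0),   # D phase (data level) is AD-converted after this
    (99, "SEL_read",    0),
]

# Print a human-readable trace of the drive events.
for tick, signal, level in DRIVE_SEQUENCE:
    print(f"t={tick:3d}: {signal} -> {'on' if level else 'off'}")
```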
The driving of the pixel 2 has been described above with reference to FIG. 7.
Next, general driving of the ADC 113 illustrated in FIG. 3 will be described with reference to FIG. 8. FIG. 8 is a schematic timing chart illustrating an example of the drive control of the solid-state imaging device 1 according to an embodiment of the present disclosure, and illustrates an example of the drive control of the ADC 113. In this description, the driving of the ADC 113 is described focusing on the operations of the DAC 142, the comparator 141, and the counter 143 in the ADC 113 illustrated in FIG. 3.
FIG. 8 shows a horizontal synchronization signal (XHS) indicating one horizontal synchronization period, the potential of the ramp signal output from the DAC 142 (solid line), the potential of the pixel signal output on the vertical signal line VSL (broken line), the inversion pulse output from the comparator 141, and an image of the operation of the counter 143.
In general, the DAC 142 generates a ramp wave having a first slope in which the potential falls at a constant gradient in the P phase for reading the reset level of the pixel signal, and a second slope in which the potential falls at a constant gradient in the D phase for reading the data level of the pixel signal. The comparator 141 compares the potential of the pixel signal with the potential of the ramp wave and outputs an inversion pulse that inverts at the timing at which the two potentials cross. The counter 143 counts from the timing at which the ramp wave starts to fall in the P phase to the timing at which the potential of the ramp wave becomes equal to or lower than the potential of the pixel signal (P-phase count value), and then counts from the timing at which the ramp wave starts to fall in the D phase to the timing at which the potential of the ramp wave becomes equal to or lower than the potential of the pixel signal (D-phase count value). The difference between the P-phase count value and the D-phase count value is thereby acquired as a pixel signal from which reset noise has been removed. In this way, AD conversion of the pixel signal is performed using the ramp wave.
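A compact numerical sketch of the ramp-based conversion described above is shown below. The ramp start value, step size, and count limit are illustrative assumptions and are not parameters taken from the disclosure; the sketch only demonstrates how the difference between the D-phase and P-phase counts cancels a common reset offset.

```python
def single_slope_count(signal_level, ramp_start=1.0, ramp_step=0.001, max_count=1023):
    """Count clock cycles until the falling ramp reaches the signal level (comparator inversion)."""
    for count in range(max_count + 1):
        ramp = ramp_start - count * ramp_step
        if ramp <= signal_level:
            return count
    return max_count

def convert_pixel(reset_level, data_level):
    """Correlated double sampling: D-phase count minus P-phase count."""
    p_count = single_slope_count(reset_level)   # P phase: reset level of the pixel signal
    d_count = single_slope_count(data_level)    # D phase: data level of the pixel signal
    return d_count - p_count                    # the common reset offset cancels

# Example: the same reset offset appears in both phases and is removed.
offset = 0.02
print(convert_pixel(reset_level=0.9 + offset, data_level=0.6 + offset))  # -> 300
print(convert_pixel(reset_level=0.9, data_level=0.6))                    # -> 300
```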
The general driving of the ADC 113 illustrated in FIG. 3 has been described above with reference to FIG. 8.
<<2. First Embodiment>>
Next, a first embodiment of the present disclosure will be described. The present embodiment describes an example of a mechanism that enables failure detection of the photoelectric conversion element PD by recognizing the state (for example, the saturation characteristic) of the photoelectric conversion element PD included in each pixel 2 of the solid-state imaging device 1. In the following description, the solid-state imaging device 1 according to the present embodiment may be referred to as the "solid-state imaging device 1a" to distinguish it from the solid-state imaging devices 1 according to other embodiments.
<2.1. Configuration>
First, an example of the schematic configuration of the solid-state imaging device 1a according to the present embodiment will be described with reference to FIGS. 9 and 10. FIGS. 9 and 10 are block diagrams illustrating an example of the schematic configuration of the solid-state imaging device 1a according to the present embodiment. In this description, the configuration of the solid-state imaging device 1a is described focusing on the portions that differ from the solid-state imaging device 1 described with reference to FIGS. 1 to 8, and detailed description of the portions that are substantially the same as those of the solid-state imaging device 1 is omitted.
FIG. 9 shows an example of the power supply configuration of the solid-state imaging device 1a according to the present embodiment. Note that the example shown in FIG. 9 mainly illustrates the portion in which the pixel timing drive circuit 5 supplies drive signals to the pixels 2, and illustration of the other configurations is omitted.
As shown in FIG. 9, in the solid-state imaging device 1a according to the present embodiment, the power supply that supplies the power supply voltage to the pixels 2 and the power supply that supplies the power supply voltage to the pixel timing drive circuit 5 so that the pixel timing drive circuit 5 can supply drive signals to the pixels 2 are provided separately. In the following, the power supply that supplies the power supply voltage to the pixels 2 is also referred to as the "power supply VDDHPX", and the power supply that supplies the power supply voltage to the pixel timing drive circuit 5 (that is, the power supply voltage for supplying drive signals to the pixels 2) is also referred to as the "power supply VDDHVS".
When the solid-state imaging device 1a is configured as a stacked solid-state imaging device, the power supplies VDDHPX and VDDHVS may be provided on different semiconductor chips. As a specific example, the power supply VDDHPX may be provided on the semiconductor chip on which the pixels 2 are arranged (for example, the first semiconductor chip portion 341 shown in FIG. 2), and the power supply VDDHVS may be provided on the semiconductor chip on which the pixel timing drive circuit 5 is provided (for example, the second semiconductor chip portion 342 shown in FIG. 2). In this configuration, the semiconductor chip on which the pixels 2 are arranged and the semiconductor chip on which the pixel timing drive circuit 5 is provided are connected via a connection portion (for example, a TSV (Through-Silicon Via)).
FIG. 10 shows an example of the configuration of the portion of the solid-state imaging device 1a according to the present embodiment related to reading pixel signals from the pixels 2. That is, the example shown in FIG. 10 mainly illustrates the portions corresponding to the constant current circuit unit 114 and the ADC 113, and illustration of the other configurations is omitted. In FIG. 10, the MOS transistor 161, the comparator 141, the DAC 142, and the counter 143 are substantially the same as the MOS transistor 161, the comparator 141, the DAC 142, and the counter 143 shown in FIG. 3, and detailed description thereof is therefore omitted. In FIG. 10, the comparator 141, the DAC 142, and the counter 143 correspond to the ADC 113 shown in FIG. 3, and the MOS transistor 161 corresponds to the constant current circuit unit 114 shown in FIG. 3.
As shown in FIG. 10, the solid-state imaging device 1a according to the present embodiment includes a sensor data unit 211. The sensor data unit 211 recognizes the state of each pixel 2 on the basis of the signal output from the counter 143, that is, the digital signal obtained by converting the pixel signal supplied from the pixel 2, and executes various kinds of processing using the recognition result.
As a specific example, the sensor data unit 211 may perform various kinds of processing related to so-called failure detection by using the recognition result of the state of the pixel 2. In particular, in the solid-state imaging device 1a according to the present embodiment, when a failure occurs in the photoelectric conversion element PD included in a pixel 2, the sensor data unit 211 can recognize the failure of the photoelectric conversion element PD individually for each pixel 2. The details of the mechanism for detecting, for each pixel 2, a failure of the photoelectric conversion element PD included in that pixel 2 will be described separately later together with an example of the drive control for recognizing the state of the pixel 2. The portion of the sensor data unit 211 related to recognizing the pixel 2 corresponds to an example of a "recognition unit".
When the sensor data unit 211 detects, as a result of the failure detection described above, that an abnormality has occurred in some of the pixels 2, it may notify the outside of the solid-state imaging device 1a of the detection result. As a specific example, the sensor data unit 211 may output a predetermined signal indicating that an abnormality has been detected to the outside of the solid-state imaging device 1a via a predetermined output terminal (that is, an Error pin). As another example, a predetermined DSP (Digital Signal Processor) 401 provided outside the solid-state imaging device 1a may be notified that an abnormality has been detected. With such a configuration, the DSP 401 can, for example, notify the user via a predetermined output unit that an abnormality has occurred in the solid-state imaging device 1a. When an abnormality is detected in the solid-state imaging device 1a, the DSP 401 may also perform control so as to limit all or part of the vehicle safety functions (ADAS functions). As another example, the DSP 401 can correct the output of the pixel 2 in which the abnormality has been detected by using the output of another pixel 2 different from that pixel 2 (for example, an adjacent pixel). The portion of the sensor data unit 211 that performs control such that the detection result of the abnormality of the pixel 2 is output to a predetermined output destination (for example, the DSP 401) corresponds to an example of an "output control unit".
The sensor data unit 211 itself may also correct the output of the pixel 2 in which an abnormality has been detected by using the result of the failure detection. The correction method is the same as in the case where the DSP 401 performs the correction. The portion of the sensor data unit 211 that corrects the output of the pixel 2 in which an abnormality has been detected corresponds to an example of a "correction processing unit".
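As a rough illustration of how the recognition and output-control roles described above could be organized, the following Python sketch classifies per-pixel self-test codes and reports abnormal pixels. The function names, the tolerance, and the notification callbacks (assert_error_pin, notify_dsp) are assumptions made for illustration and do not reproduce an actual interface of the device.

```python
from typing import Callable, Dict, Tuple

Coordinate = Tuple[int, int]

def handle_failure_results(
    test_codes: Dict[Coordinate, int],
    expected_code: int,
    tolerance: int,
    assert_error_pin: Callable[[], None],
    notify_dsp: Callable[[Coordinate], None],
) -> Dict[Coordinate, bool]:
    """Classify each pixel's self-test code and report abnormal pixels.

    test_codes maps a pixel coordinate to the digital code read back after the
    self-test; expected_code is the code a healthy photodiode should produce
    (set by its saturation characteristic).
    """
    status = {}
    any_failure = False
    for coord, code in test_codes.items():
        ok = abs(code - expected_code) <= tolerance
        status[coord] = ok
        if not ok:
            any_failure = True
            notify_dsp(coord)      # e.g. let the DSP warn the user or limit ADAS functions
    if any_failure:
        assert_error_pin()         # drive the dedicated Error output terminal
    return status
```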
An example of the schematic configuration of the solid-state imaging device 1a according to the present embodiment has been described above with reference to FIGS. 9 and 10.
<2.2. Drive control>
Next, as an example of the drive control of the solid-state imaging device 1a according to the present embodiment, an example of control for recognizing the state of the photoelectric conversion element PD included in each pixel 2, and thereby detecting an abnormality of the photoelectric conversion element PD, will be described. In this description, an example of the drive control of the solid-state imaging device 1a is described for the case where the pixel 2 has the so-called four-transistor configuration shown in FIG. 6. For example, FIG. 11 is a schematic timing chart illustrating an example of the drive control of the solid-state imaging device 1a according to the present embodiment, and illustrates an example of control for recognizing the state of the photoelectric conversion element PD included in the pixel 2.
In FIG. 11, VDDHPX indicates the power supply voltage applied to the pixel 2 from the power supply VDDHPX. INCK indicates a synchronization signal, and one pulse of the synchronization signal is the minimum unit of the periods of the various kinds of processing executed in the solid-state imaging device 1a. XVS and XHS indicate a vertical synchronization signal and a horizontal synchronization signal, respectively; that is, one XVS corresponds to one frame period. TRG, RST, and SEL indicate the drive signals supplied to the transfer transistor Tr11, the reset transistor Tr12, and the selection transistor Tr14 (that is, the TRG drive pulse, the RST drive pulse, and the SEL drive pulse), respectively.
In the solid-state imaging device 1a according to the present embodiment, the control related to recognizing the state of the photoelectric conversion element PD mainly includes first control for accumulating charge in the photoelectric conversion element PD of the target pixel 2 and second control for reading out the charge accumulated in the photoelectric conversion element PD. For example, in the example shown in FIG. 11, one frame period is assigned to each of the first control and the second control. In this description, as shown in FIG. 11, the frame period to which the first control is assigned is also referred to as an "accumulation frame", and the frame period to which the second control is assigned is also referred to as a "readout frame".
First, the accumulation frame will be described. As shown in FIG. 11, in the accumulation frame, the power supply voltage applied to the pixel 2 from the power supply VDDHPX is first controlled to 0 V, and the power supply voltage is then controlled to a predetermined voltage VDD, so that the voltage VDD is applied to the pixel 2.
Here, the operation of the pixel 2 in the period indicated by the reference symbol T11 in FIG. 11 will be described with reference to FIG. 12. FIG. 12 is an explanatory diagram for describing an example of the drive control of the solid-state imaging device 1a according to the present embodiment, and schematically shows the state of the pixel 2 in the period T11 in FIG. 11.
As shown in FIG. 11, in the period T11, the TRG drive pulse and the RST drive pulse are controlled to the on state, the SEL drive pulse is controlled to the off state, and the voltage applied to the pixel 2 from the power supply VDDHPX is controlled to 0 V. As a result, as shown in FIG. 12, the potential of the floating diffusion portion FD is controlled to 0 V, a potential difference is generated between the anode and the cathode of the photoelectric conversion element PD, and charge is injected into the photoelectric conversion element PD. Note that the amount of charge held in the photoelectric conversion element PD as a result of the control shown in FIG. 12 is determined by the saturation characteristic of the photoelectric conversion element PD, regardless of the light-receiving state of the photoelectric conversion element PD. That is, if some abnormality has occurred in the photoelectric conversion element PD, the amount of charge held in the photoelectric conversion element PD changes (for example, decreases) compared to the normal state. As shown in FIG. 12, the control for injecting charge into the photoelectric conversion element PD may be executed for all the pixels 2 at a predetermined timing (a so-called global reset), or may be executed for each pixel 2 individually in a time-division manner.
Next, the operation of the pixel 2 in the period indicated by the reference symbol T13 in FIG. 11 will be described with reference to FIG. 13. FIG. 13 is an explanatory diagram for describing an example of the drive control of the solid-state imaging device 1a according to the present embodiment, and schematically shows the state of the pixel 2 in the period T13 in FIG. 11.
As shown in FIG. 11, in the period T13, the RST drive pulse is kept in the on state, and the TRG drive pulse is controlled to the off state. The SEL drive pulse is kept in the off state. The voltage applied to the pixel 2 from the power supply VDDHPX is controlled to VDD. By such control, as shown in FIG. 13, the floating diffusion portion FD and the photoelectric conversion element PD are brought into a non-conductive state, and the potential of the floating diffusion portion FD is controlled to VDD.
Next, the readout frame will be described. In the readout frame, the target pixel 2 is driven at a predetermined timing, and a pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD of the pixel 2 is read out. As a specific example, in the example shown in FIG. 11, the pixel 2 is driven in the period indicated by the reference symbol T15, and a pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD of the pixel 2 is read out. Here, the operation of the pixel 2 in the period T15 in FIG. 11 will be described with reference to FIG. 14. FIG. 14 is an explanatory diagram for describing an example of the drive control of the solid-state imaging device 1a according to the present embodiment, and schematically shows the state of the pixel 2 in the period T15 in FIG. 11.
As shown in FIG. 11, at the start of the readout frame, the TRG drive pulse, the RST drive pulse, and the SEL drive pulse are each controlled to the off state. In the readout frame, the state in which the voltage VDD is applied to the pixel 2 is maintained. Next, in the period T15, the TRG drive pulse, the RST drive pulse, and the SEL drive pulse are each controlled to the on state. By such control, in the period T15, as shown in FIG. 14, the transfer transistor Tr11 and the reset transistor Tr12 become conductive, and the charge accumulated in the photoelectric conversion element PD is transferred to the floating diffusion portion FD and accumulated therein. The selection transistor Tr14 is also controlled to be conductive. A voltage corresponding to the charge accumulated in the floating diffusion portion FD (in other words, the charge that has leaked from the photoelectric conversion element PD) is therefore applied to the gate of the amplification transistor Tr13, and the amplification transistor Tr13 is controlled to be conductive. As a result, a pixel signal corresponding to the voltage applied to the gate of the amplification transistor Tr13 is output from the pixel 2 via the vertical signal line VSL. That is, a charge corresponding to the saturation characteristic of the photoelectric conversion element PD is read out from the photoelectric conversion element PD, and a pixel signal corresponding to the result of reading out that charge is output from the pixel 2 via the vertical signal line VSL.
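To summarize the accumulation-frame and readout-frame control described above, the following is a hedged sketch of the self-test sequence as an ordered list of control steps. The step granularity, the literal signal names, and the helper function are assumptions made only for illustration.

```python
# Self-test drive sequence for one pixel, following the described accumulation
# frame (periods T11 and T13) and readout frame (period T15). Each entry is
# (period, settings), where settings lists only the signals that matter in that step.
PD_SELF_TEST_SEQUENCE = [
    # Accumulation frame, period T11: inject charge into the photodiode.
    ("T11", {"VDDHPX": "0V", "TRG": "on", "RST": "on", "SEL": "off"}),
    # Accumulation frame, period T13: isolate the photodiode and restore the pixel supply.
    ("T13", {"VDDHPX": "VDD", "TRG": "off", "RST": "on", "SEL": "off"}),
    # Readout frame, start: all drive pulses off, VDD kept applied.
    ("readout_start", {"VDDHPX": "VDD", "TRG": "off", "RST": "off", "SEL": "off"}),
    # Readout frame, period T15: read the charge held by the photodiode onto VSL.
    ("T15", {"VDDHPX": "VDD", "TRG": "on", "RST": "on", "SEL": "on"}),
]

def describe(sequence):
    """Print each step of the sequence in a readable form."""
    for period, signals in sequence:
        settings = ", ".join(f"{name}={value}" for name, value in signals.items())
        print(f"{period}: {settings}")

describe(PD_SELF_TEST_SEQUENCE)
```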
The pixel signal output from the pixel 2 via the vertical signal line VSL is converted into a digital signal by the ADC 113 and is output, for example, to the sensor data unit 211 described with reference to FIG. 10. At this time, the digital signal output to the sensor data unit 211 indicates a potential corresponding to the saturation characteristic of the photoelectric conversion element PD included in that pixel 2. That is, on the basis of this digital signal, the sensor data unit 211 can recognize the state of each pixel 2 (and thus the state of the photoelectric conversion element PD included in that pixel 2) individually for each pixel 2. Therefore, for example, when an abnormality occurs in a pixel 2, the sensor data unit 211 can detect the abnormality individually for each pixel 2. On the basis of such a configuration, the sensor data unit 211 can, for example, output information about the pixel 2 in which the abnormality has occurred to a predetermined output destination.
As another example, the sensor data unit 211 may correct the pixel signal output from the pixel 2 in which the abnormality has occurred on the basis of the pixel signals output from other pixels 2. For example, FIG. 15 is an explanatory diagram for describing an example of the operation related to pixel signal correction in the solid-state imaging device 1a according to the present embodiment. The example shown in FIG. 15 illustrates a case in which the pixel signal output from the pixel 2 in which the abnormality has occurred is corrected on the basis of the pixel signals output from other pixels 2 adjacent to that pixel 2. In this case, the sensor data unit 211 may recognize the position of that pixel 2 and the positions of the other pixels 2 adjacent to it on the basis of, for example, the timing at which the pixel signal from the pixel 2 in which the abnormality has occurred was read out.
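One possible form of the adjacent-pixel correction mentioned above is sketched below. Averaging the four nearest neighbours is a plausible choice assumed here for illustration, not a method specified by the disclosure.

```python
from typing import List, Tuple

def correct_defective_pixel(frame: List[List[int]], defect: Tuple[int, int]) -> int:
    """Replace a defective pixel value with the mean of its valid 4-neighbours."""
    row, col = defect
    height, width = len(frame), len(frame[0])
    neighbours = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < height and 0 <= c < width:
            neighbours.append(frame[r][c])
    corrected = sum(neighbours) // len(neighbours)
    frame[row][col] = corrected
    return corrected

# Example: the centre pixel was flagged as abnormal by the self-test.
image = [[100, 102, 101],
         [ 99,   0, 103],
         [101, 100, 102]]
print(correct_defective_pixel(image, (1, 1)))   # -> 101
```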
Note that the control related to recognizing the state of the photoelectric conversion element PD included in each pixel 2 described above (for example, the control for detecting an abnormality of the photoelectric conversion element PD) is preferably executed at a timing at which the target pixel 2 is not performing normal driving. As a specific example, the above control may be executed when the solid-state imaging device 1 is started up. As another example, when only some of the pixels 2 are used for capturing an image, the above control may be executed for the other pixels 2 that are not used for capturing that image.
As an example of the drive control of the solid-state imaging device 1a according to the present embodiment, an example of control for recognizing the state of the photoelectric conversion element PD included in each pixel 2, and thereby detecting an abnormality of the photoelectric conversion element PD, has been described above with reference to FIGS. 11 to 15.
<2.3. Modification>
Next, a modification of the solid-state imaging device 1 according to the present embodiment will be described. This modification describes an example in which the pixels 2 form a so-called shared pixel structure.
(Circuit configuration)
First, an example of the circuit configuration of a unit pixel forming the shared pixel structure will be described with reference to FIG. 16. As described above, the shared pixel structure includes a plurality of photodiodes, a plurality of transfer transistors, one shared floating diffusion, and one each of the other shared pixel transistors. For example, FIG. 16 is a diagram illustrating an example of the circuit configuration of a unit pixel in the solid-state imaging device according to the modification of the present embodiment, and shows an example of a seven-transistor configuration in which a high-sensitivity photodiode (PD1), a low-sensitivity photodiode (PD2), and an in-pixel capacitance (FC) are arranged for one pixel. In this description, the solid-state imaging device according to the modification of the present embodiment may be referred to as the "solid-state imaging device 1c" to distinguish it from the solid-state imaging device 1a according to the embodiment described above. In addition, a pixel of the solid-state imaging device 1c according to the modification of the present embodiment, that is, a pixel forming the shared pixel structure, may be referred to as the "pixel 2c" or the "unit pixel 2c" when distinguishing it from the pixel 2 of the solid-state imaging device 1a according to the embodiment described above.
As shown in FIG. 16, the unit pixel 2c is configured to include a photoelectric conversion element PD1, a first transfer gate portion Tr21, a photoelectric conversion element PD2, a second transfer gate portion Tr22, a third transfer gate portion Tr23, a fourth transfer gate portion Tr25, a charge storage portion FC, a reset gate portion Tr24, a floating diffusion portion FD, an amplification transistor Tr26, and a selection transistor Tr27.
A plurality of drive lines for supplying various drive signals to the unit pixel 2c are wired, for example, for each pixel row. Various drive signals TG1, TG2, FCG, RST, and SEL are supplied from the pixel timing drive circuit 5 shown in FIG. 1 via the plurality of drive lines. When each transistor of the unit pixel 2c is an NMOS transistor, these drive signals are pulse signals in which the high-level state (for example, the power supply voltage VDD) is the active state and the low-level state (for example, a negative potential) is the inactive state.
The photoelectric conversion element PD1 is composed of, for example, a PN-junction photodiode. The photoelectric conversion element PD1 generates and accumulates charge corresponding to the amount of received light.
The first transfer gate portion Tr21 is connected between the photoelectric conversion element PD1 and the floating diffusion portion FD. The drive signal TG1 is applied to the gate electrode of the first transfer gate portion Tr21. When the drive signal TG1 becomes active, the first transfer gate portion Tr21 becomes conductive, and the charge accumulated in the photoelectric conversion element PD1 is transferred to the floating diffusion portion FD via the first transfer gate portion Tr21.
Like the photoelectric conversion element PD1, the photoelectric conversion element PD2 is composed of, for example, a PN-junction photodiode. The photoelectric conversion element PD2 generates and accumulates charge corresponding to the amount of received light.
Comparing the photoelectric conversion element PD1 and the photoelectric conversion element PD2, the photoelectric conversion element PD1 has, for example, a larger light-receiving surface area and higher sensitivity, while the photoelectric conversion element PD2 has a smaller light-receiving surface area and lower sensitivity.
The second transfer gate portion Tr22 is connected between the charge storage portion FC and the floating diffusion portion FD. The drive signal FCG is applied to the gate electrode of the second transfer gate portion Tr22. When the drive signal FCG becomes active, the second transfer gate portion Tr22 becomes conductive, and the potentials of the charge storage portion FC and the floating diffusion portion FD are coupled.
The third transfer gate portion Tr23 is connected between the photoelectric conversion element PD2 and the charge storage portion FC. The drive signal TG2 is applied to the gate electrode of the third transfer gate portion Tr23. When the drive signal TG2 becomes active, the third transfer gate portion Tr23 becomes conductive, and the charge accumulated in the photoelectric conversion element PD2 is transferred via the third transfer gate portion Tr23 to the charge storage portion FC, or to the region in which the potentials of the charge storage portion FC and the floating diffusion portion FD are coupled.
The potential below the gate electrode of the third transfer gate portion Tr23 is made slightly deep, forming an overflow path that transfers to the charge storage portion FC the charge that exceeds the saturation charge amount of the photoelectric conversion element PD2 and overflows from the photoelectric conversion element PD2. Hereinafter, the overflow path formed below the gate electrode of the third transfer gate portion Tr23 is simply referred to as the overflow path of the third transfer gate portion Tr23.
The fourth transfer gate portion Tr25 is connected between the second transfer gate portion Tr22 and the reset gate portion Tr24 on one side and the floating diffusion portion FD on the other. The drive signal FDG is applied to the gate electrode of the fourth transfer gate portion Tr25. When the drive signal FDG becomes active, the fourth transfer gate portion Tr25 becomes conductive, and the potential of the node 152 between the second transfer gate portion Tr22, the reset gate portion Tr24, and the fourth transfer gate portion Tr25 is coupled with the potential of the floating diffusion portion FD.
The charge storage portion FC is composed of, for example, a capacitor, and is connected between the second transfer gate portion Tr22 and the third transfer gate portion Tr23. The counter electrode of the charge storage portion FC is connected to the power supply VDD that supplies the power supply voltage VDD. The charge storage portion FC accumulates the charge transferred from the photoelectric conversion element PD2.
The reset gate portion Tr24 is connected between the power supply VDD and the floating diffusion portion FD. The drive signal RST is applied to the gate electrode of the reset gate portion Tr24. When the drive signal RST becomes active, the reset gate portion Tr24 becomes conductive, and the potential of the floating diffusion portion FD is reset to the level of the power supply voltage VDD.
The floating diffusion portion FD performs charge-voltage conversion to convert the charge into a voltage signal and outputs it.
The amplification transistor Tr26 has its gate electrode connected to the floating diffusion portion FD and its drain electrode connected to the power supply VDD, and serves as the input portion of a readout circuit that reads out the charge held in the floating diffusion portion FD, that is, a so-called source follower circuit. In other words, with its source electrode connected to the vertical signal line VSL via the selection transistor Tr27, the amplification transistor Tr26 forms a source follower circuit with the constant current source connected to one end of the vertical signal line VSL.
The selection transistor Tr27 is connected between the source electrode of the amplification transistor Tr26 and the vertical signal line VSL. The drive signal SEL is applied to the gate electrode of the selection transistor Tr27. When the drive signal SEL becomes active, the selection transistor Tr27 becomes conductive, and the unit pixel 2c enters the selected state. As a result, the pixel signal output from the amplification transistor Tr26 is output to the vertical signal line VSL via the selection transistor Tr27.
In this description, a drive signal becoming active may also be expressed as the drive signal turning on or the drive signal being controlled to the on state, and a drive signal becoming inactive may also be expressed as the drive signal turning off or the drive signal being controlled to the off state. Similarly, a gate portion or transistor becoming conductive may also be expressed as that gate portion or transistor turning on, and a gate portion or transistor becoming non-conductive may also be expressed as that gate portion or transistor turning off.
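For reference, the connectivity of the unit pixel 2c described above can be summarized in a small table. The following Python dictionary is only an illustrative restatement of the text; the field names and the printed output format are assumptions.

```python
# Summary of the unit pixel 2c: which drive signal controls which element,
# and what that element sits between. Field names are illustrative only.
UNIT_PIXEL_2C = {
    "Tr21": {"signal": "TG1", "between": ("PD1", "FD"),
             "role": "transfer gate for the high-sensitivity photodiode"},
    "Tr22": {"signal": "FCG", "between": ("FC", "FD"),
             "role": "couples the in-pixel capacitance to the floating diffusion"},
    "Tr23": {"signal": "TG2", "between": ("PD2", "FC"),
             "role": "transfer gate for the low-sensitivity photodiode (with overflow path)"},
    "Tr24": {"signal": "RST", "between": ("VDD", "FD"),
             "role": "reset gate"},
    "Tr25": {"signal": "FDG", "between": ("node 152", "FD"),
             "role": "couples the reset/FC-side node to the floating diffusion"},
    "Tr26": {"signal": None,  "between": ("VDD", "Tr27"),
             "role": "amplification transistor (source follower input)"},
    "Tr27": {"signal": "SEL", "between": ("Tr26 source", "VSL"),
             "role": "selection transistor"},
}

for name, info in UNIT_PIXEL_2C.items():
    left, right = info["between"]
    print(f"{name}: {info['signal'] or '-'} controls path {left} <-> {right} ({info['role']})")
```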
(Drive control)
Next, as an example of the drive control in the solid-state imaging device according to this modification, an example of control for recognizing the states of the photoelectric conversion elements PD1 and PD2 included in each pixel 2c, and thereby detecting an abnormality of the photoelectric conversion elements PD1 and PD2, will be described.
For example, FIG. 17 is a schematic timing chart illustrating an example of the drive control of the solid-state imaging device 1c according to the modification of the present embodiment, and illustrates an example of control for recognizing the states of the photoelectric conversion elements PD1 and PD2 included in the pixel 2c.
In FIG. 17, VDDHPX indicates the power supply voltage applied to the pixel 2c from the power supply VDDHPX. INCK indicates a synchronization signal, and one pulse of the synchronization signal is the minimum unit of the periods of the various kinds of processing executed in the solid-state imaging device 1c. XVS and XHS indicate a vertical synchronization signal and a horizontal synchronization signal, respectively; that is, one XVS corresponds to one frame period. TG1, FCG, TG2, and FDG indicate the drive signals supplied to the first transfer gate portion Tr21, the second transfer gate portion Tr22, the third transfer gate portion Tr23, and the fourth transfer gate portion Tr25 (hereinafter also referred to as the "TG1 drive pulse", "FCG drive pulse", "TG2 drive pulse", and "FDG drive pulse"), respectively. RST and SEL indicate the drive signals supplied to the reset gate portion Tr24 and the selection transistor Tr27 (that is, the RST drive pulse and the SEL drive pulse), respectively.
In the solid-state imaging device 1c according to the present embodiment, the control related to recognizing the states of the photoelectric conversion elements PD1 and PD2 includes first control for accumulating charge in the photoelectric conversion elements PD1 and PD2 of the target pixel 2c and second control for reading out the charge accumulated in those photoelectric conversion elements. For example, in the example shown in FIG. 17, one frame period is assigned to each of the first control and the second control. That is, the frame period to which the first control is assigned corresponds to the "accumulation frame", and the frame period to which the second control is assigned corresponds to the "readout frame".
First, the accumulation frame will be described. As shown in FIG. 17, in the accumulation frame, the power supply voltage applied to the pixel 2c from the power supply VDDHPX is first controlled to 0 V, and the power supply voltage is then controlled to the predetermined voltage VDD, so that the voltage VDD is applied to the pixel 2c.
Here, the operation of the pixel 2c in the period indicated by the reference symbol T21 in FIG. 17 will be described with reference to FIG. 18. FIG. 18 is an explanatory diagram for describing an example of the drive control of the solid-state imaging device 1c according to the modification of the present embodiment, and schematically shows the state of the pixel 2c in the period T21 in FIG. 17.
As shown in FIG. 18, in the period T21, the TG1 drive pulse, the FCG drive pulse, the TG2 drive pulse, the FDG drive pulse, and the RST drive pulse are controlled to the on state, and the SEL drive pulse is controlled to the off state. As described above, the voltage applied to the pixel 2c from the power supply VDDHPX is controlled to 0 V. As a result, the potentials of the floating diffusion portion FD and the charge storage portion FC are each controlled to 0 V, a potential difference is generated between the anode and the cathode of each of the photoelectric conversion elements PD1 and PD2, and charge is injected into each photoelectric conversion element. Note that the amount of charge held in each of the photoelectric conversion elements PD1 and PD2 as a result of the control shown in FIG. 18 is determined by the saturation characteristics of the photoelectric conversion elements PD1 and PD2, regardless of their light-receiving states. That is, if some abnormality has occurred in the photoelectric conversion element PD1, the amount of charge held in the photoelectric conversion element PD1 changes (for example, decreases) compared to the normal state, and the same applies to the photoelectric conversion element PD2. As shown in FIG. 18, the control for injecting charge into each of the photoelectric conversion elements PD1 and PD2 may be executed for all the pixels 2c at a predetermined timing (that is, a global reset), or may be executed for each pixel 2c individually in a time-division manner.
Next, the operation of the pixel 2c in the period indicated by the reference symbol T23 in FIG. 17 will be described with reference to FIG. 19. FIG. 19 is an explanatory diagram for describing an example of the drive control of the solid-state imaging device 1c according to the modification of the present embodiment, and schematically shows the state of the pixel 2c in the period T23 in FIG. 17.
As shown in FIG. 17, in the period T23, the FDG drive pulse and the RST drive pulse are each kept in the on state, and the TG1 drive pulse, the FCG drive pulse, and the TG2 drive pulse are each controlled to the off state. The SEL drive pulse is kept in the off state. The voltage applied to the pixel 2c from the power supply VDDHPX is controlled to VDD. By such control, the paths between the floating diffusion portion FD and the photoelectric conversion element PD1, between the charge storage portion FC and the photoelectric conversion element PD2, and between the floating diffusion portion FD and the charge storage portion FC each become non-conductive. The potential of the floating diffusion portion FD is controlled to VDD.
Next, the readout frame will be described. In the readout frame, the target pixel 2c is driven at a predetermined timing, and pixel signals corresponding to the charges accumulated in the photoelectric conversion elements PD1 and PD2 of the pixel 2c are read out. For example, FIG. 20 is a schematic timing chart illustrating an example of the drive control of the solid-state imaging device 1c according to the present embodiment, and illustrates an example of control related to reading out the charges accumulated in the photoelectric conversion elements PD1 and PD2 of the pixel 2c.
In FIG. 20, XHS, SEL, RST, TG1, FCG, TG2, and FDG indicate the signals denoted by the same reference symbols in FIG. 17. VSL indicates the potential of the signal output via the vertical signal line (that is, the pixel signal output from the pixel 2c). In the example shown in FIG. 20, the signal indicated as VSL is shown separately for the dark state and the bright state. RAMP indicates the potential of the ramp wave output from the DAC in the ADC to the comparator. In the example shown in FIG. 20, the pulse indicating the change in the potential, inside the comparator, of the signal output via the vertical signal line is superimposed on the pulse indicating the change in the potential of the ramp wave. VCO indicates the voltage signal output from the counter in the ADC.
In FIG. 20, the P phase indicates the pre-data phase for reading the reset level of the pixel signal output from the pixel 2c, and the D phase indicates the data phase for reading the data level of that pixel signal.
As shown in FIG. 20, in the solid-state imaging device 1c according to the modification of the present embodiment, the first pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD1 is read out first, and the second pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD2 is read out thereafter. For the readout of the first pixel signal, the P phase is read first and the D phase is read thereafter. For the readout of the second pixel signal, in contrast, the charge accumulated in the charge storage portion FC would be reset by a P-phase readout, so the D phase is read first and the P phase is read thereafter. In the following, the operation of the solid-state imaging device 1c related to reading out each of the first pixel signal and the second pixel signal is described separately for the operation related to the P-phase readout and the operation related to the D-phase readout.
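Before the detailed description, the readout ordering for the two photodiodes can be sketched as follows. The counting helper, the measure() callback, and the numeric values are illustrative assumptions; only the P-phase/D-phase ordering follows the description above.

```python
def cds(p_count, d_count):
    """Correlated double sampling: data-phase count minus reset-phase count."""
    return d_count - p_count

def read_shared_pixel(measure):
    """Read the two signals of the shared pixel in the described order.

    measure(source, phase) is an assumed helper returning an AD count for the
    given readout step, e.g. measure("PD1", "P").
    """
    # First pixel signal (PD1): reset level (P phase) first, then data level (D phase).
    pd1_p = measure("PD1", "P")
    pd1_d = measure("PD1", "D")
    first_signal = cds(pd1_p, pd1_d)

    # Second pixel signal (PD2 via FC): a P-phase readout would reset the charge
    # held in FC, so the data level is read first, then the reset level.
    pd2_d = measure("PD2", "D")
    pd2_p = measure("PD2", "P")
    second_signal = cds(pd2_p, pd2_d)

    return first_signal, second_signal

# Example with fixed counts standing in for the AD conversions.
counts = {("PD1", "P"): 80, ("PD1", "D"): 380, ("PD2", "D"): 420, ("PD2", "P"): 90}
print(read_shared_pixel(lambda src, phase: counts[(src, phase)]))   # -> (300, 330)
```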
First, as shown in FIG. 17, at the start of the readout frame, the FDG drive pulse and the RST drive pulse are controlled to the off state. That is, at the start of the readout frame, the TG1 drive pulse, the FCG drive pulse, the TG2 drive pulse, the FDG drive pulse, the RST drive pulse, and the SEL drive pulse are each in the off state. Thereafter, readout of the pixel signals from the target pixel 2c is started at a predetermined timing (a predetermined horizontal synchronization period) in the readout frame.
As shown in FIG. 20, the P-phase readout is first performed for the first pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD1. Specifically, with the FDG drive pulse and the SEL drive pulse controlled to the on state, the RST drive pulse is temporarily controlled to the on state, so that the potential of the floating diffusion portion FD is reset to the level of the power supply voltage VDD. At this time, the TG1 drive pulse, the FCG drive pulse, and the TG2 drive pulse are kept in the off state. That is, the path between the photoelectric conversion element PD1 and the floating diffusion portion FD, and the path between the charge storage portion FC (and thus the photoelectric conversion element PD2) and the floating diffusion portion FD, are each non-conductive. The pixel signal read out from the pixel 2c via the vertical signal line VSL at this time therefore indicates the reset level of the pixel signal output from that pixel 2c.
Next, the D-phase readout is performed for the first pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD1. Specifically, the TG1 drive pulse is temporarily controlled to the on state, and while the TG1 drive pulse is in the on state, the photoelectric conversion element PD1 and the floating diffusion portion FD are conductive. The charge accumulated in the photoelectric conversion element PD1 is thereby transferred to the floating diffusion portion FD and accumulated therein. A voltage corresponding to the charge accumulated in the floating diffusion portion FD (in other words, the charge that has leaked from the photoelectric conversion element PD1) is therefore applied to the gate of the amplification transistor Tr26, and the amplification transistor Tr26 is controlled to be conductive. As a result, a pixel signal corresponding to the voltage applied to the gate of the amplification transistor Tr26 (that is, the first pixel signal) is output from the pixel 2c via the vertical signal line VSL. That is, a charge corresponding to the saturation characteristic of the photoelectric conversion element PD1 is read out from the photoelectric conversion element PD1, and the first pixel signal corresponding to the result of reading out that charge is output from the pixel 2c via the vertical signal line VSL.
When the D-phase readout of the first pixel signal is completed, the SEL drive signal is controlled to the off state, after which the FDG drive signal is first temporarily controlled to the off state and the RST drive signal is then temporarily controlled to the on state. The potential of the floating diffusion portion FD is thereby reset to the level of the power supply voltage VDD. The FCG drive signal is also controlled to the on state, so that the floating diffusion portion FD and the charge storage portion FC become conductive. Thereafter, the SEL drive signal is controlled to the on state, and readout of the second pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD2 is started.
 光電変換素子PD2に蓄積された電荷に応じた第2の画素信号の読み出しについては、前述した通り、まずD相の読み出しが行われる。具体的には、TG1駆動パルスが一時的にオン状態に制御され、当該TG2駆動パルスがオン状態を示す期間中に、光電変換素子PD2と電荷蓄積部FCとの間が導通状態となる。即ち、当該期間中においては、光電変換素子PD2、電荷蓄積部FC、及びフローティングディフュージョン部FDそれぞれの間が導通状態となる。これにより、電荷蓄積部FCとフローティングディフュージョン部FDのポテンシャルが結合するとともに、光電変換素子PD2に蓄積された電荷が結合した領域に転送され、当該領域に蓄積される。そのため、上記領域に蓄積された電荷(換言すると、光電変換素子PD2からリークした電荷)に応じた電圧が増幅トランジスタTr26のゲートに印加され、当該増幅トランジスタTr26が導通状態に制御される。これにより、画素2cからは、増幅トランジスタTr26のゲートに印加された電圧に応じた画素信号(即ち、第2の画素信号)が、垂直信号線VSLを介して出力される。即ち、光電変換素子PD2の飽和特性に応じた電荷が、当該光電変換素子PD2から読み出され、当該電荷の読み出し結果に応じた第2の画素信号が、垂直信号線VSLを介して画素2cから出力されることとなる。 Regarding the reading of the second pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD2, the D-phase reading is first performed as described above. Specifically, the TG1 drive pulse is temporarily controlled to be in an on state, and the photoelectric conversion element PD2 and the charge storage unit FC are in a conductive state during the period in which the TG2 drive pulse is in the on state. That is, during the period, the photoelectric conversion element PD2, the charge storage unit FC, and the floating diffusion unit FD are in a conductive state. As a result, the potentials of the charge storage unit FC and the floating diffusion unit FD are combined, and the charges stored in the photoelectric conversion element PD2 are transferred to the combined region and stored in the region. For this reason, a voltage corresponding to the charge accumulated in the region (in other words, the charge leaked from the photoelectric conversion element PD2) is applied to the gate of the amplification transistor Tr26, and the amplification transistor Tr26 is controlled to be conductive. Accordingly, a pixel signal (that is, a second pixel signal) corresponding to the voltage applied to the gate of the amplification transistor Tr26 is output from the pixel 2c through the vertical signal line VSL. That is, the charge corresponding to the saturation characteristic of the photoelectric conversion element PD2 is read from the photoelectric conversion element PD2, and the second pixel signal corresponding to the read result of the charge is transferred from the pixel 2c via the vertical signal line VSL. Will be output.
 次いで、光電変換素子PD2に蓄積された電荷に応じた第2の画素信号についてP相の読み出しが行われる。具体的には、まず、SEL駆動信号がオフ状態に制御されたうえで、RST駆動信号が一時的にオン状態に制御される。これにより、電荷蓄積部FCとフローティングディフュージョン部FDのポテンシャルが結合した領域の電位が、電源電圧VDDのレベルにリセットされる。その後、SEL駆動信号がオン状態に制御され、当該領域の電位に応じた電圧が増幅トランジスタTr26のゲートに印加され、当該電圧に応じた画素信号(即ち、第2の画素信号)が、垂直信号線VSLを介して出力される。このとき、TG1駆動パルス、FCG駆動パルス、及びTG2駆動パルスは、オフ状態が維持されている。即ち、光電変換素子PD1とフローティングディフュージョン部FDとの間と、電荷蓄積部FCとフローティングディフュージョン部FDとの間(ひいては、光電変換素子PD2とフローティングディフュージョン部FDとの間)と、のそれぞれは非導通状態となる。そのため、このとき画素2cから垂直信号線VSLを介して読み出される画素信号は、当該画素2cから出力される画素信号のリセットレベルを示している。 Next, P-phase readout is performed on the second pixel signal corresponding to the charge accumulated in the photoelectric conversion element PD2. Specifically, first, the SEL drive signal is controlled to the off state, and then the RST drive signal is temporarily controlled to the on state. As a result, the potential of the region where the potentials of the charge storage unit FC and the floating diffusion unit FD are combined is reset to the level of the power supply voltage VDD. Thereafter, the SEL drive signal is controlled to be in an ON state, a voltage corresponding to the potential of the region is applied to the gate of the amplification transistor Tr26, and a pixel signal corresponding to the voltage (that is, the second pixel signal) is a vertical signal. It is output via the line VSL. At this time, the TG1 drive pulse, the FCG drive pulse, and the TG2 drive pulse are kept off. In other words, each between the photoelectric conversion element PD1 and the floating diffusion part FD and between the charge storage part FC and the floating diffusion part FD (and thus between the photoelectric conversion element PD2 and the floating diffusion part FD) is non-existent. It becomes conductive. Therefore, the pixel signal read from the pixel 2c via the vertical signal line VSL at this time indicates the reset level of the pixel signal output from the pixel 2c.
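 For illustration only, the ordering of the readouts described above can be summarized as pseudocode. The following Python sketch assumes hypothetical helper functions (set_pulse and sample_vsl) that are not part of the present disclosure; it only shows that the first pixel signal is read P phase first and D phase second, while the second pixel signal is read D phase first and P phase second.

# Illustrative sketch of the readout order for pixel 2c (hypothetical helpers).
def read_pixel_2c(set_pulse, sample_vsl):
    # --- First pixel signal (PD1): P phase, then D phase ---
    set_pulse(SEL=1, FDG=1)
    set_pulse(RST=1); set_pulse(RST=0)   # reset FD to the VDD level
    p1 = sample_vsl()                    # P phase: reset level
    set_pulse(TG1=1); set_pulse(TG1=0)   # transfer the PD1 charge to FD
    d1 = sample_vsl()                    # D phase: data level for PD1

    # --- Prepare for the second pixel signal (PD2) ---
    set_pulse(SEL=0)
    set_pulse(FDG=0)
    set_pulse(RST=1); set_pulse(RST=0)   # reset FD
    set_pulse(FCG=1)                     # couple FC and FD
    set_pulse(SEL=1)

    # --- Second pixel signal (PD2): D phase, then P phase ---
    set_pulse(TG2=1); set_pulse(TG2=0)   # transfer the PD2 charge into the FC+FD region
    d2 = sample_vsl()                    # D phase: data level for PD2
    set_pulse(SEL=0)
    set_pulse(RST=1); set_pulse(RST=0)   # reset the coupled FC+FD region
    set_pulse(SEL=1)
    p2 = sample_vsl()                    # P phase: reset level for PD2

    # Differences between data and reset levels for each signal.
    return (d1 - p1), (d2 - p2)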
 The first pixel signal and the second pixel signal sequentially output from the pixel 2c via the vertical signal line VSL are converted into digital signals by the ADC 113 and output, for example, to the sensor data unit 211 described with reference to FIG. 10. The digital signals sequentially output to the sensor data unit 211 then indicate potentials corresponding to the saturation characteristics of the photoelectric conversion elements PD1 and PD2 included in the pixel 2c. That is, based on these digital signals, the sensor data unit 211 can recognize the state of the pixel 2c (and hence the states of the photoelectric conversion elements PD1 and PD2 included in the pixel 2c) individually for each pixel 2c.
 As described above, as a modification of the solid-state imaging device according to the present embodiment, an example in which the pixels form a shared pixel structure has been described with reference to FIGS. 16 to 20.
  <2.4. Evaluation>
 As described above, in the solid-state imaging device according to the present embodiment, the application of the power supply voltage to at least some of the plurality of pixels is controlled so that charges are injected into the photoelectric conversion elements of those pixels, and the supply of drive signals to those pixels is then controlled so that pixel signals corresponding to the injected charges are read out from the photoelectric conversion elements. Based on this configuration, the solid-state imaging device according to the present embodiment recognizes the state of each of the at least some pixels according to the result of reading out the pixel signal corresponding to the charge from its photoelectric conversion element.
 With the configuration described above, the solid-state imaging device according to the present embodiment can individually recognize the state of each pixel (and hence the photoelectric conversion element included in that pixel) based on the pixel signal output from the pixel. Therefore, when a failure occurs in some of the pixels, the solid-state imaging device can detect the abnormality on a per-pixel basis. By using this mechanism, for example, when an abnormality occurs in some pixels, information about those pixels can be output to a predetermined output destination. As another example, since the position of a failed pixel can be identified, the pixel signal output from that pixel at the time of image capture can be corrected based on the pixel signals output from other pixels (for example, adjacent pixels).
 In the solid-state imaging device according to the present embodiment, as described above, charges are injected into the photoelectric conversion element of each pixel by controlling the application of the power supply voltage to the pixel. That is, the amount of charge held in the photoelectric conversion element as a result of this control is determined by the saturation characteristics of the photoelectric conversion elements PD1 and PD2, regardless of the light-receiving state of the photoelectric conversion element. Owing to this characteristic, the solid-state imaging device according to the present embodiment can execute the control related to recognizing the state of each pixel (for example, a test for detecting failed pixels) regardless of the amount of light in the external environment. That is, with the solid-state imaging device according to the present embodiment, a test for detecting a failure of each pixel 2 can be executed even in an environment where the amount of external light is small.
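 For illustration only, the charge-injection test described in this section can be sketched as follows. The helper names (inject_charge, read_pixel) and the pass/fail criterion based on an expected saturation level and a tolerance are assumptions introduced for the sketch, not values specified in the present disclosure.

# Conceptual per-pixel self-test based on charge injection (hypothetical helpers).
def test_pixels(pixels, inject_charge, read_pixel,
                expected_saturation_level, tolerance):
    failed = []
    for p in pixels:
        inject_charge(p)        # control the power-supply side so the photodiode is filled
        value = read_pixel(p)   # read the resulting pixel signal via the ADC
        # A healthy pixel should return a level set by the photodiode saturation
        # characteristic, independent of the incident light.
        if abs(value - expected_saturation_level) > tolerance:
            failed.append(p)    # record the pixel as potentially failed
    return failed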
 <<3. Second Embodiment>>
 Next, a solid-state imaging device according to the second embodiment of the present disclosure will be described. The present embodiment describes an example of a mechanism by which the solid-state imaging device 1 can execute various tests, such as failure detection, more efficiently during the capture period of an image (particularly a moving image). In the following description, the solid-state imaging device 1 according to the present embodiment may be referred to as the "solid-state imaging device 1d" in order to distinguish it from the solid-state imaging devices 1 according to the other embodiments.
  <3.1. Configuration>
 First, an example of the schematic configuration of the solid-state imaging device 1d according to the present embodiment will be described with reference to FIG. 21. FIG. 21 is a block diagram showing an example of the schematic configuration of the solid-state imaging device 1d according to the present embodiment. In this description, the configuration of the solid-state imaging device 1d is described focusing on the parts that differ from the solid-state imaging device 1 described with reference to FIGS. 1 to 8, and detailed description of the parts that are substantially the same as those of the solid-state imaging device 1 is omitted.
 FIG. 21 shows an example of the configuration of the portion of the solid-state imaging device 1d according to the present embodiment that relates to reading out pixel signals from the pixels 2. That is, the example shown in FIG. 21 mainly shows the portions corresponding to the constant current circuit unit 114 and the ADC 113, and the other components are omitted from the drawing. In FIG. 21, the MOS transistor 161, the comparator 141, the DAC 142, and the counter 143 are substantially the same as the MOS transistor 161, the comparator 141, the DAC 142, and the counter 143 shown in FIG. 3, and detailed description thereof is therefore omitted. In FIG. 21, the comparator 141, the DAC 142, and the counter 143 correspond to the ADC 113 shown in FIG. 3, and the MOS transistor 161 corresponds to the constant current circuit unit 114 shown in FIG. 3.
 As shown in FIG. 21, the solid-state imaging device 1d according to the present embodiment includes a sensor data unit 221. The sensor data unit 221 corresponds to the sensor data unit 211 in the solid-state imaging device 1a according to the first embodiment described with reference to FIG. 10.
 In the solid-state imaging device 1d according to the present embodiment, for example, the control unit 101 shown in FIG. 3 controls the timing of exposure by each pixel 2 and the timing of reading out the pixel signal based on the exposure result from the pixel 2. In addition, for at least some of the pixels 2, the control unit 101 controls the operation of a predetermined component in the solid-state imaging device 1d (for example, the sensor data unit 221) so that a predetermined test such as failure detection is executed using the period, within the unit frame period corresponding to a predetermined frame rate, in which neither exposure by the pixel 2 nor readout of the pixel signal based on the exposure result is being performed. The timing at which the control unit 101 causes the sensor data unit 221 or another predetermined component to execute the predetermined test will be described later in detail together with an example of the drive control of the solid-state imaging device 1d.
 The sensor data unit 221 executes a predetermined test such as failure detection under the control of the control unit 101. Specifically, the sensor data unit 221 recognizes the state of a predetermined component in the solid-state imaging device 1d based on the signal output from the counter 143, that is, the digital signal obtained by converting the pixel signal supplied from the pixel 2, and thereby detects an abnormality when one occurs in that component.
 For example, based on the digital signal output from the counter 143, the sensor data unit 221 can detect an abnormality occurring in at least one of: at least some of the pixels 2; the components for supplying drive signals to each pixel 2 (for example, the pixel timing drive circuit 5 and the address decoder 4); and the ADC 113. As a specific example, when an abnormality appears in the digital signal for only some of the pixels 2, it can be recognized that an abnormality has occurred in those pixels 2. In this case, the sensor data unit 221 may identify the pixel 2 in which the abnormality occurred based on, for example, the ADC 113 that is the output source of the digital signal and the output timing of the digital signal. When an abnormality appears in the digital signals for a plurality of pixels 2, it can be recognized that an abnormality has occurred in a component related to the output of pixel signals from those pixels 2 (for example, the address decoder 4, the pixel timing drive circuit 5, or the ADC 113).
 Also, depending on how the digital signals are output from the counter 143, the sensor data unit 221 can detect an abnormality occurring in at least one of: the wiring connected to at least some of the pixels 2; the components for supplying drive signals to each pixel 2; and the ADC 113. As a specific example, when an abnormality occurs in the output of the digital signals for some column (for example, when no digital signal is output), it can be recognized that an abnormality has occurred in the vertical signal line corresponding to that column or in the ADC 113 corresponding to that column. As another example, when an abnormality occurs in the output of the digital signals for some row, it can be recognized that an abnormality has occurred in the horizontal signal line corresponding to that row.
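 For illustration only, the way such anomalies can be grouped into pixel-level, column-level, and row-level faults may be sketched as follows; representing the anomalies as a set of (row, column) coordinates is an assumption made for the sketch.

# Conceptual classification of detected anomalies, where `anomalies` is a set of
# (row, col) coordinates whose digital output was judged abnormal.
def classify_anomalies(anomalies, num_rows, num_cols):
    by_col = {}
    by_row = {}
    for (r, c) in anomalies:
        by_col.setdefault(c, set()).add(r)
        by_row.setdefault(r, set()).add(c)

    faults = []
    for c, rows in by_col.items():
        if len(rows) == num_rows:          # an entire column is abnormal
            faults.append(("vertical_signal_line_or_ADC", c))
    for r, cols in by_row.items():
        if len(cols) == num_cols:          # an entire row is abnormal
            faults.append(("horizontal_signal_line", r))
    for (r, c) in anomalies:               # remaining isolated pixels
        if len(by_col[c]) < num_rows and len(by_row[r]) < num_cols:
            faults.append(("pixel", (r, c)))
    return faults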
 The example described above is merely one example; as long as a test can be executed on at least part of the configuration in the solid-state imaging device 1d and an abnormality occurring in that configuration can be detected, the entity performing the detection is not limited to the sensor data unit 221, and the detection method is not limited either. For example, depending on the component to be tested, a unit for detecting an abnormality occurring in that component may be provided separately from the sensor data unit 221. As another example, an abnormality occurring in at least some of the pixels 2, or in other components driven in relation to those pixels, may be detected by applying a predetermined filter (for example, an LPF) to the digital signal output based on the pixel signal from each pixel 2.
 When the sensor data unit 221 detects that an abnormality has occurred in at least part of the configuration of the solid-state imaging device 1d, it may execute predetermined processing according to the detection result.
 As a specific example, the sensor data unit 221 may notify the outside of the solid-state imaging device 1d of the detection result of the abnormality that occurred in at least part of the configuration. For instance, the sensor data unit 221 may output a predetermined signal indicating that an abnormality has been detected to the outside of the solid-state imaging device 1d via a predetermined output terminal (that is, an Error pin). As another example, it may notify a predetermined DSP (Digital Signal Processor) 401 provided outside the solid-state imaging device 1d that an abnormality has been detected. The part of the sensor data unit 221 that performs control so that the detection result of an abnormality occurring in at least part of the configuration is output to a predetermined output destination (for example, the DSP 401) corresponds to an example of an "output control unit".
 As another example, when the sensor data unit 221 recognizes that an abnormality has occurred in at least part of the configuration and, as a result, an abnormality has occurred in the output from at least some of the pixels 2, it may correct the output from those pixels 2 based on the outputs from other pixels 2.
 For example, FIGS. 22 and 23 are explanatory diagrams for describing an example of the operation related to pixel signal correction in the solid-state imaging device 1d according to the present embodiment. FIG. 22 shows an example of a case where an abnormality has occurred in the output of the pixel signals corresponding to some column. In the example shown in FIG. 22, the pixel signals corresponding to the column in which the abnormality occurred are corrected based on the pixel signals corresponding to other columns adjacent to that column. In this case, the sensor data unit 221 may identify the column in which the abnormality occurred and the other columns adjacent to it by, for example, identifying the ADC 113 whose digital signal output was detected to be abnormal.
 As another example, FIG. 23 shows a case where an abnormality has occurred in the output of the pixel signals corresponding to some row. In the example shown in FIG. 23, the pixel signals corresponding to the row in which the abnormality occurred are corrected based on the pixel signals corresponding to other rows adjacent to that row. In this case, the sensor data unit 221 may identify the row in which the abnormality occurred and the other rows adjacent to it based on, for example, the timing at which the abnormal pixel signals were read out.
 As in the example described above with reference to FIG. 15, it is also possible to correct the pixel signal output from a pixel 2 in which an abnormality has occurred based on the pixel signals output from other pixels 2 adjacent to that pixel 2.
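 For illustration only, a correction of a failed column using its adjacent columns may look like the following sketch. Simple averaging of the left and right neighboring columns is an assumed scheme; the present disclosure only states that adjacent pixels, rows, or columns are used for the correction.

# Illustrative correction of a failed column by averaging the adjacent columns
# (simple averaging is an assumed scheme; `frame` is a list of rows of pixel values).
def correct_failed_column(frame, failed_col):
    rows = len(frame)
    cols = len(frame[0])
    left = failed_col - 1 if failed_col > 0 else failed_col + 1
    right = failed_col + 1 if failed_col < cols - 1 else failed_col - 1
    for r in range(rows):
        frame[r][failed_col] = (frame[r][left] + frame[r][right]) // 2
    return frame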
 The part of the sensor data unit 221 that corrects the output from at least some of the pixels 2 (that is, the output in which an abnormality has occurred) corresponds to an example of a "correction processing unit".
 An example of the schematic configuration of the solid-state imaging device 1d according to the present embodiment has been described above with reference to FIG. 21.
  <3.2. Drive control>
 Next, an example of the drive control of the solid-state imaging device 1d according to the present embodiment will be described, focusing in particular on the control of the timing at which a predetermined test of the solid-state imaging device 1d is executed. For example, FIG. 24 is a schematic timing chart showing an example of the drive control of the solid-state imaging device 1d according to the present embodiment, and shows an example of the control of the timing at which a predetermined test of the solid-state imaging device 1d is executed. In FIG. 24, the horizontal axis indicates the time direction, and the vertical axis indicates the position in the row direction of the two-dimensionally arrayed pixels 2. In this description, to make the features of the solid-state imaging device 1d according to the present embodiment easier to understand, the drive control of the solid-state imaging device 1d is described focusing on the case where each pixel 2 performs exposure and readout of the exposure result a plurality of times within the unit frame period (that is, one vertical synchronization period).
 For example, in the example shown in FIG. 24, the solid-state imaging device 1d sequentially performs, in a time-division manner within the unit frame period, a first exposure (Long exposure), a second exposure (Middle exposure), and a third exposure (Short exposure) having mutually different exposure times. Specifically, in FIG. 24, reference symbols T111 and T112 indicate the exposure periods (Long Shutter) of the first exposure, and reference symbols T121 and T122 indicate the readout periods (Long Read) of the pixel signals based on the result of the first exposure. Reference symbols T131 and T132 indicate the exposure periods (Middle Shutter) of the second exposure, and reference symbols T141 and T142 indicate the readout periods (Middle Read) of the pixel signals based on the result of the second exposure. Reference symbols T151 and T152 indicate the exposure periods (Short Shutter) of the third exposure, and reference symbols T161 and T162 indicate the readout periods (Short Read) of the pixel signals based on the result of the third exposure.
 Reference symbol VBLK indicates the vertical blank (V blank) period. In the vertical blank period VBLK, predetermined tests such as column signal line failure detection and TSV failure detection are executed, and no pixel signal is read out from any pixel 2 during this period. That is, the vertical blank period VBLK corresponds to the period from the completion of the readout of pixel signals from a series of pixels 2 in one frame period to the start of the readout of pixel signals from the same series of pixels 2 in the next frame period.
 Reference symbols T171 and T172 correspond to the periods in which, for the pixels 2 of each row, neither exposure by the pixels 2 (for example, the first to third exposures) nor readout of pixel signals based on the exposure results is being performed. The solid-state imaging device 1d according to the present embodiment executes a predetermined test (for example, BIST: Built-In Self-Test) using these periods T171 and T172. A specific example of the predetermined test is failure detection for each pixel. Hereinafter, the periods indicated by reference symbols T171 and T172 are also referred to as "BIST periods", and when the BIST periods T171 and T172 are not particularly distinguished, they are also referred to as the "BIST period T170".
 Specifically, as shown in FIG. 24, the BIST period T170 starts after the readout of the pixel signal based on the result of the last exposure (for example, the third exposure) in the unit frame period in which the pixels of a certain row perform one or more exposures (for example, the first to third exposures) has finished. The BIST period T170 ends before the first exposure (for example, the first exposure) in the frame period following that unit frame period is started. As a more specific example, the BIST period T171 shown in FIG. 24 is the period from the end of the readout period T161 of the pixel signal based on the third exposure result to the start of the exposure period T112 of the first exposure in the next unit frame period. The BIST period T170 may also be set between the first exposure and the second exposure, or between the second exposure and the third exposure. As will be described in detail later, the BIST period T170 arises when the vertical blank period VBLK is set.
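 For illustration only, the per-row timeline within one unit frame period, with the BIST period T170 filling the remainder of the frame, can be sketched as follows; the list-based representation and the millisecond time values are assumptions made for the sketch.

# Illustrative per-row timeline for one unit frame period (times in ms).
def row_timeline(frame_period_ms, exposure_ms, readout_ms):
    # exposure_ms / readout_ms: three values each, for (Long, Middle, Short).
    t = 0.0
    events = []
    for name, e, r in zip(("Long", "Middle", "Short"), exposure_ms, readout_ms):
        events.append((name + " Shutter", t, t + e)); t += e
        events.append((name + " Read", t, t + r)); t += r
    # The remaining time in the frame is available as the BIST period T170.
    events.append(("BIST (T170)", t, frame_period_ms))
    return events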
 Next, with reference to FIGS. 25 and 26, an example of the drive control related to reading out pixel signals from each pixel 2 in the case where exposure and readout of the exposure result are performed a plurality of times within the unit frame period (that is, one vertical synchronization period) will be described. FIGS. 25 and 26 are explanatory diagrams for describing an example of the schematic control related to reading out pixel signals from each pixel 2 in the solid-state imaging device 1d according to the present embodiment.
 In FIG. 25, the vertical axis schematically indicates the vertical synchronization period XVS, and the horizontal axis schematically indicates the horizontal synchronization period XHS. In FIG. 25, the rectangular regions denoted by reference characters L, M, and S schematically indicate the readout timings of the exposure results from the two-dimensionally arrayed pixels 2, and correspond to the first exposure, the second exposure, and the third exposure, respectively. In each of the rectangular regions L, M, and S, the horizontal direction corresponds to the column direction of the two-dimensionally arrayed pixels 2, and the vertical direction corresponds to the row direction of those pixels 2.
 That is, in the example shown in FIG. 25, pixel signals are read out row by row from the pixels 2 included in a given row in each horizontal synchronization period. Also, in each horizontal synchronization period, the readout of the pixel signals based on the respective exposure results is executed sequentially in the order of the first exposure, the second exposure, and the third exposure.
 When the pixel signals based on the results of the first, second, and third exposures are read out sequentially, the pixel signals do not necessarily have to be read out from pixels 2 included in the same row. For example, reference symbol R111 in FIG. 25 schematically indicates a partial period within the vertical synchronization period. In the example shown in FIG. 25, in the period R111, the pixel signals based on the results of the first, second, and third exposures are read out from the pixels 2 in the α-th row, the pixels 2 in the β-th row, and the pixels 2 in the γ-th row, respectively.
 FIG. 26 shows a schematic timing chart related to the readout of pixel signals from each pixel 2 in the example shown in FIG. 25. Specifically, in the example shown in FIG. 26, the readout of the pixel signal based on the first exposure result from the pixels 2 in the α-th row, the readout of the pixel signal based on the second exposure result from the pixels 2 in the β-th row, and the readout of the pixel signal based on the third exposure result from the pixels 2 in the γ-th row are executed in this order. Next, the readout of the pixel signal based on the first exposure result from the pixels 2 in the (α+1)-th row, the readout of the pixel signal based on the second exposure result from the pixels 2 in the (β+1)-th row, and the readout of the pixel signal based on the third exposure result from the pixels 2 in the (γ+1)-th row are executed in this order.
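 For illustration only, the interleaved readout order described above can be sketched as follows; representing the schedule as one list entry per horizontal synchronization period is an assumption made for the sketch.

# Illustrative readout schedule: in each horizontal synchronization period, one Long,
# one Middle, and one Short readout are issued for three different rows.
def readout_schedule(alpha, beta, gamma, num_lines):
    schedule = []
    for i in range(num_lines):
        schedule.append([
            ("Long Read", alpha + i),    # first exposure result, row alpha + i
            ("Middle Read", beta + i),   # second exposure result, row beta + i
            ("Short Read", gamma + i),   # third exposure result, row gamma + i
        ])
    return schedule                      # one entry per horizontal synchronization period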
 The drive control described above is merely one example; as long as at least the BIST period T170 is provided and the predetermined test can be executed during the BIST period T170, the drive control of the solid-state imaging device 1d according to the present embodiment is not necessarily limited to the example described with reference to FIGS. 24 to 26. As a specific example, the solid-state imaging device 1d according to the present embodiment may be configured such that each pixel 2 performs exposure and readout of the exposure result only once within the unit frame period. In that case, the BIST period T170 starts after the readout of the pixel signal based on the exposure result in a given unit frame period has finished, and ends before the exposure in the next unit frame period is started.
 An example of the drive control of the solid-state imaging device 1d according to the present embodiment has been described above with reference to FIGS. 24 to 26, focusing in particular on the control of the timing at which the predetermined test of the solid-state imaging device 1d is executed.
  <3.3. Relationship between the exposure time constraint and the vertical blank period>
 Next, with reference to FIG. 27, the relationship between the exposure time constraint and the vertical blank period VBLK in the solid-state imaging device 1d according to the present embodiment will be described using a specific example. FIG. 27 is a timing chart for describing the relationship between the exposure time constraint and the vertical blank period in the solid-state imaging device 1d according to the present embodiment. In the example shown in FIG. 27, as in the example shown in FIG. 24, the first exposure (Long exposure), the second exposure (Middle exposure), and the third exposure (Short exposure) with mutually different exposure times are executed sequentially within the unit frame period. The horizontal and vertical axes in FIG. 27 are the same as those in FIG. 24.
 As shown in FIG. 27, when the frame rate is 40 fps, the unit frame period (that is, one vertical synchronization period) is 25 ms. When the ratio of the exposure periods (in other words, the charge accumulation periods of the pixel 2) between the first to third exposures (hereinafter also referred to as the "exposure ratio") is 16, and the first exposure period (Long Shutter) is denoted by A, the second exposure period (Middle Shutter) is A/16 and the third exposure period (Short Shutter) is A/256.
 Here, the first exposure period A in the case where the vertical blank period VBLK = 0 is calculated based on, for example, (Equation 1) below, and solving (Equation 1) gives (Equation 2).
 A + A/16 + A/256 = 25 [ms]   …(Equation 1)
 A = 25 × 256/273 ≈ 23.4 [ms]   …(Equation 2)
 That is, in the example shown in FIG. 27, the vertical blank period VBLK occurs, and the BIST period T170 can be secured, when the exposure ratio is set larger than the above condition, or when the first exposure period A is set shorter than the condition shown in (Equation 2).
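 As a worked check of this relationship, the following short calculation uses the stated conditions (40 fps, exposure ratio of 16) and the simplifying assumption that the three exposure periods alone fill the unit frame period when VBLK = 0; readout overheads are ignored, and the 20 ms value is a hypothetical example.

# Worked example under the stated conditions (simplified: readout overheads ignored).
frame_ms = 1000.0 / 40.0                  # 40 fps -> 25 ms unit frame period
ratio = 16.0                              # exposure ratio between Long/Middle/Short

# VBLK = 0 when A + A/16 + A/256 = 25 ms  ->  A = 25 * 256 / 273
a_max = frame_ms / (1.0 + 1.0 / ratio + 1.0 / ratio**2)
print(round(a_max, 2))                    # about 23.44 ms

# For a shorter Long exposure, the remainder becomes the vertical blank / BIST time.
a = 20.0                                  # hypothetical Long exposure of 20 ms
vblk = frame_ms - a * (1.0 + 1.0 / ratio + 1.0 / ratio**2)
print(round(vblk, 2))                     # about 3.67 ms available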
  <3.4. Evaluation>
 As described above, the solid-state imaging device according to the present embodiment executes a predetermined test in the BIST period, within the unit frame period corresponding to a predetermined frame rate, during which neither exposure by at least some of the pixels nor readout of pixel signals based on the exposure results is being performed. The BIST period starts after the readout of the pixel signal based on the result of the last exposure in the unit frame period in which at least some of the pixels (for example, the pixels of a certain row) perform one or more exposures has finished, and ends before the first exposure in the frame period following that unit frame period is started.
 With this configuration, the solid-state imaging device according to the present embodiment can, for example, execute the test for detecting failures of the pixels 2 included in each row within the BIST period defined for that row. In particular, in a conventional solid-state imaging device, executing failure detection for all rows requires a period of at least one frame, and a dedicated frame in which no image is captured has to be provided for the test. In contrast, the solid-state imaging device according to the present embodiment can execute the failure detection test for each row in parallel with image capture, and therefore does not need a dedicated frame in which no image is captured for the test, unlike the conventional solid-state imaging device.
 Furthermore, with the solid-state imaging device according to the present embodiment, at least some of the tests that were executed in the vertical blank period can instead be executed during the BIST period. With this configuration, the vertical blank period can be made shorter, which in turn makes it possible to further improve the frame rate. Alternatively, failure detection for the TSVs, failure detection for the column signal lines, and the like may be executed in the vertical blank period. With this configuration, each failure detection can be executed while maintaining the frame rate and securing a sufficient exposure time.
 As described above, with the solid-state imaging device according to the present embodiment, executing a predetermined test using the BIST period makes it possible to execute various tests, such as failure detection, more efficiently during the image capture period.
 <Example of hardware configuration>
 Next, the hardware configuration of the front camera ECU and the imaging element will be described with reference to FIG. 28. The hardware of the front camera ECU and the imaging element has a configuration in which a lower chip 1091 and an upper chip 1092 are stacked. The right part of FIG. 28 shows the floor plan that is the hardware configuration of the lower chip 1091, and the left part of FIG. 28 shows the floor plan that is the hardware configuration of the upper chip 1092.
 The lower chip 1091 and the upper chip 1092 are provided with TCVs (Through Chip Vias) 1093-1 and 1093-2 at the left and right ends in the figure, which pass through and electrically connect the lower chip 1091 and the upper chip 1092. In the lower chip 1091, a row drive unit 1102 (FIG. 29) is arranged to the right of the TCV 1093-1 in the figure and is electrically connected to it, and a control line gate 1143 (FIG. 29) of the front camera ECU 73 is arranged to the left of the TCV 1093-2 in the figure and is electrically connected to it. The row drive unit 1102 and the control line gate 1143 will be described in detail later with reference to FIG. 29. In this specification, TCV and TSV are treated as synonymous.
 The lower chip 1091 and the upper chip 1092 are also provided with TCVs 1093-11 and 1093-12 at the upper and lower ends in the figure, which pass through and electrically connect the lower chip 1091 and the upper chip 1092. In the lower chip 1091, a column ADC (Analog to Digital Converter) 1111-1 is arranged below the TCV 1093-11 in the figure and is electrically connected to it, and a column ADC 1111-2 is arranged above the TCV 1093-12 in the figure and is electrically connected to it.
 A DAC (Digital to Analog Converter) 1112 is provided between the right ends of the column ADCs 1111-1 and 1111-2 in the figure, to the left of the control line gate 1143, and outputs a ramp voltage to the column ADCs 1111-1 and 1111-2 as indicated by arrows C1 and C2 in the figure. The column ADCs 1111-1 and 1111-2 and the DAC 1112 constitute the configuration corresponding to the image signal output unit 1103 in FIG. 29. Since it is desirable for the DAC 1112 to output ramp voltages with identical characteristics to the column ADCs 1111-1 and 1111-2, it is desirable for the DAC 1112 to be equidistant from both of them. Although a single DAC 1112 is shown in the example of FIG. 28, two DACs with identical characteristics, one for each of the column ADCs 1111-1 and 1111-2, may be provided. The image signal output unit 1103 will be described in detail later with reference to FIG. 29.
 Furthermore, a signal processing circuit 1113 is provided between the upper and lower column ADCs 1111-1 and 1111-2 and between the row drive unit 1102 and the DAC 1112, and realizes the functions corresponding to the control unit 1121, the image processing unit 1122, the output unit 1123, and the failure detection unit 1124 in FIG. 29.
 In the upper chip 1092, substantially the entire rectangular area surrounded by the TCVs 1093-1, 1093-2, 1093-11, and 1093-12 provided at the upper, lower, left, and right ends is occupied by the pixel array 1101.
 Based on the control signal supplied from the row drive unit 1102 via the TCV 1093-1 and the pixel control lines L (FIG. 29), the pixel array 1101 outputs, of its pixel signals, those of the pixels in the upper half of the figure to the lower chip 1091 via the TCV 1093-11, and those of the pixels in the lower half of the figure to the lower chip 1091 via the TCV 1093-12.
 As indicated by arrow B1 in the figure, the control signal is output from the signal processing circuit 1113, which realizes the row drive unit 1102, via the TCV 1093-1 and the pixel control lines L of the pixel array of the upper chip 1092, to the control line gate 1143 (FIG. 29). The control line gate 1143 (FIG. 29) detects the presence or absence of a failure caused by a break in a pixel control line L or in the TCVs 1093-1 and 1093-2 by comparing the signal output from the control line gate 1143, in response to the control signal supplied via the pixel control line L from the row drive unit 1102 (FIG. 29) for the row address given as command information from the control unit 1121 (FIG. 29), with the detection pulse of the control signal corresponding to that row address supplied from the control unit 1121. Then, as indicated by arrow B2 in the figure, the control line gate 1143 outputs the information on the presence or absence of a failure to the failure detection unit 1124 realized by the signal processing circuit 1113.
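 For illustration only, the comparison performed by the control line gate can be sketched as follows; representing the observed and expected control signals as per-row-address samples is an assumption made for the sketch.

# Conceptual check of the pixel control lines / TCVs: the signal observed at the
# control line gate is compared with the expected detection pulse for each row address.
def check_control_lines(row_addresses, observed_pulse, expected_pulse):
    broken_rows = []
    for addr in row_addresses:
        # observed_pulse / expected_pulse: hypothetical callables returning the sampled
        # level of the control signal for a given row address.
        if observed_pulse(addr) != expected_pulse(addr):
            broken_rows.append(addr)    # possible break in a control line L or a TCV
    return broken_rows                  # reported to the failure detection unit 1124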
 As indicated by arrow A1 in the figure, the column ADC 1111-1 converts the pixel signals of the pixels in the upper half of the pixel array 1101 in the figure, supplied via the TCV 1093-11, into digital signals in column units and outputs them to the signal processing circuit 1113. As indicated by arrow A2 in the figure, the column ADC 1111-2 converts the pixel signals of the pixels in the lower half of the pixel array 1101 in the figure, supplied via the TCV 1093-12, into digital signals in column units and outputs them to the signal processing circuit 1113.
 By adopting this two-layer structure, the upper chip 1092 contains only the pixel array 1101, so a semiconductor process specialized for pixels can be introduced. For example, since the upper chip 1092 contains no circuit transistors, there is no need to pay attention to characteristic variations caused by processes such as a 1000°C annealing step, so a high-temperature process for white-spot countermeasures or the like can be introduced, and as a result the characteristics can be improved.
 In addition, by arranging the failure detection unit 1124 in the lower chip 1091, the signals after passing through the TCVs 1093-1 and 1093-2, in both directions between the lower chip 1091 and the upper chip 1092, can be examined, so failures can be detected appropriately. The upper chip 1092 corresponds to an example of a "first substrate", and the lower chip 1091 corresponds to an example of a "second substrate".
 <<4. Application examples>>
 Next, application examples of the solid-state imaging device according to the present disclosure will be described.
  <4.1. Application example 1 to a mobile body>
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of mobile body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
 図30は、本開示に係る技術が適用され得る移動体制御システムの一例である車両制御システムの概略的な構成例を示すブロック図である。 FIG. 30 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
 車両制御システム12000は、通信ネットワーク12001を介して接続された複数の電子制御ユニットを備える。図30に示した例では、車両制御システム12000は、駆動系制御ユニット12010、ボディ系制御ユニット12020、車外情報検出ユニット12030、車内情報検出ユニット12040、及び統合制御ユニット12050を備える。また、統合制御ユニット12050の機能構成として、マイクロコンピュータ12051、音声画像出力部12052、及び車載ネットワークI/F(interface)12053が図示されている。 The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example illustrated in FIG. 30, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050. As a functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are illustrated.
 駆動系制御ユニット12010は、各種プログラムにしたがって車両の駆動系に関連する装置の動作を制御する。例えば、駆動系制御ユニット12010は、内燃機関又は駆動用モータ等の車両の駆動力を発生させるための駆動力発生装置、駆動力を車輪に伝達するための駆動力伝達機構、車両の舵角を調節するステアリング機構、及び、車両の制動力を発生させる制動装置等の制御装置として機能する。 The drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 includes a driving force generator for generating a driving force of a vehicle such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to wheels, and a steering angle of the vehicle. It functions as a control device such as a steering mechanism that adjusts and a braking device that generates a braking force of the vehicle.
 The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, or fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals, and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
 The vehicle exterior information detection unit 12030 detects information about the outside of the vehicle on which the vehicle control system 12000 is mounted. For example, the imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform detection processing for objects such as people, vehicles, obstacles, signs, or characters on the road surface, or distance detection processing.
 撮像部12031は、光を受光し、その光の受光量に応じた電気信号を出力する光センサである。撮像部12031は、電気信号を画像として出力することもできるし、測距の情報として出力することもできる。また、撮像部12031が受光する光は、可視光であっても良いし、赤外線等の非可視光であっても良い。 The imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light. The imaging unit 12031 can output an electrical signal as an image, or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
 The vehicle interior information detection unit 12040 detects information about the inside of the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and the vehicle interior information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver, or may determine whether the driver is dozing off, based on the detection information input from the driver state detection unit 12041.
 The microcomputer 12051 can calculate control target values for the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance or impact mitigation of the vehicle, following traveling based on the inter-vehicle distance, vehicle speed maintaining traveling, vehicle collision warning, vehicle lane departure warning, and the like.
 Further, the microcomputer 12051 can perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
 Further, the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aimed at anti-glare, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
 音声画像出力部12052は、車両の搭乗者又は車外に対して、視覚的又は聴覚的に情報を通知することが可能な出力装置へ音声及び画像のうちの少なくとも一方の出力信号を送信する。図30の例では、出力装置として、オーディオスピーカ12061、表示部12062及びインストルメントパネル12063が例示されている。表示部12062は、例えば、オンボードディスプレイ及びヘッドアップディスプレイの少なくとも一つを含んでいてもよい。 The sound image output unit 12052 transmits an output signal of at least one of sound and image to an output device capable of visually or audibly notifying information to a vehicle occupant or the outside of the vehicle. In the example of FIG. 30, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices. The display unit 12062 may include at least one of an on-board display and a head-up display, for example.
 図31は、撮像部12031の設置位置の例を示す図である。 FIG. 31 is a diagram illustrating an example of an installation position of the imaging unit 12031.
 図31では、車両12100は、撮像部12031として、撮像部12101,12102,12103,12104,12105を有する。 In FIG. 31, the vehicle 12100 includes imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
 撮像部12101,12102,12103,12104,12105は、例えば、車両12100のフロントノーズ、サイドミラー、リアバンパ、バックドア及び車室内のフロントガラスの上部等の位置に設けられる。フロントノーズに備えられる撮像部12101及び車室内のフロントガラスの上部に備えられる撮像部12105は、主として車両12100の前方の画像を取得する。サイドミラーに備えられる撮像部12102,12103は、主として車両12100の側方の画像を取得する。リアバンパ又はバックドアに備えられる撮像部12104は、主として車両12100の後方の画像を取得する。撮像部12101及び12105で取得される前方の画像は、主として先行車両又は、歩行者、障害物、信号機、交通標識又は車線等の検出に用いられる。 The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as a front nose, a side mirror, a rear bumper, a back door, and an upper part of a windshield in the vehicle interior of the vehicle 12100. The imaging unit 12101 provided in the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100. The imaging units 12102 and 12103 provided in the side mirror mainly acquire an image of the side of the vehicle 12100. The imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100. The forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
 FIG. 31 shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above is obtained.
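 For illustration, a minimal sketch of composing such an overhead view in Python follows, assuming the OpenCV (cv2) and NumPy packages. The homographies and the naive maximum merge are placeholder assumptions; in a real system the warps come from calibrated camera extrinsics and intrinsics.

    # Hedged sketch of composing a top view from several cameras. The identity
    # homographies below are placeholders standing in for calibrated transforms.
    import numpy as np
    import cv2

    def birds_eye_view(images, homographies, canvas_size=(400, 400)):
        """Warp each camera image onto a common ground-plane canvas and merge."""
        canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
        for img, H in zip(images, homographies):
            warped = cv2.warpPerspective(img, H, canvas_size)
            canvas = np.maximum(canvas, warped)   # naive merge; blending omitted
        return canvas

    if __name__ == "__main__":
        dummy = [np.full((240, 320, 3), 60 * (i + 1), dtype=np.uint8) for i in range(4)]
        identity = [np.eye(3, dtype=np.float32) for _ in range(4)]   # placeholder H
        print("composite shape:", birds_eye_view(dummy, identity).shape)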
 撮像部12101ないし12104の少なくとも1つは、距離情報を取得する機能を有していてもよい。例えば、撮像部12101ないし12104の少なくとも1つは、複数の撮像素子からなるステレオカメラであってもよいし、位相差検出用の画素を有する撮像素子であってもよい。 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, in particular the closest three-dimensional object on the traveling path of the vehicle 12100 that is traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Further, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control aimed at automated driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
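 For illustration, the following Python sketch shows the kind of preceding-vehicle selection and gap keeping described above. The object fields, lane width, time gap, and gain are illustrative assumptions, not values from the patent.

    # Hedged sketch: pick the closest same-direction object on the ego path and
    # compute a simple acceleration command to hold a time gap behind it.
    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        distance_m: float        # longitudinal distance ahead of the ego vehicle
        lateral_m: float         # lateral offset from the ego lane centre
        rel_speed_mps: float     # relative speed (positive = pulling away)
        same_direction: bool     # heading roughly the same way as the ego vehicle

    def select_preceding_vehicle(objects, lane_half_width_m=1.8):
        """Closest object on the ego path that travels in the same direction."""
        candidates = [o for o in objects
                      if o.same_direction and abs(o.lateral_m) <= lane_half_width_m]
        return min(candidates, key=lambda o: o.distance_m, default=None)

    def gap_control(ego_speed_mps, target, time_gap_s=2.0, k_p=0.5):
        """Very simple proportional controller on the distance error (m/s^2)."""
        if target is None:
            return 0.0                       # nothing ahead: keep current speed
        desired_gap = time_gap_s * ego_speed_mps
        return k_p * (target.distance_m - desired_gap) / max(ego_speed_mps, 1.0)

    if __name__ == "__main__":
        objs = [TrackedObject(45.0, 0.3, -1.0, True),
                TrackedObject(20.0, 3.5, 0.0, True),    # next lane, ignored
                TrackedObject(60.0, -0.2, 2.0, True)]
        lead = select_preceding_vehicle(objs)
        print("lead distance:", lead.distance_m, "m")
        print("accel command:", round(gap_control(25.0, lead), 2), "m/s^2")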
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data concerning three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see. The microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
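 For illustration, a minimal Python sketch of such a decision flow follows. The risk formula, thresholds, and obstacle fields are illustrative assumptions and do not reproduce the patent's method.

    # Hedged sketch: score collision risk from a time-to-collision estimate and
    # choose between a driver warning and forced deceleration.
    def collision_risk(distance_m, closing_speed_mps):
        """Return a 0..1 risk score based on time-to-collision (illustrative)."""
        if closing_speed_mps <= 0.0:
            return 0.0                       # not closing in on the obstacle
        ttc = distance_m / closing_speed_mps
        return max(0.0, min(1.0, 1.0 - ttc / 4.0))   # 4 s horizon, clipped to [0, 1]

    def decide_action(obstacles, warn_threshold=0.5, brake_threshold=0.8):
        """obstacles: list of dicts with distance, closing speed, and visibility."""
        actions = []
        for obs in obstacles:
            risk = collision_risk(obs["distance_m"], obs["closing_speed_mps"])
            if risk >= brake_threshold:
                actions.append(("forced_deceleration", obs["id"], risk))
            elif risk >= warn_threshold or (risk > 0.0 and not obs["visible"]):
                actions.append(("warn_driver", obs["id"], risk))
        return actions

    if __name__ == "__main__":
        obstacles = [
            {"id": "ped-1", "distance_m": 6.0, "closing_speed_mps": 5.0, "visible": False},
            {"id": "car-2", "distance_m": 40.0, "closing_speed_mps": 2.0, "visible": True},
        ]
        for action in decide_action(obstacles):
            print(action)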
 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points from the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on the series of feature points indicating the outline of an object to determine whether the object is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. The audio image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating the pedestrian is displayed at a desired position.
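 For illustration, the following Python sketch is a deliberately simplified stand-in for the two-step procedure described above (feature/edge extraction followed by pattern matching on the object outline), assuming OpenCV 4.x and NumPy. The template, thresholds, and shape-matching criterion are assumptions; a production pedestrian detector would be far more elaborate.

    # Hedged sketch: extract edges, compare candidate contours against a template
    # outline, and return bounding boxes for a rectangular emphasis overlay.
    import numpy as np
    import cv2

    def find_pedestrian_like(image_gray, template_contour, max_score=0.3):
        """Return bounding boxes of contours whose shape resembles the template."""
        edges = cv2.Canny(image_gray, 50, 150)                 # feature/edge extraction
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
        boxes = []
        for c in contours:
            if cv2.contourArea(c) < 200:                       # ignore tiny blobs
                continue
            score = cv2.matchShapes(c, template_contour,
                                    cv2.CONTOURS_MATCH_I1, 0.0)  # smaller = closer
            if score < max_score:
                boxes.append(cv2.boundingRect(c))
        return boxes

    def draw_emphasis(image_bgr, boxes):
        """Superimpose rectangular contour lines on recognized candidates."""
        for (x, y, w, h) in boxes:
            cv2.rectangle(image_bgr, (x, y), (x + w, y + h), (0, 0, 255), 2)
        return image_bgr

    if __name__ == "__main__":
        template = np.zeros((120, 60), dtype=np.uint8)
        cv2.ellipse(template, (30, 60), (18, 50), 0, 0, 360, 255, -1)
        t_contours, _ = cv2.findContours(template, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
        scene = np.zeros((240, 320), dtype=np.uint8)
        cv2.ellipse(scene, (160, 120), (18, 50), 0, 0, 360, 255, -1)
        print("candidate boxes:", find_pedestrian_like(scene, t_contours[0]))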
 An example of a vehicle control system to which the technology according to the present disclosure can be applied has been described above. The technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above. Specifically, the solid-state imaging device 1 shown in FIG. 1 can be applied to the imaging unit 12031. By applying the technology according to the present disclosure to the imaging unit 12031, for example, when an abnormality occurs in at least some of the pixels of the solid-state imaging device constituting the imaging unit 12031, the abnormality can be detected. By using such a mechanism, for example, when an abnormality occurs in some pixels, information indicating that the abnormality has occurred can be reported to the user via a predetermined output unit. Further, in the vehicle control system 7000, a function for controlling the vehicle can be limited based on the recognition result. Specific examples of the function for controlling the vehicle include a vehicle collision avoidance or impact mitigation function, a following traveling function based on the inter-vehicle distance, a vehicle speed maintaining traveling function, a vehicle collision warning function, and a vehicle lane departure warning function. When it is determined as a result of the recognition processing that a malfunction has occurred in the imaging unit 7410, the function for controlling the vehicle can be limited or prohibited. This makes it possible to prevent an accident caused by erroneous detection due to a malfunction of the imaging unit 7410. As another example, a pixel signal output from a pixel in which an abnormality has occurred can be corrected based on pixel signals from other pixels that are operating normally.
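 For illustration, the following Python sketch shows one way such a correction could look: a pixel flagged as abnormal is replaced by a value derived from normally operating neighbours. The median-of-neighbours rule is an illustrative assumption, not the correction method claimed here.

    # Hedged sketch: replace flagged pixels by the median of valid neighbours.
    import numpy as np

    def correct_defective_pixels(frame, defect_mask):
        """frame: 2-D array of pixel values; defect_mask: True where abnormal."""
        corrected = frame.copy()
        rows, cols = frame.shape
        for r, c in zip(*np.nonzero(defect_mask)):
            neighbours = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols \
                            and not defect_mask[rr, cc]:
                        neighbours.append(frame[rr, cc])
            if neighbours:
                corrected[r, c] = np.median(neighbours)
        return corrected

    if __name__ == "__main__":
        frame = np.full((5, 5), 100, dtype=np.uint16)
        frame[2, 2] = 4095                       # stuck-high (white) pixel
        mask = np.zeros_like(frame, dtype=bool)
        mask[2, 2] = True
        print(correct_defective_pixels(frame, mask)[2, 2])   # -> 100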
  <4.2.移動体への応用例2>
 続いて、移動体に適用される撮像装置を利用して実現される制御のより具体的な一例について説明する。
<4.2. Application Example 2 for Moving Objects>
Next, a more specific example of control realized using an imaging device applied to a moving object will be described.
 例えば、図32は、移動体に適用される撮像装置の概略的な構成の一例について示したブロック図である。なお、図32に示す撮像装置800は、例えば、図30に示す撮像部12031に相当する。図32に示すように、撮像装置800は、光学系801と、固体撮像素子803と、制御ユニット805と、通信部807とを含む。 For example, FIG. 32 is a block diagram showing an example of a schematic configuration of an imaging apparatus applied to a moving body. Note that the imaging apparatus 800 illustrated in FIG. 32 corresponds to, for example, the imaging unit 12031 illustrated in FIG. As illustrated in FIG. 32, the imaging apparatus 800 includes an optical system 801, a solid-state imaging device 803, a control unit 805, and a communication unit 807.
 The solid-state imaging device 803 may correspond to, for example, the imaging unit 12031 shown in FIG. 30. That is, light that has entered the imaging apparatus 800 through the optical system 801 such as a lens is photoelectrically converted into an electric signal by the solid-state imaging device 803, and an image corresponding to the electric signal and ranging information corresponding to the electric signal are output to the control unit 805.
 The control unit 805 is configured, for example, as an ECU (Electronic Control Unit), and executes various kinds of processing based on the image and the ranging information output from the solid-state imaging device 803. As a specific example, the control unit 805 performs various kinds of analysis processing on the image output from the solid-state imaging device 803, and based on the analysis results, recognizes external objects such as people, vehicles, obstacles, signs, or characters on the road surface, and measures the distance to such objects.
 また、制御ユニット805は、通信部807を介して車載ネットワーク(CAN:Controller Area Network)に接続される。通信部807は、所謂CAN通信とのインタフェースに相当する。このような構成に基づき、例えば、制御ユニット805は、車載ネットワークに接続された他の制御ユニット(例えば、図30に示した統合制御ユニット12050)との間で各種情報を送受信する。 Also, the control unit 805 is connected to a vehicle-mounted network (CAN: Controller Area Network) via the communication unit 807. The communication unit 807 corresponds to an interface with so-called CAN communication. Based on such a configuration, for example, the control unit 805 transmits / receives various information to / from other control units (for example, the integrated control unit 12050 shown in FIG. 30) connected to the in-vehicle network.
 以上のような構成に基づき、制御ユニット805は、例えば、上述したような物体の認識結果や当該物体までの距離の測定結果を利用することで、多様な機能を提供することが可能である。 Based on the configuration described above, the control unit 805 can provide various functions by using, for example, the recognition result of the object and the measurement result of the distance to the object as described above.
 Specific examples of the functions described above include the following.
 ・FCW(Pedestrian Detection for Forward Collision Warning)
 ・AEB(Automatic Emergency Braking)
 ・Vehicle Detection for FCW/AEB
 ・LDW(Lane Departure Warning)
 ・TJP(Traffic Jam Pilot)
 ・LKA(Lane Keeping Aid)
 ・VO ACC(Vision Only Adaptive Cruise Control)
 ・VO TSR(Vision Only Traffic Sign Recognition)
 ・IHC(Intelligent Headlamp Control)
 As a more specific example, the control unit 805 can calculate, in a situation where the vehicle is about to collide with an external object such as a person or another vehicle, the time until the vehicle collides with that object. Therefore, for example, by notifying the integrated control unit 12050 of the result of this time calculation, the integrated control unit 12050 can use the notified information to realize the FCW described above.
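 For illustration, a minimal Python sketch of such a time-to-collision estimate follows; the warning thresholds are illustrative assumptions.

    # Hedged sketch: estimate time-to-collision and map it to a warning level.
    def time_to_collision(distance_m, closing_speed_mps):
        """Seconds until impact at the current closing speed; None if not closing."""
        if closing_speed_mps <= 0.0:
            return None
        return distance_m / closing_speed_mps

    def fcw_level(ttc_s, warn_s=2.6, urgent_s=1.6):
        """Map a TTC value to a warning level (0 = none, 1 = warn, 2 = urgent)."""
        if ttc_s is None:
            return 0
        if ttc_s <= urgent_s:
            return 2
        return 1 if ttc_s <= warn_s else 0

    if __name__ == "__main__":
        ttc = time_to_collision(distance_m=18.0, closing_speed_mps=9.0)   # 2.0 s
        print("TTC:", ttc, "s, level:", fcw_level(ttc))                   # level 1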
 また、他の一例として、制御ユニット805は、車両の前方の画像の解析結果に基づき、先行車のブレーキランプを検出することが可能である。即ち、当該検出結果が統合制御ユニット12050に通知されることで、統合制御ユニット12050は、通知された情報を、上記AEBの実現に利用することが可能である。 As another example, the control unit 805 can detect the brake lamp of the preceding vehicle based on the analysis result of the image ahead of the vehicle. That is, when the detection result is notified to the integrated control unit 12050, the integrated control unit 12050 can use the notified information for realizing the AEB.
 また、他の一例として、制御ユニット805は、車両の前方の画像の解析結果に基づき、当該車両が走行中のレーンの認識や、当該レーンの端や縁石等を認識することが可能である。そのため、当該認識結果が統合制御ユニット12050に通知されることで、統合制御ユニット12050は、通知された情報を、上記LDWの実現に利用することが可能である。 As another example, the control unit 805 can recognize a lane in which the vehicle is traveling, an edge of the lane, a curb, and the like based on an analysis result of an image in front of the vehicle. Therefore, when the recognition result is notified to the integrated control unit 12050, the integrated control unit 12050 can use the notified information for realizing the LDW.
 Further, the control unit 805 may recognize the presence or absence of a preceding vehicle based on the analysis result of the image in front of the vehicle and notify the integrated control unit 12050 of the recognition result. This allows the integrated control unit 12050, for example, to control the vehicle speed according to the presence or absence of a preceding vehicle while the TJP described above is being executed. The control unit 805 may also recognize traffic signs based on the analysis result of the image in front of the vehicle and notify the integrated control unit 12050 of the recognition result. This allows the integrated control unit 12050, for example, to recognize the speed limit from the sign recognition result and control the vehicle speed according to that limit while the TJP is being executed. Similarly, the control unit 805 can recognize expressway entrances and exits, and whether the traveling vehicle is approaching a curve, and these recognition results can be used by the integrated control unit 12050 for vehicle control.
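 For illustration, the following short Python sketch shows one way a recognized speed limit and a preceding vehicle could cap the target speed during such control. The function name and values are assumptions.

    # Hedged sketch: clamp the driver-set speed by the recognized limit and by the
    # speed of the preceding vehicle (if any).
    def target_speed_kmh(set_speed_kmh, recognized_limit_kmh, lead_vehicle_speed_kmh):
        """Never exceed the recognized limit, and never drive faster than the lead."""
        target = set_speed_kmh
        if recognized_limit_kmh is not None:
            target = min(target, recognized_limit_kmh)
        if lead_vehicle_speed_kmh is not None:
            target = min(target, lead_vehicle_speed_kmh)
        return target

    if __name__ == "__main__":
        print(target_speed_kmh(60, recognized_limit_kmh=40, lead_vehicle_speed_kmh=55))  # 40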
 また、制御ユニット805は、車両の前方の画像の解析結果に基づき、当該車両の前方に位置する光源を認識することも可能である。即ち、当該光源の認識結果が統合制御ユニット12050に通知されることで、当該統合制御ユニット12050は、通知された情報を、上記IHCの実現に利用することが可能である。具体的な一例として、統合制御ユニット12050は、認識された光源の光量に応じて、ヘッドランプの光量を制御することが可能である。また、他の一例として、統合制御ユニット12050は、認識された光源の位置に応じて、左右のヘッドランプのうちいずれかの光量を制限することも可能となる。 The control unit 805 can also recognize the light source located in front of the vehicle based on the analysis result of the image in front of the vehicle. That is, the integrated control unit 12050 is notified of the recognition result of the light source, so that the integrated control unit 12050 can use the notified information for realizing the IHC. As a specific example, the integrated control unit 12050 can control the light amount of the headlamp according to the recognized light amount of the light source. As another example, the integrated control unit 12050 can limit the amount of light of either the left or right headlamp according to the recognized position of the light source.
 Further, as described above, by applying the solid-state imaging device according to the present embodiment, when an abnormality occurs in the solid-state imaging device 803, for example, the control unit 805 can detect the abnormality based on the information output from the solid-state imaging device 803. Therefore, for example, by the control unit 805 notifying the integrated control unit 12050 of the detection result of the abnormality of the solid-state imaging device 803 via the in-vehicle network, the integrated control unit 12050 can execute various kinds of control for ensuring safety.
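 For illustration, the following Python sketch shows how such a fault report could be sent on the in-vehicle network, assuming the python-can package. The arbitration ID, payload layout, and virtual channel are invented for this example and are not part of the embodiment.

    # Hedged sketch: send a one-frame fault report over CAN using python-can.
    import can

    FAULT_MSG_ID = 0x4A1        # hypothetical ID reserved for imaging-device faults

    def report_sensor_fault(bus, fault_code, row=0):
        """Send a fault report: [status, fault code, row high byte, row low byte]."""
        payload = [0x01, fault_code & 0xFF, (row >> 8) & 0xFF, row & 0xFF]
        msg = can.Message(arbitration_id=FAULT_MSG_ID, data=payload,
                          is_extended_id=False)
        bus.send(msg)

    if __name__ == "__main__":
        # A virtual bus keeps the example runnable without CAN hardware.
        bus = can.interface.Bus(channel="test", bustype="virtual")
        report_sensor_fault(bus, fault_code=0x12, row=480)
        bus.shutdown()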
 As a specific example, the integrated control unit 12050 may notify the user, via various output devices, that an abnormality has occurred in the solid-state imaging device 803. Examples of such output devices include the audio speaker 12061, the display unit 12062, and the instrument panel 12063 shown in FIG. 30.
 また、他の一例として、統合制御ユニット12050は、固体撮像素子803に異常が発生したことを認識した場合に、認識結果に応じて車両の動作を制御してもよい。より具体的な一例として、統合制御ユニット12050は、上述したTJPやLKA等のような所謂自動制御の機能を制限してもよい。また、統合制御ユニット12050は、車速を制限する等のような、安全を確保するための制御を実行してもよい。 As another example, when the integrated control unit 12050 recognizes that an abnormality has occurred in the solid-state imaging device 803, the integrated control unit 12050 may control the operation of the vehicle according to the recognition result. As a more specific example, the integrated control unit 12050 may limit a so-called automatic control function such as TJP or LKA described above. Further, the integrated control unit 12050 may execute control for ensuring safety, such as limiting the vehicle speed.
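 For illustration, a minimal Python sketch of such a degradation policy follows: when a fault is reported for the imaging device, automated functions are withdrawn and a speed cap is applied. The specific flags and values are illustrative assumptions.

    # Hedged sketch: apply a conservative control policy when a sensor fault is reported.
    def apply_fault_policy(vehicle_state, imaging_fault_detected):
        """Return an updated copy of the vehicle control state."""
        state = dict(vehicle_state)
        if imaging_fault_detected:
            state["tjp_enabled"] = False          # withdraw Traffic Jam Pilot
            state["lka_enabled"] = False          # withdraw Lane Keeping Aid
            state["speed_cap_kmh"] = 60           # conservative cap until serviced
            state["driver_notified"] = True       # show a warning on display/panel
        return state

    if __name__ == "__main__":
        before = {"tjp_enabled": True, "lka_enabled": True,
                  "speed_cap_kmh": None, "driver_notified": False}
        print(apply_fault_policy(before, imaging_fault_detected=True))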
 As described above, by applying the technology according to the present disclosure to an in-vehicle system of a mobile body such as an automobile, even when an abnormality occurs in the solid-state imaging device 803 and it becomes difficult to operate the various recognition processes normally, the abnormality can be detected. Therefore, for example, depending on the detection result of the abnormality, various measures for ensuring safety can be executed, such as reporting information about the abnormality to the user or controlling the operation of the components related to the various recognition processes.
 <<5.むすび>>
 以上、添付図面を参照しながら本開示の好適な実施形態について詳細に説明したが、本開示の技術的範囲はかかる例に限定されない。本開示の技術分野における通常の知識を有する者であれば、特許請求の範囲に記載された技術的思想の範疇内において、各種の変更例または修正例に想到し得ることは明らかであり、これらについても、当然に本開示の技術的範囲に属するものと了解される。
<< 5. Conclusion >>
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical idea described in the claims, and it is understood that these naturally also belong to the technical scope of the present disclosure.
 The effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure can exhibit other effects that are apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
 なお、以下のような構成も本開示の技術的範囲に属する。
(1)
 複数の画素と、
 前記複数の画素それぞれによる露光を制御する制御部と、
 前記複数の画素のうち少なくとも一部の画素による1回以上の露光が実行される第1の期間のうちの最後の露光結果に基づく画素信号の読み出しの完了後から、前記第1の期間よりも後の前記1回以上の露光が実行される第2の期間における最初の露光が開始されるまでの第3の期間に、所定の試験を実行する処理部と、
 を備える撮像装置。
(2)
 前記第1の期間及び前記第2の期間は、所定のフレームレートに応じた単位フレーム期間である、前記(1)に記載の撮像装置。
(3)
 前記第3の期間は、前記単位フレーム期間における垂直ブランキング期間に応じて設定される、前記(2)に記載の撮像装置。
(4)
 前記単位フレーム期間において前記画素により複数回の露光が実行され、
 前記複数回の露光間における露光時間の合計が、前記単位フレーム期間よりも短い、
 前記(3)に記載の撮像装置。
(5)
 前記垂直ブランキング期間は、前記複数回の露光間における露光比に応じて決定される、前記(4)に記載の撮像装置。
(6)
 前記制御部は、行列状に2次元配列された前記複数の画素それぞれによる露光の開始タイミングを行ごとに制御し、
 前記処理部は、前記行ごとに、当該行に含まれる画素による前記第1の期間のうちの前記最後の露光結果に基づく前記画素信号の読み出しの完了後から、前記第2の期間における前記最初の露光が開始されるまでの前記第3の期間に、前記試験を実行する、
 前記(1)~(5)のいずれか一項に記載の撮像装置。
(7)
 前記処理部は、前記試験として、前記一部の画素を対象とした試験を実行する、前記(1)~(6)のいずれか一項に記載の撮像装置。
(8)
 前記複数の画素それぞれに対して駆動信号を供給する駆動回路を備え、
 前記処理部は、前記試験として、前記駆動回路を対象とした試験を実行する、前記(1)~(7)のいずれか一項に記載の撮像装置。
(9)
 前記画素から読み出されたアナログの前記画素信号をデジタル信号に変換するAD変換部を備え、
 前記処理部は、前記試験として、前記AD変換部を対象とした試験を実行する、前記(1)~(8)のいずれか一項に記載の撮像装置。
(10)
 前記処理部は、前記試験として、前記一部の画素に接続された配線を対象とした試験を実行する、前記(1)~(9)のいずれか一項に記載の撮像装置。
(11)
 前記試験の結果に応じた情報が所定の出力先に出力されるように制御する出力制御部を備える、前記(1)~(10)のいずれか一項に記載の撮像装置。
(12)
 前記試験の結果に応じて、少なくとも一部の前記画素から出力される前記画素信号を補正する補正処理部を備える、前記(1)~(11)のいずれか一項に記載の撮像装置。
(13)
 複数の画素それぞれによる露光を制御する制御部と、
 前記複数の画素のうち少なくとも一部の画素による1回以上の露光が実行される第1の期間のうちの最後の露光結果に基づく画素信号の読み出しの完了後から、前記第1の期間よりも後の前記1回以上の露光が実行される第2の期間における最初の露光が開始されるまでの第3の期間に、前記一部の画素を対象とした試験を実行する処理部と、
 を備える制御装置。
(14)
 前記試験の結果に応じた情報が、所定の出力部に提示されるように制御する出力制御部を備える、前記(13)に記載の制御装置。
(15)
 前記試験の結果に応じて、前記複数の画素からの前記画素信号の読み出し結果に基づく画像を補正する補正処理部を備える、前記(13)または(14)に記載の制御装置。
(16)
 コンピュータが、
 複数の画素それぞれによる露光を制御することと、
 前記複数の画素のうち少なくとも一部の画素による1回以上の露光が実行される第1の期間のうちの最後の露光結果に基づく画素信号の読み出しの完了後から、前記第1の期間よりも後の前記1回以上の露光が実行される第2の期間における最初の露光が開始されるまでの第3の期間に、前記一部の画素を対象とした試験を実行することと、
 を含む制御方法。
The following configurations also belong to the technical scope of the present disclosure.
(1)
A plurality of pixels;
A control unit for controlling exposure by each of the plurality of pixels;
A processing unit that executes a predetermined test in a third period from completion of readout of a pixel signal based on a last exposure result in a first period, in which one or more exposures are performed by at least some of the plurality of pixels, until a start of a first exposure in a second period, later than the first period, in which the one or more exposures are performed; and
An imaging apparatus comprising:
(2)
The imaging apparatus according to (1), wherein the first period and the second period are unit frame periods corresponding to a predetermined frame rate.
(3)
The imaging apparatus according to (2), wherein the third period is set according to a vertical blanking period in the unit frame period.
(4)
In the unit frame period, a plurality of exposures are performed by the pixels,
A total exposure time between the plurality of exposures is shorter than the unit frame period;
The imaging device according to (3).
(5)
The imaging device according to (4), wherein the vertical blanking period is determined according to an exposure ratio between the plurality of exposures.
(6)
The control unit controls the exposure start timing for each of the plurality of pixels arranged in a two-dimensional matrix, for each row,
The processing unit executes, for each of the rows, the test in the third period from completion of readout of the pixel signal based on the last exposure result in the first period by the pixels included in that row until the start of the first exposure in the second period,
The imaging apparatus according to any one of (1) to (5).
(7)
The imaging apparatus according to any one of (1) to (6), wherein the processing unit executes a test for the some pixels as the test.
(8)
A drive circuit for supplying a drive signal to each of the plurality of pixels;
The imaging apparatus according to any one of (1) to (7), wherein the processing unit executes a test for the drive circuit as the test.
(9)
An AD converter that converts the analog pixel signal read from the pixel into a digital signal;
The imaging apparatus according to any one of (1) to (8), wherein the processing unit executes a test for the AD conversion unit as the test.
(10)
The imaging apparatus according to any one of (1) to (9), wherein the processing unit executes a test for wiring connected to the some pixels as the test.
(11)
The imaging apparatus according to any one of (1) to (10), further including an output control unit configured to control information according to a result of the test to be output to a predetermined output destination.
(12)
The imaging apparatus according to any one of (1) to (11), further including a correction processing unit that corrects the pixel signals output from at least some of the pixels in accordance with a result of the test.
(13)
A control unit for controlling exposure by each of a plurality of pixels;
A processing unit that executes a test targeting the some pixels in a third period from completion of readout of a pixel signal based on a last exposure result in a first period, in which one or more exposures are performed by at least some of a plurality of pixels, until a start of a first exposure in a second period, later than the first period, in which the one or more exposures are performed; and
A control device comprising:
(14)
The control device according to (13), further including an output control unit that performs control so that information according to the result of the test is presented to a predetermined output unit.
(15)
The control device according to (13) or (14), further including a correction processing unit that corrects an image based on a readout result of the pixel signal from the plurality of pixels according to the result of the test.
(16)
Computer
Controlling exposure by each of a plurality of pixels;
Executing a test targeting the some pixels in a third period from completion of readout of a pixel signal based on a last exposure result in a first period, in which one or more exposures are performed by at least some of the plurality of pixels, until a start of a first exposure in a second period, later than the first period, in which the one or more exposures are performed;
Control method.
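 For illustration, the following Python sketch shows the timing relationship described in configuration (1) above: the predetermined test is scheduled in the third period, i.e. after readout of the last exposure of one unit frame period completes and before the first exposure of the next unit frame period starts (the vertical blanking interval). The durations and the test body are placeholders; in an actual sensor this sequencing is performed by the control circuitry, not by software like this.

    # Hedged sketch: schedule the self-test inside the third period (vertical
    # blanking) between two unit frame periods. All numbers are illustrative.
    FRAME_PERIOD_MS = 33.3          # unit frame period for ~30 fps
    EXPOSURE_MS = 10.0              # total exposure time within one frame
    READOUT_MS = 15.0               # time to read out the exposed rows

    def third_period_ms():
        """Idle time from the end of readout to the next frame's first exposure."""
        return FRAME_PERIOD_MS - (EXPOSURE_MS + READOUT_MS)

    def run_self_test():
        # Placeholder for the pixel / drive-circuit / AD-converter checks.
        return "no fault detected"

    def frame_loop(num_frames=3):
        log = []
        for n in range(num_frames):
            log.append((n, "expose", EXPOSURE_MS))
            log.append((n, "readout", READOUT_MS))
            blank = third_period_ms()
            if blank > 0:                       # the test fits in the blanking interval
                log.append((n, "self-test (" + run_self_test() + ")", blank))
        return log

    if __name__ == "__main__":
        for frame, phase, duration in frame_loop():
            print("frame %d: %-30s %.1f ms" % (frame, phase, duration))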
 1、1a、1c、1d 固体撮像装置
 2、2c 画素
 2a  ダミー画素
 3   画素アレイ部
 4   アドレスレコーダ
 5   画素タイミング駆動回路
 6   カラム信号処理回路
 7   センサコントローラ
 8   アナログ電位生成回路
 101 制御部
 111 画素アレイ部
 112 選択部
 114 定電流回路部
 121、122 画素
 131、132、133 スイッチ
 141 比較器
 143 カウンタ
 152 ノード
 153 カウンタ
 161、162 MOSトランジスタ
 211 センサデータユニット
 221 センサデータユニット
 401 DSP
DESCRIPTION OF SYMBOLS
1, 1a, 1c, 1d  Solid-state imaging device
2, 2c  Pixel
2a  Dummy pixel
3  Pixel array part
4  Address recorder
5  Pixel timing drive circuit
6  Column signal processing circuit
7  Sensor controller
8  Analog potential generation circuit
101  Control part
111  Pixel array part
112  Selection unit
114  Constant current circuit unit
121, 122  Pixel
131, 132, 133  Switch
141  Comparator
143  Counter
152  Node
153  Counter
161, 162  MOS transistor
211  Sensor data unit
221  Sensor data unit
401  DSP

Claims (35)

  1.  車両に搭載され、前記車両の周辺領域を撮像して画像を生成する撮像装置と、
     前記車両に搭載され、前記車両を制御する機能に関する処理を実行する処理装置と、を備え、
     前記撮像装置は、複数の画素と、前記複数の画素それぞれによる露光を制御する制御部と、所定の試験を実行する処理部と、を有し、
     前記制御部は、前記複数の画素のうち少なくとも一部の画素による1回以上の露光が実行される第1の期間において画素信号の読み出しが完了した後に、1回以上の露光が実行される第2の期間において画素信号の読み出しが開始されるように露光を制御し、
     前記処理部は、前記第1の期間における画素信号の読み出しと前記第2の期間における画素信号の読み出しとの間である第3の期間に、前記所定の試験を実行し、
     前記処理装置は、前記所定の試験の結果に基づいて、前記車両を制御する機能を制限する、
     撮像システム。
    An imaging device that is mounted on a vehicle and images a peripheral region of the vehicle to generate an image;
    A processing device mounted on the vehicle and executing a process related to a function of controlling the vehicle,
    The imaging apparatus includes a plurality of pixels, a control unit that controls exposure by each of the plurality of pixels, and a processing unit that executes a predetermined test.
    Wherein the control unit controls the exposure such that, after readout of pixel signals is completed in a first period in which one or more exposures are performed by at least some of the plurality of pixels, readout of pixel signals is started in a second period in which one or more exposures are performed,
    The processing unit performs the predetermined test in a third period between reading of the pixel signal in the first period and reading of the pixel signal in the second period,
    The processing device limits a function of controlling the vehicle based on a result of the predetermined test.
    Imaging system.
  2.  前記処理部は、前記所定の試験の結果に基づいて、前記撮像装置の故障状態を検出し、
     前記処理装置は、前記撮像装置の故障状態を検出した場合に、前記車両を制御する機能を制限する、請求項1に記載の撮像システム。
    The processing unit detects a failure state of the imaging device based on the result of the predetermined test,
    The imaging system according to claim 1, wherein the processing device restricts a function of controlling the vehicle when a failure state of the imaging device is detected.
  3.  前記車両を制御する機能が制限された場合、前記車両を制御する機能が制限された旨を乗員に通知する、請求項1に記載の撮像システム。 The imaging system according to claim 1, wherein when the function of controlling the vehicle is restricted, the occupant is notified that the function of controlling the vehicle is restricted.
  4.  前記第1の期間及び前記第2の期間は、所定のフレームレートに応じた単位フレーム期間である、請求項1に記載の撮像システム。
    The imaging system according to claim 1, wherein the first period and the second period are unit frame periods according to a predetermined frame rate.
  5.  前記第3の期間は、前記単位フレーム期間における垂直ブランキング期間に応じて設定される、請求項4に記載の撮像システム。 The imaging system according to claim 4, wherein the third period is set according to a vertical blanking period in the unit frame period.
  6.  前記単位フレーム期間において前記複数の画素により複数回の露光が実行され、
     前記複数回の露光間における露光時間の合計が、前記単位フレーム期間よりも短い、請求項4に記載の撮像システム。
    A plurality of exposures are performed by the plurality of pixels in the unit frame period,
    The imaging system according to claim 4, wherein a total exposure time between the plurality of exposures is shorter than the unit frame period.
  7.  前記垂直ブランキング期間は、複数回の露光間における露光比に応じて決定される、請求項5に記載の撮像システム。 The imaging system according to claim 5, wherein the vertical blanking period is determined according to an exposure ratio between a plurality of exposures.
  8.  前記処理部は、前記第3の期間のうち、前記第1の期間における画素信号の読み出しと前記第2の期間における画素信号のシャッターとの間に、前記所定の試験を実行する、請求項1に記載の撮像システム。 2. The processing unit executes the predetermined test between readout of a pixel signal in the first period and shutter of the pixel signal in the second period in the third period. The imaging system described in 1.
  9.  前記制御部は、行列状に2次元配列された前記複数の画素それぞれによる露光の開始タイミングを行ごとに制御し、
     前記処理部は、前記行ごとに、前記第3の期間に、前記試験を実行する、
     請求項8に記載の撮像システム。
    The control unit controls the exposure start timing for each of the plurality of pixels arranged in a two-dimensional matrix, for each row,
    The processing unit executes the test in the third period for each row.
    The imaging system according to claim 8.
  10.  前記第3の期間は、垂直ブランキング期間である、請求項1に記載の撮像システム。 The imaging system according to claim 1, wherein the third period is a vertical blanking period.
  11.  前記処理部は、前記試験として、前記一部の画素を対象とした試験を実行する、請求項1に記載の撮像システム。
    The imaging system according to claim 1, wherein the processing unit executes a test for the some pixels as the test.
  12.  前記複数の画素それぞれに対して駆動信号を供給する駆動回路を備え、
     前記処理部は、前記試験として、前記駆動回路を対象とした試験を実行する、請求項1に記載の撮像システム。
     A drive circuit for supplying a drive signal to each of the plurality of pixels;
     The imaging system according to claim 1, wherein the processing unit executes a test for the drive circuit as the test.
  13.  前記画素から読み出されたアナログの前記画素信号をデジタル信号に変換するAD変換部を備え、
     前記処理部は、前記試験として、前記AD変換部を対象とした試験を実行する、請求項1に記載の撮像システム。
    An AD converter that converts the analog pixel signal read from the pixel into a digital signal;
    The imaging system according to claim 1, wherein the processing unit executes a test for the AD conversion unit as the test.
  14.  前記処理部は、前記試験として、前記一部の画素に接続された配線を対象とした試験を実行する、請求項1に記載の撮像システム。
    The imaging system according to claim 1, wherein the processing unit executes a test for wiring connected to the some pixels as the test.
  15.  前記試験の結果に応じた情報が所定の出力先に出力されるように制御する出力制御部を備える、請求項1に記載の撮像システム。
    The imaging system according to claim 1, further comprising an output control unit configured to control information according to the test result to be output to a predetermined output destination.
  16.  前記試験の結果に応じて、少なくとも一部の前記画素から出力される前記画素信号を補正する補正処理部を備える、請求項1に記載の撮像システム。
    The imaging system according to claim 1, further comprising: a correction processing unit that corrects the pixel signals output from at least some of the pixels according to the result of the test.
  17.  前記複数の画素は、第1の基板に配置され、
     前記制御部および前記処理部は、前記第1の基板と積層される第2の基板に配置される、請求項1に記載の撮像システム。
    The plurality of pixels are disposed on a first substrate;
    The imaging system according to claim 1, wherein the control unit and the processing unit are arranged on a second substrate stacked with the first substrate.
  18.  前記第1の基板に配置され、前記複数の画素に接続される画素制御線と、
     前記第2の基板に配置され、前記複数の画素それぞれに対して駆動信号を供給する駆動回路を備え、
     前記画素制御線の一端は、第1の接続電極を介して前記駆動回路に接続され、
     前記画素制御線の他端は、第2の接続電極を介して前記処理部に接続され、
     前記駆動回路は、前記第1の接続電極を介して前記画素制御線に前記駆動信号を供給し、
     前記処理部は、前記第1の接続電極、前記画素制御線、および前記第2の接続電極を介して供給される前記駆動信号に基づいて前記試験を実行する、請求項17に記載の撮像システム。
    A pixel control line disposed on the first substrate and connected to the plurality of pixels;
    A driving circuit disposed on the second substrate and supplying a driving signal to each of the plurality of pixels;
    One end of the pixel control line is connected to the drive circuit via a first connection electrode,
    The other end of the pixel control line is connected to the processing unit via a second connection electrode,
    The drive circuit supplies the drive signal to the pixel control line via the first connection electrode;
    The imaging system according to claim 17, wherein the processing unit performs the test based on the drive signal supplied via the first connection electrode, the pixel control line, and the second connection electrode. .
  19.  前記所定の試験の結果に応じた情報を出力する出力部を備える、請求項1に記載の撮像システム。 The imaging system according to claim 1, further comprising an output unit that outputs information according to a result of the predetermined test.
  20.  前記所定の試験の結果に応じて、前記複数の画素からの前記画素信号の読み出し結果に基づく画像を補正する補正処理部を備える、請求項1に記載の撮像システム。 The imaging system according to claim 1, further comprising: a correction processing unit that corrects an image based on a readout result of the pixel signal from the plurality of pixels according to a result of the predetermined test.
  21.  複数の画素と、
     前記複数の画素それぞれによる露光を制御する制御部と、
     所定の試験を実行する処理部と、を備え、
     前記制御部は、前記複数の画素のうち少なくとも一部の画素による1回以上の露光が実行される第1の期間において画素信号の読み出しが完了した後に、1回以上の露光が実行される第2の期間において画素信号の読み出しが開始されるように露光を制御し、
     前記処理部は、前記第1の期間における画素信号の読み出しと前記第2の期間における画素信号の読み出しとの間である第3の期間に、前記所定の試験を実行する、
     撮像装置。
    A plurality of pixels;
    A control unit for controlling exposure by each of the plurality of pixels;
    A processing unit for executing a predetermined test,
    Wherein the control unit controls the exposure such that, after readout of pixel signals is completed in a first period in which one or more exposures are performed by at least some of the plurality of pixels, readout of pixel signals is started in a second period in which one or more exposures are performed,
    The processing unit executes the predetermined test in a third period that is between reading of the pixel signal in the first period and reading of the pixel signal in the second period.
    Imaging device.
  22.  前記第1の期間及び前記第2の期間は、所定のフレームレートに応じた単位フレーム期間である、請求項21に記載の撮像装置。
    The imaging apparatus according to claim 21, wherein the first period and the second period are unit frame periods corresponding to a predetermined frame rate.
  23.  前記第3の期間は、前記単位フレーム期間における垂直ブランキング期間に応じて設定される、請求項22に記載の撮像装置。
    The imaging device according to claim 22, wherein the third period is set according to a vertical blanking period in the unit frame period.
  24.  前記単位フレーム期間において前記複数の画素により複数回の露光が実行され、
     前記複数回の露光間における露光時間の合計が、前記単位フレーム期間よりも短い、
     請求項23に記載の撮像装置。
    A plurality of exposures are performed by the plurality of pixels in the unit frame period,
    A total exposure time between the plurality of exposures is shorter than the unit frame period;
    The imaging device according to claim 23.
  25.  前記垂直ブランキング期間は、前記複数回の露光間における露光比に応じて決定される、請求項24に記載の撮像装置。 The imaging apparatus according to claim 24, wherein the vertical blanking period is determined according to an exposure ratio between the plurality of exposures.
  26.  前記処理部は、前記第3の期間のうち、前記第1の期間における画素信号の読み出しと前記第2の期間における画素信号のシャッターとの間に、前記所定の試験を実行する、請求項21に記載の撮像装置。 The processing unit executes the predetermined test between the readout of the pixel signal in the first period and the shutter of the pixel signal in the second period in the third period. The imaging device described in 1.
  27.  前記制御部は、行列状に2次元配列された前記複数の画素による露光の開始タイミングを行ごとに制御し、
     前記処理部は、前記行ごとに、前記第3の期間に、前記試験を実行する、
     請求項21に記載の撮像装置。
    The control unit controls the start timing of exposure by the plurality of pixels arranged two-dimensionally in a matrix for each row,
    The processing unit executes the test in the third period for each row.
    The imaging device according to claim 21.
  28.  前記第3の期間は、垂直ブランキング期間である、請求項21に記載の撮像装置。 The imaging apparatus according to claim 21, wherein the third period is a vertical blanking period.
  29.  前記処理部は、前記試験として、前記一部の画素を対象とした試験を実行する、請求項21に記載の撮像装置。 The imaging apparatus according to claim 21, wherein the processing unit executes a test for the some pixels as the test.
  30.  前記複数の画素それぞれに対して駆動信号を供給する駆動回路を備え、
     前記処理部は、前記試験として、前記駆動回路を対象とした試験を実行する、請求項21に記載の撮像装置。
    A drive circuit for supplying a drive signal to each of the plurality of pixels;
    The imaging device according to claim 21, wherein the processing unit executes a test for the drive circuit as the test.
  31.  前記画素から読み出されたアナログの前記画素信号をデジタル信号に変換するAD変換部を備え、
     前記処理部は、前記試験として、前記AD変換部を対象とした試験を実行する、請求項21に記載の撮像装置。
    An AD converter that converts the analog pixel signal read from the pixel into a digital signal;
    The imaging device according to claim 21, wherein the processing unit executes a test for the AD conversion unit as the test.
  32.  前記処理部は、前記試験として、前記一部の画素に接続された配線を対象とした試験を実行する、請求項21に記載の撮像装置。 The imaging device according to claim 21, wherein the processing unit executes a test for wiring connected to the some pixels as the test.
  33.  前記試験の結果に応じた情報が所定の出力先に出力されるように制御する出力制御部を備える、請求項21に記載の撮像装置。 The imaging apparatus according to claim 21, further comprising an output control unit configured to control information according to a result of the test to be output to a predetermined output destination.
  34.  前記試験の結果に応じて、少なくとも一部の前記画素から出力される前記画素信号を補正する補正処理部を備える、請求項21に記載の撮像装置。 The imaging apparatus according to claim 21, further comprising a correction processing unit that corrects the pixel signals output from at least some of the pixels according to the result of the test.
  35.  複数の画素と、
     前記複数の画素それぞれによる露光を制御する制御部と、
     前記複数の画素のうち少なくとも一部の画素による1回以上の露光が実行される第1の期間のうちの最後の露光結果に基づく画素信号の読み出しの完了後から、前記第1の期間よりも後の前記1回以上の露光が実行される第2の期間における最初の露光が開始されるまでの第3の期間に、所定の試験を実行する処理部と、
     を備える撮像装置。
    A plurality of pixels;
    A control unit for controlling exposure by each of the plurality of pixels;
    A processing unit that executes a predetermined test in a third period from completion of readout of a pixel signal based on a last exposure result in a first period, in which one or more exposures are performed by at least some of the plurality of pixels, until a start of a first exposure in a second period, later than the first period, in which the one or more exposures are performed; and
    An imaging apparatus comprising:
PCT/JP2017/040155 2017-02-01 2017-11-07 Imaging system and imaging device WO2018142707A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
DE112017006977.7T DE112017006977T5 (en) 2017-02-01 2017-11-07 PICTURE SYSTEM AND PICTURE DEVICE
CN201780084589.XA CN110226325B (en) 2017-02-01 2017-11-07 Imaging system and imaging apparatus
US16/471,406 US10819928B2 (en) 2017-02-01 2017-11-07 Imaging system and imaging apparatus for detection of abnormalities associated with the imaging system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2017-016476 2017-02-01
JP2017016476 2017-02-01
JP2017206335A JP6953274B2 (en) 2017-02-01 2017-10-25 Imaging system and imaging device
JP2017-206335 2017-10-25

Publications (1)

Publication Number Publication Date
WO2018142707A1 true WO2018142707A1 (en) 2018-08-09

Family

ID=63039507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/040155 WO2018142707A1 (en) 2017-02-01 2017-11-07 Imaging system and imaging device

Country Status (2)

Country Link
CN (1) CN110226325B (en)
WO (1) WO2018142707A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4305777B2 (en) * 2006-11-20 2009-07-29 ソニー株式会社 Image processing apparatus, image processing method, and program
JP5083046B2 (en) * 2008-06-03 2012-11-28 ソニー株式会社 Imaging apparatus and imaging method
JP4868021B2 (en) * 2009-01-07 2012-02-01 ソニー株式会社 Solid-state imaging device and drive control method
CN102779334B (en) * 2012-07-20 2015-01-07 华为技术有限公司 Correction method and device of multi-exposure motion image
JP2014183206A (en) * 2013-03-19 2014-09-29 Sony Corp Solid-state imaging device, driving method of solid-state imaging device, and electronic apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080158363A1 (en) * 2006-12-28 2008-07-03 Micron Technology, Inc. On-chip test system and method for active pixel sensor arrays
JP2010068179A (en) * 2008-09-10 2010-03-25 Dainippon Printing Co Ltd Solid-state imaging device, and method of driving the same
JP2015501578A (en) * 2011-10-14 2015-01-15 オムロン株式会社 Method and apparatus for projective space monitoring
JP2014112760A (en) * 2012-12-05 2014-06-19 Sony Corp Solid-state image pickup device and electronic apparatus
JP2015144475A (en) * 2015-03-11 2015-08-06 キヤノン株式会社 Imaging apparatus, control method of the same, program and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020121699A1 (en) * 2018-12-11 2020-06-18 ソニーセミコンダクタソリューションズ株式会社 Imaging device
EP3896958A4 (en) * 2018-12-11 2022-04-13 Sony Semiconductor Solutions Corporation Imaging device
US11381773B2 (en) 2018-12-11 2022-07-05 Sony Semiconductor Solutions Corporation Imaging device
JP7503500B2 (en) 2018-12-11 2024-06-20 ソニーセミコンダクタソリューションズ株式会社 Imaging device
CN111146222A (en) * 2019-12-10 2020-05-12 南京威派视半导体技术有限公司 Multi-block pixel array based on polycrystalline circle stacking technology
US20230209227A1 (en) * 2020-07-07 2023-06-29 Sony Semiconductor Solutions Corporation Imaging device and electronic apparatus

Also Published As

Publication number Publication date
CN110226325B (en) 2022-04-15
CN110226325A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
TWI820078B (en) solid-state imaging element
JP6953274B2 (en) Imaging system and imaging device
US11582416B2 (en) Solid-state image sensor, imaging device, and method of controlling solid-state image sensor
WO2020110484A1 (en) Solid-state image sensor, imaging device, and control method of solid-state image sensor
US20210218923A1 (en) Solid-state imaging device and electronic device
US11418746B2 (en) Solid-state image sensor, imaging device, and method of controlling solid-state image sensor
JPWO2019017092A1 (en) Analog-digital converter, solid-state image sensor, and method of controlling analog-digital converter
US20230300495A1 (en) Solid-state imaging device and control method of the same
WO2018142707A1 (en) Imaging system and imaging device
WO2018142706A1 (en) Imaging system, imaging device, and control device
EP3723361B1 (en) Imaging device
US11381773B2 (en) Imaging device
WO2022149388A1 (en) Imaging device and ranging system
WO2022009573A1 (en) Imaging device and imaging method
US20210006777A1 (en) Signal processing device, signal processing method, and signal processing system
WO2023032416A1 (en) Imaging device
US20240205557A1 (en) Imaging device, electronic apparatus, and imaging method
WO2023132151A1 (en) Image capturing element and electronic device
JP7489329B2 (en) Imaging device and imaging system
WO2024004377A1 (en) Solid-state imaging element, imaging device, and method for controlling solid-state imaging element
CN117981346A (en) Solid-state imaging element, control method of solid-state imaging element, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17895003

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17895003

Country of ref document: EP

Kind code of ref document: A1