US20180199000A1 - Image capturing apparatus and moving object - Google Patents
- Publication number
- US20180199000A1 (application no. US 15/846,337)
- Authority
- US
- United States
- Prior art keywords
- signal
- unit
- period
- driving
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N5/378
- G01S17/46—Indirect determination of position data
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
- G01S17/931—Lidar systems specially adapted for anti-collision purposes of land vehicles
- G01S7/4863—Detector arrays, e.g. charge-transfer gates
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- H04N13/0239
- H04N23/45—Cameras or camera modules for generating image signals from two or more image sensors being of different type or operating in different modes
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
- H04N25/75—Circuitry for providing, modifying or processing image signals from the pixel array
- H04N25/76—Addressed sensors, e.g. MOS or CMOS sensors
- H04N25/771—Pixel circuitry comprising storage means other than floating diffusion
- H04N25/78—Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
- H04N5/2258
- H04N5/23245
- H04N5/374
- H04N7/181—Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
- G06T2207/10012—Stereo images
- G06T2207/30261—Obstacle
Definitions
- The present invention relates to an image capturing apparatus and a moving object.
- Some image capturing apparatuses, each including an image sensor, read out both image signals, each indicating the shape or the like of an object, and distance signals, each indicating the distance between the object and the image capturing apparatus (see Japanese Patent Laid-Open No. 2008-116309).
- A TOF (Time Of Flight) method is used as an example of a method of measuring the distance between the object and the image capturing apparatus. According to this method, the object is irradiated with light and the reflected light from the object is then detected, making it possible to measure the distance between the object and the image capturing apparatus based on the time difference (delay time) from the timing of light irradiation to the timing of reflected-light detection.
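As a rough numeric illustration of the delay-to-distance relation described above (the function name and the example values are illustrative, not taken from this publication):

```python
# Sketch of the basic TOF relation: the distance follows from the
# round-trip delay between light irradiation and reflected-light
# detection. Names and values here are assumptions for illustration.

C = 299_792_458.0  # speed of light [m/s]

def distance_from_delay(delay_s: float) -> float:
    """Distance to the object given the round-trip delay time.

    The light travels to the object and back, hence the factor 1/2.
    """
    return C * delay_s / 2.0

# Example: a 10 ns round-trip delay corresponds to roughly 1.5 m.
print(round(distance_from_delay(10e-9), 3))
```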
- Such an image capturing apparatus is applied to, for example, a vehicle-mounted camera, and is used to detect an obstacle around a vehicle and the distance between the obstacle and the vehicle.
- The image capturing apparatus includes a plurality of pixels, each including a photoelectric conversion element such as a photodiode. Note that, in order to appropriately associate the shape or the like of a given object with the distance between the object and the image capturing apparatus, both the image signal and the distance signal described above are preferably read out from the same pixel (the same photoelectric conversion element).
- The present invention provides a new technique for appropriately obtaining both an image signal and a distance signal from the same pixel.
- One aspect of the present invention provides an image capturing apparatus that includes a first image sensor and a second image sensor, each including a plurality of pixels arrayed in a matrix, a driving unit, and a signal readout unit, wherein each of the plurality of pixels includes a photoelectric conversion element, a first signal holding unit, a first transferring unit configured to transfer a signal according to an amount of charges generated in the photoelectric conversion element to the first signal holding unit, a second signal holding unit, and a second transferring unit configured to transfer a signal according to an amount of charges generated in the photoelectric conversion element to the second signal holding unit, and the driving unit performs, on each of the first image sensor and the second image sensor, first driving of causing the first transferring unit to transfer, to the first signal holding unit, a signal according to an amount of charges generated in the photoelectric conversion element in a first period in accordance with an amount of light from an object which is not irradiated with light by a light irradiation unit, and causing the first signal holding unit to hold
- FIG. 1 is a block diagram for explaining an example of the arrangement of an image capturing apparatus
- FIGS. 2A and 2B are a block diagram and a circuit diagram for explaining an example of the arrangement of an image capturing unit
- FIG. 3 is a timing chart for explaining an example of a method of driving pixels
- FIGS. 4A and 4B are timing charts for explaining the example of the method of driving the pixels
- FIG. 5 is a block diagram for explaining an example of the arrangement of an image capturing apparatus
- FIG. 6 is a flowchart for explaining an example of a control method at the time of image capturing
- FIG. 7 is a timing chart for explaining an example of a method of driving pixels
- FIGS. 8A and 8B are timing charts for explaining the example of the method of driving the pixels
- FIG. 9 is a timing chart for explaining an example of a method of driving pixels
- FIGS. 10A and 10B are timing charts for explaining the example of the method of driving the pixels
- FIG. 11 is a timing chart for explaining an example of a method of driving pixels
- FIGS. 12A and 12B are timing charts for explaining the example of the method of driving the pixels
- FIG. 13 is a timing chart for explaining an example of a method of implementing distance measurement based on a TOF method.
- FIGS. 14A and 14B are block diagrams for explaining an example of an image capturing system regarding a vehicle-mounted camera.
- FIG. 1 is a block diagram showing an example of the arrangement of the image capturing apparatus 1 .
- The image capturing apparatus 1 includes an image capturing unit 11, a processor 12, a light irradiation unit 13, a display 14, and an output unit 15.
- The image capturing unit 11 includes a pixel array 111 and a controller 112.
- The processor 12 can communicate with the controller 112 and obtains image data of an object (not shown) by controlling the pixel array 111 through the controller 112, as will be described later in detail.
- The controller 112 can also control the light irradiation unit 13 to irradiate the object with light.
- In addition to obtaining the image data of the object by using the image capturing unit 11, the processor 12 can perform distance measurement according to a TOF (Time Of Flight) method based on the reflected light from the object irradiated by the light irradiation unit 13. That is, the processor 12 obtains the distance to the object based on the time difference between the timing at which the light irradiation unit 13 irradiates the object and the timing at which the reflected light from the object is detected. To this end, the controller 112 controls the light irradiation unit 13 based on a control signal from the processor 12.
- The processor 12 is configured to control the light irradiation unit 13 through the controller 112.
- In this embodiment, the distance to the object is the distance from the image capturing apparatus 1 (in particular, the image capturing unit 11) to the object.
- The processor 12 can display an image (for example, a video such as a moving image) based on the image data obtained from the image capturing unit 11 on the display 14.
- A known display device such as a liquid crystal display or an organic EL display can be used as the display 14.
- The processor 12 causes the output unit 15 to output a signal indicating the distance to the object, obtained by the above-described TOF method, to an arithmetic unit (for example, a CPU (Central Processing Unit)) that performs a predetermined process based on the signal.
- The output unit 15 may be integrated with the display 14; for example, the distance to the object may be displayed on the display 14 together with the object.
- The processor 12 may be, for example, a device (for example, a PLD (Programmable Logic Device) such as an FPGA (Field Programmable Gate Array)) or an integrated circuit capable of programming the respective functions.
- The processor 12 may instead be an arithmetic device, such as an MPU (Micro Processing Unit) or a DSP (Digital Signal Processor), that implements the respective functions.
- The processor 12 may also be an ASIC (Application Specific Integrated Circuit) or the like.
- Alternatively, the processor 12 may include a CPU and a memory, with the respective functions implemented in software. That is, the functions of the processor 12 can be implemented by hardware and/or software.
- FIG. 2A shows an example of the image capturing unit 11 .
- The image capturing unit 11 further includes a driving unit 113 and a signal readout unit 114, in addition to the pixel array 111 and the controller 112.
- The pixel array 111 includes a plurality of pixels PX arrayed in a matrix (so as to form a plurality of rows and a plurality of columns).
- The driving unit 113 is a vertical scanning circuit formed by a decoder, a shift register, or the like, and drives the plurality of pixels PX row by row.
- The signal readout unit 114 includes a signal amplifier circuit 1141 and a sampling circuit 1142 arranged for each column, a multiplexer 1143, and a horizontal scanning circuit 1144 formed by a decoder, a shift register, or the like. With such an arrangement, the signal readout unit 114 reads out signals column by column from the plurality of pixels PX driven by the driving unit 113. As will be described later in detail, CDS (Correlated Double Sampling) processing is used in sampling by the sampling circuits 1142.
- The controller 112 includes a timing generator and performs synchronous control of the pixels PX, the driving unit 113, and the signal readout unit 114.
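The CDS processing mentioned above pairs each signal-level sample with a reset-level sample and subtracts the two. A minimal numeric sketch (the function name and values are illustrative, not this publication's circuit):

```python
def correlated_double_sample(reset_level: float, signal_level: float) -> float:
    """CDS: subtract the reset-level sample from the signal-level sample.

    Offsets and reset noise common to both samples cancel out.
    """
    return signal_level - reset_level

# Illustrative values: a +5-code offset common to both samples
# (e.g. a pixel-to-pixel reset variation) does not change the result.
a = correlated_double_sample(100.0, 350.0)
b = correlated_double_sample(105.0, 355.0)
print(a, b)
```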
- FIG. 2B shows an example of the circuit arrangement of the unit pixel PX.
- The pixel PX includes a photoelectric conversion element PD, transistors T_GS1, T_GS2, T_TX1, T_TX2, T_OFD, T_RES, T_SF, and T_SEL, and capacitors C1, C2, and C_FD.
- A photodiode is used as the photoelectric conversion element PD.
- NMOS transistors are used as the transistors T_GS1, T_GS2, T_TX1, T_TX2, T_OFD, T_RES, T_SF, and T_SEL.
- Other known switch elements may be used instead.
- The capacitors C1, C2, and C_FD correspond to capacitance components of the diffusion-layer regions (drains/sources) of the NMOS transistors.
- The photoelectric conversion element PD is arranged such that one terminal is connected to the drain of the transistor T_GS1, the drain of the transistor T_GS2, and the source of the transistor T_OFD, and the other terminal is grounded.
- The source of the transistor T_GS1 and the drain of the transistor T_TX1 are connected to each other, and the capacitor C1 is formed at their connection node.
- The source of the transistor T_GS2 and the drain of the transistor T_TX2 are connected to each other, and the capacitor C2 is formed at their connection node.
- The drain of the transistor T_OFD is connected to a power supply voltage VDD.
- The source of the transistor T_TX1, the source of the transistor T_TX2, the source of the transistor T_RES, and the gate of the transistor T_SF are connected to each other, and the capacitor C_FD is formed at their connection node.
- The drain of the transistor T_RES and the drain of the transistor T_SF are connected to the power supply voltage VDD.
- The source of the transistor T_SF is connected to the drain of the transistor T_SEL.
- The source of the transistor T_SEL is connected to a column signal line LC for outputting the signal of the pixel PX.
- Upon receiving a control signal P_GS1 at its gate, the transistor T_GS1 is controlled to a conductive state (ON) or a non-conductive state (OFF). In this embodiment, the transistor T_GS1 is turned on (conductive) when the signal P_GS1 is set at high level (H level) and turned off (non-conductive) when the signal P_GS1 is set at low level (L level). Similarly, upon receiving a control signal P_GS2 at its gate, the transistor T_GS2 is controlled to the conductive state or the non-conductive state.
- The transistors T_TX1, T_TX2, T_OFD, T_RES, and T_SEL are controlled by the control signals P_TX1, P_TX2, P_OFD, P_RES, and P_SEL, respectively.
- The control signal P_GS1 and the like are supplied from the driving unit 113 to the respective pixels PX based on a synchronization signal from the controller 112.
- The transistor T_GS1 functions as the first transferring unit that transfers charges generated in the photoelectric conversion element PD to the capacitor C1.
- The transistor T_GS2 functions as the second transferring unit that transfers charges generated in the photoelectric conversion element PD to the capacitor C2.
- The capacitor C1 functions as the first signal holding unit that holds a signal (voltage) according to the amount of the charges generated in the photoelectric conversion element PD.
- The capacitor C2 functions as the second signal holding unit.
- The transistor T_TX1 functions as the third transferring unit that transfers the signal of the capacitor C1 to the capacitor C_FD (capacitance unit).
- The transistor T_TX2 functions as the fourth transferring unit that transfers the signal of the capacitor C2 to the capacitor C_FD.
- The transistor T_RES, also referred to as a reset transistor, functions as a reset unit that resets the voltage of the capacitor C_FD.
- The transistor T_SF, also referred to as an amplification transistor, functions as a signal amplifying unit that performs a source follower operation.
- The transistor T_SEL, also referred to as a selection transistor, can output a signal according to the voltage of the source of the transistor T_SF to the column signal line LC as a pixel signal, and functions as a selection unit that selects whether to output the pixel signal.
- The transistor T_OFD, also referred to as an overflow drain transistor, functions as an overflow drain unit that ejects (discharges) the charges generated in the photoelectric conversion element PD. Alternatively, it can be said that the transistor T_OFD functions as a second reset unit that resets the voltage of the photoelectric conversion element PD.
- The respective elements of the image capturing unit 11 may be formed on a semiconductor chip, and the image capturing unit 11 may be referred to as an image sensor. Note that in this embodiment, the image capturing unit 11 is a CMOS image sensor; in another embodiment, however, a CCD image sensor may be used.
- FIG. 3 is a timing chart showing an example of a method of driving the pixels PX according to this embodiment.
- The abscissa indicates the time axis.
- A “frame” indicated on the ordinate corresponds to image data (frame data) of one still image, which is formed based on the group of pixel signals obtained from all of the plurality of pixels PX. In this embodiment, a moving image is assumed to be shot, so this frame data is obtained repeatedly.
- FR(n) denotes the nth frame data.
- The frame data FR(n−1), FR(n+1), and FR(n+2) before and after the frame data FR(n) are also illustrated to aid understanding.
- The periods for reading out the frame data FR(n−1), FR(n), FR(n+1), and FR(n+2) are the periods T_FR(n−1), T_FR(n), T_FR(n+1), and T_FR(n+2), respectively.
- “Light irradiation” indicated on the ordinate indicates the state (active/inactive) of the light irradiation unit 13 configured to irradiate an object with light. More specifically, light irradiation at H level indicates that the light irradiation unit 13 is active (light irradiation state), and light irradiation at L level indicates that it is inactive (light non-irradiation state).
- “Accumulated charges” indicated on the ordinate indicate the charges generated and accumulated in the photoelectric conversion element PD, and the reference symbols in FIG. 3 denote the accumulated charge amounts in the respective periods.
- “QA1(n)” denotes the amount of the charges accumulated in the photoelectric conversion element PD in the period T1_FR(n).
- A “held signal (C1)” indicated on the ordinate indicates a signal held in the capacitor C1; its signal level is a voltage value corresponding to the charge amount transferred from the photoelectric conversion element PD by the transistor T_GS1.
- A “held signal (C2)” indicates a signal held in the capacitor C2; its signal level is a voltage value corresponding to the charge amount transferred from the photoelectric conversion element PD by the transistor T_GS2.
- “Readout operations” indicated on the ordinate indicate the signal readout modes for each row of the plurality of pixels PX, and each block illustrated together with a reference symbol indicates that a signal readout for a given row is performed.
- For example, a block denoted by “RO(1)” indicates that a signal readout is performed on the pixels PX of the first row.
- The number of rows in the pixel array 111 is X (a natural number equal to or larger than 2); reference symbols from RO(1) to RO(X) are illustrated.
- For descriptive convenience, the description below focuses on the readout operation of the frame data FR(n); however, the same also applies to the other frame data FR(n+1) and so on. Note that, to facilitate understanding of the drawing, the portions of the “accumulated charges”, “held signal (C1)”, “held signal (C2)”, and “readout operation” related to the readout operation of the frame data FR(n) are illustrated by solid lines, and the other portions are illustrated by broken lines.
- The period T_FR(n) for reading out the frame data FR(n) includes the periods T1_FR(n), T2_FR(n), and T3_FR(n).
- In the period T1_FR(n), charges are accumulated in the photoelectric conversion element PD in the light non-irradiation state (light irradiation: L level).
- The accumulated charges QA1(n) in the period T1_FR(n) are based on the amount of light from the object entering the pixels PX.
- A signal MemA1(n) corresponding to the accumulated charges QA1(n) is held in the capacitor C1. Note that the signal MemA1(n) is held over the periods T2_FR(n), T3_FR(n), and T1_FR(n+1).
- A signal MemA2(n) corresponding to the accumulated charges QA2(n) is held in the capacitor C2.
- The signal MemA2(n) is held over the periods T3_FR(n), T1_FR(n+1), and T2_FR(n+1).
- The reflected light from the object is detected in the pixels PX with a delay, according to the distance to the object, from the light irradiation timing. Therefore, for example, as the distance to the object increases, the accumulated charges QA2(n) and the corresponding signal MemA2(n) become smaller; conversely, as this distance decreases, the accumulated charges QA2(n) and the signal MemA2(n) become larger.
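The relation described above, in which the accumulated charges QA2(n) shrink as the distance grows, can be modeled as the overlap between the delayed reflected pulse and the accumulation period. This is a simplified sketch under assumed rectangular pulses and an accumulation window starting at the irradiation timing; it is not this publication's exact timing:

```python
C = 299_792_458.0  # speed of light [m/s]

def overlap_charge(distance_m: float, pulse_s: float, window_s: float) -> float:
    """Overlap time (proportional to accumulated charge) between a
    reflected pulse delayed by the round trip and an accumulation
    window starting at the irradiation timing. Intensity falloff with
    distance is ignored in this simplified model."""
    delay = 2.0 * distance_m / C
    # Overlap of the pulse interval [delay, delay + pulse_s]
    # with the accumulation window [0, window_s].
    return max(0.0, min(window_s, delay + pulse_s) - delay)

# With a 20 ns pulse and a 20 ns window, the collected charge
# decreases as the distance to the object grows.
near = overlap_charge(1.0, 20e-9, 20e-9)
far = overlap_charge(2.0, 20e-9, 20e-9)
print(near > far > 0.0)
```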
- The readout operations are then started, and the operations from the signal readout RO(1) for the first row to the signal readout RO(X) for the Xth row are performed sequentially. These readout operations are performed within the periods T3_FR(n) and T1_FR(n+1), in which both the signals MemA1(n) and MemA2(n) are held in the capacitors C1 and C2, respectively.
- The processor 12 described with reference to FIG. 1 can obtain image data and distance information based on the signals MemA1(n) and MemA2(n) read out as described above.
- The processor 12 obtains the signal MemA1(n) as an image signal indicating the shape or the like of the object.
- The processor 12 also obtains the signal MemA2(n) as a distance signal indicating the distance to the object.
- The accumulated charges QA2(n), from which the signal MemA2(n) originates, include not only a component according to the amount of the reflected light from the object irradiated by the light irradiation unit 13 but also a component other than this.
- Therefore, the processor 12 subtracts the signal MemA1(n) from the signal MemA2(n) (that is, removes from the signal MemA2(n) the signal component corresponding to the case in which light irradiation is not performed by the light irradiation unit 13) and, based on the result, calculates the distance to the object.
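The subtraction described above can be written out as follows. The signal names mirror the description; the conversion of the corrected signal into a distance is left abstract, since this excerpt does not give the exact formula:

```python
def net_reflected_signal(mem_a1: float, mem_a2: float) -> float:
    """Remove the non-irradiated component MemA1(n) from MemA2(n),
    leaving only the component due to the reflected irradiation light.
    The distance is then calculated from this corrected component."""
    return mem_a2 - mem_a1

# Illustrative values: with an ambient-only level of 40 and a combined
# (ambient + reflected) level of 100, the reflected component is 60.
print(net_reflected_signal(40.0, 100.0))
```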
- Charges are not accumulated in the photoelectric conversion element PD in the period T3_FR(n). More specifically, the charges generated in the photoelectric conversion element PD in the period T3_FR(n) are ejected (discharged) by the transistor T_OFD. That is, the “accumulated charges” in the period T3_FR(n) are discarded by an overflow drain operation (OFD operation) and are indicated by stripe hatching in FIG. 3 (the same applies to the other drawings described in the embodiments below).
- FIG. 4A is a timing chart for explaining the method of driving the pixels PX in FIG. 3 in detail.
- the pixels PX of the mth row and (m+1)th row are focused here. However, the same also applies to other rows.
- “P_OFD(m+1)”, “P_GS1(m+1)”, and “P_GS2(m+1)” correspond to the control signals for the (m+1)th row.
- a “readout operation of the mth row” indicates that the signal readout RO(m) is being performed for H level, and the signal readout RO(m) is not being performed for L level. The same also applies to a “readout operation of the (m+1)th row”.
- a pulse at H level is given to the signal P_OFD(m), resetting the photoelectric conversion element PD. Subsequently (after the signal P_OFD(m) is returned to L level), charges are generated and accumulated in the photoelectric conversion element PD.
- a pulse at H level is given to the signal P_GS1(m), causing the capacitor C1 to hold, as the image signal, the signal MemA1(n) corresponding to the charges QA1(n) accumulated in the period T1_FR(n).
- Control in the above-described periods T 1 _FR(n) and T 2 _FR(n) is shown only for the (m+1)th row in FIG. 4A . However, the control is performed at once in all the rows. That is, in all the plurality of pixels PX of the pixel array 111 , the signals MemA 1 ( n ) are held in the capacitors C 1 almost simultaneously, and the signals MemA 2 ( n ) are also held in the capacitors C 2 almost simultaneously. This makes it possible to equalize charge accumulation times for all the pixels PX and to implement a so-called global electronic shutter.
- the readout operations from the first row to the Xth row, that is, the signal readouts RO(1) to RO(X), are performed sequentially.
- the signal readouts RO( 1 ) to RO(X) are performed in the order of a row number here.
- the signal readouts RO( 1 ) to RO(X) may be performed in any order because the charge accumulation times are equalized for all the pixels PX, and the accumulated charges are held in the capacitors C 1 and C 2 .
- the signal readouts RO(m) and RO(m+1) are illustrated separately from each other at a boundary between the period T 3 _FR(n) and the period T 1 _FR(n+1). However, they may be performed at any timing between the periods T 3 _FR(n) and T 1 _FR(n+1).
- FIG. 4B is a timing chart for explaining the method of driving the pixels PX when the signal readouts RO(m) and RO(m+1) are performed in detail.
- “P_SEL(m+1)”, “P_RES(m+1)”, “P_TX1(m+1)”, and “P_TX2(m+1)” correspond to the control signals for the (m+1)th row.
- “Sampling by the signal readout unit” indicates that sampling by the sampling circuits 1142 is being performed in the signal readout unit 114 for H level, and the sampling is not being performed for L level. As described above (see FIG. 2A ), the signal readout unit 114 reads out the signals from the pixels PX for each row. Thus, when the signal readout unit 114 reads out the signals from the pixels PX of a given row, “sampling by the signal readout unit” at H level described above indicates that the signals from the pixels PX of the row are sampled.
- Periods for performing the signal readouts RO(m) and RO(m+1) are, respectively, periods T_RO(m) and T_RO(m+1). First, the period T_RO(m) will be described.
- the control signal P_SEL(m) is maintained at H level during the period T_RO(m).
- the period T_RO(m) includes periods T 0 _RO(m), T 1 _RO(m), T 2 _RO(m), T 3 _RO(m), and T 4 _RO(m).
- CDS processing is performed in the signal readout unit 114 . More specifically, after a pulse at H level is given to the control signal P_RES(m) in the period T 0 _RO(m), and the capacitor C_FD is reset, the voltage of the reset capacitor C_FD is sampled in the period T 1 _RO(m). “MemA 1 ( m )_N” denotes a signal obtained by this.
- a pulse at H level is given to the control signal P_TX 1 ( m ) at the last timing in the period T 1 _RO(m), and the transistor T_TX 1 transfers a signal MemA 1 ( m ) from the capacitor C 1 to the capacitor C_FD.
- the voltage of the capacitor C_FD to which the signal MemA 1 ( m ) is transferred is sampled.
- MemA 1 ( m )_S denotes a signal obtained by this.
- the signal MemA1(n) is described above as being obtained as the image signal for descriptive convenience.
- In practice, this image signal is obtained based on the above-described CDS processing using the signals MemA1(m)_N and MemA1(m)_S. That is, this image signal is obtained by subtracting MemA1(m)_N from MemA1(m)_S.
- a pulse at H level is given to the control signal P_TX 2 ( m ) at the last timing in the period T 3 _RO(m), and the transistor T_TX 2 transfers a signal MemA 2 ( m ) from the capacitor C 2 to the capacitor C_FD.
- the voltage of the capacitor C_FD to which the signal MemA 2 ( m ) is transferred is sampled.
- MemA 2 ( m )_S denotes a signal obtained by this.
- the signal MemA2(n) is described above as being obtained as the distance signal for descriptive convenience. In this embodiment, however, this distance signal is obtained in practice based on the above-described CDS processing using the signals MemA2(m)_N and MemA2(m)_S. That is, this distance signal is obtained by subtracting MemA2(m)_N from MemA2(m)_S.
- the distance to the object is calculated based on a result of subtracting the signal MemA1(n) from the signal MemA2(n).
- this distance is calculated based on a result obtained by further subtracting the above-described image signal (the signal obtained by subtracting MemA 1 ( m )_N from MemA 1 ( m )_S) from the above-described distance signal (the signal obtained by subtracting MemA 2 ( m )_N from MemA 2 ( m )_S).
- the period T1_FR(n) and the period T2_FR(n) are equal in length. Therefore, a signal component corresponding to a case in which the light irradiation unit 13 does not perform light irradiation is removed appropriately (that is, a signal component based on the TOF method is extracted appropriately) by the above-described subtractions, making it possible to detect information on the distance to the object accurately from this distance signal.
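- The CDS arithmetic described above can be summarized in a short sketch. The function names and sample values below are ours (hypothetical), not the patent's:

```python
# Sketch of the correlated double sampling (CDS) arithmetic: each held
# signal is read as a reset sample (_N) and a signal sample (_S), and
# subtracting _N removes the reset noise of the capacitor C_FD.

def cds(sample_s: float, sample_n: float) -> float:
    """Signal sample minus reset sample."""
    return sample_s - sample_n

def tof_component(a1_s: float, a1_n: float, a2_s: float, a2_n: float) -> float:
    image_signal = cds(a1_s, a1_n)      # MemA1(m)_S - MemA1(m)_N
    distance_signal = cds(a2_s, a2_n)   # MemA2(m)_S - MemA2(m)_N
    # Subtracting the image signal removes the no-irradiation component,
    # leaving only the TOF (reflected-light) component.
    return distance_signal - image_signal

# Invented levels: reset offset 10, ambient 120, reflected 80.
print(tof_component(130.0, 10.0, 210.0, 10.0))  # 80.0
```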
- the signal readout RO(m) is performed as described above.
- the control signal P_SEL(m) is maintained at H level, and the same control as in the period T_RO(m) is also performed for the (m+1)th row.
- the signal readout RO(m+1) is thus performed.
- According to this embodiment, it is possible to obtain the image signal indicating the shape or the like of the object and the distance signal indicating the distance to the object almost simultaneously from the same photoelectric conversion element PD of the same pixel PX. Therefore, it becomes possible to associate the shape or the like of the object with the distance to the object appropriately and to improve the detection accuracy for the object. For example, at the time of shooting a moving image, it becomes possible, while monitoring an object that may move, to detect the distance to the object almost simultaneously.
- the image capturing apparatus 1 is applied to, for example, a vehicle (four-wheel vehicle or the like) that includes an advanced driver assistance system (ADAS) such as an automatic brake. Therefore, in this embodiment, the method of driving the pixels PX in shooting the moving image is exemplified. However, the contents of this embodiment are also applicable to a case in which a still image is shot, as a matter of course.
- an image capturing apparatus 1 further includes a second image capturing unit 11 B, in addition to an image capturing unit 11 .
- the image capturing unit 11 described in the first embodiment is referred to as an “image capturing unit 11 A”.
- the image capturing units 11 A and 11 B are arranged side by side so as to be spaced apart from each other.
- the image capturing unit 11 A and the image capturing unit 11 B can be configured in the same manner.
- the aforementioned pixel array 111 and controller 112 are, respectively, denoted by “ 111 A” and “ 112 A” for the image capturing unit 11 A, and “ 111 B” and “ 112 B” for the image capturing unit 11 B.
- a light irradiation unit 13 irradiates an object with light based on control by the controller 112 A.
- the light irradiation unit 13 may be controlled by the controller 112 B.
- a processor 12 obtains image data from both the image capturing units 11A and 11B. This allows the processor 12 to perform, in addition to distance measurement by the TOF method described above, distance measurement by a stereo method using two frame data obtained from the image capturing units 11A and 11B. That is, the processor 12 can measure the distance to the object based on a parallax between the image capturing units 11A and 11B.
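- A minimal sketch of the stereo (parallax-based) calculation that the distance calculation unit 121 could perform is shown below. The rectified pinhole-camera model and all parameter values are our assumptions; the patent does not specify the stereo algorithm itself.

```python
# Stereo distance under a rectified pinhole model: Z = f * B / d,
# where f is the focal length in pixels, B the baseline between the
# two image capturing units, and d the disparity in pixels.

def stereo_distance(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    if disparity_px <= 0:
        raise ValueError("zero or negative disparity: object at infinity or mismatch")
    return focal_px * baseline_m / disparity_px

# Example: f = 1000 px, B = 0.2 m, d = 25 px -> 8 m.
print(stereo_distance(25.0, 1000.0, 0.2))  # 8.0
```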
- Upon obtaining frame data FR(n), the image capturing unit 11A outputs the signals MemA1(n) and MemA2(n) to the processor 12.
- Upon obtaining the frame data FR(n), the image capturing unit 11B outputs a signal MemB1(n) corresponding to the signal MemA1(n) of the image capturing unit 11A to the processor 12. Note that in this embodiment, the image capturing unit 11B does not output a signal corresponding to the signal MemA2(n).
- the processor 12 includes a stereo-type distance calculation unit 121 , a TOF-type distance calculation unit 122 , a determination unit 123 , and a selector 124 .
- the distance calculation unit 121 receives the signal MemA 1 ( n ) from the image capturing unit 11 A, receives the signal MemB 1 ( n ) from the image capturing unit 11 B, and calculates a distance based on the stereo method using these signals MemA 1 ( n ) and MemB 1 ( n ).
- the distance calculation unit 122 receives both the signals MemA 1 ( n ) and MemA 2 ( n ) from the image capturing unit 11 A, and calculates the distance based on the TOF method (see the first embodiment).
- the determination unit 123 determines whether the calculation result satisfies a predetermined condition and outputs the determination result to the selector 124 .
- the calculation result of the distance calculation unit 121 and the calculation result of the distance calculation unit 122 can be input to the selector 124 .
- the selector 124 selects one of the calculation result of the distance calculation unit 121 and the calculation result of the distance calculation unit 122 based on this determination result, and outputs it to an output unit 15 .
- image data output to a display 14 can be formed by a group of image signals based on the signal MemA 1 ( n ) and/or the signal MemB 1 ( n ).
- the function of the processor 12 can be implemented by hardware and/or software. Therefore, in this embodiment, the above-described elements 121 to 124 are shown as elements independent of each other for a descriptive purpose. However, the individual functions of these elements 121 to 124 may be implemented by a single element.
- FIG. 6 is a flowchart showing an example of a control method at the time of image capturing.
- In step S100 (to be simply referred to as “S100” hereinafter, and ditto for other steps), the distance calculation unit 121 calculates a distance by the stereo method.
- In S110, the determination unit 123 determines whether the calculation result in S100 satisfies a predetermined condition. If this predetermined condition holds, the process advances to S120, in which the distance calculation unit 122 calculates the distance by the TOF method. If this predetermined condition does not hold, the process advances to S130.
- In S130, the display 14 displays an image, and the output unit 15 outputs distance information.
- the distance information output here is given according to the calculation result (the calculation result based on the stereo method) in S 100 if the predetermined condition does not hold in S 110 and is given according to the calculation result (the calculation result based on the TOF method) in S 120 if the predetermined condition holds in S 110 .
- As the predetermined condition in S110, for example, the fact that the luminance of the object is smaller than a predetermined reference value, the fact that the distance to the object is larger than a predetermined reference value, or the like is given. That is, in a case in which the shooting environment is comparatively dark or in a case in which the detection target (object) is positioned comparatively far away, the distance information calculated based on the TOF method is adopted.
- the process advances to S 120 if one of these examples holds. As another embodiment, however, the process may advance to S 120 if two or more, or all of these examples hold.
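- The S100 to S130 flow can be sketched as follows. The thresholds and the single-condition check are invented for illustration; the patent leaves the concrete reference values open.

```python
# Sketch of the decision flow: compute the stereo result first (S100),
# and fall back to the TOF method (S120) when the predetermined
# condition holds (dark scene or distant object); otherwise keep the
# stereo result.

LUMINANCE_REF = 50.0   # assumed reference value (arbitrary units)
DISTANCE_REF = 30.0    # assumed reference value (meters)

def select_distance(stereo_m: float, luminance: float, tof_fn) -> float:
    if luminance < LUMINANCE_REF or stereo_m > DISTANCE_REF:
        return tof_fn()   # S120: adopt the TOF-based distance
    return stereo_m       # condition does not hold: keep stereo result

print(select_distance(10.0, 80.0, lambda: 9.5))   # 10.0 (stereo kept)
print(select_distance(40.0, 80.0, lambda: 41.2))  # 41.2 (TOF adopted)
```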
- FIG. 7 is a timing chart showing an example of a method of driving pixels PX according to this embodiment as in FIG. 3 (see the first embodiment).
- the contents of an operation and control of the image capturing unit 11 A are the same as in FIG. 3 , and thus a description thereof will be omitted here.
- the contents of the operation and control of the image capturing unit 11 B in a period T 1 _FR(n) are the same as those of the image capturing unit 11 A. That is, in the period T 1 _FR(n), charges QB 1 ( n ) are accumulated in a photoelectric conversion element PD. Then, at the last timing in the period T 1 _FR(n), the signal MemB 1 ( n ) corresponding to the accumulated charges QB 1 ( n ) is held in a capacitor C 1 .
- FIG. 8A shows a timing chart for explaining the method of driving the pixels PX in FIG. 7 in detail as in FIG. 4A (see the first embodiment).
- the contents of the operation and control of the image capturing unit 11 A are the same as in FIG. 4A , and thus a description thereof will be omitted here.
- the image capturing unit 11 B is the same as the image capturing unit 11 A except that a pulse at H level is not given to control signals P_GS 2 ( m ) and P_GS 2 ( m +1) at the last timing in the period T 2 _FR(n). This also applies to other periods such as the period T_FR(n+1).
- portions different from the case of the image capturing unit 11 A are indicated by broken lines.
- the same operation as in the image capturing unit 11 described in the first embodiment is performed in the image capturing unit 11 A.
- In the image capturing unit 11B, the charges QB1(n) are accumulated and the signal MemB1(n) is held in order to obtain an image signal, whereas the operation and control to obtain a distance signal are omitted.
- FIG. 8B shows a timing chart for explaining the method of driving the pixels PX when signal readouts RO(m) and RO(m+1) are performed in detail as in FIG. 4B (see the first embodiment).
- the contents of the operation and control of the image capturing unit 11 A are the same as in FIG. 4B , and thus a description thereof will be omitted here.
- the image capturing unit 11 B is the same as the image capturing unit 11 A except for the following three points.
- a pulse at H level is not given to a control signal P_RES(m) at the last timing in a period T 2 _RO(m).
- a pulse at H level is not given to a control signal P_TX 2 ( m ) at the last timing in a period T 3 _RO(m).
- sampling is not performed in periods T 3 _RO(m) and T 4 _RO(m). This also applies to other periods such as the period T_RO(m+1).
- portions different from the case of the image capturing unit 11A (portions to which the above-described pulse at H level is not given and portions in which sampling is not performed) are indicated by broken lines.
- an operation and control to obtain an image signal are performed in the image capturing unit 11 B as in the image capturing unit 11 A.
- an operation and control to obtain a distance signal are omitted.
- the processor 12 includes, as operation modes, a first mode in which distance measurement based on the stereo method is performed and a second mode in which distance measurement based on the TOF method is performed.
- a mode is exemplified in which the first mode is set in advance, and a shift from the first mode to the second mode is made if the predetermined condition in S 110 holds.
- a mode may be possible in which the second mode is set in advance, and a shift from the second mode to the first mode is made if the predetermined condition does not hold.
- the third embodiment will be described with reference to FIGS. 9 to 10B .
- This embodiment is different from the aforementioned second embodiment mainly in that an operation and control to obtain a distance signal are also performed in an image capturing unit 11 B. That is, the image capturing unit 11 B outputs a signal MemB 2 ( n ) corresponding to a signal MemA 2 ( n ) of an image capturing unit 11 A to a processor 12 , in addition to the signal MemB 1 ( n ) described in the second embodiment.
- FIG. 9 is a timing chart showing an example of a method of driving pixels PX according to this embodiment as in FIG. 7 (see the second embodiment).
- the contents of an operation and control of the image capturing unit 11 A are the same as in FIG. 7 , and thus a description thereof will be omitted here.
- For the image capturing unit 11B, focusing on, for example, a period T_FR(n), the contents of the operation and control in periods T1_FR(n) and T2_FR(n) are the same as in FIG. 7.
- In a period T3_FR(n), charges QB2(n) are accumulated in a photoelectric conversion element PD.
- a signal MemB 2 ( n ) corresponding to the accumulated charges QB 2 ( n ) is held in a capacitor C 2 .
- the period T_FR(n) further includes a period T 4 _FR(n) as a next period thereof.
- signal readouts RO(1) to RO(X) can be performed between the period T4_FR(n) and a period T1_FR(n+1).
- distance measurement based on a TOF method is performed by using both the signals MemA 2 ( n ) and MemB 2 ( n ).
- the signal MemA2(n) becomes smaller and the signal MemB2(n) becomes larger as the distance to the object increases; conversely, the signal MemA2(n) becomes larger and the signal MemB2(n) becomes smaller as this distance decreases. Therefore, according to this embodiment, it is possible to improve the accuracy of distance measurement based on the TOF method by using both the signals MemA2(n) and MemB2(n).
- the period T 2 _FR(n) and the period T 3 _FR(n) are equal in time, and shorter than the period T 1 _FR(n). It is possible to increase a frame rate (the number of frame data that can be obtained per unit time) by shortening the periods T 2 _FR(n) and T 3 _FR(n). Together with this, the amount of irradiation light of a light irradiation unit 13 may be increased. This makes it possible to further improve the accuracy of distance measurement based on the TOF method.
- this embodiment is further advantageous in calculating the distance to the object accurately and in improving the frame rate.
- signal components corresponding to a case in which the light irradiation unit 13 does not perform light irradiation may be removed from the signals MemA 2 ( n ) and MemB 2 ( n ) by using the signals MemA 1 ( n ) and MemB 1 ( n ).
- this calculation can be performed by using a coefficient corresponding to the ratio of the periods T 2 _FR(n) and T 3 _FR(n), and the period T 1 _FR(n).
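- One hedged reading of this coefficient is sketched below: when the irradiation-on period is shorter than the irradiation-off period T1_FR(n), the ambient component accumulated alongside the reflected light is smaller by the ratio of the period lengths, so the ambient estimate must be scaled before subtraction. The function name and all numbers are invented.

```python
# Scaled ambient removal: mem1 accumulates ambient light for t1
# seconds; mem2 accumulates ambient plus reflected light for t2
# seconds, so the ambient part of mem2 is roughly mem1 * (t2 / t1).

def remove_ambient(mem2: float, mem1: float, t2: float, t1: float) -> float:
    return mem2 - mem1 * (t2 / t1)

# Ambient rate 10 units/ms: t1 = 10 ms -> mem1 = 100; t2 = 2 ms ->
# ambient part of mem2 = 20, plus a reflected component of 80 -> 100.
print(remove_ambient(100.0, 100.0, 2.0, 10.0))  # 80.0
```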
- FIG. 10A shows a timing chart for explaining the method of driving the pixels PX in FIG. 9 in detail as in FIG. 8A (see the second embodiment).
- the contents of the operation and control of the image capturing unit 11 A are the same as in FIG. 8A , and thus a description thereof will be omitted here.
- For the image capturing unit 11B, focusing on, for example, the period T_FR(n), the same operation and control as those of the image capturing unit 11A are performed in the period T1_FR(n).
- In the period T2_FR(n), however, a pulse at H level is given to the control signals P_OFD(m) and P_OFD(m+1). Consequently, charges generated in the photoelectric conversion element PD in the period T2_FR(n) are ejected (discharged) by the transistor T_OFD.
- FIG. 10B shows a timing chart for explaining the method of driving the pixels PX when signal readouts RO(m) and RO(m+1) are performed in detail as in FIG. 8B (see the second embodiment).
- the contents of the operations and control of the image capturing units 11 A and 11 B are the same as those of the image capturing unit 11 A described with reference to FIG. 8B . That is, the operation and control to obtain image signals are performed in both the image capturing units 11 A and 11 B, and the operation and control to read out distance signals are also performed in both the image capturing units 11 A and 11 B.
- FIG. 13 is a timing chart for explaining an example of a method of the distance measurement based on the TOF method described above.
- a period of light irradiation by the light irradiation unit 13 matches the period T 2 _FR(n).
- reflected light from the object is detected with a delay by a time according to the distance to the object. As shown in FIG. 13, this delay time is denoted by t0.
- Let e0 denote the total of the signal components corresponding to the above-described reflected light, e1 the component of the signal MemA2(n) corresponding to the reflected light, and e2 the component of the signal MemB2(n) corresponding to the reflected light. Here, e1 is the component corresponding to the reflected light detected during the period T2_FR(n), and e2 is the component corresponding to the reflected light detected during the period T3_FR(n). Since the whole of the reflected light is received across these two periods, e0 = e1 + e2.
- e1 is calculated appropriately by obtaining the difference between the signals MemA1(n) and MemA2(n), and e2 is calculated appropriately by obtaining the difference between the signals MemB1(n) and MemB2(n).
- Letting Ta denote the length of the light irradiation period (which matches the period T2_FR(n)), the delay time t0 can be represented by t0 = Ta × e2 / (e1 + e2).
- The delay time t0 can therefore be calculated based on Ta, e1, and e2. According to this modification, it is possible to perform distance measurement based on the TOF method with a comparatively simple arrangement and to calculate the distance to the object appropriately even if, for example, the light reflectance of the object is not 1.
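- This relation can be exercised numerically. The helper below is a sketch, not the patent's implementation; it also converts the delay time into a distance using the speed of light, a step the patent text leaves implicit.

```python
# TOF distance from the split reflected-light components: the pulse of
# length Ta splits into e1 (detected during T2_FR(n)) and e2 (detected
# during T3_FR(n)), giving t0 = Ta * e2 / (e1 + e2). The light travels
# to the object and back, hence the division by two.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(e1: float, e2: float, ta_s: float) -> float:
    t0 = ta_s * e2 / (e1 + e2)  # delay time
    return C * t0 / 2.0

# Example: a 100 ns irradiation period with the components split 3:1
# gives t0 = 25 ns, i.e. roughly 3.75 m.
print(round(tof_distance(3.0, 1.0, 100e-9), 3))  # 3.747
```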
- the fourth embodiment will be described with reference to FIGS. 11 to 12B .
- This embodiment is different from the aforementioned third embodiment mainly in that an operation and control to obtain a distance signal is performed (repeated) a plurality of times while obtaining frame data of one frame. More specifically, focusing on a period T_FR(n), in this embodiment, a series of operations in the periods T 2 _FR(n) and T 3 _FR(n) described with reference to FIGS. 9 to 10B is repeated K times (K>2).
- FIG. 11 is a timing chart showing an example of a method of driving pixels PX according to this embodiment as in FIG. 9 (see the third embodiment).
- first periods T 2 _FR(n) and T 3 _FR(n) are, respectively, denoted by “T 2 ( 1 )_FR(n)” and “T 3 ( 1 )_FR(n)” in FIG. 11 .
- the same also applies to periods after second periods (for example, Kth periods are denoted by “T 2 (K)_FR(n)” and “T 3 (K)_FR(n)”).
- According to this embodiment, it is possible to average out errors of the distance information and to further improve the calculation accuracy of the distance to the object, as compared to the third embodiment (that is, a case in which the series of operations described above is performed only once). It also becomes possible to further improve the calculation accuracy of the distance to the object by further shortening the individual periods T2(1)_FR(n) to T2(K)_FR(n) and T3(1)_FR(n) to T3(K)_FR(n).
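- The averaging effect can be illustrated with a toy example (all values invented): single-shot estimates scatter around the true distance, and their mean over K repetitions is closer to it, roughly by a factor of 1/sqrt(K) for independent noise.

```python
# Averaging K repeated single-shot distance estimates.

def averaged_distance(samples):
    return sum(samples) / len(samples)

# Eight invented single-shot estimates of a 10.0 m distance:
shots = [10.3, 9.8, 10.1, 9.9, 10.2, 9.7, 10.0, 10.0]
print(round(averaged_distance(shots), 2))  # 10.0
```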
- FIGS. 12A and 12B show timing charts for explaining the method of driving the pixels PX in detail as in FIGS. 10A and 10B (see the third embodiment). These timing charts are the same as those in the third embodiment except that the series of operations in the periods T2_FR(n) and T3_FR(n) of FIG. 10A is repeated K times (K>2). Note that also in this embodiment, signal readouts RO(1) to RO(X) can be performed between the periods T4_FR(n) and T1_FR(n+1).
- FIG. 14A shows an example of an image capturing system regarding a vehicle-mounted camera.
- An image capturing system 1000 includes the image capturing apparatus in each embodiment described above as an image capturing apparatus 1010 .
- the image capturing system 1000 includes an image processing unit 1030 that performs image processing on a plurality of image data obtained by the image capturing apparatus 1010, and a parallax obtaining unit 1040 that obtains a parallax (a phase difference between parallax images) from the plurality of image data obtained by the image capturing apparatus 1010.
- When the image capturing system 1000 is in the form of a stereo camera that includes a plurality of the image capturing apparatuses 1010, this parallax can be obtained by using the signals output from the respective image capturing apparatuses 1010.
- the image capturing system 1000 includes a distance obtaining unit 1050 that obtains a distance to a target based on the obtained parallax and a collision determination unit 1060 that determines whether there is a collision possibility based on the obtained distance.
- the parallax obtaining unit 1040 and the distance obtaining unit 1050 are examples of a distance information obtaining means for obtaining distance information to the target. That is, the distance information is information about a parallax, a defocus amount, the distance to the target, and the like.
- the collision determination unit 1060 may determine the collision possibility using one of these pieces of distance information.
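- As a hypothetical illustration of what the collision determination unit 1060 could compute, the sketch below compares a time-to-collision (distance divided by closing speed) against a threshold. The TTC criterion and the threshold value are our assumptions; the patent does not specify the determination rule.

```python
# Time-to-collision (TTC) check: a collision is flagged as possible
# when distance / closing_speed falls below a threshold.

def collision_possible(distance_m: float, closing_mps: float,
                       ttc_threshold_s: float = 2.0) -> bool:
    if closing_mps <= 0:
        return False  # not approaching the target
    return distance_m / closing_mps < ttc_threshold_s

print(collision_possible(50.0, 10.0))  # False (TTC = 5 s)
print(collision_possible(15.0, 10.0))  # True  (TTC = 1.5 s)
```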
- the distance information obtaining means may be implemented by hardware designed for a special purpose, may be implemented by a software module, or may be implemented by a combination of these.
- the distance information obtaining means may be implemented by an FPGA (Field Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or the like.
- the distance information obtaining means may be implemented by a combination of the FPGA and the ASIC.
- the image capturing system 1000 is connected to a vehicle information obtaining apparatus 1310 , and can obtain vehicle information about a vehicle speed, a yaw rate, a steering angle, and the like.
- the image capturing system 1000 is also connected to a control ECU 1410 serving as a control apparatus that, based on a determination result in the collision determination unit 1060 , outputs a control signal generating a braking force to a vehicle.
- the image capturing system 1000 is also connected to a warning apparatus 1420 that issues a warning to a driver based on a determination result in the collision determination unit 1060 .
- the control ECU 1410 performs vehicle control to avoid a collision or reduce damage, such as braking, releasing the accelerator, suppressing the engine output, and the like.
- the warning apparatus 1420 warns the user by, for example, sounding an alarm, displaying warning information on a screen such as that of a car navigation system, or giving vibrations to a seatbelt or a steering wheel.
- the image capturing system 1000 captures an image of the surroundings of the vehicle, for example, the front side or the rear side.
- FIG. 14B shows an image capturing system when the image capturing system 1000 captures the image of the front side of the vehicle.
- the control that avoids a collision with another vehicle has been described above.
- the present invention is also applicable to control of automatic driving to follow another vehicle, control of automatic driving so as not to stray from a lane, and the like.
- the image capturing system is applicable not only to a vehicle such as a four-wheel vehicle but also to, for example, a moving object (moving apparatus) such as a ship, an airplane, or an industrial robot.
- the contents above are also applicable not only to the moving object but also widely to a device using object recognition such as an ITS (Intelligent Transportation System).
Description
- The present invention relates to an image capturing apparatus and a moving object.
- Some image capturing apparatuses each including an image sensor read out both image signals each indicating the shape or the like of an object, and distance signals each indicating a distance between the object and the image capturing apparatus (see Japanese Patent Laid-Open No. 2008-116309). A TOF (Time Of Flight) method is used as an example of a method of measuring the distance between the object and the image capturing apparatus. According to this method, the object is irradiated with light, and then reflected light from the object is detected, making it possible to measure the distance between the object and the image capturing apparatus based on a time difference (delay time) from the timing of light irradiation to the timing of reflected light detection. Such an image capturing apparatus is applied to, for example, a vehicle-mounted camera, and used to detect an obstacle around a vehicle and a distance between the obstacle and the vehicle.
- The image capturing apparatus includes a plurality of pixels each including a photoelectric conversion element such as a photodiode. Note that in order to associate the shape or the like of a given object with a distance between the object and the image capturing apparatus appropriately, both the image signal and distance signal described above are preferably read out from the same pixel (same photoelectric conversion element).
- The present invention provides a new technique of obtaining both an image signal and a distance signal from the same pixel appropriately.
- One of the aspects of the present invention provides an image capturing apparatus that includes a first image sensor and a second image sensor each including a plurality of pixels arrayed in a matrix, a driving unit, and a signal readout unit, wherein each of the plurality of pixels includes a photoelectric conversion element, a first signal holding unit, a first transferring unit configured to transfer a signal according to an amount of charges generated in the photoelectric conversion element to the first signal holding unit, a second signal holding unit, and a second transferring unit configured to transfer a signal according to an amount of charges generated in the photoelectric conversion element to the second signal holding unit, and the driving units perform, on each of the first image sensor and the second image sensor, first driving of causing the first transferring unit to transfer, to the first signal holding unit, a signal according to an amount of charges generated in the photoelectric conversion element in a first period in accordance with an amount of light from an object which is not irradiated with light by a light irradiation unit, and causing the first signal holding unit to hold the signal as an image signal, perform, on the first image sensor, second driving of causing the second transferring unit to hold, in the second signal holding unit, a signal generated in the photoelectric conversion element in a second period based on reflected light from the object irradiated with the light by the light irradiation unit, and perform, on the second image sensor, third driving of causing the second transferring unit to hold, in the second signal holding unit, a signal generated in the photoelectric conversion element in a third period based on reflected light from the object irradiated with the light by the light irradiation unit, the third period including a period which does not overlap the second period.
- Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
-
FIG. 1 is a block diagram for explaining an example of the arrangement of an image capturing apparatus; -
FIGS. 2A and 2B are a block diagram and a circuit diagram for explaining an example of the arrangement of an image capturing unit; -
FIG. 3 is a timing chart for explaining an example of a method of driving pixels; -
FIGS. 4A and 4B are timing charts for explaining the example of the method of driving the pixels; -
FIG. 5 is a block diagram for explaining an example of the arrangement of an image capturing apparatus; -
FIG. 6 is a flowchart for explaining an example of a control method at the time of image capturing; -
FIG. 7 is a timing chart for explaining an example of a method of driving pixels; -
FIGS. 8A and 8B are timing charts for explaining the example of the method of driving the pixels; -
FIG. 9 is a timing chart for explaining an example of a method of driving pixels; -
FIGS. 10A and 10B are timing charts for explaining the example of the method of driving the pixels; -
FIG. 11 is a timing chart for explaining an example of a method of driving pixels; -
FIGS. 12A and 12B are timing charts for explaining the example of the method of driving the pixels; -
FIG. 13 is a timing chart for explaining an example of a method of implementing distance measurement based on a TOF method; and -
FIGS. 14A and 14B are block diagrams for explaining an example of an image capturing system regarding a vehicle-mounted camera. - Preferred embodiments of the present invention will now be described with reference to the accompanying drawings. Note that the drawings are illustrated for the purpose of only explaining a structure or an arrangement, and the sizes of illustrated members do not always reflect actual sizes. The same reference numerals denote the same members or the same constituent elements throughout the drawings, and a description of repetitive contents will be omitted hereinafter.
- An image capturing apparatus (image capturing system) 1 of the first embodiment will be described with reference to
FIGS. 1 to 4B. FIG. 1 is a block diagram showing an example of the arrangement of the image capturing apparatus 1. The image capturing apparatus 1 includes an image capturing unit 11, a processor 12, a light irradiation unit 13, a display 14, and an output unit 15. - The
image capturing unit 11 includes a pixel array 111 and a controller 112. The processor 12 can communicate with the controller 112 and obtains image data of an object (not shown) by controlling the pixel array 111 via the controller 112, as will be described later in detail. - In this embodiment, the
controller 112 can also control the light irradiation unit 13 to irradiate the object with light. As will be described later in detail, the processor 12 can perform distance measurement according to a TOF (Time Of Flight) method based on reflected light from the object receiving light of the light irradiation unit 13, in addition to obtaining the image data of the object by using the image capturing unit 11. That is, the processor 12 obtains the distance to the object based on a time difference between a timing at which the light irradiation unit 13 irradiates the object and a timing at which the reflected light from the object is detected. Therefore, the controller 112 controls the light irradiation unit 13 based on a control signal from the processor 12. In other words, the processor 12 is configured to control the light irradiation unit 13 via the controller 112. Note that the distance to the object is the distance from the image capturing apparatus 1 (in particular, the image capturing unit 11) to the object in this embodiment. - The
processor 12 can display an image (for example, a video such as a moving image) based on image data obtained from the image capturing unit 11 on the display 14. A known display device such as a liquid crystal display or an organic EL display can be used for the display 14. Together with this, the processor 12 causes the output unit 15 to output a signal indicating the distance to the object obtained based on the above-described TOF method to an arithmetic unit (for example, a CPU (Central Processing Unit)) that performs a predetermined process based on the signal. Alternatively, the output unit 15 may be integrated with the display 14 and, for example, the distance to the object may be displayed together with the object on the display 14. - The
processor 12 may be, for example, a device (for example, a PLD (Programmable Logic Device) such as an FPGA (Field Programmable Gate Array)) or an integrated circuit capable of programming respective functions. Alternatively, the processor 12 may be an arithmetic device such as an MPU (Micro Processing Unit) or a DSP (Digital Signal Processor) for implementing the respective functions. Alternatively, the processor 12 may be an ASIC (Application Specific Integrated Circuit) or the like. Alternatively, the processor 12 may include a CPU and a memory, and the respective functions may be implemented in software. That is, the functions of the processor 12 can be implemented by hardware and/or software. -
FIG. 2A shows an example of the image capturing unit 11. The image capturing unit 11 further includes a driving unit 113 and a signal readout unit 114, in addition to the pixel array 111 and the controller 112. The pixel array 111 includes a plurality of pixels PX arrayed in a matrix (so as to form a plurality of rows and a plurality of columns). In this embodiment, the driving unit 113 is a vertical scanning circuit formed by a decoder, a shift register, or the like and drives the plurality of pixels PX for each row. The signal readout unit 114 includes a signal amplifier circuit 1141 and a sampling circuit 1142 arranged for each column, a multiplexer 1143, and a horizontal scanning circuit 1144 formed by a decoder, a shift register, or the like. With such an arrangement, the signal readout unit 114 reads out signals for each column from the plurality of pixels PX driven by the driving unit 113. As will be described later in detail, CDS (Correlated Double Sampling) processing is used in sampling by the sampling circuits 1142. The controller 112 includes a timing generator, and performs the synchronous control of the pixels PX, driving unit 113, and signal readout unit 114. -
FIG. 2B shows an example of the circuit arrangement of the unit pixel PX. The pixel PX includes the photoelectric conversion element PD, various transistors T_GS1, T_GS2, T_TX1, T_TX2, T_OFD, T_RES, T_SF, and T_SEL, and capacitors C1, C2, and C_FD. In this embodiment, a photodiode is used for the photoelectric conversion element PD. However, another known photodetection element may be used. In this embodiment, NMOS transistors are used for the transistors T_GS1, T_GS2, T_TX1, T_TX2, T_OFD, T_RES, T_SF, and T_SEL. However, other known switch elements may be used. The capacitors C1, C2, and C_FD correspond to capacitance components of the diffusion layer regions (drains/sources) of the NMOS transistors. - The photoelectric conversion element PD is arranged such that one terminal is connected to the drain of the transistor T_GS1, the drain of the transistor T_GS2, and the source of the transistor T_OFD, and the other terminal is grounded. The source of the transistor T_GS1 and the drain of the transistor T_TX1 are connected to each other, and the capacitor C1 is formed in their connection node. Similarly, the source of the transistor T_GS2 and the drain of the transistor T_TX2 are connected to each other, and the capacitor C2 is formed in their connection node. The drain of the transistor T_OFD is connected to a power supply voltage VDD.
- The source of the transistor T_TX1, the source of the transistor T_TX2, the source of the transistor T_RES, and the gate of the transistor T_SF are connected to each other, and the capacitor C_FD is formed in their connection node. The drain of the transistor T_RES and the drain of the transistor T_SF are connected to the power supply voltage VDD. The source of the transistor T_SF is connected to the drain of the transistor T_SEL. The source of the transistor T_SEL is connected to a column signal line LC for outputting the signal of the pixel PX.
- Upon receiving a control signal P_GS1 at its gate, the transistor T_GS1 is controlled to a conductive state (ON) or a non-conductive state (OFF). In this embodiment, the transistor T_GS1 is turned on (conductive) when the signal P_GS1 is set at high level (H level) and is turned off (non-conductive) when the signal P_GS1 is set at low level (L level). Similarly, upon receiving a control signal P_GS2 at its gate, the transistor T_GS2 is controlled to the conductive state or the non-conductive state. Similarly, the transistors T_TX1, T_TX2, T_OFD, T_RES, and T_SEL are, respectively, controlled by control signals P_TX1, P_TX2, P_OFD, P_RES, and P_SEL. The control signal P_GS1 and the like are supplied from the driving
unit 113 to the respective pixels PX based on a synchronization signal of the controller 112. - The transistor T_GS1 functions as the first transferring unit that transfers charges generated in the photoelectric conversion element PD to the capacitor C1. The transistor T_GS2 functions as the second transferring unit that transfers charges generated in the photoelectric conversion element PD to the capacitor C2. The capacitor C1 functions as the first signal holding unit that holds a signal (voltage) according to the amount of the charges generated in the photoelectric conversion element PD. Similarly, the capacitor C2 functions as the second signal holding unit. The transistor T_TX1 functions as the third transferring unit that transfers the signal of the capacitor C1 to the capacitor C_FD (capacitance unit). The transistor T_TX2 functions as the fourth transferring unit that transfers the signal of the capacitor C2 to the capacitor C_FD.
- The transistor T_RES is also referred to as a reset transistor and functions as a reset unit that resets the voltage of the capacitor C_FD. The transistor T_SF is also referred to as an amplification transistor and functions as a signal amplifying unit that performs a source follower operation. The transistor T_SEL is also referred to as a selection transistor; it can output a signal according to the voltage of the source of the transistor T_SF to the column signal line LC as a pixel signal, and functions as a selection unit that selects whether to output the pixel signal. The transistor T_OFD is also referred to as an overflow drain transistor and functions as an overflow drain unit that ejects (discharges) the charges generated in the photoelectric conversion element PD. Alternatively, it can also be said that the transistor T_OFD functions as the second reset unit that resets the voltage of the photoelectric conversion element PD.
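The element roles above can be condensed into a toy numerical model (a sketch for intuition only; the class, the method names, and the charge values are illustrative inventions, not part of this disclosure):

```python
# Toy model of the pixel PX signal chain (illustrative only). Charge accumulates
# in PD, is moved to the holding capacitors C1/C2 by T_GS1/T_GS2, and is later
# moved to C_FD by T_TX1/T_TX2 for readout through T_SF/T_SEL.

class PixelPX:
    def __init__(self):
        self.pd = 0.0    # charge in the photoelectric conversion element PD
        self.c1 = 0.0    # first signal holding unit (capacitor C1)
        self.c2 = 0.0    # second signal holding unit (capacitor C2)
        self.fd = 0.0    # floating diffusion (capacitor C_FD)

    def accumulate(self, charge):   # light generates charge in PD
        self.pd += charge

    def pulse_gs1(self):            # T_GS1: PD -> C1 (image signal)
        self.c1, self.pd = self.pd, 0.0

    def pulse_gs2(self):            # T_GS2: PD -> C2 (distance signal)
        self.c2, self.pd = self.pd, 0.0

    def pulse_ofd(self):            # T_OFD: discard (overflow-drain) PD charge
        self.pd = 0.0

    def pulse_res(self):            # T_RES: reset C_FD
        self.fd = 0.0

    def pulse_tx1(self):            # T_TX1: C1 -> C_FD
        self.fd, self.c1 = self.c1, 0.0

    def pulse_tx2(self):            # T_TX2: C2 -> C_FD
        self.fd, self.c2 = self.c2, 0.0

px = PixelPX()
px.accumulate(10.0)   # first accumulation (no irradiation)
px.pulse_gs1()        # hold the image signal in C1
px.accumulate(16.0)   # second accumulation (with irradiation)
px.pulse_gs2()        # hold the distance signal in C2
px.pulse_ofd()        # later charge is discarded by the OFD operation

px.pulse_res(); px.pulse_tx1()
mem_a1 = px.fd        # read out the image signal: 10.0
px.pulse_res(); px.pulse_tx2()
mem_a2 = px.fd        # read out the distance signal: 16.0
```

Both held signals survive until they are transferred to C_FD, which is what allows the row-by-row readout described later to happen long after the exposure.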
- The respective elements of the
image capturing unit 11 may be formed by semiconductor chips, and the image capturing unit 11 may be referred to as an image sensor. Note that in this embodiment, the image capturing unit 11 is a CMOS image sensor. As another embodiment, however, a CCD image sensor may be used. -
FIG. 3 is a timing chart showing an example of a method of driving the pixels PX according to this embodiment. InFIG. 3 , the abscissa indicates a time axis. A “frame” indicated in the ordinate corresponds to image data (frame data) of one still image which is formed based on the group of pixel signals obtained from all the plurality of pixels PX. In this embodiment, assuming that a moving image is shot, this frame data is obtained repeatedly. InFIG. 3 , FR(n) denotes the nth frame data. Frame data FR(n−1), FR(n+1), and FR(n+2) as frame data before and after the frame data FR(n) are illustrated additionally for understanding. - Note that periods for reading out the frame data FR(n−1), FR(n), FR(n+1), and FR(n+2) are, respectively, periods T_FR(n−1), T_FR(n), T_FR(n+1), and T_FR(n+2).
- “Light irradiation” indicated in the ordinate indicates the state (active/inactive) of the
light irradiation unit 13 configured to irradiate an object with light. More specifically, light irradiation at H level indicates that the light irradiation unit 13 is active (light irradiation state), and light irradiation at L level indicates that the light irradiation unit 13 is inactive (light non-irradiation state). - “Accumulated charges” indicated in the ordinate indicate charges generated and accumulated in the photoelectric conversion element PD, and reference symbols in
FIG. 3 denote accumulated charge amounts in given periods. For example, “QA1(n)” denotes an amount of the charges accumulated in the photoelectric conversion element PD in the period T1_FR(n). - A “held signal (C1)” indicated in the ordinate indicates a signal held in the capacitor C1, and its signal level is a voltage value corresponding to a charge amount transferred from the photoelectric conversion element PD by the transistor T_GS1. Similarly, a “held signal (C2)” indicates a signal held in the capacitor C2, and its signal level is a voltage value corresponding to a charge amount transferred from the photoelectric conversion element PD by the transistor T_GS2.
- “Readout operations” indicated in the ordinate indicate signal readout modes for each row from the plurality of pixels PX, and each block illustrated together with a reference symbol indicates that a signal readout for a given row is performed. For example, a block denoted by “RO(1)” indicates that a signal readout is performed on the pixels PX of the first row. Note that in this embodiment, the number of rows in the
pixel array 111 is X (a natural number equal to or larger than 2) (reference symbols from RO(1) to RO(X) are illustrated). - The description below focuses on the readout operation of the frame data FR(n) for descriptive convenience. However, the same also applies to the other frame data FR(n+1) and the like. Note that in order to facilitate understanding of the drawing, regarding the “accumulated charges”, “held signal (C1)”, “held signal (C2)”, and “readout operation”, portions related to the readout operation of the frame data FR(n) are illustrated by solid lines, and portions other than these are illustrated by broken lines.
- The period T_FR(n) for reading out the frame data FR(n) includes the periods T1_FR(n), T2_FR(n), and T3_FR(n). In the period T1_FR(n), charges are accumulated in the photoelectric conversion element PD in the light non-irradiation state (light irradiation: L level). The accumulated charges QA1(n) in the period T1_FR(n) are based on the amount of light coming from the object and entering the pixels PX. Then, at the last timing in the period T1_FR(n), a signal MemA1(n) corresponding to the accumulated charges QA1(n) is held in the capacitor C1. Note that the signal MemA1(n) is held over the periods T2_FR(n), T3_FR(n), and T1_FR(n+1).
- In the period T2_FR(n), charges are accumulated in the photoelectric conversion element PD in a light irradiation state (light irradiation: H level). Accumulated charges QA2(n) in the period T2_FR(n) are based on the amount of light which is reflected from the object irradiated with light by the
light irradiation unit 13 and enters the pixels PX. Note that the pixels PX receive not only this reflected light but also the light that enters them even when light irradiation is not performed. Therefore, it should be noted that the accumulated charges QA2(n) include not only a component according to the amount of this reflected light but also a component other than this. Then, at the last timing in the period T2_FR(n), a signal MemA2(n) corresponding to the accumulated charges QA2(n) is held in the capacitor C2. Note that the signal MemA2(n) is held over the periods T3_FR(n), T1_FR(n+1), and T2_FR(n+1). - Note that the reflected light from the object is detected in the pixels PX with a delay, relative to the light irradiation timing, according to the distance to the object. Therefore, for example, as the distance to the object increases, the accumulated charges QA2(n) and the signal MemA2(n) corresponding to them become smaller. On the other hand, as this distance decreases, the accumulated charges QA2(n) and the signal MemA2(n) become larger.
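This delay dependence can be illustrated with a small overlap calculation (a hedged sketch: the rectangular pulse shape, the gate length, and all numeric values are assumptions made for illustration; the text above only states that the collected signal shrinks as the distance grows):

```python
C = 3.0e8  # speed of light [m/s]

def collected_fraction(distance_m, gate_s):
    """Fraction of a rectangular reflected pulse (assumed to have the same
    length as the accumulation gate) that falls inside the gate, given the
    round-trip delay to an object at distance_m."""
    delay = 2.0 * distance_m / C          # round-trip travel time
    overlap = max(0.0, gate_s - delay)    # part of the pulse inside the gate
    return overlap / gate_s

gate = 100e-9  # 100 ns accumulation gate (assumed value)
near = collected_fraction(1.0, gate)      # nearby object
far = collected_fraction(10.0, gate)      # distant object
# The farther object returns less charge within the gate, so near > far.
```

This is the mechanism by which the magnitude of MemA2(n) carries distance information.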
- As will be described later in detail, in the period T3_FR(n), the readout operations are started, and the operations from the signal readout RO(1) for the first row to the signal readout RO(X) for the Xth row are performed sequentially. These readout operations are performed between the periods T3_FR(n) and T1_FR(n+1) in which both the signals MemA1(n) and MemA2(n) are held in the capacitors C1 and C2, respectively.
- The
processor 12 described with reference to FIG. 1 can obtain image data and distance information based on the signals MemA1(n) and MemA2(n) read out as described above. The processor 12 obtains the signal MemA1(n) as an image signal indicating the shape or the like of the object. The processor 12 also obtains the signal MemA2(n) as a distance signal indicating the distance to the object. Note that as described above, the accumulated charges QA2(n) as the origin of the signal MemA2(n) include not only a component according to the amount of the reflected light from the object irradiated by the light irradiation unit 13 but also a component other than this. Therefore, in this embodiment, the processor 12 subtracts the signal MemA1(n) from the signal MemA2(n) (removes from the signal MemA2(n) a signal component corresponding to a case in which light irradiation is not performed by the light irradiation unit 13) and, based on that result, calculates the distance to the object. - Note that as will be described later in detail, charges are not accumulated in the photoelectric conversion element PD in the period T3_FR(n). More specifically, the charges generated in the photoelectric conversion element PD in the period T3_FR(n) are ejected (discharged) by the transistor T_OFD. That is, the “accumulated charges” in the period T3_FR(n) are discarded by an overflow drain operation (OFD operation) and are indicated by stripe hatching in
FIG. 3 (ditto for the other drawings described in embodiments to be described later). -
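The subtraction described above can be sketched as follows (illustrative values only; the function name and the arbitrary signal units are invented for this sketch):

```python
def distance_component(mem_a1, mem_a2):
    """Remove the ambient-light component from the distance signal.
    mem_a1: signal from the non-irradiation period (ambient light only).
    mem_a2: signal from the irradiation period (ambient + reflected light).
    Because the two accumulation periods are equal in length, mem_a1
    estimates the ambient part of mem_a2, so the difference isolates the
    reflected-light (TOF) component."""
    return mem_a2 - mem_a1

ambient = 10.0                            # arbitrary units
reflected_near, reflected_far = 6.0, 2.0  # reflected light shrinks with distance
sig_near = distance_component(ambient, ambient + reflected_near)
sig_far = distance_component(ambient, ambient + reflected_far)
```

The net component (larger for the nearer object) is what the processor 12 converts into distance information.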
FIG. 4A is a timing chart for explaining the method of driving the pixels PX in FIG. 3 in detail. For descriptive convenience, the pixels PX of the mth and (m+1)th rows are focused on here. However, the same also applies to other rows. - “P_OFD(m)”, “P_GS1(m)”, and “P_GS2(m)” indicated in the ordinate, respectively, denote control signals for controlling the transistors T_OFD, T_GS1, and T_GS2 of the pixels PX of the mth row (see
FIG. 2B ). Similarly, “P_OFD(m+1)”, “P_GS1(m+1)”, and “P_GS2(m+1)” correspond to control signals for the (m+1)th row. A “readout operation of the mth row” indicates that the signal readout RO(m) is being performed for H level, and the signal readout RO(m) is not being performed for L level. The same also applies to a “readout operation of the (m+1)th row”. - Before the period T1_FR(n) starts, that is, at the last timing of a period T3_FR(n−1), a pulse at H level is given to the signal P_OFD(m), resetting the photoelectric conversion element PD. Subsequently (after the signal P_OFD(m) is returned to L level), charges are generated and accumulated in the photoelectric conversion element PD. At the last timing in the period T1_FR(n), a pulse at H level is given to the signal P_GS1(m), holding the signal MemA1(n) corresponding to the accumulated charges QA1(n) in the period T1_FR(n) by the capacitor C1 as the image signal.
- Subsequently (after the signal P_GS1(m) is returned to L level), charges are generated and accumulated again in the photoelectric conversion element PD in the period T2_FR(n). As described above, light irradiation by the
light irradiation unit 13 is performed in the period T2_FR(n). At the last timing in the period T2_FR(n), a pulse at H level is given to the signal P_GS2(m), holding the signal MemA2(n) corresponding to the accumulated charges QA2(n) in the period T2_FR(n) by the capacitor C2 as the distance signal. - Control in the above-described periods T1_FR(n) and T2_FR(n) is shown only for the (m+1)th row in
FIG. 4A. However, the control is performed at once in all the rows. That is, in all the pixels PX of the pixel array 111, the signals MemA1(n) are held in the capacitors C1 almost simultaneously, and the signals MemA2(n) are also held in the capacitors C2 almost simultaneously. This makes it possible to equalize the charge accumulation times for all the pixels PX and to implement a so-called global electronic shutter. - Subsequently, in the period T3_FR(n), the readout operations from the first row to the Xth row, that is, the signal readouts RO(1) to RO(X), are performed sequentially. The signal readouts RO(1) to RO(X) are performed in row order here. However, they may be performed in any order because the charge accumulation times are equalized for all the pixels PX, and the accumulated charges are held in the capacitors C1 and C2. The signal readouts RO(m) and RO(m+1) are illustrated separately from each other at a boundary between the period T3_FR(n) and the period T1_FR(n+1). However, they may be performed at any timing between the periods T3_FR(n) and T1_FR(n+1).
-
FIG. 4B is a timing chart for explaining in detail the method of driving the pixels PX when the signal readouts RO(m) and RO(m+1) are performed. “P_SEL(m)”, “P_RES(m)”, “P_TX1(m)”, and “P_TX2(m)” indicated in the ordinate, respectively, denote control signals for controlling the transistors T_SEL, T_RES, T_TX1, and T_TX2 of the pixels PX of the mth row. Similarly, “P_SEL(m+1)”, “P_RES(m+1)”, “P_TX1(m+1)”, and “P_TX2(m+1)” correspond to control signals for the (m+1)th row. - “Sampling by the signal readout unit” indicates that sampling by the
sampling circuits 1142 is being performed in the signal readout unit 114 for H level, and the sampling is not being performed for L level. As described above (see FIG. 2A), the signal readout unit 114 reads out the signals from the pixels PX for each row. Thus, when the signal readout unit 114 reads out the signals from the pixels PX of a given row, “sampling by the signal readout unit” at H level described above indicates that the signals from the pixels PX of that row are sampled. - Periods for performing the signal readouts RO(m) and RO(m+1) are, respectively, periods T_RO(m) and T_RO(m+1). First, the period T_RO(m) will be described. The control signal P_SEL(m) is maintained at H level during the period T_RO(m). The period T_RO(m) includes periods T0_RO(m), T1_RO(m), T2_RO(m), T3_RO(m), and T4_RO(m).
- As described with reference to
FIG. 2A, CDS processing is performed in the signal readout unit 114. More specifically, after a pulse at H level is given to the control signal P_RES(m) in the period T0_RO(m), and the capacitor C_FD is reset, the voltage of the reset capacitor C_FD is sampled in the period T1_RO(m). “MemA1(m)_N” denotes a signal obtained by this.
- In CDS processing, a difference between the signal MemA1(m) N and the signal MemA1(m)_S thus obtained is obtained, removing an offset component caused by a circuit arrangement, characteristic variations, or the like. In the description of
FIG. 3 above, the signal MemA1(n) is obtained as the image signal for a descriptive convenience. In this embodiment, however, this image signal is obtained in practice based on the above-described CDS processing using the signals MemA1(m)_N and MemA1(m)_S. That is, this image signal is a signal obtained by subtracting MemA1(m)_N from MemA1(m)_S. - Subsequently, after a pulse at H level is given to the control signal P_RES(m) at the last timing in the period T2_RO(m), and the capacitor C_FD is reset, the voltage of the reset capacitor C_FD is sampled in the period T3_RO(m). “MemA2(m)_N” denotes a signal obtained by this.
- After completion of this sampling, a pulse at H level is given to the control signal P_TX2 (m) at the last timing in the period T3_RO(m), and the transistor T_TX2 transfers a signal MemA2(m) from the capacitor C2 to the capacitor C_FD. Subsequently, in the period T4_RO(m), the voltage of the capacitor C_FD to which the signal MemA2(m) is transferred is sampled. “MemA2(m)_S” denotes a signal obtained by this.
- Then, as in the signals MemA1(m)_N and MemA1(m)_S, a difference between the signal MemA2(m)_N and the signal MemA2(m)_S is obtained by CDS processing, removing the offset component. In the description of
FIG. 3 above, the signal MemA2(n) is obtained as the distance signal for descriptive convenience. In this embodiment, however, this distance signal is obtained in practice based on the above-described CDS processing using the signals MemA2(m)_N and MemA2(m)_S. That is, this distance signal is a signal obtained by subtracting MemA2(m)_N from MemA2(m)_S. - Moreover, in the description of
FIG. 3 above, the distance to the object is calculated based on a result of subtracting the signal MemA1(n) from the signal MemA2(n). Hence, this distance is calculated based on a result obtained by further subtracting the above-described image signal (the signal obtained by subtracting MemA1(m)_N from MemA1(m)_S) from the above-described distance signal (the signal obtained by subtracting MemA2(m)_N from MemA2(m)_S). - Note that in this embodiment, the period T1_FR(n) and the period T2_FR(n) are equal in length. Therefore, a signal component corresponding to a case in which the
light irradiation unit 13 does not perform light irradiation is removed appropriately (that is, a signal component based on the TOF method is extracted appropriately) by the above-described subtractions, making it possible to accurately detect information on the distance to the object based on this distance signal. - The signal readout RO(m) is performed as described above.
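The CDS arithmetic of the periods T0_RO(m) to T4_RO(m) can be summarized in a short sketch (the offset and the signal levels are invented values chosen only to show the cancellation):

```python
def cds(sample_signal, sample_reset):
    """Correlated double sampling: subtracting the reset (N) sample from the
    signal (S) sample removes the offset component shared by both samples."""
    return sample_signal - sample_reset

offset = 3.0  # per-column offset (circuit variation); identical in N and S
mem_a1_n, mem_a1_s = 0.0 + offset, 10.0 + offset   # reset / image samples
mem_a2_n, mem_a2_s = 0.0 + offset, 16.0 + offset   # reset / distance samples

image_signal = cds(mem_a1_s, mem_a1_n)          # offset cancels: 10.0
distance_signal = cds(mem_a2_s, mem_a2_n)       # offset cancels: 16.0
tof_component = distance_signal - image_signal  # ambient removed: 6.0
```

Two cancellations happen in sequence: CDS removes the circuit offset from each signal, and the final subtraction removes the ambient-light component from the distance signal.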
- In the next period T_RO(m+1), the control signal P_SEL(m) is maintained at H level, and the same control as in the period T_RO(m) is also performed for the (m+1)th row. The contents of an operation and control in periods T0_RO(m+1) to T4_RO(m+1) for the (m+1)th row, respectively, correspond to those in the periods T0_RO(m) to T4_RO(m) for the mth row. The signal readout RO(m+1) is thus performed.
- According to this embodiment, it is possible to obtain both the image signal indicating the shape or the like of the object and the distance signal indicating the distance to the object almost simultaneously (while reading out frame data of one frame) from the same photoelectric conversion element PD of the same pixel PX. Therefore, it becomes possible to appropriately associate the shape or the like of the object with the distance to the object and to improve the detection accuracy regarding the object. For example, at the time of shooting a moving image, it becomes possible to detect the distance to an object, which may move, almost simultaneously with monitoring that object.
- The
image capturing apparatus 1 is applied to, for example, a vehicle (a four-wheel vehicle or the like) that includes an advanced driver assistance system (ADAS) such as an automatic brake. Therefore, in this embodiment, the method of driving the pixels PX in shooting a moving image is exemplified. However, the contents of this embodiment are also applicable to a case in which a still image is shot, as a matter of course. - The second embodiment will be described with reference to
FIGS. 5 to 8B. As shown in FIG. 5, in this embodiment, an image capturing apparatus 1 further includes a second image capturing unit 11B, in addition to an image capturing unit 11. In order to discriminate them from each other, the image capturing unit 11 described in the first embodiment is referred to as an “image capturing unit 11A”. The image capturing unit 11A and the image capturing unit 11B can be configured in the same manner. In order to discriminate them from each other, the aforementioned pixel array 111 and controller 112 are, respectively, denoted by “111A” and “112A” for the image capturing unit 11A, and “111B” and “112B” for the image capturing unit 11B. - In this embodiment, a
light irradiation unit 13 irradiates an object with light based on control by the controller 112A. As another embodiment, however, the light irradiation unit 13 may be controlled by the controller 112B. - A
processor 12 obtains image data from both the image capturing units 11A and 11B. This enables the processor 12 to perform, in addition to the distance measurement by the TOF method described above, distance measurement by a stereo method using two frame data obtained from both the image capturing units 11A and 11B. That is, the processor 12 can measure the distance to the object based on a parallax between the image capturing units 11A and 11B. - For example, as described above in the first embodiment, upon obtaining frame data FR(n), the
image capturing unit 11A outputs signals MemA1(n) and MemA2(n) to the processor 12. On the other hand, upon obtaining the frame data FR(n), the image capturing unit 11B outputs a signal MemB1(n) corresponding to the signal MemA1(n) of the image capturing unit 11A to the processor 12. Note that in this embodiment, the image capturing unit 11B does not output a signal corresponding to the signal MemA2(n). - The
processor 12 includes a stereo-type distance calculation unit 121, a TOF-type distance calculation unit 122, a determination unit 123, and a selector 124. The distance calculation unit 121 receives the signal MemA1(n) from the image capturing unit 11A, receives the signal MemB1(n) from the image capturing unit 11B, and calculates a distance based on the stereo method using these signals MemA1(n) and MemB1(n). On the other hand, the distance calculation unit 122 receives both the signals MemA1(n) and MemA2(n) from the image capturing unit 11A, and calculates the distance based on the TOF method (see the first embodiment). - As will be described later in detail, upon receiving a calculation result of the
distance calculation unit 121, the determination unit 123 determines whether the calculation result satisfies a predetermined condition and outputs the determination result to the selector 124. The calculation result of the distance calculation unit 121 and the calculation result of the distance calculation unit 122 can be input to the selector 124. Upon receiving the determination result of the determination unit 123, the selector 124 selects one of the calculation result of the distance calculation unit 121 and the calculation result of the distance calculation unit 122 based on this determination result, and outputs it to an output unit 15. Note that image data output to a display 14 can be formed by a group of image signals based on the signal MemA1(n) and/or the signal MemB1(n). - As described above, the function of the
processor 12 can be implemented by hardware and/or software. Therefore, in this embodiment, the above-described elements 121 to 124 are shown as elements independent of each other for a descriptive purpose. However, the individual functions of these elements 121 to 124 may be implemented by a single element. -
FIG. 6 is a flowchart showing an example of a control method at the time of image capturing. First, in step S100 (to be simply referred to as “S100” hereinafter, and ditto for other steps), the distance calculation unit 121 calculates a distance by the stereo method. Then, in S110, the determination unit 123 determines whether the calculation result in S100 satisfies a predetermined condition. If this predetermined condition holds, the process advances to S120, in which the distance calculation unit 122 calculates the distance by the TOF method. If this predetermined condition does not hold, the process advances to S130. In S130, the display 14 displays an image, and the output unit 15 outputs distance information. The distance information output here is given according to the calculation result in S100 (the calculation result based on the stereo method) if the predetermined condition does not hold in S110, and is given according to the calculation result in S120 (the calculation result based on the TOF method) if the predetermined condition holds in S110.
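The flow of S100 to S130 can be sketched as follows (a hedged illustration: the pinhole stereo formula Z = f·B/d stands in for the unspecified stereo computation, and the thresholds, focal length, baseline, and disparity values are all invented for this sketch):

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Depth from parallax in a pinhole stereo model: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def choose_distance(stereo_m, tof_m, luminance, lum_ref, dist_ref):
    """S100-S130: adopt the TOF result when the scene is comparatively dark
    or the stereo result is comparatively far; otherwise keep the stereo
    result (the condition of S110 in this sketch)."""
    if luminance < lum_ref or stereo_m > dist_ref:
        return tof_m     # S120: TOF-based distance information is adopted
    return stereo_m      # stereo-based distance information is adopted

z = stereo_distance(focal_px=1000.0, baseline_m=0.3, disparity_px=20.0)  # 15.0 m
picked = choose_distance(z, tof_m=14.2, luminance=80, lum_ref=50, dist_ref=10.0)
# luminance is sufficient but the target is far, so the TOF result is selected.
```

With a bright, nearby target the same function falls through to the stereo result, matching the branch to S130 without S120.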
-
FIG. 7 is a timing chart showing an example of a method of driving pixels PX according to this embodiment, as in FIG. 3 (see the first embodiment). The contents of an operation and control of the image capturing unit 11A are the same as in FIG. 3, and thus a description thereof will be omitted here.
- Regarding the image capturing unit 11B, focusing on, for example, a period T_FR(n) corresponding to the data frame FR(n), the contents of the operation and control of the image capturing unit 11B in a period T1_FR(n) are the same as those of the image capturing unit 11A. That is, in the period T1_FR(n), charges QB1(n) are accumulated in a photoelectric conversion element PD. Then, at the last timing in the period T1_FR(n), the signal MemB1(n) corresponding to the accumulated charges QB1(n) is held in a capacitor C1. On the other hand, in the periods T2_FR(n) and T3_FR(n), charges are not accumulated in the photoelectric conversion element PD. More specifically, charges generated in the photoelectric conversion element PD in the periods T2_FR(n) and T3_FR(n) are ejected (discharged) by a transistor T_OFD. Therefore, in this embodiment, a capacitor C2 of each pixel PX in the image capturing unit 11B is not used. In FIG. 7, "disuse" is indicated for the "held signal (C2)" of the image capturing unit 11B. -
FIG. 8A shows a timing chart for explaining the method of driving the pixels PX in FIG. 7 in detail, as in FIG. 4A (see the first embodiment). The contents of the operation and control of the image capturing unit 11A are the same as in FIG. 4A, and thus a description thereof will be omitted here.
- Focusing on, for example, the period T_FR(n), the image capturing unit 11B is the same as the image capturing unit 11A except that a pulse at H level is not given to the control signals P_GS2(m) and P_GS2(m+1) at the last timing in the period T2_FR(n). This also applies to other periods such as the period T_FR(n+1). In FIG. 8A, portions different from the case of the image capturing unit 11A (portions to which the above-described pulse at H level is not given) are indicated by broken lines.
- That is, in this embodiment, the same operation as in the image capturing unit 11 described in the first embodiment is performed in the image capturing unit 11A. On the other hand, in the image capturing unit 11B, the charges QB1(n) are accumulated and the signal MemB1(n) is held in order to obtain an image signal, and the operation and control to obtain a distance signal are omitted. -
FIG. 8B shows a timing chart for explaining in detail the method of driving the pixels PX when the signal readouts RO(m) and RO(m+1) are performed, as in FIG. 4B (see the first embodiment). The contents of the operation and control of the image capturing unit 11A are the same as in FIG. 4B, and thus a description thereof will be omitted here.
- Focusing on, for example, a period T_RO(m) corresponding to the signal readout RO(m) of the mth row, the image capturing unit 11B is the same as the image capturing unit 11A except for the following three points. First, a pulse at H level is not given to a control signal P_RES(m) at the last timing in a period T2_RO(m). Second, a pulse at H level is not given to a control signal P_TX2(m) at the last timing in a period T3_RO(m). Third, sampling is not performed in the periods T3_RO(m) and T4_RO(m). This also applies to other periods such as the period T_RO(m+1). In FIG. 8B, portions different from the case of the image capturing unit 11A (portions to which the above-described pulse at H level is not given and portions in which sampling is not performed) are indicated by broken lines.
- That is, in this embodiment, the operation and control to obtain an image signal are performed in the image capturing unit 11B as in the image capturing unit 11A. On the other hand, the operation and control to obtain a distance signal are omitted.
- According to this embodiment, in addition to being able to perform distance measurement based on the TOF method accurately as in the first embodiment, it is also possible to perform distance measurement based on the stereo method. That is, according to this embodiment, the processor 12 includes, as operation modes, the first mode in which distance measurement based on the stereo method is performed and the second mode in which distance measurement based on the TOF method is performed. Note that in this embodiment, a mode of use is exemplified in which the first mode is set in advance, and a shift from the first mode to the second mode is made if the predetermined condition in S110 holds. However, a mode of use may also be possible in which the second mode is set in advance, and a shift from the second mode to the first mode is made if the predetermined condition does not hold. As described above, according to this embodiment, in addition to obtaining the same effect as in the first embodiment, it is possible to change the method of measuring the distance to the object in accordance with the shooting environment or the like, and thus to calculate this distance more accurately.
- The third embodiment will be described with reference to FIGS. 9 to 10B. This embodiment is different from the aforementioned second embodiment mainly in that an operation and control to obtain a distance signal are also performed in the image capturing unit 11B. That is, the image capturing unit 11B outputs a signal MemB2(n), corresponding to the signal MemA2(n) of the image capturing unit 11A, to the processor 12, in addition to the signal MemB1(n) described in the second embodiment. -
FIG. 9 is a timing chart showing an example of a method of driving pixels PX according to this embodiment, as in FIG. 7 (see the second embodiment). The contents of an operation and control of the image capturing unit 11A are the same as in FIG. 7, and thus a description thereof will be omitted here.
- Regarding the image capturing unit 11B, focusing on, for example, a period T_FR(n), the contents of the operation and control of the image capturing unit 11B in the periods T1_FR(n) and T2_FR(n) are the same as in FIG. 7. On the other hand, in a period T3_FR(n), charges QB2(n) are accumulated in a photoelectric conversion element PD. Then, at the last timing in the period T3_FR(n), a signal MemB2(n) corresponding to the accumulated charges QB2(n) is held in a capacitor C2. The period T_FR(n) further includes a period T4_FR(n) as the next period thereof. In the period T4_FR(n), charges are not accumulated in the photoelectric conversion element PD. More specifically, charges generated in the photoelectric conversion element PD in the period T4_FR(n) are ejected (discharged) by a transistor T_OFD. - Note that in this embodiment, the signal readouts RO(1) to RO(X) can be performed between the period T4_FR(n) and a period T1_FR(n+1).
- In this embodiment, distance measurement based on the TOF method is performed by using both the signals MemA2(n) and MemB2(n). For example, the signal MemA2(n) becomes smaller and the signal MemB2(n) becomes larger as the distance to the object increases; on the other hand, the signal MemA2(n) becomes larger and the signal MemB2(n) becomes smaller as this distance decreases. Therefore, according to this embodiment, it is possible to improve the accuracy of distance measurement based on the TOF method by using both the signals MemA2(n) and MemB2(n).
- In this embodiment, the period T2_FR(n) and the period T3_FR(n) are equal in length, and shorter than the period T1_FR(n). It is possible to increase the frame rate (the number of frames of data that can be obtained per unit time) by shortening the periods T2_FR(n) and T3_FR(n). Together with this, the amount of irradiation light of a light irradiation unit 13 may be increased. This makes it possible to further improve the accuracy of distance measurement based on the TOF method.
- Therefore, in addition to obtaining the same effect as in the second embodiment, this embodiment is further advantageous in calculating the distance to the object accurately and in improving the frame rate.
- Note that, in the same procedure as in the first embodiment, the signal components corresponding to the case in which the light irradiation unit 13 does not perform light irradiation may be removed from the signals MemA2(n) and MemB2(n) by using the signals MemA1(n) and MemB1(n). In this case, this calculation can be performed by using a coefficient corresponding to the ratio of the periods T2_FR(n) and T3_FR(n) to the period T1_FR(n). -
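This removal can be sketched numerically as follows. This is an illustrative Python sketch under stated assumptions, not the specification's implementation; the function name, variable names, and the example counts are hypothetical. The idea is that the ambient component accumulated over T1 is scaled by the ratio of the TOF accumulation window to T1 and then subtracted.

```python
def remove_ambient(mem2, mem1, t_tof, t1):
    """Subtract the no-irradiation (ambient) component from a TOF signal.

    mem2  : signal accumulated in a TOF window (e.g. MemA2(n) or MemB2(n))
    mem1  : ambient-only signal accumulated over T1 (MemA1(n) or MemB1(n))
    t_tof : length of the TOF window (T2_FR(n) or T3_FR(n))
    t1    : length of the period T1_FR(n)
    """
    # Ambient light accumulates in proportion to exposure time, so the
    # T1 measurement is scaled by t_tof / t1 before subtraction.
    return mem2 - mem1 * (t_tof / t1)

# Ambient contributes 100 counts over T1 = 1.0 ms; a 0.25 ms TOF window
# therefore contains ~25 counts of ambient on top of the reflected signal.
print(remove_ambient(85.0, 100.0, 0.25, 1.0))  # 60.0
```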
FIG. 10A shows a timing chart for explaining the method of driving the pixels PX in FIG. 9 in detail, as in FIG. 8A (see the second embodiment). The contents of the operation and control of the image capturing unit 11A are the same as in FIG. 8A, and thus a description thereof will be omitted here.
- Regarding the image capturing unit 11B, focusing on, for example, the period T_FR(n), the same operation and control as those of the image capturing unit 11A are performed in the period T1_FR(n). On the other hand, at the last timing in the period T2_FR(n), a pulse at H level is given to the control signals P_OFD(m) and P_OFD(m+1). Consequently, the charges generated in the photoelectric conversion element PD in the period T2_FR(n) are ejected (discharged) by the transistor T_OFD. Subsequently (after the control signal P_OFD(m) and the like are returned to L level), in the period T3_FR(n), charges are generated and accumulated again in the photoelectric conversion element PD. At the last timing in the period T3_FR(n), a pulse at H level is given to the control signals P_GS2(m) and P_GS2(m+1). Consequently, the signal MemB2(n) corresponding to the charges QB2(n) accumulated in the period T3_FR(n) is held in the capacitor C2. This also applies to the other periods such as the period T_FR(n+1). In FIG. 10A, portions different from FIG. 8A (see the second embodiment) (portions to which the above-described pulse at H level is given) are indicated by broken lines. -
FIG. 10B shows a timing chart for explaining in detail the method of driving the pixels PX when the signal readouts RO(m) and RO(m+1) are performed, as in FIG. 8B (see the second embodiment). In this embodiment, the contents of the operations and control of the image capturing units 11A and 11B are the same as those of the image capturing unit 11A described with reference to FIG. 8B. That is, the operation and control to obtain image signals and distance signals are performed in both the image capturing units 11A and 11B.
- As a modification, it is also possible to set the periods T1_FR(n), T2_FR(n), and T3_FR(n) to lengths equal to each other (for example, to a common period Ta). That is:
-
T1_FR(n)=T2_FR(n)=T3_FR(n)≡Ta - According to this method, it is possible to perform distance measurement based on the TOF method with a comparatively simple arrangement. This will be described with reference to
FIG. 13 . -
FIG. 13 is a timing chart for explaining an example of the method of distance measurement based on the TOF method described above. The period of light irradiation by the light irradiation unit 13 matches the period T2_FR(n). As described above, however, the reflected light from the object is detected with a delay by a time according to the distance to the object. As shown in FIG. 13, this delay time is denoted by a time t0. - At this time, the following equations hold. That is:
-
e0=e1+e2, -
e1=e0×(1−t0/Ta), and -
e2=e0×(t0/Ta) - wherein
- e0: a total of signal components corresponding to the above-described reflected light,
- e1: a component corresponding to the above-described reflected light of the signal MemA2(n), and
- e2: a component corresponding to the above-described reflected light of the signal MemB2(n). That is, e1 is the component corresponding to the reflected light detected during the period T2_FR(n), and e2 is the component corresponding to the reflected light detected during the period T3_FR(n). Since:
-
T1_FR(n)=T2_FR(n)=T3_FR(n), - it is possible to remove a component other than the components corresponding to the above-described reflected light from the signals MemA2(n) and MemB2(n). For example, e1 is calculated appropriately by obtaining a difference between the signals MemA1(n) and MemA2(n). Similarly, e2 is calculated appropriately by obtaining a difference between the signals MemB1(n) and MemB2(n).
- From equations described above, the delay time t0 can be represented by:
-
t0=Ta/(1+e1/e2) - That is, the delay time t0 can be calculated based on Ta, e1, and e2. Therefore, according to this modification, it is possible to perform distance measurement based on the TOF method with the comparatively simple arrangement and to calculate the distance with the object appropriately even if, for example, the light reflectance of the object is not 1.
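The relation above can be evaluated with a short numerical sketch. This is illustrative Python only; the function names are assumptions, and the conversion from the delay t0 to a distance via d = c·t0/2 is the standard time-of-flight relation rather than something spelled out in this description.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_delay(e1, e2, ta):
    """t0 = Ta / (1 + e1/e2), from the reflected-light components
    e1 (detected during T2) and e2 (detected during T3)."""
    return ta / (1.0 + e1 / e2)

def tof_range(e1, e2, ta):
    """One-way distance: the light travels 2*d during the delay t0."""
    return SPEED_OF_LIGHT * tof_delay(e1, e2, ta) / 2.0

# Equal components e1 == e2 mean the reflected pulse straddles the
# T2/T3 boundary exactly, so t0 = Ta/2.
ta = 100e-9  # Ta = 100 ns (illustrative)
print(tof_delay(1.0, 1.0, ta))  # Ta/2 = 5e-08 s
```

Note that the ratio e1/e2 cancels the total e0, which is why the light reflectance of the object (and hence the absolute signal level) drops out, as stated in the text.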
- The fourth embodiment will be described with reference to
FIGS. 11 to 12B. This embodiment is different from the aforementioned third embodiment mainly in that the operation and control to obtain a distance signal are performed (repeated) a plurality of times while the frame data of one frame are obtained. More specifically, focusing on a period T_FR(n), in this embodiment the series of operations in the periods T2_FR(n) and T3_FR(n) described with reference to FIGS. 9 to 10B is repeated K times (K>2). -
FIG. 11 is a timing chart showing an example of a method of driving pixels PX according to this embodiment, as in FIG. 9 (see the third embodiment). Note that, regarding the series of repeated operations described above, the first periods T2_FR(n) and T3_FR(n) are denoted by "T2(1)_FR(n)" and "T3(1)_FR(n)", respectively, in FIG. 11. The same applies to the second and subsequent periods (for example, the Kth periods are denoted by "T2(K)_FR(n)" and "T3(K)_FR(n)"). - According to this embodiment, it is possible to average out errors in the distance information and to further improve the calculation accuracy of the distance to the object, as compared to the third embodiment (that is, a case in which the series of operations described above is performed only once). It also becomes possible to further improve the calculation accuracy of the distance to the object by further shortening the individual periods T2(1)_FR(n) to T2(K)_FR(n) and T3(1)_FR(n) to T3(K)_FR(n).
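The averaging effect of the K repetitions can be illustrated as follows. This is an illustrative Python sketch under the assumption that the repeated e1 and e2 components are simply summed before applying t0 = Ta/(1 + e1/e2); in the actual pixel, the repeated charge packets accumulate in the capacitors, which this summation merely mimics. All names are hypothetical.

```python
def delay_from_repeats(e1_list, e2_list, ta):
    """Estimate t0 from K repeated measurements by summing the e1 and
    e2 components first; summation averages out per-repeat noise."""
    e1 = sum(e1_list)
    e2 = sum(e2_list)
    return ta / (1.0 + e1 / e2)

# Noisy per-repeat components scattered around the true ratio e1:e2 = 1:3.
ta = 100e-9
e1_samples = [0.9, 1.1, 1.0, 1.0]
e2_samples = [3.1, 2.9, 3.0, 3.0]
print(delay_from_repeats(e1_samples, e2_samples, ta))  # close to 7.5e-08
```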
-
FIGS. 12A and 12B show timing charts for explaining the method of driving the pixels PX in detail, as in FIGS. 10A and 10B (see the third embodiment). These timing charts are the same as those in the third embodiment except that the series of operations in the periods T2_FR(n) and T3_FR(n) of FIG. 10A is repeated K times (K>2). Note that, also in this embodiment, the signal readouts RO(1) to RO(X) can be performed between the periods T4_FR(n) and T1_FR(n+1).
FIG. 14A shows an example of an image capturing system for a vehicle-mounted camera. An image capturing system 1000 includes the image capturing apparatus of each embodiment described above as an image capturing apparatus 1010. The image capturing system 1000 includes an image processing unit 1030 that performs image processing on a plurality of image data obtained by the image capturing apparatus 1010, and a parallax obtaining unit 1040 that obtains a parallax (a phase difference of parallax images) from the plurality of image data obtained by the image capturing system 1000.
- If the image capturing system 1000 is in the form of a stereo camera that includes a plurality of image capturing apparatuses 1010, this parallax can be obtained by using the signals output from the respective image capturing apparatuses 1010.
- The image capturing system 1000 includes a distance obtaining unit 1050 that obtains a distance to a target based on the obtained parallax, and a collision determination unit 1060 that determines whether there is a collision possibility based on the obtained distance. Note that the parallax obtaining unit 1040 and the distance obtaining unit 1050 are examples of a distance information obtaining means for obtaining distance information to the target. That is, the distance information is information about a parallax, a defocus amount, the distance to the target, and the like. The collision determination unit 1060 may determine the collision possibility using any of these pieces of distance information. The distance information obtaining means may be implemented by hardware designed for a special purpose, by a software module, or by a combination of these. Alternatively, the distance information obtaining means may be implemented by an FPGA (Field Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or the like, or by a combination of an FPGA and an ASIC.
- The image capturing system 1000 is connected to a vehicle information obtaining apparatus 1310, and can obtain vehicle information such as a vehicle speed, a yaw rate, and a steering angle. The image capturing system 1000 is also connected to a control ECU 1410 serving as a control apparatus that outputs, based on a determination result in the collision determination unit 1060, a control signal for generating a braking force in the vehicle. The image capturing system 1000 is also connected to a warning apparatus 1420 that issues a warning to the driver based on a determination result in the collision determination unit 1060. For example, if the determination result of the collision determination unit 1060 indicates a collision possibility, the control ECU 1410 performs vehicle control to avoid the collision or reduce damage by braking, releasing the accelerator, suppressing the engine output, and the like. The warning apparatus 1420 warns the user by, for example, sounding an alarm, displaying warning information on a screen such as that of a car navigation system, or applying vibrations to the seatbelt or steering wheel.
- In this embodiment, the image capturing system 1000 captures an image of the surroundings of the vehicle, for example, the front side or the back side.
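The pipeline from parallax to collision determination described above can be given a rough sketch. This is illustrative Python only: the pinhole-camera relation Z = f·B/parallax and the time-to-collision threshold are textbook assumptions, not values or formulas from this description, and all names are hypothetical.

```python
def stereo_depth(parallax_px, focal_px, baseline_m):
    """Pinhole stereo: depth Z = f * B / parallax, with the parallax
    and focal length in pixels and the baseline B in meters."""
    return focal_px * baseline_m / parallax_px

def collision_possible(distance_m, closing_speed_mps, ttc_threshold_s=2.0):
    """Flag a possible collision when the time to collision (TTC)
    falls below a threshold (the 2 s value is illustrative)."""
    return distance_m / closing_speed_mps < ttc_threshold_s

# A 40-pixel parallax with an 800-pixel focal length and 0.5 m baseline.
z = stereo_depth(parallax_px=40.0, focal_px=800.0, baseline_m=0.5)
print(z)                            # 10.0 (meters)
print(collision_possible(z, 10.0))  # True: TTC = 1 s < 2 s
```

In the system of FIG. 14A, the first function corresponds roughly to the parallax obtaining unit 1040 plus the distance obtaining unit 1050, and the second to the collision determination unit 1060.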
FIG. 14B shows the image capturing system in the case where the image capturing system 1000 captures the image of the front side of the vehicle. The control that avoids a collision with another vehicle has been described above. However, the present invention is also applicable to control for automatic driving that follows another vehicle, control for automatic driving that keeps the vehicle from drifting out of a lane, and the like. Further, the image capturing system is applicable not only to a vehicle such as a four-wheeled vehicle but also to, for example, a moving object (moving apparatus) such as a ship, an airplane, or an industrial robot. Furthermore, the contents above are applicable not only to moving objects but also, widely, to devices using object recognition, such as an ITS (Intelligent Transportation System).
- Several preferred embodiments have been described above. However, the present invention is not limited to these examples and may partially be modified without departing from the scope of the invention. For example, a known element may be added to a given embodiment, or a part of a given embodiment may be applied to another embodiment or deleted. Individual terms described in this specification are merely used for the purpose of explaining the present invention; the present invention is not limited to the strict meanings of these terms and can also incorporate their equivalents.
- While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
- This application claims the benefit of Japanese Patent Application No. 2017-002115, filed on Jan. 10, 2017, which is hereby incorporated by reference herein in its entirety.
Claims (10)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017002115A JP6865589B2 (en) | 2017-01-10 | 2017-01-10 | Imaging device |
JP2017-002115 | 2017-01-10 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180199000A1 true US20180199000A1 (en) | 2018-07-12 |
US10382714B2 US10382714B2 (en) | 2019-08-13 |
Family
ID=62783705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/846,337 Active 2038-01-22 US10382714B2 (en) | 2017-01-10 | 2017-12-19 | Image capturing apparatus and moving object |
Country Status (2)
Country | Link |
---|---|
US (1) | US10382714B2 (en) |
JP (1) | JP6865589B2 (en) |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4045645B2 (en) * | 1998-05-19 | 2008-02-13 | 株式会社ニコン | Interpolation processing apparatus and recording medium recording interpolation processing program |
JP4235729B2 (en) * | 2003-02-03 | 2009-03-11 | 国立大学法人静岡大学 | Distance image sensor |
JP4415978B2 (en) * | 2006-08-02 | 2010-02-17 | ソニー株式会社 | Image signal processing apparatus and image signal processing method |
JP4939901B2 (en) | 2006-11-02 | 2012-05-30 | 富士フイルム株式会社 | Distance image generation method and apparatus |
JP5635938B2 (en) | 2011-03-31 | 2014-12-03 | 本田技研工業株式会社 | Solid-state imaging device |
JP5657456B2 (en) | 2011-03-31 | 2015-01-21 | 本田技研工業株式会社 | Solid-state imaging device |
JP5923755B2 (en) * | 2011-10-13 | 2016-05-25 | パナソニックIpマネジメント株式会社 | Depth estimation imaging device and imaging device |
JP5956755B2 (en) | 2012-01-06 | 2016-07-27 | キヤノン株式会社 | Solid-state imaging device and imaging system |
JP6374690B2 (en) * | 2014-04-01 | 2018-08-15 | キヤノン株式会社 | Imaging apparatus, control method therefor, program, and storage medium |
JP6339851B2 (en) | 2014-05-01 | 2018-06-06 | キヤノン株式会社 | Solid-state imaging device and driving method thereof |
JP6385192B2 (en) | 2014-08-14 | 2018-09-05 | キヤノン株式会社 | Imaging apparatus, imaging system, and driving method of imaging system |
US10070088B2 (en) * | 2015-01-05 | 2018-09-04 | Canon Kabushiki Kaisha | Image sensor and image capturing apparatus for simultaneously performing focus detection and image generation |
JP6666620B2 (en) * | 2015-02-20 | 2020-03-18 | 国立大学法人静岡大学 | Range image measurement device |
JP6645682B2 (en) * | 2015-03-17 | 2020-02-14 | キヤノン株式会社 | Range acquisition device, range image signal correction device, imaging device, range image quantization device, and method |
US10230874B2 (en) * | 2015-04-21 | 2019-03-12 | Sony Corporation | Imaging device and imaging control method |
JP6584131B2 (en) | 2015-05-08 | 2019-10-02 | キヤノン株式会社 | Imaging apparatus, imaging system, and signal processing method |
JP6628497B2 (en) | 2015-05-19 | 2020-01-08 | キヤノン株式会社 | Imaging device, imaging system, and image processing method |
JP7009091B2 (en) * | 2017-06-20 | 2022-01-25 | キヤノン株式会社 | Distance information generator, image pickup device, distance information generation method, and program |
Also Published As
Publication number | Publication date |
---|---|
JP2018113552A (en) | 2018-07-19 |
US10382714B2 (en) | 2019-08-13 |
JP6865589B2 (en) | 2021-04-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: CANON KABUSHIKI KAISHA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MICHIMATA, KENJI;ONISHI, TOMOYA;SIGNING DATES FROM 20180126 TO 20180326;REEL/FRAME:045728/0527 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |