WO2021059748A1 - Information processing device, correction method, and program - Google Patents
- Publication number: WO2021059748A1 (PCT/JP2020/029674)
- Authority
- WO
- WIPO (PCT)
Classifications
- G01S17/48—Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
- G01S17/89—Lidar systems specially adapted for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
- G01S7/4861—Circuits for detection, sampling, integration or read-out
- G01S7/4863—Detector arrays, e.g. charge-transfer gates
- G01S7/4865—Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
- G01S7/4868—Controlling received signal intensity or exposure of sensor
- G01S7/497—Means for monitoring or calibrating
- G01C3/06—Use of electric means to obtain final indication
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
Definitions
- This disclosure relates to information processing devices, correction methods and programs.
- in a ToF (Time of Flight) type distance measuring device, that is, a distance measuring device that performs ToF type distance measurement, the intensity of the light received by the light receiving unit is high in some situations, and the amount of light received by the light receiving unit may be saturated.
- an object of the present disclosure is to provide an information processing device, a correction method, and a program capable of suppressing a decrease in accuracy of an image acquired by a ToF type distance measuring device.
- an information processing device includes a control unit.
- the control unit detects a saturated region of the received image information, which is generated based on the pixel signal output by a light receiving sensor that receives the reflected light produced when the emitted light from the light source is reflected by the object to be measured.
- the pixel signal is used to calculate the distance to the object to be measured.
- the saturated region is a region of the received image information generated based on the saturated pixel signal.
- the control unit corrects the received image information in the saturation region based on the pixel signal.
- the present disclosure is suitable for use in a technique for performing distance measurement using light.
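The detection and correction of the saturated region described above can be illustrated with a minimal sketch. The saturation threshold and the neighbor-averaging correction below are illustrative assumptions only; the patent's actual correction method is defined by its claims and is not reproduced here.

```python
# Minimal sketch of detecting and correcting a saturated region in
# received image information. The 12-bit threshold and the averaging
# correction are illustrative assumptions, not the patent's method.

SATURATION_LEVEL = 4095  # assumed full-scale value of a 12-bit pixel signal

def detect_saturated(pixels):
    """Return the set of (row, col) positions whose pixel signal is saturated."""
    return {(r, c)
            for r, row in enumerate(pixels)
            for c, v in enumerate(row)
            if v >= SATURATION_LEVEL}

def correct_saturated(pixels):
    """Replace each saturated pixel with the mean of its non-saturated neighbors."""
    saturated = detect_saturated(pixels)
    corrected = [row[:] for row in pixels]
    for r, c in saturated:
        neighbors = [pixels[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < len(pixels)
                     and 0 <= c + dc < len(pixels[0])
                     and (r + dr, c + dc) not in saturated]
        if neighbors:  # leave the pixel untouched if every neighbor saturated
            corrected[r][c] = sum(neighbors) / len(neighbors)
    return corrected
```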
- an indirect ToF (Time of Flight) method will be described as one of the distance measuring methods applied to the embodiment in order to facilitate understanding.
- in the indirect ToF method, light source light modulated by PWM (Pulse Width Modulation) is emitted, the reflected light is received by the light receiving element, and the distance is obtained based on the phase of the received reflected light.
- FIG. 1 is a block diagram showing an example of the configuration of an electronic device using the distance measuring device applied to each embodiment.
- the electronic device 1 includes a distance measuring device 10 and an application unit 20.
- the application unit 20 is realized by, for example, a program operating on a CPU (Central Processing Unit); it requests the distance measuring device 10 to execute distance measurement, and receives distance information or the like, which is the result of the distance measurement, from the distance measuring device 10.
- the distance measuring device 10 includes a light source unit 11, a light receiving unit 12, and a distance measuring processing unit 13.
- the light source unit 11 includes, for example, a light emitting element that emits light having a wavelength in the infrared region, and a drive circuit that drives the light emitting element to emit light.
- as the light emitting element included in the light source unit 11, for example, an LED (Light Emitting Diode) or a VCSEL (Vertical Cavity Surface Emitting Laser) may be applied.
- hereinafter, the fact that the light emitting element of the light source unit 11 emits light is described as "the light source unit 11 emits light".
- the light receiving unit 12 includes, for example, a light receiving element that detects light having a wavelength in the infrared region, and a signal processing circuit that outputs a pixel signal corresponding to the light detected by the light receiving element.
- a photodiode may be applied as the light receiving element included in the light receiving unit 12.
- hereinafter, the fact that the light receiving element included in the light receiving unit 12 receives light is described as "the light receiving unit 12 receives light".
- the distance measurement processing unit 13 executes the distance measurement processing in the distance measurement device 10 in response to a distance measurement instruction from the application unit 20, for example.
- the distance measuring processing unit 13 generates a light source control signal for driving the light source unit 11 and supplies the light source control signal to the light source unit 11.
- the distance measuring processing unit 13 controls the light reception by the light receiving unit 12 in synchronization with the light source control signal supplied to the light source unit 11.
- the distance measuring processing unit 13 generates an exposure control signal for controlling the exposure period in the light receiving unit 12 in synchronization with the light source control signal, and supplies it to the light receiving unit 12.
- the light receiving unit 12 outputs a valid pixel signal within the exposure period indicated by the exposure control signal.
- the distance measuring processing unit 13 calculates the distance information based on the pixel signal output from the light receiving unit 12 in response to the light reception. Further, the distance measuring processing unit 13 may generate predetermined image information based on the pixel signal. The distance measuring processing unit 13 passes the distance information and the image information calculated and generated based on the pixel signal to the application unit 20.
- for example, in accordance with an instruction from the application unit 20 to execute distance measurement, the distance measurement processing unit 13 generates a light source control signal for driving the light source unit 11 and supplies it to the light source unit 11.
- the distance measuring processing unit 13 generates a light source control signal modulated into a rectangular wave having a predetermined duty by PWM, and supplies the light source control signal to the light source unit 11.
- the distance measuring processing unit 13 controls the light received by the light receiving unit 12 based on the exposure control signal synchronized with the light source control signal.
- the light source unit 11 irradiates light modulated according to the light source control signal generated by the distance measuring processing unit 13.
- the light source unit 11 blinks and emits light according to a predetermined duty according to the light source control signal.
- the light emitted by the light source unit 11 is referred to as the emitted light 30.
- the emitted light 30 is reflected by, for example, the object to be measured 31, and is received by the light receiving unit 12 as the reflected light 32.
- the light receiving unit 12 supplies a pixel signal corresponding to the light received by the reflected light 32 to the distance measuring processing unit 13.
- the light receiving unit 12 receives ambient light in addition to the reflected light 32, and the pixel signal includes the component of the ambient light together with the component of the reflected light 32.
- the distance measuring processing unit 13 executes light reception by the light receiving unit 12 a plurality of times in different phases for each light receiving element.
- the distance measuring processing unit 13 calculates the distance D to the object to be measured based on the difference between the pixel signals due to the light reception in different phases.
- the distance measuring processing unit 13 calculates first image information, obtained by extracting the component of the reflected light 32 based on the difference between the pixel signals, and second image information, which includes the component of the reflected light 32 and the component of the ambient light.
- the first image information is referred to as reflected light image information, and the value of each pixel of the reflected light image information is referred to as a pixel value Confidence (or Confidence value).
- the second image information is referred to as IR image information, and the value of each pixel of the IR image information is referred to as a pixel value IR (or IR value). Further, the reflected light image information and the IR image information are collectively referred to as received image information.
- FIG. 2 is a diagram for explaining the principle of the indirect ToF method.
- light modulated by a sine wave is used as the emission light 30 emitted by the light source unit 11.
- the reflected light 32 is a sine wave having a phase difference phase corresponding to the distance D with respect to the emitted light 30.
- the distance measuring processing unit 13 samples the pixel signal that has received the reflected light 32 a plurality of times for each phase, and acquires a light amount value (pixel signal value) indicating the amount of light for each sampling.
- the distance information is calculated based on the difference between the light quantity values of the sets whose phases are different by 180 ° among the phases 0 °, 90 °, 180 ° and 270 °.
- FIG. 3 is a diagram showing an example in which the emitted light 30 from the light source unit 11 is a rectangular wave modulated by PWM.
- the emitted light 30 by the light source unit 11 and the reflected light 32 reaching the light receiving unit 12 are shown.
- the light source unit 11 periodically blinks with a predetermined duty to emit the emitted light 30.
- the period during which the exposure control signal is in the high state is defined as the exposure period during which the light receiving element of the light receiving unit 12 outputs a valid pixel signal.
- the emitted light 30 is emitted from the light source unit 11 at time point t0, and at time point t1, after a delay corresponding to the distance D to the object to be measured, the reflected light 32 reflected by the object to be measured reaches the light receiving unit 12.
- the light receiving unit 12 starts the exposure period having a phase of 0° in synchronization with the time point t0 of the emission timing of the emitted light 30 in the light source unit 11, according to the exposure control signal from the distance measuring processing unit 13.
- the light receiving unit 12 starts the exposure period of phase 90 °, phase 180 ° and phase 270 ° according to the exposure control signal from the distance measuring processing unit 13.
- the exposure period in each phase follows the duty of the emitted light 30.
- the exposure periods of the respective phases are shown in parallel in time, but in reality, the light receiving unit 12 executes the exposure periods of the respective phases sequentially.
- in this way, the light amount values C0, C90, C180, and C270 are acquired for the respective phases.
- when the arrival timings of the reflected light 32 are time points t1, t2, t3, and so on, the light amount value C0 at the phase 0° is acquired as an integrated value of the amount of received light from the time point t0 to the end of the exposure period of the phase 0° that includes the time point t1.
- similarly, the light amount value C180 is acquired as an integrated value of the amount of received light from the start of the exposure period at the phase 180° to the time point t2, which is the fall of the reflected light 32 included in that exposure period.
- for the phase 90° and the phase 270°, which differs from the phase 90° by 180°, the period during which the reflected light 32 arrives within each exposure period is determined in the same manner as in the case of the phases 0° and 180° described above, and the integrated values of the received light amounts are acquired as the light amount values C90 and C270.
- phase difference phase is calculated by the following equation (3).
- the phase difference phase is defined in the range of (0 ⁇ phase ⁇ 2 ⁇ ).
- phase = tan⁻¹(Q / I) ... (3)
- the component of the reflected light 32 (pixel value Confidence of the reflected light image information) can be extracted from the component of the light received by the light receiving unit 12.
- one pixel of the reflected light image information is calculated from the light quantity values C 0 , C 90 , C 180 and C 270 in the four phases of the light receiving unit 12.
- the light intensity values C 0 , C 90 , C 180 and C 270 for each phase are acquired from the corresponding light receiving elements of the light receiving unit 12.
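As a concrete illustration of the four-phase calculation, the sketch below follows the standard indirect ToF formulation: I and Q are the differences between the pairs whose phases differ by 180° (as described above), and the phase difference follows equation (3). The distance and Confidence formulas are assumptions based on the common formulation, since equations (4) and (6) are not reproduced in this excerpt.

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def indirect_tof_4phase(c0, c90, c180, c270, mod_freq_hz):
    """Compute phase difference, distance, and Confidence from the four
    light amount values. I and Q are the differences of the 180-degree-apart
    pairs; the depth and Confidence formulas are the standard indirect ToF
    forms, assumed to correspond to the patent's equations (4) and (6)."""
    i = c0 - c180
    q = c90 - c270
    # Equation (3): phase = arctan(Q / I), wrapped into 0 <= phase < 2*pi
    phase = math.atan2(q, i) % (2 * math.pi)
    # Light travels 2*D during the delay, hence the factor 4*pi*f
    depth = (C_LIGHT * phase) / (4 * math.pi * mod_freq_hz)
    # Confidence: magnitude of the reflected-light component
    confidence = math.hypot(i, q)
    return phase, depth, confidence
```

For example, with a 20 MHz modulation frequency, a phase difference of π corresponds to half the unambiguous range, about 3.75 m.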
- FIG. 4 is a diagram showing an example of the amount of light received by the light receiving unit 12.
- the pixel signal output by the light receiving unit 12 includes a direct current component such as a so-called dark current (dark noise).
- the amount of light received by the light receiving unit 12 is the sum of the amount of directly reflected light, the amount of ambient light, and dark noise.
- FIG. 5 is a diagram for explaining a first method of acquiring each light intensity value and calculating each information applied to each embodiment.
- the light receiving unit 12 sequentially acquires the respective light intensity values C 0 , C 90 , C 180, and C 270 for each phase.
- the light receiving unit 12 performs exposure at a phase of 0° during the period from time point t10 to t11, and, after a predetermined time (for example, a processing switching time) from time point t11, performs exposure at a phase of 90° during the period from time point t12 to t13.
- similarly, exposure at a phase of 180° is performed during the period from time point t14 to t15, after a predetermined time from time point t13, and exposure at a phase of 270° is performed during the period from time point t16 to t17, after a predetermined time from time point t15.
- the sequence of exposure according to each phase is defined as 1 microframe ( ⁇ Frame).
- the period from time point t10 to t18 is the period of one microframe.
- the period of one microframe is shorter than the period of one frame of imaging (for example, 1/30 sec), and the processing of one microframe can be executed a plurality of times within the period of one frame.
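The statement that multiple microframes fit within one imaging frame can be checked with a line of arithmetic. Only the 1/30 sec frame period comes from the text; the microframe period below is an assumed illustrative value.

```python
# The 1/30 sec frame period is from the text; the microframe period
# is an assumed illustrative value, not a figure from the patent.
FRAME_PERIOD = 1 / 30            # one imaging frame, in seconds
MICROFRAME_PERIOD = 1 / 480      # assumed duration of one microframe

# Number of microframe sequences that can run within one frame.
microframes_per_frame = round(FRAME_PERIOD / MICROFRAME_PERIOD)
print(microframes_per_frame)  # 16
```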
- the distance measuring processing unit 13 stores in a memory the respective light amount values C0, C90, C180, and C270 acquired sequentially in each phase within the period of one microframe.
- the distance measuring processing unit 13 calculates the distance information Depth and the pixel value Confidence of the reflected light image information based on the respective light amount values C0, C90, C180, and C270 stored in the memory.
- FIG. 6 is a diagram for explaining a second method of acquiring each light amount value and calculating each information applied to each embodiment.
- in the second method, the light receiving unit 12 includes two reading circuits (referred to as tap A and tap B) for one light receiving element, and the readings by tap A and tap B are executed sequentially (alternately).
- the reading method by tap A and tap B is also referred to as a two-tap method.
- the emitted light 30 by the light source unit 11 and the reflected light 32 reaching the light receiving unit 12 are shown.
- the light source unit 11 emits emission light 30 that blinks at a predetermined cycle.
- for example, the light source unit 11 emits the emitted light 30 so as to emit light during the period T within one cycle.
- FIG. 6 further shows an exposure control signal (DIMIX_A) at the tap A having a phase of 0 ° of the light receiving unit 12 and an exposure control signal (DIMIX_B) at the tap B.
- the period in which the exposure control signals (DIMIX_A, DIMIX_B) are in the high state is defined as the exposure period in which the light receiving unit 12 outputs a valid pixel signal.
- DIMIX_A and DIMIX_B are exposure control signals having an exposure period according to the duty of the emission light 30. Further, DIMIX_A and DIMIX_B are signals whose phases are 180 ° out of phase with each other.
- the light receiving unit 12 starts the exposure period according to the exposure control signal (DIMIX_A) from the distance measuring processing unit 13, in synchronization with the time point t10 of the emission timing of the emitted light 30 in the light source unit 11.
- the light receiving unit 12 starts the exposure period according to the exposure control signal (DIMIX_B) from the distance measuring processing unit 13 in synchronization with the time point t 12 whose phase is 180 ° different from that of DIMIX_A.
- the light receiving unit 12 acquires the light intensity values (pixel signals) A 0 and B 0 for each of the taps A and B having a phase of 0 °.
- when the arrival timings of the reflected light 32 are time points t11, t14, and so on, the light amount value A0 at tap A having a phase of 0° is acquired as an integrated value of the amount of received light from the time point t10 to the end point t12 of the corresponding exposure period of DIMIX_A.
- the light amount value B0 is acquired as an integrated value of the amount of received light from the start t12 of the exposure period of DIMIX_B to the time point t13, which is the fall of the reflected light 32 included in that exposure period.
- the light intensity values A 0 and B 0 are acquired in the same manner after the arrival timing t 14 of the next reflected light 32.
- the light amount values A 0 and B 0 acquired by the light receiving unit 12 using the exposure control signals (DIMIX_A, DIMIX_B) having a phase of 0 ° have been described.
- similarly, the light receiving unit 12 acquires the light amount values A90, B90, A180, B180, A270, and B270 using the exposure control signals (DIMIX_A, DIMIX_B) of each of the phases 90°, 180°, and 270°.
- FIG. 7 is a diagram showing an example of the exposure control signals (DIMIX_A, DIMIX_B) for each phase (0° to 270°).
- DIMIX_A having a phase of 90 ° is an exposure control signal having a phase shifted by 90 ° from the emission timing of the emission light 30, and DIMIX_B having a phase of 90 ° is an exposure control signal having a phase 180 ° different from that of DIMIX_A having a phase of 90 °.
- DIMIX_A having a phase of 180° is an exposure control signal whose phase is 180° out of phase with the emission timing of the emitted light 30, and DIMIX_B having a phase of 180° is an exposure control signal whose phase is 180° different from that of DIMIX_A having a phase of 180°.
- DIMIX_A having a phase of 270° is an exposure control signal whose phase is shifted by 270° from the emission timing of the emitted light 30, and DIMIX_B having a phase of 270° is an exposure control signal whose phase is 180° different from that of DIMIX_A having a phase of 270°.
- the exposure period in each phase follows the duty of the emitted light 30.
- FIG. 8 is a diagram showing an example of the exposure period of tap A and tap B at each phase of 0 °, 90 °, 180 ° and 270 ° of each light receiving unit 12 (each light receiving element).
- the exposure periods of each phase are shown side by side in parallel with each other in phase.
- exposures by taps A and B with phases 0 ° are sequentially (alternately) performed.
- the exposure by tap A and tap B having a phase of 180 ° is delayed by 180 ° from the exposure by tap A and tap B having a phase of 0 °, and the exposure by tap A and tap B is executed sequentially.
- the phases of the exposure period of the tap A at the phase 0 ° and the exposure period of the tap B at the phase 180 ° coincide with each other.
- the phase of the exposure period by tap B at phase 0 ° and the exposure period by tap A at phase 180 ° coincide with each other.
- FIG. 9 is a diagram for explaining the light receiving timing by the light receiving unit 12. As shown in FIG. 9, the light receiving unit 12 sequentially executes reading of tap A and tap B in each phase. Further, the light receiving unit 12 sequentially executes reading of each phase within a period of 1 microframe.
- the light receiving unit 12 performs exposure at a phase of 0 ° during the period from time points t 20 to t 21.
- the distance measuring processing unit 13 obtains a light amount value A 0 and a light amount value B 0 , respectively, based on the pixel signals read by the tap A and the tap B respectively.
- the light receiving unit 12 performs exposure at a phase of 90° during the period from time point t22 to t23, after a predetermined time from time point t21.
- the distance measuring processing unit 13 obtains the light amount value A 90 and the light amount value B 90 , respectively, based on the pixel signals read by the tap A and the tap B respectively.
- similarly, the light receiving unit 12 performs exposure at a phase of 180°, and the distance measuring processing unit 13 obtains the light amount value A180 and the light amount value B180, respectively, based on the pixel signals read by tap A and tap B. Further, the light receiving unit 12 performs exposure at a phase of 270° during the period from time point t26 to t27, after a predetermined time from time point t25. The distance measuring processing unit 13 obtains the light amount value A270 and the light amount value B270, respectively, based on the pixel signals read by tap A and tap B.
- the method in which the readings by tap A and tap B are executed sequentially for each of the phases 0°, 90°, 180°, and 270° shown in FIG. 9, and the light amount values for each phase are obtained based on those readings, is called a two-tap method (4-phase).
- the differences I and Q are calculated by the following equations (7) and (8) using the respective light amount values A0 and B0, A90 and B90, A180 and B180, and A270 and B270.
- the phase difference phase, the distance information Depth, and the pixel value Confidence of the reflected light image information are calculated according to the above equations (3), (4), and (6), using the differences I and Q calculated by equations (7) and (8).
- In the 2-tap method (4 phase), the exposure in each phase is made redundant by tap A and tap B. Therefore, the S/N ratio of the calculated distance information depth and of the reflected light image information can be improved.
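As a rough illustration of this calculation flow, the 2-tap (4 phase) processing can be sketched as follows. This is a non-authoritative sketch: the exact forms of equations (3), (4) and (6) are not reproduced in this text, so the standard indirect-ToF phase, depth, and confidence formulas are assumed here, and the function and parameter names are illustrative.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def two_tap_4phase(a, b, f_mod):
    """Sketch of the 2-tap method (4 phase).

    a, b: dicts mapping phase (0, 90, 180, 270) to the light amount
    values read from tap A and tap B; f_mod is the modulation
    frequency of the emission light [Hz].
    """
    # Equations (7) and (8): differences I and Q
    i = (a[0] - b[0]) - (a[180] - b[180])
    q = (a[90] - b[90]) - (a[270] - b[270])
    phase = math.atan2(q, i) % (2 * math.pi)   # phase difference (assumed form of eq. (3))
    depth = C * phase / (4 * math.pi * f_mod)  # distance information (assumed form of eq. (4))
    confidence = math.hypot(i, q)              # pixel value Confidence (assumed form of eq. (6))
    return depth, confidence
```

Because each tap contributes to both I and Q, uniform saturation of all readings drives I and Q, and hence the confidence, to zero, which is the failure mode the later correction addresses.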
- FIG. 10 is a diagram for explaining a third method of acquiring each light intensity value and calculating each information applied to each embodiment.
- In the third method, the light receiving unit 12 includes tap A and tap B as in the second method described above, and the third method is the same as the second method in that reading from tap A and tap B is executed sequentially.
- the third method differs from the second method in that the light receiving unit 12 sequentially executes only the above-mentioned readings of the phases 0° and 90° and does not execute the readings of the phases 180° and 270°.
- the light receiving unit 12 includes taps A and B as in the second method described above, and sequentially executes reading from the taps A and B. Further, the light receiving unit 12 sequentially executes reading of the phases 0 ° and 90 ° out of the above-mentioned phases 0 °, 90 °, 180 ° and 270 °.
- the period of reading out the phases of 0 ° and 90 ° is defined as a period of 1 microframe.
- the read sequence is the same as that from time points t 20 to t 24 in FIG. 9 described above. That is, the light receiving unit 12 performs exposure at a phase of 0° during the period from time point t 30 to t 31.
- the distance measuring processing unit 13 obtains the light amount value A 0 and the light amount value B 0 based on the pixel signals read from tap A and tap B, respectively.
- the light receiving unit 12 performs exposure at a phase of 90° during the period from time point t 32 to t 33, after a predetermined interval from time point t 31.
- the distance measuring processing unit 13 obtains the light amount value A 90 and the light amount value B 90 based on the pixel signals read from tap A and tap B, respectively.
- In this way, the readings by tap A and tap B are executed sequentially for each of the phases 0° and 90° shown in FIG. 10, and the respective light amount values based on the readings of tap A and tap B are obtained for each of the phases 0° and 90°. This method is called the 2-tap method (2 phase).
- the exposure control signals DIMIX_A and DIMIX_B in tap A and tap B of each phase are signals whose phases are inverted. Therefore, DIMIX_A having a phase of 0 ° and DIMIX_B having a phase of 180 ° are signals having the same phase. Similarly, DIMIX_B having a phase of 0 ° and DIMIX_A having a phase of 180 ° are signals having the same phase. Further, DIMIX_A having a phase of 90 ° and DIMIX_B having a phase of 270 ° are signals having the same phase, and DIMIX_B having a phase of 90 ° and DIMIX_A having a phase of 270 ° are signals having the same phase.
- Therefore, the light amount value B 0 is the same as the value that the light receiving unit 12 would read at the phase 180°, and the light amount value B 90 is the same as the value that the light receiving unit 12 would read at the phase 270°.
- In other words, executing the reading of the phase 90° also corresponds to executing the reading of the phase 270°, whose phase differs from the phase 90° by 180°.
- the exposure period of the tap B in the phase 0 ° can be said to be the exposure period in the phase 180 °. Further, it can be said that the exposure period of the tap B in the phase 90 ° is the exposure period in the phase 270 °. Therefore, in the case of this third method, the differences I and Q are calculated by the following equations (9) and (10) using the respective light intensity values A 0 and B 0 , and A 90 and B 90.
- the phase difference phase, the distance information depth, and the pixel value Confidence of the reflected light image information can be calculated according to the above equations (3), (4) and (6), using the differences I and Q calculated by equations (9) and (10).
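Given the tap equivalences described above (tap B at phase 0° standing in for the 180° reading, and tap B at phase 90° for the 270° reading), equations (9) and (10) presumably reduce to single differences. The following sketch assumes that form; the equation bodies are not reproduced in this text, so treat the exact expressions as an assumption:

```python
def two_tap_2phase(a0, b0, a90, b90):
    """Sketch of the 2-tap method (2 phase): the differences I and Q
    reduce to single subtractions because each tap-B reading already
    plays the role of the opposite-phase reading."""
    i = a0 - b0    # assumed form of eq. (9): I = A0 - B0
    q = a90 - b90  # assumed form of eq. (10): Q = A90 - B90
    return i, q
```

The same downstream phase/depth/confidence formulas can then be applied to this I and Q pair, with only half the readout time of the 4-phase variant.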
- the IR image information is image information including the component of the reflected light 32 and the component of the ambient light.
- the light received by the light receiving unit 12 includes a direct current component such as a dark current (dark noise) in addition to the reflected light 32 component and the ambient light component. Therefore, the IR image information is calculated by subtracting the DC component from the pixel signal output by the light receiving unit 12. Specifically, the pixel value IR of the IR image information is calculated using the following equation (11).
- C FPN , A FPN and B FPN are direct current components such as dark current (dark noise) and are fixed pattern noise.
- C FPN , A FPN and B FPN shall be obtained in advance by experiments, simulations, etc.
- C FPN , A FPN and B FPN may be obtained, for example, from the pixel signals output by the light receiving unit 12 while the light receiving unit 12 is not receiving light.
- such a pixel signal can be acquired by the distance measuring device 10 reading the signal output from the light receiving unit 12 before the light source unit 11 emits the emission light 30.
- In the above, the case where the pixel value IR of the IR image information at the phase 0° is calculated has been described, but the pixel value IR of the IR image information at the other phases (phases 90°, 180° and 270°) may be calculated in the same way.
- the average value of the pixel value IR calculated for each phase may be used as the pixel value IR of the IR image information calculated from the reflected light 32.
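Since the body of equation (11) is not reproduced in this text, the following is only a plausible sketch of the described procedure: subtract the fixed-pattern-noise terms from the tap readings of one phase, sum the taps, and optionally average the result over the phases that were read out. The function names and the exact form of the subtraction are assumptions.

```python
def ir_pixel(a, b, a_fpn, b_fpn):
    """Hedged sketch of equation (11): the IR pixel value is taken as
    the sum of both tap readings for one phase, with the fixed-pattern
    noise terms A_FPN and B_FPN subtracted. The true equation (11)
    may differ from this assumed form."""
    return (a - a_fpn) + (b - b_fpn)

def ir_pixel_averaged(a_by_phase, b_by_phase, a_fpn, b_fpn):
    """Average of the per-phase IR pixel values, as suggested in the
    text, taken over whichever phases were actually read out."""
    values = [ir_pixel(a_by_phase[p], b_by_phase[p], a_fpn, b_fpn)
              for p in a_by_phase]
    return sum(values) / len(values)
```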
- FIG. 11 is a block diagram showing a configuration of an example of an electronic device applied to each embodiment.
- the electronic device 1 has a CPU (Central Processing Unit) 100, a ROM (Read Only Memory) 101, a RAM (Random Access Memory) 102, a storage 103, a UI (User Interface) unit 104, and an interface (I/F) 105.
- the electronic device 1 includes a light source unit 110 and a sensor unit 111 corresponding to the light source unit 11 and the light receiving unit 12 of FIG. 1, respectively.
- As the electronic device 1, for example, a smartphone (multifunctional mobile phone terminal) or a tablet-type personal computer can be considered.
- the device to which the electronic device 1 is applied is not limited to these smartphones and tablet-type personal computers.
- the storage 103 is a non-volatile storage medium such as a flash memory or a hard disk drive.
- the storage 103 can store various data and a program for operating the CPU 100. Further, the storage 103 can store an application program (hereinafter, abbreviated as an application) for realizing the application unit 20 described with reference to FIG.
- the ROM 101 stores in advance a program and data for operating the CPU 100.
- the RAM 102 is a volatile storage medium for storing data.
- the CPU 100 operates using the RAM 102 as a work memory according to a program stored in the storage 103 or the ROM 101, and controls the overall operation of the electronic device 1.
- the UI unit 104 is provided with various controls for operating the electronic device 1, display elements for displaying the status of the electronic device 1, and the like.
- the UI unit 104 may further include a display that displays an image captured by the sensor unit 111, which will be described later.
- the display may be a touch panel in which a display device and an input device are integrally formed, and various controls may be configured by each component displayed on the touch panel.
- the light source unit 110 includes a light emitting element such as an LED or a VCSEL, and a driver for driving the light emitting element.
- the driver generates a drive signal having a predetermined duty according to the instruction of the CPU 100.
- the light emitting element emits light in accordance with the drive signal generated by the driver, and the light modulated by PWM is emitted as the emission light 30.
- the sensor unit 111 includes a pixel array unit in which a plurality of light receiving elements are arranged in an array, and a drive circuit that drives the light receiving elements arranged in the pixel array unit and outputs the pixel signal read from each light receiving element.
- the pixel signal output from the sensor unit 111 is supplied to the CPU 100.
- FIG. 12 is a block diagram showing an example of the configuration of the sensor unit 111 applied to each embodiment.
- the sensor unit 111 has a laminated structure including a sensor chip 1110 and a circuit chip 1120 laminated on the sensor chip 1110.
- the sensor chip 1110 and the circuit chip 1120 are electrically connected through a connecting portion (not shown) such as a via (VIA) or a Cu—Cu connection.
- In FIG. 12, the state in which the wiring of the sensor chip 1110 and the wiring of the circuit chip 1120 are connected by the connecting portion is shown.
- the pixel area 1111 includes a plurality of pixels 1112 arranged in an array on the sensor chip 1110. For example, one frame of image signal is formed based on the pixel signals output from the plurality of pixels 1112 included in the pixel area 1111.
- Each pixel 1112 arranged in the pixel area 1111 is capable of receiving infrared light, for example, performs photoelectric conversion based on the received infrared light, and outputs an analog pixel signal.
- Two vertical signal lines VSL 1 and VSL 2 are connected to each pixel 1112 included in the pixel area 1111.
- In the sensor unit 111, a vertical drive circuit 1121, a column signal processing unit 1122, a timing control circuit 1123, and an output circuit 1124 are further arranged on the circuit chip 1120.
- the timing control circuit 1123 controls the drive timing of the vertical drive circuit 1121 according to the element control signal supplied from the outside via the control line 150. Further, the timing control circuit 1123 generates a vertical synchronization signal based on the element control signal.
- the column signal processing unit 1122 and the output circuit 1124 execute their respective processes in synchronization with the vertical synchronization signal generated by the timing control circuit 1123.
- each pixel 1112 includes two taps, tap A (TAP_A) and tap B (TAP_B), each of which accumulates the charge generated by photoelectric conversion.
- the vertical signal line VSL 1 is connected to tap A of pixel 1112, and the vertical signal line VSL 2 is connected to tap B of pixel 1112.
- the vertical signal line VSL 1 outputs the pixel signal AIN P1 , an analog pixel signal based on the charge of tap A of the pixel 1112 of the corresponding pixel column. Similarly, the vertical signal line VSL 2 outputs the pixel signal AIN P2 , an analog pixel signal based on the charge of tap B of the pixel 1112 of the corresponding pixel column.
- the vertical drive circuit 1121 drives each pixel 1112 included in the pixel area 1111 in units of pixel rows according to the timing control by the timing control circuit 1123, and outputs the pixel signals AIN P1 and AIN P2.
- the pixel signals AIN P1 and AIN P2 output from each pixel 1112 are supplied to the column signal processing unit 1122 via the vertical signal lines VSL 1 and VSL 2 of each column.
- the column signal processing unit 1122 includes, for example, a plurality of AD converters provided for each pixel column corresponding to the pixel array of the pixel area 1111. Each AD converter included in the column signal processing unit 1122 performs AD conversion on the pixel signals AIN P1 and AIN P2 supplied via the vertical signal lines VSL 1 and VSL 2 , and the pixel signals AIN P1 and AIN P2 converted into digital signals are supplied to the output circuit 1124.
- the output circuit 1124 executes signal processing such as CDS (Correlated Double Sampling) processing on the digital pixel signals AIN P1 and AIN P2 output from the column signal processing unit 1122, and outputs the processed pixel signals to the outside of the sensor unit 111 via the output line 51 as the pixel signal read from tap A and the pixel signal read from tap B, respectively.
- FIG. 13 is a circuit diagram showing a configuration of an example of pixels 1112 applied to each embodiment.
- each pixel 1112 includes a photodiode 231, two transfer transistors 232 and 237, two reset transistors 233 and 238, two floating diffusion layers 234 and 239, two amplification transistors 235 and 240, and two selection transistors 236 and 241.
- the floating diffusion layers 234 and 239 correspond to the above-mentioned tap A (described as TAP_A) and tap B (described as TAP_B), respectively.
- the photodiode 231 is a light receiving element that generates an electric charge by photoelectrically converting the received light.
- with the surface of the semiconductor substrate on which the circuit is arranged taken as the front surface, the photodiode 231 is arranged on the back surface side.
- Such a solid-state image sensor is called a back-illuminated solid-state image sensor.
- a front-illuminated configuration in which the photodiode 231 is arranged on the front surface can also be used.
- the overflow transistor 242 is connected between the cathode of the photodiode 231 and the power supply line VDD, and has a function of resetting the photodiode 231. That is, the overflow transistor 242 is turned on according to the overflow gate signal OFG supplied from the vertical drive circuit 1121, so that the electric charge of the photodiode 231 is sequentially discharged to the power supply line VDD.
- the transfer transistor 232 is connected between the cathode of the photodiode 231 and the floating diffusion layer 234. Further, the transfer transistor 237 is connected between the cathode of the photodiode 231 and the floating diffusion layer 239. The transfer transistors 232 and 237 sequentially transfer the charges generated by the photodiode 231 to the floating diffusion layers 234 and 239, respectively, in response to the transfer signal TRG supplied from the vertical drive circuit 1121.
- the floating diffusion layers 234 and 239 corresponding to tap A and tap B accumulate the electric charge transferred from the photodiode 231 and convert it into voltage signals with voltage values corresponding to the accumulated charge, thereby generating the analog pixel signals AIN P1 and AIN P2 , respectively.
- two reset transistors 233 and 238 are connected between the power supply line VDD and the floating diffusion layers 234 and 239, respectively.
- the reset transistors 233 and 238 are turned on in response to the reset signals RST and RST p supplied from the vertical drive circuit 1121, extract the charges from the floating diffusion layers 234 and 239, respectively, and thereby initialize the floating diffusion layers 234 and 239.
- the two amplification transistors 235 and 240 are connected between the power supply line VDD and the selection transistors 236 and 241 respectively. Each amplification transistor 235 and 240 amplifies a voltage signal whose charge is converted into a voltage at each of the floating diffusion layers 234 and 239, respectively.
- the selection transistor 236 is connected between the amplification transistor 235 and the vertical signal line VSL 1. Further, the selection transistor 241 is connected between the amplification transistor 240 and the vertical signal line VSL 2. The selection transistors 236 and 241 are turned on according to the selection signals SEL and SEL p supplied from the vertical drive circuit 1121, and output the pixel signals AIN P1 and AIN P2 amplified by the amplification transistors 235 and 240 to the vertical signal line VSL 1 and the vertical signal line VSL 2 , respectively.
- the vertical signal line VSL 1 and the vertical signal line VSL 2 connected to the pixels 1112 are connected, for each pixel column, to the input end of one AD converter included in the column signal processing unit 1122.
- the vertical signal line VSL 1 and the vertical signal line VSL 2 supply the pixel signals AIN P1 and AIN P2 output from the pixels 1112 to the AD converter included in the column signal processing unit 1122 for each pixel column.
- the laminated structure of the sensor unit 111 will be schematically described with reference to FIGS. 14 and 15.
- the sensor unit 111 is formed by a two-layer structure in which semiconductor chips are laminated in two layers.
- FIG. 14 is a diagram showing an example in which the sensor unit 111 applied to each embodiment is formed by a two-layer structure laminated CIS (Complementary Metal Oxide Semiconductor Image Sensor).
- the pixel area 1111 is formed on the semiconductor chip of the first layer which is the sensor chip 1110, and the circuit portion is formed on the semiconductor chip of the second layer which is the circuit chip 1120.
- the circuit unit includes, for example, a vertical drive circuit 1121, a column signal processing unit 1122, a timing control circuit 1123, and an output circuit 1124.
- the sensor chip 1110 may include a pixel area 1111 and, for example, a vertical drive circuit 1121.
- the sensor unit 111 is configured as one solid-state image sensor by bonding the sensor chip 1110 and the circuit chip 1120 while electrically contacting each other.
- the sensor unit 111 is formed by a three-layer structure in which semiconductor chips are laminated in three layers.
- FIG. 15 is a diagram showing an example in which the sensor unit 111 applied to each embodiment is formed by a laminated CIS having a three-layer structure.
- the pixel area 1111 is formed on the semiconductor chip of the first layer, which is the sensor chip 1110.
- the circuit chip 1120 described above is divided into a first circuit chip 1120a made of a semiconductor chip of the second layer and a second circuit chip 1120b made of a semiconductor chip of the third layer.
- the sensor unit 111 is configured as one solid-state image sensor by bonding the sensor chip 1110, the first circuit chip 1120a, and the second circuit chip 1120b to each other while making electrical contact with each other.
- the distance measuring device generates reflected light image information in addition to the distance D to the object to be measured based on the reflected light 32 received by the light receiving unit 12. At this time, for example, if the intensity of the reflected light 32 received by the light receiving unit 12 is high and the light intensity is saturated, the accuracy of generating the reflected light image information may decrease.
- the reflected light image information when the light intensity is saturated will be described with reference to FIG. 16, and a method for correcting the reflected light image information will be described. Note that FIG. 16 is a diagram for explaining an outline of the correction method according to the first embodiment of the present disclosure.
- the reflected light 32 includes ambient light and dark noise in addition to the directly reflected light reflected by the object to be measured 31.
- For example, when the intensity of the reflected light 32 received by the light receiving unit 12 is high, the light amount values C 0 , C 90 , C 180 and C 270 may be saturated.
- Here, it is assumed that the intensity of the reflected light 32 received by the light receiving unit 12 is high and the light amount values C 0 , C 90 , C 180 and C 270 are saturated at the light amount value C max.
- the pixel value Confidence of the reflected light image information is calculated by the above-mentioned equations (5) to (8).
- the I and Q components become zero, and the pixel value Confidence of the reflected light image information also becomes zero.
- When the pixel value Confidence of the corresponding reflected light image information becomes zero, a region R sa (hereinafter also referred to as the saturated region R sa ) is generated, for example, as shown in image I2 of FIG. 16.
- As shown in graph G2 of FIG. 16, the pixel value Confidence of the reflected light image information becomes discontinuous; graph G2 shows the pixel value Confidence of the reflected light image information along the line segment A-A' of image I2.
- the reflected light image information is discontinuous in this way, a problem may occur in the processing by the application unit 20.
- the application unit 20 recognizes the saturated region R sa of the reflected light image information as a feature, an error may occur in the recognition result of the reflected light image information.
- For example, when the application unit 20 performs face recognition using the reflected light image information, if the saturated region R sa is recognized as a facial feature (for example, a mole), face recognition may not be performed correctly.
- the discontinuity of the reflected light image information is eliminated by correcting the pixel value Confidence in the saturation region R sa of the reflected light image information. As a result, it is possible to suppress a decrease in the generation accuracy of the reflected light image information and suppress the occurrence of a defect in the application unit 20.
- the pixel value Confidence in the saturation region R sa of the reflected light image information is corrected from zero to a predetermined value.
- For example, the correction value P max1 of the pixel value Confidence is made larger than the pixel values P b of the non-saturated region R nsa in contact with the saturated region R sa (P max1 > P b ).
- the discontinuity of the reflected light image information can be eliminated.
- Note that the boundary line is shown by a black line in order to make the boundary between the saturated region R sa and the non-saturated region R nsa easy to understand.
- FIG. 17 is a functional block diagram of an example for explaining the function of the distance measuring device according to the first embodiment of the present disclosure.
- the distance measuring device 10a includes a light source unit 11, a light receiving unit 12, a control unit 40, a distance measuring unit 50, and a correction unit 60.
- the light source unit 11, the light receiving unit 12, the control unit 40, the distance measuring unit 50, and the correction unit 60 are configured, for example, by a predetermined program operating on the CPU 100 (see FIG. 11).
- the control unit 40, the distance measuring unit 50, and the correction unit 60 may also be configured by hardware circuits that operate in cooperation with each other.
- the device including the control unit 40, the distance measuring unit 50, and the correction unit 60 is also simply referred to as an information processing device.
- In the following, it is assumed that the 2-tap method (4 phase) is applied to the acquisition of each light amount value and the calculation of each piece of information at the phases 0°, 90°, 180° and 270° in the light receiving unit 12.
- each light amount value may be acquired and each information may be calculated by using a method other than the 2-tap method (4 phase).
- the control unit 40 generates a light source control signal and supplies it to the light source unit 11.
- the light source control signal includes information that specifies, for example, the duty in PWM modulation, the intensity of light emitted by the light source unit 11, the timing of emission, and the like.
- the light source unit 11 emits the emission light 30 (see FIG. 1) modulated by PWM in response to the light source control signal supplied from the control unit 40.
- the control unit 40 generates an exposure control signal and supplies it to the light receiving unit 12.
- the exposure control signal includes information that controls the light receiving unit 12 to perform exposure with an exposure length according to the duty of the light source unit 11 in different phases. Further, the exposure control signal further includes information for controlling the exposure amount in the light receiving unit 12.
- the pixel signals of each phase output from the light receiving unit 12 are supplied to the distance measuring unit 50.
- the distance measuring unit 50 calculates the pixel value Confidence of the distance information depth and the reflected light image information based on the pixel signals of each phase supplied from the light receiving unit 12.
- the distance measuring unit 50 passes the calculated distance information depth and the pixel value Confidence of the reflected light image information to, for example, the application unit 20.
- FIG. 18 is a diagram for explaining a method of calculating the pixel value Confidence of the reflected light image information in the 2-tap method (4 phase).
- the pixel signal used for calculating the distance D1 to the object to be measured 31A and the pixel signal used to calculate the distance D2 to the object to be measured 31B are shown for each tap of each phase.
- the object to be measured 31A and the object to be measured 31B may be different objects to be measured arranged in the same space. Alternatively, the same object to be measured may be measured in different frames, or the same object to be measured may be measured in different places.
- the pixel signal includes a directly reflected light component, an ambient light component, and a dark noise component.
- the pixel value Confidence of the reflected light image information is calculated from the components of the directly reflected light. Specifically, as described above, the pixel value Confidence of the reflected light image information is calculated using the following equations (5), (7) and (8).
- I = (A 0 − B 0 ) − (A 180 − B 180 ) … (7)
- Q = (A 90 − B 90 ) − (A 270 − B 270 ) … (8)
- the pixel signal used for calculating the distance D2 is not saturated.
- On the other hand, the pixel signal used for calculating the distance D1 is saturated at tap A of phase 0° and at tap B of phase 180°. Therefore, the pixel value Confidence of the reflected light image information corresponding to the distance D2 can be calculated with high accuracy, but the pixel value Confidence of the reflected light image information corresponding to the distance D1 cannot be calculated with high accuracy. Therefore, in the present embodiment, when the light receiving element of the light receiving unit 12 is saturated, the correction unit 60 corrects the pixel value Confidence of the reflected light image information, and the control unit 40 adjusts the control signal in the next frame.
- the control unit 40 controls the exposure amount in the light receiving unit 12 based on the pixel signals of each phase (for example, phases 0°, 90°, 180° and 270°) supplied from the light receiving unit 12.
- This control signal generated by the control unit 40 is for the distance measuring unit 50 to appropriately calculate the distance information depth regardless of the scene to be imaged.
- the control unit 40 generates a control signal so as to adjust each light amount value based on the pixel signal of each phase to a value within an appropriate range.
- When one or more of the pixel signals corresponding to each phase are saturated, or when their level is equal to or less than a predetermined level, the differences I and Q may not be calculated appropriately.
- In that case, the reliability of the distance information depth calculated by the distance measuring unit 50 based on the differences I and Q is also low.
- the control unit 40 therefore obtains a control signal for adjusting each light amount value based on the pixel signals of each phase to a value within an appropriate range.
- the control unit 40 controls the gain and exposure time of the light receiving unit 12 and the duty and intensity of the light emission by the light source unit 11 based on the obtained control signal, and adjusts the amount of light received by the light receiving unit 12 to an appropriate level.
- When the reflectance of the object to be measured 31 is low, or when the distance indicated by the distance information depth calculated by the distance measuring unit 50 is equal to or greater than a predetermined distance, the S/N of the calculated distance information depth becomes low, and the accuracy of the distance information depth is reduced.
- In such a case, the control unit 40 generates a control signal for controlling the light receiving unit 12 so that the exposure time of the light receiving unit 12 becomes longer, in order to maintain the S/N of the distance information depth calculated by the distance measuring unit 50.
- the control unit 40 stores the generated control signal in a register (not shown) or the like.
- the control unit 40 executes light emission by the light source unit 11 and light reception by the light receiving unit 12 at predetermined frame intervals.
- the control unit 40 performs processing for one frame based on the control information stored in the register, obtains a control signal based on the processing result, and updates the control signal stored in the register.
- the correction unit 60 corrects the pixel value Confidence of the reflected light image information by using each pixel signal of each phase.
- the correction unit 60 includes a saturation region detection unit 61, a saturation value estimation unit 62, and a saturation region compensation unit 63.
- the saturation region detection unit 61 detects the saturation region R sa of the reflected light image information.
- the pixel signal output by the light receiving element of the light receiving unit 12 includes saturation information indicating whether or not the pixel signal is saturated.
- the saturation region detection unit 61 detects the saturation region R sa by detecting a light receiving element in which the pixel signal is saturated based on the saturation information.
- Alternatively, the saturated region detection unit 61 may detect a saturated light receiving element, that is, the saturated region R sa , by determining whether or not the value of the pixel signal is the value at which the pixel signal saturates.
- the saturation region detection unit 61 may also detect the saturated region R sa by determining whether or not the pixel value Confidence of the reflected light image information indicates saturation (for example, whether the pixel value Confidence is zero).
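A minimal sketch of this detection step, assuming saturation is identified from a zero Confidence value (in a real implementation the saturation flag carried in the pixel signal could be checked instead; the function and parameter names are illustrative):

```python
def detect_saturated_region(confidence, saturated_value=0.0):
    """Return the set of (row, col) positions whose pixel value
    Confidence equals the saturated value (zero, per the text),
    i.e. the saturated region R_sa."""
    return {(r, c)
            for r, row in enumerate(confidence)
            for c, v in enumerate(row)
            if v == saturated_value}
```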
- the saturation value estimation unit 62 estimates the correction value used by the saturation region compensation unit 63 to correct the pixel value Confidence of the reflected light image information. The saturation value estimation unit 62 estimates the correction value based on the pixel value Confidence of the non-saturated region R nsa adjacent to the periphery of the saturated region R sa , that is, based on the non-saturated pixel signals adjacent to the periphery of the saturated region R sa.
- FIG. 19 is a diagram for explaining the correction value estimated by the saturation value estimation unit 62.
- the saturated region is displayed in white, and the non-saturated region adjacent to the periphery of the saturated region is indicated by a black line.
- the saturation value estimation unit 62 estimates the correction value based, for example, on the average value of the pixel values Confidence of the non-saturated region R nsa adjacent to (located around) the first saturated region R sa1 (the region shown by the black line in FIG. 19).
- the saturation value estimation unit 62 detects the boundary between the first saturated region R sa1 and the non-saturated region R nsa by scanning the reflected light image information in a matrix for each row or column, for example.
- the saturation value estimation unit 62 detects the pixel value Confidence of the non-saturation region R nsa at the detected boundary.
- By detecting the pixel value Confidence of the non-saturated region R nsa adjacent to the first saturated region R sa1 for all rows and all columns, the saturation value estimation unit 62 detects all the pixel values Confidence of the non-saturated region R nsa adjacent to the periphery of the first saturated region R sa1. The saturation value estimation unit 62 then calculates the average of all the detected pixel values Confidence as the average pixel value Confidence of the non-saturated region R nsa adjacent to the periphery of the first saturated region R sa1 (the region indicated by the white line in FIG. 19).
- the saturation value estimation unit 62 sets a value obtained by adding a constant value to the average value of the pixel value Confidence of the non-saturation region R nsa (the region shown by the white line in FIG. 19) adjacent to the periphery of the first saturation region R sa1. Estimate as a correction value.
- the saturation value estimation unit 62 estimates the correction value of the second saturation region R sa2 in the same manner as the first saturation region R sa1.
- the saturation region compensation unit 63 corrects the pixel value Confidence of the saturation region R sa detected by the saturation region detection unit 61 with the correction value estimated by the saturation value estimation unit 62. As shown in FIG. 20, for example, the saturation region compensation unit 63 corrects the pixel value Confidence by replacing the pixel value Confidence of the saturation region R sa with a correction value.
- FIG. 20 is a diagram for explaining the correction of the pixel value Confidence by the saturation region compensation unit 63.
- FIG. 20 shows, for example, the reflected light image information of one predetermined row out of the reflected light image information arranged in a matrix.
- the graph shown on the left side of FIG. 20 is a graph showing the reflected light image information before correction.
- The saturation region compensation unit 63 replaces the pixel value Confidence of the reflected light image information in the saturated region R sa with the correction value. Thereby, the discontinuity of the reflected light image information can be reduced.
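As a rough sketch of the first-embodiment procedure described above (boundary scan, averaging, constant offset, replacement), the following NumPy fragment illustrates the idea. The 4-neighbour boundary test, the function name, and the `offset` constant are illustrative assumptions; the description only specifies that a constant is added to the boundary average.

```python
import numpy as np

def compensate_saturated_confidence(confidence, saturated, offset=1.0):
    """Replace Confidence values in the saturated region R_sa with the
    mean Confidence of the adjacent non-saturated boundary pixels plus
    a constant offset (first-embodiment style correction)."""
    h, w = saturated.shape
    boundary = np.zeros_like(saturated, dtype=bool)
    # Mark non-saturated pixels that touch the saturated region
    # (the boundary line of FIG. 19).
    for y in range(h):
        for x in range(w):
            if saturated[y, x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and saturated[ny, nx]:
                    boundary[y, x] = True
                    break
    corrected = confidence.astype(float).copy()
    if boundary.any():
        corrected[saturated] = confidence[boundary].mean() + offset
    return corrected
```

A real implementation would scan rows and columns as the description states; the per-pixel neighbourhood test above is just a compact equivalent for the sketch.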
- FIG. 21 is a diagram showing an example of reflected light image information before correction by the saturation region compensation unit 63.
- In FIG. 21, a black saturated region R sa appears in the reflected light image information I5.
- When the saturated region R sa appears in the reflected light image information I5 in this way, for example, the accuracy of face recognition by the downstream application unit 20 may decrease. This is because the application unit 20 may mistake the saturated region R sa for a feature of the reflected light image information I5.
- the saturation region compensation unit 63 corrects the pixel value Confidence in the saturation region R sa of the reflected light image information.
- FIG. 22 is a diagram showing an example of the reflected light image information corrected by the saturation region compensation unit 63. As shown in the reflected light image information I6 of FIG. 22, because the saturation region compensation unit 63 corrects the saturated region R sa, the saturated region R sa that was displayed in black in FIG. 21 is displayed in white. That is, by correcting the saturated region R sa so that it is displayed in white, the discontinuity between the saturated region R sa and the non-saturated region R nsa can be eliminated.
- By correcting the discontinuity of the reflected light image information, the saturation region compensation unit 63 can suppress a decrease in the accuracy of the reflected light image information and suppress defects caused in the application unit 20 (for example, a decrease in face recognition accuracy).
- FIG. 23 is a flowchart showing an example of the correction process in the distance measuring device 10a according to the first embodiment. Such correction processing is started, for example, by passing an image pickup start instruction instructing the start of imaging (distance measurement) to the distance measuring device 10a from the application unit 20.
- control unit 40 of the distance measuring device 10a controls the light source unit 11 and the light receiving unit 12 based on the control signal stored in the register to perform imaging (step S101).
- the pixel signals of each phase obtained by imaging are passed from the light receiving unit 12 to the control unit 40, the distance measuring unit 50, and the correction unit 60.
- the distance measuring unit 50 of the distance measuring device 10a calculates the pixel value Confidence of the distance information depth and the reflected light image information based on the imaging result imaged in step S101 (step S102).
- the distance measuring unit 50 of the distance measuring device 10a outputs the calculated distance information depth to, for example, the application unit 20, and outputs the pixel value Confidence of the reflected light image information to the application unit 20 and the correction unit 60.
- the saturation region detection unit 61 of the distance measuring device 10a calculates the saturation region R sa of the reflected light image information based on the imaging result imaged in step S101 (step S103).
- the saturation region detection unit 61 calculates the saturation region R sa of the reflected light image information by detecting the light receiving element in which the pixel signal is saturated.
- The saturation value estimation unit 62 of the distance measuring device 10a calculates a correction value based on the saturated region R sa calculated in step S103 and the pixel value Confidence of the reflected light image information calculated in step S102 (step S104). More specifically, the saturation value estimation unit 62 estimates, as the correction value, a value obtained by adding a predetermined value to the average of the pixel values Confidence of the reflected light image information of the non-saturated region R nsa around the saturated region R sa.
- the saturation region compensation unit 63 of the ranging device 10a corrects the pixel value Confidence of the reflected light image information of the saturation region R sa based on the correction value estimated by the saturation value estimation unit 62 in step S104 (step S105).
- For example, the saturation region compensation unit 63 corrects the pixel value Confidence by replacing the pixel value Confidence of the reflected light image information in the saturated region R sa with the calculated correction value.
- the control unit 40 of the distance measuring device 10a obtains a control signal for controlling the light source unit 11 and the light receiving unit 12 based on each pixel signal of each phase captured in step S101 (step S106).
- the control unit 40 stores the obtained control signal in a register or the like.
- the ranging device 10a determines whether or not the imaging is completed (step S107).
- When the distance measuring device 10a receives, for example, an imaging end instruction instructing the end of imaging from the application unit 20, it determines that the imaging has been completed (step S107, "Yes"). In this case, the distance measuring device 10a ends the correction process.
- When the distance measuring device 10a determines in step S107 that the imaging has not been completed (step S107, "No"), the process returns to step S101.
- steps S101 to S107 are repeated, for example, in units of one frame.
- the distance measuring device 10a (an example of the information processing device) according to the first embodiment includes a correction unit 60 (an example of a control unit).
- The correction unit 60 detects the saturated region R sa of the reflected light image information (an example of the received light image information) generated based on the pixel signals (an example of pixel signals) output by the light receiving unit 12 (an example of a light receiving sensor), which receives the reflected light 32 produced when the emitted light from the light source unit 11 (an example of a light source) is reflected by the object to be measured 31.
- the pixel signal is used to calculate the distance to the object to be measured 31.
- the saturated region R sa is a region of reflected light image information generated based on a saturated pixel signal.
- the correction unit 60 corrects the reflected light image information in the saturation region R sa based on the pixel signal.
- As a result, the discontinuity of the received light image information (the reflected light image information in the first embodiment) can be reduced, and a decrease in the accuracy of the received light image information can be suppressed.
- The ranging device according to the second embodiment uses the IR image information to correct the saturated region R sa of the reflected light image information.
- FIG. 24 is a block diagram for explaining an example of the function of the distance measuring device 10b according to the second embodiment.
- the distance measuring device 10b shown in FIG. 24 includes a correction unit 60b instead of the correction unit 60 in FIG.
- the correction unit 60b does not include the saturation value estimation unit 62 of FIG. 17, but instead includes an IR calculation unit 64.
- the correction unit 60b includes a saturation region compensation unit 63b instead of the saturation region compensation unit 63 of FIG.
- the correction unit 60b may be configured by operating a program on the CPU 100 (see FIG. 11), or may be realized by a hardware circuit.
- the IR calculation unit 64 calculates IR image information based on the pixel signal output by the light receiving unit 12.
- the IR image information is calculated based on the above-mentioned equation (11) or equation (12).
- the IR image information is calculated by subtracting a DC component such as a dark current (dark noise) from the pixel signal. Therefore, even in the saturation region R sa , the pixel value IR of the IR image information does not become zero, and the IR image information becomes image information that maintains continuity even when the saturation region R sa occurs.
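A minimal sketch of why the IR image stays continuous: per equation (11) of the description, the fixed-pattern (dark) component is subtracted from each tap reading before the taps are summed, so the result is not forced to zero where the differential pixel signal saturates. The array names mirror the symbols of equation (11); this is an illustration, not the device's actual implementation.

```python
import numpy as np

def ir_from_taps(a0, b0, a_fpn, b_fpn):
    """IR image per equation (11): RAW = (A0 - AFPN) + (B0 - BFPN).
    The DC component (dark current / fixed-pattern noise) is removed
    from each tap before the taps are summed."""
    return (a0 - a_fpn) + (b0 - b_fpn)
```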
- the saturation region compensation unit 63b corrects the reflected light image information of the saturation region R sa based on the reflected light image information and the IR image information.
- For example, the saturation region compensation unit 63b corrects the reflected light image information according to the gradient (rate of change) of the IR image information in the saturated region R sa. Such correction will be described in detail with reference to FIG. 25.
- FIG. 25 is a diagram for explaining the correction of the saturation region R sa by the saturation region compensation unit 63b.
- FIG. 25 shows a graph corresponding to one row (or one column) of the reflected light image information and the IR image information.
- The upper left graph of FIG. 25 is a graph showing the IR image information generated by the IR calculation unit 64.
- the graph showing the IR image information is a continuous graph that does not become zero even in the saturation region R sa.
- The lower left graph of FIG. 25 is a graph showing the reflected light image information generated by the distance measuring unit 50. As described above, the graph of the reflected light image information is discontinuous because it becomes zero in the saturated region R sa.
- The IR image information is information including a directly reflected light component and an ambient light component.
- The reflected light image information is information including the directly reflected light component.
- In the saturated region R sa and its vicinity, the ambient light components are considered to be substantially the same. Therefore, the component that contributes to the change in the pixel value IR of the IR image information and the component that contributes to the change in the pixel value Confidence of the reflected light image information are both the same directly reflected light component, and their rates of change are considered to be the same.
- Therefore, the saturation region compensation unit 63b corrects the pixel value Confidence of the reflected light image information in the saturated region R sa according to the gradient (rate of change) of the pixel value IR of the IR image information. Specifically, the correction value of a pixel to be corrected (hereinafter also referred to as the correction pixel) is calculated by multiplying the value of the pixel adjacent to the correction pixel by the rate of change of the pixel value IR of the IR image information at the correction pixel. The saturation region compensation unit 63b then corrects the pixel value Confidence of the correction pixel using the calculated correction value.
- The saturation region compensation unit 63b calculates the correction values in order, starting from the pixels of the saturated region R sa adjacent to the non-saturated region R nsa, and moves the pixel to be corrected sequentially in the horizontal or vertical direction until the correction values have been calculated for all the pixels included in the saturated region R sa.
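The row-wise sweep described above can be sketched as follows. This 1-D fragment is a simplified assumption of the second-embodiment procedure: it handles a single row swept left to right, each saturated pixel taking its (already-corrected) left neighbour's value scaled by the local rate of change of the pixel value IR.

```python
import numpy as np

def propagate_by_ir_gradient(confidence, ir, saturated):
    """Correct saturated Confidence values along one row by propagating
    inward from the non-saturated neighbour, scaled by the IR change
    rate ir[x] / ir[x - 1] at each correction pixel."""
    out = confidence.astype(float).copy()
    for x in range(1, len(out)):
        if saturated[x]:
            out[x] = out[x - 1] * (ir[x] / ir[x - 1])
    return out
```

A symmetric right-to-left sweep, plus a column-direction sweep, would yield the variation mentioned later in which two correction values are computed per pixel and averaged.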
- the graph on the right side of FIG. 25 is a graph showing the reflected light image information after correction by the saturation region compensation unit 63b.
- The corrected reflected light image information has pixel values Confidence with the same gradient (rate of change) as the IR image information in the saturated region R sa, and it can be seen that the discontinuity has been eliminated.
- Since the saturation region compensation unit 63b corrects the reflected light image information according to the gradient (rate of change) of the IR image information, the correction can be performed in accordance with the change of the actual directly reflected light component, and a decrease in the accuracy of the reflected light image information can be further suppressed.
- the saturation region compensation unit 63b corrects the reflected light image information for each row or column, but the present invention is not limited to this.
- the saturation region compensation unit 63b may calculate the correction value of the reflected light image information for each row and each column. In this case, two correction values corresponding to the row and column directions are calculated for one correction pixel.
- the saturation region compensation unit 63b may correct the correction pixel by using, for example, the average value of the two correction values.
- FIG. 26 is a flowchart showing an example of the correction process in the distance measuring device 10b according to the second embodiment. Similar to the correction process of FIG. 23, such a correction process is started when, for example, the application unit 20 passes an image pickup start instruction instructing the distance measurement device 10b to start imaging (distance measurement).
- steps S101 to S103 are the same as the corresponding processes of FIG. 23 described above, and thus detailed description thereof will be omitted here.
- the process shifts to step S201.
- the IR calculation unit 64 of the distance measuring device 10b calculates IR image information based on the imaging result imaged in step S101 (step S201).
- the IR calculation unit 64 outputs the calculated IR image information to the saturation region compensation unit 63b.
- the IR calculation unit 64 may output the calculated IR image information to the application unit 20.
- The saturation region compensation unit 63b of the ranging device 10b corrects the reflected light image information of the saturated region R sa based on the gradient of the IR image information calculated by the IR calculation unit 64 in step S201 (step S202).
- the saturation region compensation unit 63b corrects the correction pixel by multiplying the pixel value Confidence of the pixel adjacent to the correction pixel by the rate of change of the pixel value IR of the IR image information corresponding to the correction pixel.
- the control unit 40 of the distance measuring device 10b obtains a control signal for controlling the light source unit 11 and the light receiving unit 12 based on each pixel signal of each phase captured in step S101 (step S106).
- the control unit 40 stores the obtained control signal in a register or the like.
- the ranging device 10b determines whether or not the imaging is completed (step S107).
- When the distance measuring device 10b receives, for example, an imaging end instruction instructing the end of imaging from the application unit 20, it determines that the imaging has been completed (step S107, "Yes"). In this case, the distance measuring device 10b ends the correction process.
- When the distance measuring device 10b determines in step S107 that the imaging has not been completed (step S107, "No"), the process returns to step S101.
- steps S101 to S107 are repeated, for example, in units of one frame.
- the distance measuring device 10b (an example of an information processing device) according to the second embodiment includes a correction unit 60b (an example of a control unit).
- the correction unit 60b corrects the pixel value Confidence in the saturated region of the reflected light image information according to the gradient (change rate) of the pixel value IR of the IR image information.
- the discontinuity of the received light image information (reflected light image information in the second embodiment) can be improved, and the decrease in the accuracy of the received image information can be suppressed.
- The ranging device according to the third embodiment corrects the saturated region R sa of the IR image information.
- FIG. 27 is a block diagram for explaining an example of the function of the distance measuring device 10c according to the third embodiment.
- the distance measuring device 10c shown in FIG. 27 includes a correction unit 60c instead of the correction unit 60b in FIG. 24.
- The correction unit 60c includes a saturation value estimation unit 62c instead of the saturation value estimation unit 62 of FIG. 17.
- The correction unit 60c also includes a saturation region compensation unit 63c instead of the saturation region compensation unit 63 of FIG. 17.
- the correction unit 60c may be configured by operating a program on the CPU 100 (see FIG. 11), or may be realized by a hardware circuit.
- the saturation value estimation unit 62c estimates the correction value of the pixel value IR in the saturation region R sa of the IR image information.
- the saturation value estimation unit 62c estimates, for example, a predetermined value as a correction value.
- the saturation value estimation unit 62c may estimate the correction value based on the average value of the pixel value IR of the non-saturation region R nsa located around the saturation region R sa in the IR image information.
- the saturation value estimation unit 62c may estimate the correction value by adding a predetermined value to the average value.
- As described above, the IR image information does not become discontinuous even if the saturated region R sa exists.
- However, in the saturated region R sa, the pixel value IR is calculated based on saturated pixel signals. Therefore, the pixel value IR in the saturated region R sa is not a correct value but a saturated value (a value clipped to a predetermined value). In the present embodiment, therefore, a decrease in the accuracy of the IR image information is suppressed by correcting the pixel value IR in the saturated region R sa of the IR image information.
- In the above description, the saturation region detection unit 61 detects the saturated region R sa of the reflected light image information and thereby detects the corresponding saturated region R sa of the IR image information, but the present disclosure is not limited to this.
- the saturation region detection unit 61 may detect the saturation region R sa of the IR image information by determining whether or not the pixel value IR of the IR image information is a value when the IR is saturated.
- the correction unit 60c may correct the reflected light image information in addition to the IR image information. Since the correction of the reflected light image information is the same as that of the first and second embodiments, the description thereof will be omitted.
- FIG. 28 is a flowchart showing an example of correction processing in the distance measuring device 10c according to the third embodiment. Similar to the correction process of FIG. 23, such a correction process is started when, for example, the application unit 20 passes an image pickup start instruction instructing the distance measurement device 10c to start imaging (distance measurement).
- steps S101 to S201 are the same as the corresponding processes of FIG. 26 described above, and thus detailed description thereof will be omitted here.
- the process shifts to step S301.
- the saturation value estimation unit 62c of the distance measuring device 10c calculates a correction value based on the saturation region R sa calculated in step S103 and the IR image information calculated in step S201 (step S301).
- the saturation region compensation unit 63c of the ranging device 10c corrects the IR image information of the saturation region R sa based on the correction value calculated by the saturation value estimation unit 62c in step S301 (step S302).
- the control unit 40 of the distance measuring device 10c obtains a control signal for controlling the light source unit 11 and the light receiving unit 12 based on each pixel signal of each phase captured in step S101 (step S106).
- the control unit 40 stores the obtained control signal in a register or the like.
- the ranging device 10c determines whether or not the imaging is completed (step S107).
- When the distance measuring device 10c receives, for example, an imaging end instruction instructing the end of imaging from the application unit 20, it determines that the imaging has been completed (step S107, "Yes"). In this case, the distance measuring device 10c ends the correction process.
- When the distance measuring device 10c determines in step S107 that the imaging has not been completed (step S107, "No"), the process returns to step S101.
- steps S101 to S107 are repeated, for example, in units of one frame.
- the distance measuring device 10c (an example of an information processing device) according to the third embodiment includes a correction unit 60c (an example of a control unit).
- the correction unit 60c corrects the pixel value in the saturation region of the IR image information (an example of the received image information). As a result, it is possible to suppress a decrease in the accuracy of the received image information (IR image information in the third embodiment).
- In the first embodiment, the distance measuring device 10a has been described as being configured as a hardware device by the electronic device 1 including the CPU 100, the ROM 101, the RAM 102, the UI unit 104, the storage 103, the I/F 105, and the like, but this is not limited to this example.
- For example, the entire distance measuring device 10a, including the control unit 40, the distance measuring unit 50 and the correction unit 60 shown in FIG. 17, together with the sensor unit 111 configured by stacking semiconductor chips as shown in FIG. 11 or FIG. 12, can be configured as one semiconductor element. This can be similarly applied to the distance measuring devices 10b and 10c according to the second and third embodiments.
- The present invention is not limited to the case where all the pixel signals are saturated.
- For example, when only some of the pixel signals of the respective phases are saturated, the pixel value Confidence of the reflected light image information may not become zero.
- Even in this case, however, the pixel value Confidence includes an error, and the reflected light image information may become discontinuous. Therefore, the correction processing by the correction units 60 and 60b may also be performed when some of the pixel signals of the respective phases of the light receiving unit 12 are saturated in this way.
- In the above embodiments, the correction units 60, 60b and 60c perform the correction of the received light image information, but the present invention is not limited to this.
- For example, the application unit 20 may perform the correction of the received light image information.
- In this case, the electronic device 1 of FIG. 1 serves as the information processing device that corrects the received light image information.
- correction units 60, 60b, 60c of the above embodiment may be realized by a dedicated computer system or a general-purpose computer system.
- a program for executing the above-mentioned correction processing operation is stored and distributed in a computer-readable recording medium such as an optical disk, a semiconductor memory, a magnetic tape, a flexible disk, or a hard disk. Then, for example, the program is installed in a computer and the above-mentioned processing is executed to configure an information processing device including the correction unit 60.
- the information processing device may be an external device (for example, a personal computer) of the electronic device 1. Further, the information processing device may be an internal device (for example, a control unit 40) of the electronic device 1.
- The above program may be stored in a disk device provided in a server device on a network such as the Internet so that it can be downloaded to a computer or the like.
- the above-mentioned functions may be realized by collaboration between the OS (Operating System) and the application software.
- the part other than the OS may be stored in a medium and distributed, or the part other than the OS may be stored in the server device so that it can be downloaded to a computer or the like.
- Each component of each device shown in the figures is a functional concept and does not necessarily have to be physically configured as shown. That is, the specific form of distribution and integration of each device is not limited to that shown in the figures, and all or part of each device can be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions.
- the present technology can also have the following configurations.
- (1) An information processing device comprising a control unit that detects a saturated region of received light image information generated based on a pixel signal output by a light receiving sensor, the light receiving sensor receiving reflected light produced when emitted light from a light source is reflected by an object to be measured, the pixel signal being used to calculate the distance to the object to be measured, and the saturated region being a region of the received light image information generated based on the saturated pixel signal, and that corrects the received light image information in the saturated region based on the pixel signal.
- (2) The information processing device according to (1), wherein the received light image information is image information generated according to a component of the reflected light included in the pixel signal.
- (3) The information processing device according to (1), wherein the received light image information is image information generated according to a component of the reflected light and a component of ambient light included in the pixel signal.
- (4) The information processing device according to (2) or (3), wherein the control unit corrects the pixel value in the saturated region based on the pixel value of the received light image information adjacent to the saturated region within the non-saturated region in which the pixel signal is not saturated.
- (5) The information processing device according to (4), wherein the control unit corrects the pixel value in the saturated region using a correction value calculated based on the average value of the pixel values of the received light image information located around the saturated region within the non-saturated region in which the pixel signal is not saturated.
- (6) The information processing device according to (5), wherein the correction value is a value larger than the average value.
- (7) The information processing device according to (4), wherein the control unit corrects the pixel value in the saturated region according to the rate of change of a received light value calculated according to the component of the reflected light and the component of ambient light included in the pixel signal.
- (8) A correction method including: detecting a saturated region of received light image information generated based on a pixel signal output by a light receiving sensor, the light receiving sensor receiving reflected light produced when emitted light from a light source is reflected by an object to be measured, the pixel signal being used to calculate the distance to the object to be measured, and the saturated region being a region of the received light image information generated based on the saturated pixel signal; and correcting the received light image information in the saturated region based on the pixel signal.
- (9) A program for causing a computer to function as a control unit that detects a saturated region of received light image information generated based on a pixel signal output by a light receiving sensor, the light receiving sensor receiving reflected light produced when emitted light from a light source is reflected by an object to be measured, the pixel signal being used to calculate the distance to the object to be measured, and the saturated region being a region of the received light image information generated based on the saturated pixel signal, and that corrects the received light image information in the saturated region based on the pixel signal.
Description
1. Introduction
1.1. Configuration common to the embodiments
1.2. Distance measurement by the indirect ToF method applied to the embodiments
1.3. Configurations applied to the embodiments
2. First embodiment
2.1. Overview of the correction process
2.2. Configuration example of the distance measuring device
2.3. Correction process in the distance measuring device
3. Second embodiment
3.1. Configuration example of the distance measuring device
3.2. Correction process in the distance measuring device
4. Third embodiment
4.1. Configuration example of the distance measuring device
4.2. Correction process in the distance measuring device
5. Modifications
6. Conclusion
<1.1. Configuration common to the embodiments>
The present disclosure is suitably applied to techniques that perform distance measurement using light. Prior to the description of the embodiments of the present disclosure, to facilitate understanding, the indirect ToF (Time of Flight) method will be described as one of the distance measuring methods applied to the embodiments. The indirect ToF method is a technique in which light from a light source modulated by, for example, PWM (Pulse Width Modulation) (for example, laser light in the infrared region) is emitted toward an object to be measured, the reflected light is received by a light receiving element, and the distance to the object is measured based on the phase difference in the received reflected light.
Next, distance measurement by the indirect ToF method applied to the embodiments will be described. FIG. 2 is a diagram for explaining the principle of the indirect ToF method. In FIG. 2, light modulated by a sine wave is used as the emitted light 30 from the light source unit 11. Ideally, the reflected light 32 is a sine wave having a phase difference phase corresponding to the distance D with respect to the emitted light 30.
I = C0 - C180 …(1)
Q = C90 - C270 …(2)
phase = tan⁻¹(Q/I) …(3)
Depth = (phase × range)/2π …(4)
Confidence = |I| + |Q| …(5)
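Equations (1)-(5) can be combined into a short numeric sketch. This is a minimal illustration, not the patent's implementation; `atan2` is used in place of the bare arctangent of equation (3) so the phase falls in the correct quadrant (a detail the equation leaves open), and the function and parameter names are assumptions.

```python
import math

def indirect_tof(c0, c90, c180, c270, measurement_range):
    """Distance and confidence from four phase samples per eqs. (1)-(5)."""
    i = c0 - c180                                       # (1)
    q = c90 - c270                                      # (2)
    phase = math.atan2(q, i) % (2 * math.pi)            # (3), quadrant-safe
    depth = phase * measurement_range / (2 * math.pi)   # (4)
    confidence = abs(i) + abs(q)                        # (5)
    return depth, confidence
```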
FIG. 5 is a diagram for explaining a first method, applied to the embodiments, of acquiring the light quantity values and calculating each piece of information. In FIG. 5, the light receiving unit 12 sequentially acquires the light quantity values C0, C90, C180 and C270 for the respective phases. In the example of FIG. 5, the light receiving unit 12 performs exposure at phase 0° during the period from time t10 to time t11, and, after a predetermined time (for example, a processing switching time) from time t11, performs exposure at phase 90° during the period from time t12 to time t13. Similarly, it receives light at phase 180° during the period from time t14 to time t15 after a predetermined time from time t13, and performs exposure at phase 270° during the period from time t16 to time t17 after a predetermined time from time t15.
Confidence = (I² + Q²)^(1/2) …(6)
FIG. 6 is a diagram for explaining a second method, applied to the embodiments, of acquiring the light quantity values and calculating each piece of information. In this second method, the light receiving unit 12 includes two readout circuits (referred to as tap A and tap B) for one light receiving element, and readout by tap A and tap B is executed sequentially (alternately) (details will be described later with reference to FIG. 13). Hereinafter, the readout method using tap A and tap B is also referred to as the 2-tap method.
I=C0-C180=(A0-B0)-(A180-B180) …(7)
Q=C90-C270=(A90-B90)-(A270-B270) …(8)
FIG. 10 is a diagram for explaining a third method, applied to the embodiments, of acquiring the light quantity values and calculating each piece of information. In this third method, the light receiving unit 12 includes tap A and tap B as in the second method described above, and is the same as the second method in that readout from tap A and tap B is executed sequentially. On the other hand, the third method differs from the second method in that the light receiving unit 12 sequentially executes readout at the phases 0° and 90° and does not execute readout at the phases 180° and 270°.
I=C0-C180=(A0-B0) …(9)
Q=C90-C270=(A90-B90) …(10)
RAW=C0-CFPN=(A0-AFPN)+(B0-BFPN) …(11)
Next, an example of a configuration applied to the embodiments will be described. FIG. 11 is a block diagram showing a configuration of an example of an electronic device applied to the embodiments. In FIG. 11, the electronic device 1 includes a CPU (Central Processing Unit) 100, a ROM (Read Only Memory) 101, a RAM (Random Access Memory) 102, a storage 103, a UI (User Interface) unit 104, and an interface (I/F) 105. The electronic device 1 further includes a light source unit 110 and a sensor unit 111 corresponding respectively to the light source unit 11 and the light receiving unit 12 of FIG. 1.
<2.1. Overview of the correction process>
Next, a first embodiment of the present disclosure will be described. The distance measuring device according to the present embodiment generates reflected light image information in addition to the distance D to the object to be measured, based on the reflected light 32 received by the light receiving unit 12. At this time, for example, when the intensity of the reflected light 32 received by the light receiving unit 12 is high and the light intensity saturates, the generation accuracy of the reflected light image information may decrease. Hereinafter, the reflected light image information in the case where the light intensity is saturated, and a method of correcting such reflected light image information, will be described with reference to FIG. 16. FIG. 16 is a diagram for explaining an overview of the correction method according to the first embodiment of the present disclosure.
FIG. 17 is a functional block diagram of an example for explaining the functions of the distance measuring device according to the first embodiment of the present disclosure. In FIG. 17, the distance measuring device 10a includes the light source unit 11, the light receiving unit 12, a control unit 40, a distance measuring unit 50, and a correction unit 60. Among these, the control unit 40, the distance measuring unit 50 and the correction unit 60 are configured, for example, by a predetermined program operating on the CPU 100 (see FIG. 11). Not limited to this, some or all of the control unit 40, the distance measuring unit 50 and the correction unit 60 may be configured by hardware circuits operating in cooperation with each other. Hereinafter, the device composed of the control unit 40, the distance measuring unit 50 and the correction unit 60 is also simply referred to as an information processing device.
I=(A0-B0)-(A180-B180) …(7)
Q=(A90-B90)-(A270-B270) …(8)
Confidence=|I|+|Q| …(5)
図23は、第1の実施形態に係る測距装置10aにおける補正処理の一例を示すフローチャートである。かかる補正処理は、例えば、アプリケーション部20から測距装置10aに対して、撮像(測距)の開始を指示する撮像開始指示が渡されることで開始される。
Next, a second embodiment of the present disclosure will be described. The distance measuring device according to the second embodiment corrects the saturated region Rsa of the reflected light image information using IR image information.
FIG. 24 is a block diagram for explaining an example of the functions of the distance measuring device 10b according to the second embodiment. The distance measuring device 10b shown in FIG. 24 includes a correction unit 60b in place of the correction unit 60 of FIG. 17. The correction unit 60b does not include the saturated value estimating unit 62 of FIG. 17 but instead includes an IR calculating unit 64. The correction unit 60b also includes a saturated region compensating unit 63b in place of the saturated region compensating unit 63 of FIG. 17. The correction unit 60b may be configured by a program running on the CPU 100 (see FIG. 11), or may be realized by hardware circuits.
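One plausible realization of the IR-based compensation performed by the saturated region compensating unit 63b can be sketched as follows. This is a hedged illustration only; the actual procedure is described with reference to figures not reproduced here. The sketch assumes the IR image is not saturated inside Rsa, estimates a scale factor from the non-saturated pixels, and fills the saturated region with scaled IR values.

```python
import numpy as np

def fill_from_ir(refl, ir, sat_mask):
    """Fill the saturated region (sat_mask True) of the reflected-light
    image using the IR image, scaled by a ratio estimated over the
    non-saturated pixels. Hypothetical sketch, not the source's method."""
    valid = (~sat_mask) & (ir > 0)
    scale = refl[valid].sum() / ir[valid].sum()  # global reflected/IR ratio
    out = refl.astype(np.float64).copy()
    out[sat_mask] = scale * ir[sat_mask]
    return out
```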
FIG. 26 is a flowchart showing an example of the correction processing in the distance measuring device 10b according to the second embodiment. As with the correction processing of FIG. 23, this correction processing is started, for example, when the application unit 20 passes an imaging start instruction to the distance measuring device 10b, instructing it to start imaging (distance measurement).
Next, a third embodiment of the present disclosure will be described. The distance measuring device according to the third embodiment corrects the saturated region Rsa of the IR image information.
FIG. 27 is a block diagram for explaining an example of the functions of the distance measuring device 10c according to the third embodiment. The distance measuring device 10c shown in FIG. 27 includes a correction unit 60c in place of the correction unit 60b of FIG. 24. The correction unit 60c includes a saturated value estimating unit 62c in place of the saturated value estimating unit 62 of FIG. 17, and a saturated region compensating unit 63c in place of the saturated region compensating unit 63 of FIG. 17. The correction unit 60c may be configured by a program running on the CPU 100 (see FIG. 11), or may be realized by hardware circuits.
FIG. 28 is a flowchart showing an example of the correction processing in the distance measuring device 10c according to the third embodiment. As with the correction processing of FIG. 23, this correction processing is started, for example, when the application unit 20 passes an imaging start instruction to the distance measuring device 10c, instructing it to start imaging (distance measurement).
In the first embodiment described above, the distance measuring device 10a has been described as being configured as a hardware device by the electronic device 1 including the CPU 100, the ROM 101, the RAM 102, the UI unit 104, the storage 103, the I/F 105, and so on; however, the configuration is not limited to this example. For example, the sensor unit 111 shown in FIG. 11 or FIG. 12, configured by stacking semiconductor chips, may incorporate the control unit 40, the distance measuring unit 50, and the correction unit 60 shown in FIG. 17, so that the distance measuring device 10a as a whole is configured as a single semiconductor element. The same applies to the distance measuring devices 10b and 10c according to the second and third embodiments.
(1)
An information processing apparatus comprising:
a control unit that detects a saturated region of light reception image information generated based on a pixel signal output by a light receiving sensor, the light receiving sensor receiving reflected light produced when emitted light irradiated from a light source is reflected by an object to be measured, and
corrects the light reception image information in the saturated region based on the pixel signal,
wherein the pixel signal is used for calculating a distance to the object to be measured, and
the saturated region is a region of the light reception image information generated based on the saturated pixel signal.
(2)
The information processing apparatus according to (1), wherein
the light reception image information is image information generated according to a component of the reflected light contained in the pixel signal.
(3)
The information processing apparatus according to (1), wherein
the light reception image information is image information generated according to a component of the reflected light and a component of ambient light contained in the pixel signal.
(4)
The information processing apparatus according to (2) or (3), wherein
the control unit corrects pixel values in the saturated region based on pixel values of the light reception image information that are adjacent to the saturated region within a non-saturated region in which the pixel signal is not saturated.
(5)
The information processing apparatus according to (4), wherein
the control unit corrects the pixel values in the saturated region using a correction value calculated based on an average of the pixel values of the light reception image information located around the saturated region within the non-saturated region in which the pixel signal is not saturated.
(6)
The information processing apparatus according to (5), wherein
the correction value is larger than the average.
(7)
The information processing apparatus according to (4), wherein
the control unit corrects the pixel values in the saturated region according to a rate of change of light reception values calculated according to the component of the reflected light and the component of ambient light contained in the pixel signal.
(8)
A correction method comprising:
detecting a saturated region of light reception image information generated based on a pixel signal output by a light receiving sensor, the light receiving sensor receiving reflected light produced when emitted light irradiated from a light source is reflected by an object to be measured, wherein the pixel signal is used for calculating a distance to the object to be measured, and the saturated region is a region of the light reception image information generated based on the saturated pixel signal; and
correcting the light reception image information in the saturated region based on the pixel signal.
(9)
A program for causing a computer to function as:
a control unit that detects a saturated region of light reception image information generated based on a pixel signal output by a light receiving sensor, the light receiving sensor receiving reflected light produced when emitted light irradiated from a light source is reflected by an object to be measured, and
corrects the light reception image information in the saturated region based on the pixel signal,
wherein the pixel signal is used for calculating a distance to the object to be measured, and
the saturated region is a region of the light reception image information generated based on the saturated pixel signal.
10, 10a, 10b, 10c Distance measuring device
11 Light source unit
12 Light receiving unit
13 Distance measurement processing unit
20 Application unit
40 Control unit
50 Distance measuring unit
60, 60b, 60c Correction unit
61 Saturated region detecting unit
62, 62c Saturated value estimating unit
63, 63b, 63c Saturated region compensating unit
64 IR calculating unit
Claims (9)
- An information processing apparatus comprising:
a control unit that detects a saturated region of light reception image information generated based on a pixel signal output by a light receiving sensor, the light receiving sensor receiving reflected light produced when emitted light irradiated from a light source is reflected by an object to be measured,
wherein the pixel signal is used for calculating a distance to the object to be measured,
the saturated region is a region of the light reception image information generated based on the saturated pixel signal, and
the control unit corrects the light reception image information in the saturated region based on the pixel signal.
- The information processing apparatus according to claim 1, wherein the light reception image information is image information generated according to a component of the reflected light contained in the pixel signal.
- The information processing apparatus according to claim 1, wherein the light reception image information is image information generated according to a component of the reflected light and a component of ambient light contained in the pixel signal.
- The information processing apparatus according to claim 2, wherein the control unit corrects pixel values in the saturated region based on pixel values of the light reception image information that are adjacent to the saturated region within a non-saturated region in which the pixel signal is not saturated.
- The information processing apparatus according to claim 4, wherein the control unit corrects the pixel values in the saturated region using a correction value calculated based on an average of the pixel values of the light reception image information located around the saturated region within the non-saturated region in which the pixel signal is not saturated.
- The information processing apparatus according to claim 5, wherein the correction value is larger than the average.
- The information processing apparatus according to claim 4, wherein the control unit corrects the pixel values in the saturated region according to a rate of change of light reception values calculated according to the component of the reflected light and the component of ambient light contained in the pixel signal.
- A correction method comprising: detecting a saturated region of light reception image information generated based on a pixel signal output by a light receiving sensor, the light receiving sensor receiving reflected light produced when emitted light irradiated from a light source is reflected by an object to be measured, wherein the pixel signal is used for calculating a distance to the object to be measured, and the saturated region is a region of the light reception image information generated based on the saturated pixel signal; and correcting the light reception image information in the saturated region based on the pixel signal.
- A program for causing a computer to function as: a control unit that detects a saturated region of light reception image information generated based on a pixel signal output by a light receiving sensor, the light receiving sensor receiving reflected light produced when emitted light irradiated from a light source is reflected by an object to be measured, and corrects the light reception image information in the saturated region based on the pixel signal, wherein the pixel signal is used for calculating a distance to the object to be measured, and the saturated region is a region of the light reception image information generated based on the saturated pixel signal.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/761,936 US20220390577A1 (en) | 2019-09-27 | 2020-08-03 | Information processing apparatus, correction method, and program |
KR1020227009318A KR20220069944A (ko) | 2019-09-27 | 2020-08-03 | 정보 처리 장치, 보정 방법 및 프로그램 |
CN202080067172.4A CN114521241A (zh) | 2019-09-27 | 2020-08-03 | 信息处理装置、校正方法以及程序 |
JP2021548399A JPWO2021059748A1 (ja) | 2019-09-27 | 2020-08-03 | |
EP20870098.9A EP4036521A4 (en) | 2019-09-27 | 2020-08-03 | INFORMATION PROCESSING DEVICE, CORRECTION METHOD, AND PROGRAM |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019176862 | 2019-09-27 | ||
JP2019-176862 | 2019-09-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021059748A1 true WO2021059748A1 (ja) | 2021-04-01 |
Family
ID=75166581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/029674 WO2021059748A1 (ja) | 2019-09-27 | 2020-08-03 | 情報処理装置、補正方法およびプログラム |
Country Status (7)
Country | Link |
---|---|
US (1) | US20220390577A1 (ja) |
EP (1) | EP4036521A4 (ja) |
JP (1) | JPWO2021059748A1 (ja) |
KR (1) | KR20220069944A (ja) |
CN (1) | CN114521241A (ja) |
TW (1) | TW202130973A (ja) |
WO (1) | WO2021059748A1 (ja) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11922606B2 (en) * | 2021-10-04 | 2024-03-05 | Samsung Electronics Co., Ltd. | Multipass interference correction and material recognition based on patterned illumination without frame rate loss |
CN117523437A (zh) * | 2023-10-30 | 2024-02-06 | 河南送变电建设有限公司 | 一种用于变电站近电作业现场实时风险识别方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008061033A (ja) * | 2006-08-31 | 2008-03-13 | Sanyo Electric Co Ltd | スミア推測方法およびスミア除去回路 |
JP2016183922A (ja) * | 2015-03-26 | 2016-10-20 | 富士フイルム株式会社 | 距離画像取得装置及び距離画像取得方法 |
JP2017133853A (ja) | 2016-01-25 | 2017-08-03 | 株式会社リコー | 測距装置 |
JP2018077071A (ja) * | 2016-11-08 | 2018-05-17 | 株式会社リコー | 測距装置、監視カメラ、3次元計測装置、移動体、ロボット、光源駆動条件設定方法及び測距方法 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107710015B (zh) * | 2015-07-03 | 2021-08-24 | 新唐科技日本株式会社 | 距离测量装置以及距离图像合成方法 |
2020
- 2020-07-24 TW TW109125040A patent/TW202130973A/zh unknown
- 2020-08-03 EP EP20870098.9A patent/EP4036521A4/en active Pending
- 2020-08-03 JP JP2021548399A patent/JPWO2021059748A1/ja active Pending
- 2020-08-03 CN CN202080067172.4A patent/CN114521241A/zh active Pending
- 2020-08-03 WO PCT/JP2020/029674 patent/WO2021059748A1/ja active Application Filing
- 2020-08-03 US US17/761,936 patent/US20220390577A1/en active Pending
- 2020-08-03 KR KR1020227009318A patent/KR20220069944A/ko unknown
Also Published As
Publication number | Publication date |
---|---|
US20220390577A1 (en) | 2022-12-08 |
KR20220069944A (ko) | 2022-05-27 |
EP4036521A4 (en) | 2022-10-19 |
JPWO2021059748A1 (ja) | 2021-04-01 |
CN114521241A (zh) | 2022-05-20 |
EP4036521A1 (en) | 2022-08-03 |
TW202130973A (zh) | 2021-08-16 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20870098; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2021548399; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 2020870098; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2020870098; Country of ref document: EP; Effective date: 20220428 |
Ref document number: 2020870098 Country of ref document: EP Effective date: 20220428 |