WO2018056070A1 - Signal processing device, imaging device, and signal processing method - Google Patents
Signal processing device, imaging device, and signal processing method
- Publication number
- WO2018056070A1 (PCT/JP2017/032393)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- image
- addition
- unit
- synthesis
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06G—ANALOGUE COMPUTERS
- G06G1/00—Hand manipulated computing devices
- G06G1/16—Hand manipulated computing devices in which a straight or curved line has to be drawn through related points on one or more families of curves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6811—Motion detection based on the image signal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/745—Detection of flicker frequency or suppression of flicker wherein the flicker is caused by illumination, e.g. due to fluorescent tube illumination or pulsed LED illumination
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/50—Control of the SSIS exposure
- H04N25/57—Control of the dynamic range
- H04N25/58—Control of the dynamic range involving two or more exposures
- H04N25/587—Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10144—Varying exposure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- The present technology relates to a signal processing device, an imaging device, and a signal processing method, and in particular to a signal processing device, an imaging device, and a signal processing method that make it possible to reliably recognize a blinking object and to accurately recognize an obstacle.
- As background art, a technique disclosed in Patent Document 1 is known.
- The present technology has been made in view of such a situation, and makes it possible, in a scene with a very large luminance difference, to reliably recognize a blinking object and to accurately recognize an obstacle.
- A signal processing device according to one aspect of the present technology is a signal processing device including an addition unit that adds signals of a plurality of images shot at different exposure times using different saturation signal amounts, and a synthesis unit that synthesizes the signals of the plurality of images obtained as a result of the addition.
- An imaging device according to one aspect of the present technology includes an image generation unit that generates a plurality of images captured at different exposure times, an addition unit that adds the signals of the plurality of images using different saturation signal amounts, and a synthesis unit that synthesizes the signals of the plurality of images obtained as a result of the addition.
- A signal processing method according to one aspect of the present technology includes a step of adding signals of a plurality of images captured at different exposure times using different saturation signal amounts, and a step of synthesizing the signals of the plurality of images obtained as a result of the addition.
- In one aspect of the present technology, signals of a plurality of images captured at different exposure times are added using different saturation signal amounts, and the signals of the plurality of images obtained as a result of the addition are synthesized.
- The signal processing device and the imaging device may each be an independent device, or may be an internal block constituting a single device.
- In recent years, in-vehicle cameras have increasingly been installed in automobiles for the purpose of realizing advanced driving control such as automatic driving.
- To ensure safety, in-vehicle cameras are required to maintain visibility even in situations with extremely large brightness differences, such as at tunnel exits.
- FIG. 1 is a diagram illustrating an example of photographing a subject with a very large luminance difference.
- Although FIG. 1 shows an example of photographing a tunnel exit, driving control for ensuring safety cannot be performed if the situation at the tunnel exit cannot be recognized.
- FIG. 2 is a diagram illustrating an example of photographing a blinking subject.
- In FIG. 2, a traffic light whose blue (left-end) lamp is lit is shown, but in the third frame (Frame 3) and the fourth frame (Frame 4), the traffic light is shown in the off state.
- If the traffic light is recorded in the off state, this hinders the use of the video (images) as evidence when used, for example, in a drive recorder.
- Also, if the traffic light is recorded in the off state, when the images are used, for example, for automatic driving of a car, it causes trouble in driving control, such as stopping the car.
- (Example of vehicle front recognition) Further, when realizing automatic driving, a technique for recognizing an obstacle (target object), such as a preceding vehicle in the traveling direction of the car or a pedestrian crossing the road, is essential. For example, if detection of an obstacle in front of the automobile is delayed, operation of the automatic brake may be delayed.
- FIG. 3 is a diagram illustrating an example of recognition in front of the vehicle.
- In FIG. 3, two vehicles traveling in front of the car, the state of the road surface, and the like are recognized, and automatic driving is controlled according to the recognition result.
- Patent Document 1 proposes a method of suppressing overexposure by combining images taken with a plurality of different exposure amounts, thereby expanding the apparent dynamic range.
- In this method, as shown in FIG. 4, the brightness value of a long-exposure image (long accumulation image) having a long exposure time is referred to: if the brightness falls below a predetermined threshold, the long-exposure image (long accumulation image) is output, and if it exceeds the threshold, a short-exposure image (short accumulation image) is output, so that an image having a wide dynamic range can be generated.
- However, as shown in FIG. 2, when a high-brightness subject such as an LED traffic light is blinking, combining the long-exposure image (long accumulation image) and the short-exposure image (short accumulation image) in this way may record the off state even though the lit state of the traffic light should be recorded.
- FIG. 5 shows an example in which the off state is recorded even though the lit state of the traffic light should have been recorded.
- FIG. 6 shows an example in which photographing is performed with an exposure time that exceeds the extinguishing period of the blinking light source.
- Here, the off period of the light source is 4 ms, and 4 ms ≤ exposure time.
- FIGS. 7 and 8 are diagrams for explaining the technique of the present technology.
- In the present technology, a plurality of captured images (a long accumulation image and a short accumulation image) taken at different exposure times (T1, T2) are synthesized to expand the dynamic range, and at the same time, the added value of the long accumulation image and the short accumulation image is always used. Therefore, even in a situation where the lit state of the LED is recorded in only one of the plurality of captured images exposed at different exposure timings, the image signal of the captured image containing the lit state of the LED is used effectively, which suppresses the occurrence of LED blackout.
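- As a minimal numeric illustration of this point (the pixel values below are hypothetical, not from the patent): because the synthesis always works from the added value of the long and short accumulations, a light source that was lit during only one of the exposures still contributes to the output signal.

```python
def added_signal(long_px, short_px):
    """Per-pixel addition of a long-accumulation and a short-accumulation sample."""
    return long_px + short_px

# Hypothetical LED pixel, lit during only one of the two exposures:
lit_only_in_short = added_signal(0, 120)   # LED dark in the long exposure
lit_only_in_long = added_signal(800, 0)    # LED dark in the short exposure
# Either way the added value retains the lit state, so the LED does not
# vanish from the synthesized output.
```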
- Here, the knee point Kp1 can be said to be the signal amount at which the long accumulation (P1) saturates and the slope of the addition signal Plo changes.
- g1 represents an exposure ratio (exposure time of long accumulation (T1) / exposure time of short accumulation (T2)).
- Then, a linear signal (P), which is a linearly restored signal, is obtained for each of the first region and the second region, with the knee point Kp1 as the boundary.
- The second region, that is, the region where Kp1 < Plo, is a saturated region, and the value of the long accumulation (P1), which is saturated and therefore constant, is estimated from the value of the short accumulation (P2).
- Here, P1 denotes the long accumulation, and P2 denotes the short accumulation.
- Here, the first term on the right side represents the start offset of the second region, the second term represents the signal amount of the short accumulation, and the third term represents the signal amount of the long accumulation estimated from the short accumulation.
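- The two-region restoration described above can be sketched as follows. This is a reconstruction, not the patent's literal equations: the knee point uses the KP = CLIP × (1 + 1/g1) form of equation (7), and above the knee the saturated long accumulation is estimated from the short accumulation via the exposure ratio g1; the clip value and ratio in the example are hypothetical.

```python
def linearize_addition(plo, clip_long, g1):
    """Restore a brightness-linear signal P from the addition signal Plo.

    First region (Plo <= Kp1): both accumulations are linear, so P = Plo.
    Second region (Kp1 < Plo): the long accumulation is saturated, so the
    result is the start offset of the region (Kp1), plus the short-accumulation
    signal above the knee, plus the long accumulation estimated from it via g1.
    """
    kp1 = clip_long * (1.0 + 1.0 / g1)  # knee point where the slope changes
    if plo <= kp1:
        return float(plo)
    excess = plo - kp1                  # short-accumulation signal above the knee
    return kp1 + excess + g1 * excess
```

With a hypothetical clip value of 1000 and exposure ratio g1 = 4, the knee point is 1250, and the restored signal is continuous there.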
- Further, Patent Document 3 proposes a method of acquiring a vertical histogram of an image ahead of the automobile obtained from a photographing apparatus and detecting the position of an obstacle (target object) from its peak position.
- In this method, as shown in FIG. 9, a histogram of pixel values is acquired in a strip-shaped region A1 along the traveling direction of the captured image in front of the automobile.
- In A of FIG. 9, since there is no obstacle in the traveling direction, the road surface histogram is flat.
- In B of FIG. 9, another vehicle is traveling in front of the automobile, and there is an obstacle in the traveling direction; therefore, a peak appears at a specific position on the otherwise flat road surface histogram.
- The position of the obstacle can then be detected by specifying the coordinates corresponding to the luminance level of the peak.
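- A minimal sketch of this histogram-peak idea (the luminance values, bin layout, and peak rule are hypothetical illustrations, not the detailed method of Patent Document 3):

```python
def strip_histogram(strip, n_bins=8, max_val=256):
    """Histogram of pixel values in a strip-shaped region along the travel direction."""
    hist = [0] * n_bins
    for v in strip:
        hist[min(v * n_bins // max_val, n_bins - 1)] += 1
    return hist

def peak_bin(hist):
    """Index of the dominant histogram bin; the obstacle position would then be
    found from the image coordinates of pixels at that luminance level."""
    return max(range(len(hist)), key=lambda i: hist[i])

road = list(range(60, 160))          # smoothly varying road surface: flat histogram
vehicle = [200, 210, 205, 215] * 10  # hypothetical preceding-vehicle pixels
obstacle_bin = peak_bin(strip_histogram(road + vehicle))
```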
- However, in the technique described above, the calculation formula changes sharply at the knee point Kp1, so the noise distribution of the image becomes asymmetric. Therefore, for example, in a road-surface situation where the sun is in the traveling direction of the automobile and the brightness changes smoothly, pseudo spikes (histogram spikes) occur when a histogram of the road surface is acquired.
- FIG. 10 shows an example of histogram spikes.
- FIG. 10 shows an example of a histogram obtained as a result of synthesizing a signal having a smooth luminance change using the technique described above; as shown in frame A2 in the figure, a histogram spike has occurred.
- The position at which the spike occurs in the histogram of FIG. 10 corresponds to the position of the synthesis result shown in C of FIG. 11.
- The synthesis result of C in FIG. 11 is obtained by synthesizing the value of the long accumulation (P1) in A of FIG. 11 and the value of the short accumulation (P2) in B of FIG. 11.
- As described above, a pseudo spike may occur in the histogram. Further, even when there is an obstacle in front of the car, in addition to the peak (main peak) indicating the presence of the obstacle shown in frame A4 in the figure, a pseudo spike (pseudo peak) may occur in the histogram, as shown in frame A3 in the figure.
- When such a pseudo peak occurs, it may not be distinguishable from the main peak used to detect the presence or absence of an obstacle, and erroneous detection of an obstacle may occur.
- In contrast, the present technology can not only suppress whiteout and blackout in scenes with very large brightness differences but also correctly output the lit state of a fast-blinking subject such as an LED traffic light, and, by suppressing spikes in the histogram, can accurately detect an obstacle without erroneous detection.
- FIG. 13 is a block diagram illustrating a configuration example of an embodiment of a camera unit as a photographing apparatus to which the present technology is applied.
- the camera unit 10 includes a lens 101, an image sensor 102, a delay line 103, a signal processing unit 104, an output unit 105, and a timing control unit 106.
- the lens 101 collects light from the subject and makes it incident on the image sensor 102 to form an image.
- The image sensor 102 is, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
- The image sensor 102 receives incident light from the lens 101 and performs photoelectric conversion, thereby capturing a photographed image (image data) corresponding to the incident light.
- The image sensor 102 functions as an imaging unit that performs photographing at the photographing timing specified by the timing control unit 106; it performs N photographings within the frame-rate period of the output image output by the output unit 105, and sequentially outputs the N photographed images obtained by the N photographings.
- the delay line 103 sequentially stores N captured images sequentially output by the image sensor 102 and supplies the N captured images to the signal processing unit 104 at the same time.
- The signal processing unit 104 processes the N photographed images from the delay line 103 and generates a one-frame output image. To this end, the signal processing unit 104 has N systems that each calculate an addition value of the pixel values at the same coordinates of the N photographed images and then perform linearization, and it blends the processing results to generate the output image.
- The signal processing unit 104 also performs processing such as noise removal and WB (white balance) adjustment on the output image, and supplies the result to the output unit 105. Further, the signal processing unit 104 detects the exposure level from the brightness of the N captured images from the delay line 103, and supplies the detection result to the timing control unit 106.
- the output unit 105 outputs an output image (video data) from the signal processing unit 104.
- the timing control unit 106 controls the shooting timing of the image sensor 102. That is, the timing control unit 106 adjusts the exposure time of the image sensor 102 based on the exposure level detected by the signal processing unit 104. At that time, the timing control unit 106 performs shutter control so that the exposure timings of the N captured images are as close as possible.
- the camera unit 10 is configured as described above.
- the image sensor 102 acquires shooting data of N shot images having different exposure times.
- The timing control unit 106 performs control to bring the exposure periods as close together as possible, extending the effective exposure time and making it easier to cover the blinking cycle of a fast-blinking subject such as an LED.
- T1, T2, and T3 indicate respective exposure timings when performing photographing three times within one frame.
- the timing control unit 106 controls the exposure timing so that the exposure of T2 is started as soon as the exposure of T1 is completed, and the exposure of T3 is started as soon as the exposure of T2 is completed. That is, the interval between the end of exposure at T1 and the start of exposure at T2 and the interval between the end of exposure at T2 and the start of exposure at T3 are minimized.
- As a result, the lit period of the fast-blinking subject can easily overlap any one of the exposure periods T1, T2, and T3, increasing the probability that an image of the lit period can be captured.
- On the other hand, when the lit period is short and the exposure timings T1, T2, and T3 are set far apart as shown in A of FIG. 15, the exposure timing and the light-emission timing may not overlap.
- Therefore, the timing control unit 106 can perform control to bring the exposure timings T1, T2, and T3 close to one another in accordance with the off period.
- FIG. 16 is a diagram illustrating a configuration example of the signal processing unit 104 in FIG. 13.
- In the signal processing unit 104, the image data of the N photographed images acquired by the image sensor 102 is processed and synthesized into a one-frame output image.
- At that time, the signal processing unit 104 always performs synthesis between the image data of the N photographed images, so that a total of N−1 synthesis processes are performed.
- T1 and T2 indicate photographed images corresponding to the respective exposure times when photographing twice within one frame.
- the captured images corresponding to T1 and T2 are also described as an image signal T1 and an image signal T2, respectively.
- the signal processing unit 104 includes a first addition processing unit 121, a first linearization processing unit 122, a second addition processing unit 123, a second linearization processing unit 124, a synthesis coefficient calculation unit 125, and a motion detection unit 126. , A synthesis coefficient modulation unit 127, and a synthesis processing unit 128.
- the first addition processing unit 121 performs a first addition process of adding the image signal T1 and the image signal T2 input thereto to generate an addition signal SUM1.
- the first addition processing unit 121 supplies the addition signal SUM1 obtained by the first addition processing to the first linearization processing unit 122.
- In the first addition process, upper-limit clip processing is performed on the values of the image signal T1 and the image signal T2, and then the resulting signals are added. For this purpose, the clip values of the image signal T1 and the image signal T2 in the first addition process are set; assuming that the clip value of the image signal T1 is CLIP_T1_1 and that of the image signal T2 is CLIP_T2_1, the addition signal SUM1 is obtained by calculating the following equation (6). The clip value (upper-limit clip value) can also be called a saturation value (saturation signal amount) or a limit value.
- In equation (6), MIN(a, b) is a function meaning that b is clipped to the upper limit value a (saturation value, limit value).
- the meaning of this function is the same in formulas described later.
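- A per-pixel sketch of the first addition process as just described (the clip values in the example are hypothetical):

```python
def first_addition(t1, t2, clip_t1_1, clip_t2_1):
    """Equation (6): clip each image signal to its own upper limit with
    MIN(a, b), then add the clipped signals to form the addition signal SUM1."""
    return min(clip_t1_1, t1) + min(clip_t2_1, t2)

# T1 exceeds its clip value, so it is limited to 1000 before the addition:
sum1 = first_addition(1200, 300, 1000, 500)
```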
- the first linearization processing unit 122 refers to the addition signal SUM1 from the first addition processing unit 121, performs a first linearization process, and generates a linear signal LIN1 that is linear with respect to brightness.
- the first linearization processing unit 122 supplies the linear signal LIN1 obtained by the first linearization processing to the motion detection unit 126 and the synthesis processing unit 128.
- KP1_1 = CLIP_T1_1 × (1 + 1/G1) … (7)
- Then, the linear signal LIN1 is obtained by the following equation (8) or equation (9), according to whether the addition signal SUM1 lies below or above the knee point Kp (KP1_1).
- the second addition processing unit 123 performs a second addition process of adding the image signal T1 and the image signal T2 input thereto to generate an addition signal SUM2.
- the second addition processing unit 123 supplies the addition signal SUM2 obtained by the second addition processing to the second linearization processing unit 124.
- In the second addition process, upper-limit clip processing is performed on the values of the image signal T1 and the image signal T2 using clip values different from those of the first addition process described above, and then the resulting signals are added.
- the clip values of the image signal T1 and the image signal T2 in the second addition process are set.
- Assuming that the clip value of the image signal T1 is CLIP_T1_2 and the clip value of the image signal T2 is CLIP_T2_2, the following equation (10) is calculated in the second addition process to obtain the addition signal SUM2.
- the second linearization processing unit 124 refers to the addition signal SUM2 from the second addition processing unit 123, performs a second linearization process, and generates a linear signal LIN2 that is linear with respect to brightness.
- the second linearization processing unit 124 supplies the linear signal LIN2 obtained by the second linearization processing to the motion detection unit 126 and the synthesis processing unit 128.
- KP1_2 = CLIP_T1_2 × (1 + 1/G1) … (11)
- Then, the linear signal LIN2 is obtained by the following equation (12) or equation (13), according to whether the addition signal SUM2 lies below or above the knee point Kp (KP1_2).
- the synthesis coefficient calculation unit 125 calculates a synthesis coefficient for synthesizing the linear signal LIN1 and the linear signal LIN2 with reference to the image signal T1.
- the synthesis coefficient calculation unit 125 supplies the calculated synthesis coefficient to the synthesis coefficient modulation unit 127.
- Assuming that the threshold at which synthesis (blending) of the linear signal LIN2 starts is BLD_TH_LOW, and the threshold at which the synthesis ratio (blend rate) of the linear signal LIN2 reaches 1.0 (100%) is BLD_TH_HIGH, the synthesis coefficient is obtained by the following equation (14). However, here, the result is clipped to the range 0 to 1.0.
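- The body of equation (14) is not reproduced in this text; one form consistent with the description (0 at BLD_TH_LOW, 1.0 at BLD_TH_HIGH, clipped to 0–1.0) is a linear ramp over the image signal T1. The threshold values below are hypothetical.

```python
def synthesis_coefficient(t1, bld_th_low, bld_th_high):
    """Assumed form of equation (14): ramp the LIN2 blend rate from 0.0 at
    BLD_TH_LOW up to 1.0 at BLD_TH_HIGH, clipped to the range 0 to 1.0."""
    coef = (t1 - bld_th_low) / (bld_th_high - bld_th_low)
    return max(0.0, min(1.0, coef))
```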
- The motion detection unit 126 performs motion determination by defining the difference between the linear signal LIN1 from the first linearization processing unit 122 and the linear signal LIN2 from the second linearization processing unit 124 as the motion amount. At that time, in order to distinguish signal noise from the blinking of a fast-blinking body such as an LED, the motion detection unit 126 calculates a motion coefficient by comparing the motion amount with the noise amount expected from the sensor characteristics. The motion detection unit 126 supplies the calculated motion coefficient to the synthesis coefficient modulation unit 127.
- Specifically, the motion coefficient is calculated by the following equation (15). However, here, the result is clipped to the range 0 to 1.0.
- ABS () means a function that returns an absolute value. The meaning of this function is the same in formulas described later.
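- Equation (15) is likewise not reproduced here; a sketch consistent with the description treats ABS(LIN2 − LIN1) as the motion amount and compares it with the noise amount expected from the sensor characteristics. The `noise_amount` and `margin` parameters are hypothetical tuning values.

```python
def motion_coefficient(lin1, lin2, noise_amount, margin):
    """Assumed form of equation (15): differences within the expected noise
    yield 0.0 (no motion); larger differences ramp up to 1.0 (motion or
    blinking). The result is clipped to the range 0 to 1.0."""
    motion = abs(lin2 - lin1)
    return max(0.0, min(1.0, (motion - noise_amount) / margin))
```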
- the synthesis coefficient modulation unit 127 performs modulation on the synthesis coefficient from the synthesis coefficient calculation unit 125 in consideration of the motion coefficient from the motion detection unit 126, and calculates a post-motion compensation synthesis coefficient.
- the synthesis coefficient modulation unit 127 supplies the calculated post-motion compensation synthesis coefficient to the synthesis processing unit 128.
- the post-motion compensation synthesis coefficient is obtained by the following equation (16). However, here, the signal is clipped in the range of 0 to 1.0.
- Post-motion-compensation synthesis coefficient = synthesis coefficient − motion coefficient … (16)
- The synthesis processing unit 128 synthesizes (alpha-blends) the linear signal LIN1 from the first linearization processing unit 122 and the linear signal LIN2 from the second linearization processing unit 124 using the post-motion-compensation synthesis coefficient from the synthesis coefficient modulation unit 127, and outputs the resulting image signal as an HDR (High Dynamic Range) synthesized signal.
- the combined image signal is obtained by the following equation (17).
- Post-synthesis image signal = (LIN2 − LIN1) × post-motion-compensation synthesis coefficient + LIN1 … (17)
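- Equations (16) and (17) can be combined into one per-pixel sketch (the signal values used in the examples are hypothetical):

```python
def hdr_blend(lin1, lin2, synthesis_coef, motion_coef):
    """Modulate the synthesis coefficient by the motion coefficient
    (equation (16), clipped to the range 0 to 1.0), then alpha-blend
    LIN1 and LIN2 (equation (17))."""
    coef = max(0.0, min(1.0, synthesis_coef - motion_coef))  # equation (16)
    return (lin2 - lin1) * coef + lin1                       # equation (17)
```

When motion (or blinking) is detected, the coefficient is reduced and the output falls back toward LIN1.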
- the signal processing unit 104 is configured as described above.
- In step S11, the first addition processing unit 121 performs upper-limit clip processing on the values of the image signal T1 and the image signal T2 using predetermined clip values (CLIP_T1_1, CLIP_T2_1).
- In step S12, the first addition processing unit 121 calculates the above equation (6), thereby adding the image signal T1 and the image signal T2 after the upper-limit clip processing in step S11 to generate the addition signal SUM1.
- In step S13, the second addition processing unit 123 performs upper-limit clip processing on the values of the image signal T1 and the image signal T2 using clip values (CLIP_T1_2, CLIP_T2_2) different from those of the first addition process (S11, S12).
- In step S14, the second addition processing unit 123 calculates the above equation (10), thereby adding the image signal T1 and the image signal T2 after the upper-limit clip processing in step S13 to generate the addition signal SUM2.
- In step S15, the first linearization processing unit 122 calculates the above equations (7) to (9) to linearize the addition signal SUM1 obtained in step S12, and generates the linear signal LIN1.
- In step S16, the second linearization processing unit 124 calculates the above equations (11) to (13) to linearize the addition signal SUM2 obtained in step S14, and generates the linear signal LIN2.
- In step S17, the synthesis coefficient calculation unit 125 calculates the above equation (14) with reference to the image signal T1 to obtain the synthesis coefficient.
- In step S18, the motion detection unit 126 performs motion detection using the linear signal LIN1 obtained in step S15 and the linear signal LIN2 obtained in step S16, and calculates the motion coefficient by calculating the above equation (15).
- In step S19, the synthesis coefficient modulation unit 127 calculates the above equation (16), subtracting the motion coefficient obtained in step S18 from the synthesis coefficient obtained in step S17, to obtain the post-motion-compensation synthesis coefficient.
- In step S20, the synthesis processing unit 128 refers to the post-motion-compensation synthesis coefficient obtained in step S19 and calculates the above equation (17), thereby synthesizing the linear signal LIN1 obtained in step S15 and the linear signal LIN2 obtained in step S16 to generate a post-synthesis image signal.
- By performing synthesis in accordance with the motion-compensated synthesis coefficient, the linear signal LIN1 and the linear signal LIN2 are combined while avoiding the vicinity of the knee point Kp at which histogram spikes occur. That is, the positions of the histogram spikes are shifted, and before the long accumulation saturates (before a histogram spike occurs), the signal is smoothly switched from the linear signal LIN1 side to the linear signal LIN2 side, which has a different knee point Kp. In this way, spikes in the histogram can be suppressed.
- In step S21, the synthesis processing unit 128 outputs the synthesized image signal obtained in the process of step S20.
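The flow of steps S11 to S21 can be sketched per pixel as follows. This is a hedged reconstruction: equations (6) to (15) are not reproduced in this excerpt, so the clip-add-linearize forms, the knee points, and every threshold value below are assumptions modeled on the three-image equations given later; G is the exposure ratio gain that adjusts T2 to T1.

```python
def clip(v, hi):
    return min(v, hi)

def linearize(sum_sig, kp, g):
    # Below the knee point both exposures are unsaturated and the sum is
    # already linear; above it only T2 grows, so its slope is restored.
    if sum_sig <= kp:
        return sum_sig
    return kp + (sum_sig - kp) * (1.0 + g)

def hdr_two_frame(t1, t2, g=8.0,
                  clip_t1_1=1000.0, clip_t2_1=1000.0,  # S11 clip values
                  clip_t1_2=800.0, clip_t2_2=1000.0,   # S13 clip values
                  bld_lo=700.0, bld_hi=900.0,          # blend thresholds on T1
                  motion_lo=0.0, motion_hi=50.0):      # motion thresholds
    # S11-S12 / S13-S14: clip then add, with different T1 clip values.
    sum1 = clip(t1, clip_t1_1) + clip(t2, clip_t2_1)
    sum2 = clip(t1, clip_t1_2) + clip(t2, clip_t2_2)
    # S15-S16: linearize around each signal's own knee point.
    lin1 = linearize(sum1, clip_t1_1 * (1.0 + 1.0 / g), g)
    lin2 = linearize(sum2, clip_t1_2 * (1.0 + 1.0 / g), g)
    # S17: synthesis coefficient ramps 0 -> 1 as T1 approaches saturation.
    coef = min(max((t1 - bld_lo) / (bld_hi - bld_lo), 0.0), 1.0)
    # S18: motion coefficient from the difference of the two linear signals.
    motion = min(max((abs(lin1 - lin2) - motion_lo)
                     / (motion_hi - motion_lo), 0.0), 1.0)
    # S19-S20: modulate (equation (16)) and alpha-blend (equation (17)).
    coef = min(max(coef - motion, 0.0), 1.0)
    return (lin2 - lin1) * coef + lin1

# Consistent scene (T2 = T1 / G): both restorations agree, so the switch
# to LIN2 near saturation changes nothing.
print(hdr_two_frame(900.0, 112.5))
# Inconsistent exposures (e.g. an LED dark in T2): motion detection
# forces the output back onto the LIN1 side.
print(hdr_two_frame(900.0, 0.0))
```

Note that for a static, consistent scene LIN1 and LIN2 restore the same value even though CLIP_T1_2 < CLIP_T1_1; only the knee point (and hence the spike position) moves, which is the mechanism the text describes.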
- FIG. 18 shows an example of signal processing results.
- FIG. 18A shows the processing result obtained with the above-described current technique, for comparison with the processing result obtained with the present technique shown in FIG. 18B.
- FIG. 19 and FIG. 20 show examples of actual captured images, that is, the results of signal processing when the current technique is used. These images were captured in a backlit scene with the sun ahead in the traveling direction, and a histogram was obtained for the direction along the road. In FIG. 20, the pixel positions corresponding to the luminance level of the spikes in the histogram are highlighted (for example, in frame A6 in the figure), so it can be seen where the spikes occur in the captured image. When there is a bright light source such as the sun ahead in the traveling direction, a region appears in which spikes occur in an annular shape.
- FIG. 21 is a diagram illustrating a configuration example of the signal processing unit 104 when three sheets are combined.
- T1, T2, and T3 indicate photographed images corresponding to respective exposure times when photographing is performed three times within one frame.
- the exposure ratio gain for adjusting the brightness of T2 to T1 is defined as G1
- the exposure ratio gain for adjusting the brightness of T3 to T2 is defined as G2.
- the captured images corresponding to T1, T2, and T3 are also described as image signal T1, image signal T2, and image signal T3, respectively.
- the signal processing unit 104 includes a first addition processing unit 141, a first linearization processing unit 142, a second addition processing unit 143, a second linearization processing unit 144, a third addition processing unit 145, a third linearization processing unit 146, a first synthesis coefficient calculation unit 147, a first motion detection unit 148, a first synthesis coefficient modulation unit 149, a first synthesis processing unit 150, a second synthesis coefficient calculation unit 151, a second motion detection unit 152, a second synthesis coefficient modulation unit 153, and a second synthesis processing unit 154.
- the first addition processing unit 141 performs a first addition process of adding the image signal T1, the image signal T2, and the image signal T3 input thereto to generate an addition signal SUM1.
- the first addition processing unit 141 supplies the addition signal SUM1 obtained by the first addition processing to the first linearization processing unit 142.
- In the first addition process, after upper-limit clip processing is performed on the values of the image signals T1, T2, and T3 using predetermined clip values, the resulting signals are added.
- the clip values of the image signals T1, T2, and T3 in the first addition process are set.
- the clip value of the image signal T1 is CLIP_T1_1
- the clip value of the image signal T2 is CLIP_T2_1
- the clip value of the image signal T3 is CLIP_T3_1
- In the first addition process, the addition signal SUM1 is obtained by calculating the following equation (18).
- the first linearization processing unit 142 refers to the addition signal SUM1 from the first addition processing unit 141, performs a first linearization process, and generates a linear signal LIN1 that is linear with respect to brightness.
- the first linearization processing unit 142 supplies the linear signal LIN1 obtained by the first linearization processing to the first motion detection unit 148 and the first synthesis processing unit 150.
- the linear signal LIN1 is obtained by the following equations (21) to (23) according to the region of the addition signal SUM1 and the knee point Kp (KP1_1, KP2_1).
- When KP2_1 < SUM1: LIN1 = KP1_1 + (KP2_1 - KP1_1) × (1 + G1 × G2 / (1 + G2)) + (SUM1 - KP2_1) × (1 + G2 + G1 × G2) ... (23)
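Equations (18) to (23) can be checked numerically. The sketch below assumes that equation (18) is a per-signal upper clip followed by addition, and that the knee points KP1_1 and KP2_1 of equations (19) and (20) take the same form as equations (31) and (32) given later; with consistent inputs (T2 = T1/G1, T3 = T2/G2) the piecewise formulas then restore a signal exactly linear in the amount of light.

```python
def linearize3(t1, t2, t3, g1, g2, c1, c2, c3):
    # Equation (18) (assumed form): clip each exposure, then add.
    sum1 = min(t1, c1) + min(t2, c2) + min(t3, c3)
    # Knee points, assumed analogous to equations (31) and (32).
    kp1 = c1 * (1.0 + 1.0 / g1 + 1.0 / (g1 * g2))
    kp2 = c1 + c2 * (1.0 + 1.0 / g2)
    if sum1 <= kp1:
        # Equation (21): nothing clipped, the sum is already linear.
        return sum1
    if sum1 <= kp2:
        # Equation (22): T1 clipped, slope restored for the T2/T3 part.
        return kp1 + (sum1 - kp1) * (1.0 + g1 * g2 / (1.0 + g2))
    # Equation (23): T1 and T2 clipped, only T3 still tracks the light.
    return (kp1 + (kp2 - kp1) * (1.0 + g1 * g2 / (1.0 + g2))
            + (sum1 - kp2) * (1.0 + g2 + g1 * g2))

# With G1 = G2 = 4, LIN1 should equal light * (1 + 1/4 + 1/16) in all
# three regions ("light" is the T1-equivalent level before clipping).
for light in (1000.0, 2000.0, 8000.0):
    lin = linearize3(light, light / 4, light / 16, 4.0, 4.0,
                     1000.0, 1000.0, 1000.0)
    print(round(abs(lin - light * 1.3125), 6))  # 0.0 in each region
```

The three sample light levels land in the first, second, and third regions respectively, confirming that the segment slopes (1 + G1·G2/(1+G2)) and (1 + G2 + G1·G2) make the piecewise curve continuous and linear.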
- the second addition processing unit 143 performs a second addition process of adding the image signal T1, the image signal T2, and the image signal T3 input thereto to generate an addition signal SUM2.
- the second addition processing unit 143 supplies the addition signal SUM2 obtained by the second addition processing to the second linearization processing unit 144.
- In the second addition process, after upper-limit clip processing is performed on the values of the image signals T1, T2, and T3 using predetermined clip values, the resulting signals are added.
- the clip values of the image signals T1, T2, and T3 in the second addition process are set.
- the clip value of the image signal T1 is CLIP_T1_2
- the clip value of the image signal T2 is CLIP_T2_2
- the clip value of the image signal T3 is CLIP_T3_2
- In the second addition process, the addition signal SUM2 is obtained by calculating the following equation (24).
- the second linearization processing unit 144 refers to the addition signal SUM2 from the second addition processing unit 143, performs a second linearization process, and generates a linear signal LIN2 that is linear with respect to brightness.
- the second linearization processing unit 144 supplies the linear signal LIN2 obtained by the second linearization processing to the first motion detection unit 148, the first synthesis processing unit 150, and the second motion detection unit 152.
- the linear signal LIN2 is obtained by the following equations (27) to (29) according to the region of the addition signal SUM2 and the knee point Kp (KP1_2, KP2_2).
- the third addition processing unit 145 performs a third addition process of adding the image signal T1, the image signal T2, and the image signal T3 input thereto to generate an addition signal SUM3.
- the third addition processing unit 145 supplies the addition signal SUM3 obtained by the third addition processing to the third linearization processing unit 146.
- In the third addition process, after upper-limit clip processing is performed on the values of the image signals T1, T2, and T3 using predetermined clip values, the resulting signals are added.
- the clip values of the image signals T1, T2, and T3 in the third addition process are set.
- the clip value of the image signal T1 is CLIP_T1_3
- the clip value of the image signal T2 is CLIP_T2_3
- the clip value of the image signal T3 is CLIP_T3_3
- In the third addition process, the addition signal SUM3 is obtained by calculating the following equation (30).
- the third linearization processing unit 146 performs the third linearization processing with reference to the addition signal SUM3 from the third addition processing unit 145, and generates a linear signal LIN3 that is linear with respect to brightness.
- the third linearization processing unit 146 supplies the linear signal LIN3 obtained by the third linearization processing to the second motion detection unit 152 and the second synthesis processing unit 154.
- KP1_3 = CLIP_T1_3 × (1 + 1 / G1 + 1 / (G1 × G2)) ... (31)
- KP2_3 = CLIP_T1_3 + CLIP_T2_3 × (1 + 1 / G2) ... (32)
- the linear signal LIN3 is obtained by the following equations (33) to (35) according to the region of the addition signal SUM3 and the knee point Kp (KP1_3, KP2_3).
- When KP2_3 < SUM3: LIN3 = KP1_3 + (KP2_3 - KP1_3) × (1 + G1 × G2 / (1 + G2)) + (SUM3 - KP2_3) × (1 + G2 + G1 × G2) ... (35)
- the first synthesis coefficient calculation unit 147 calculates a first synthesis coefficient for synthesizing the linear signal LIN1 and the linear signal LIN2 with reference to the image signal T1.
- the first synthesis coefficient calculation unit 147 supplies the calculated first synthesis coefficient to the first synthesis coefficient modulation unit 149.
- Assuming that the threshold for starting the synthesis (blending) of the linear signal LIN2 is BLD_TH_L_LOW, and that the threshold at which the synthesis ratio (blend rate) of the linear signal LIN2 reaches 1.0 (100%) is BLD_TH_L_HIGH, the first synthesis coefficient is obtained by the following equation (36). However, here, the result is clipped to the range of 0 to 1.0.
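Equation (36) itself is not reproduced in this excerpt; assuming it is the usual linear ramp between the two thresholds on the long-accumulation level, clipped to 0 to 1.0 as stated, it can be sketched as:

```python
def first_synthesis_coefficient(t1, bld_th_l_low, bld_th_l_high):
    # Assumed form of equation (36): a linear ramp on the image signal T1
    # (long accumulation), clipped to [0, 1.0] as the text specifies.
    coef = (t1 - bld_th_l_low) / (bld_th_l_high - bld_th_l_low)
    return min(max(coef, 0.0), 1.0)

print(first_synthesis_coefficient(600.0, 700.0, 900.0))  # 0.0 (LIN1 only)
print(first_synthesis_coefficient(800.0, 700.0, 900.0))  # 0.5 (mid-blend)
print(first_synthesis_coefficient(950.0, 700.0, 900.0))  # 1.0 (LIN2 only)
```

The threshold values are illustrative; the patent only requires BLD_TH_L_HIGH to sit below the level at which T1 saturates so the switch to LIN2 completes before the spike can occur.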
- the first motion detection unit 148 performs motion determination by defining the difference between the linear signal LIN1 from the first linearization processing unit 142 and the linear signal LIN2 from the second linearization processing unit 144 as the amount of motion. At this time, in order to distinguish signal noise from the blinking of a fast-blinking body such as an LED, the first motion detection unit 148 calculates the first motion coefficient by comparing the amount of motion with the amount of noise expected from the sensor characteristics. The first motion detection unit 148 supplies the calculated first motion coefficient to the first synthesis coefficient modulation unit 149.
- the first motion coefficient is obtained by the following equation (37). Here, the result is clipped to the range of 0 to 1.0.
- the first synthesis coefficient modulation unit 149 modulates the first synthesis coefficient from the first synthesis coefficient calculation unit 147 in consideration of the first motion coefficient from the first motion detection unit 148, and calculates the first motion-compensated synthesis coefficient.
- the first synthesis coefficient modulation unit 149 supplies the calculated first motion-compensated synthesis coefficient to the first synthesis processing unit 150.
- the first motion-compensated synthesis coefficient is obtained by the following equation (38). However, here, the signal is clipped in the range of 0 to 1.0.
- the first synthesis processing unit 150 synthesizes (alpha-blends) the linear signal LIN1 from the first linearization processing unit 142 and the linear signal LIN2 from the second linearization processing unit 144 using the first motion-compensated synthesis coefficient from the first synthesis coefficient modulation unit 149.
- the first synthesis processing unit 150 supplies the synthesis signal BLD1 obtained as a result of the synthesis to the second synthesis processing unit 154.
- the composite signal BLD1 is obtained by the following equation (39).
- Composite signal BLD1 = (LIN2 - LIN1) × First motion-compensated synthesis coefficient + LIN1 ... (39)
- the second synthesis coefficient calculation unit 151 calculates a second synthesis coefficient for synthesizing the synthesis signal BLD1 and the linear signal LIN3 with reference to the image signal T2.
- the second synthesis coefficient calculation unit 151 supplies the calculated second synthesis coefficient to the second synthesis coefficient modulation unit 153.
- Assuming that the threshold for starting the synthesis (blending) of the linear signal LIN3 is BLD_TH_H_LOW, and that the threshold at which the synthesis ratio (blend rate) of the linear signal LIN3 reaches 1.0 (100%) is BLD_TH_H_HIGH, the second synthesis coefficient is obtained by the following equation (40).
- the signal is clipped in the range of 0 to 1.0.
- the second motion detection unit 152 performs motion determination by defining the difference between the linear signal LIN2 from the second linearization processing unit 144 and the linear signal LIN3 from the third linearization processing unit 146 as the amount of motion. At this time, in order to distinguish signal noise from the blinking of a fast-blinking body such as an LED, the second motion detection unit 152 calculates the second motion coefficient by comparing the amount of motion with the amount of noise expected from the sensor characteristics. The second motion detection unit 152 supplies the calculated second motion coefficient to the second synthesis coefficient modulation unit 153.
- the second motion coefficient is obtained by the following equation (41). Here, the result is clipped to the range of 0 to 1.0.
- Second motion coefficient = (ABS(LIN2 - LIN3) × Normalized gain - MDET_TH_LOW) / (MDET_TH_HIGH - MDET_TH_LOW) ... (41)
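Equation (41) can be sketched as below. The normalizing gain and both thresholds are illustrative values; the text only specifies that the thresholds are chosen from the noise amount expected from the sensor characteristics, so that noise-level differences are not mistaken for motion.

```python
def second_motion_coefficient(lin2, lin3, normalize_gain,
                              mdet_th_low, mdet_th_high):
    # Equation (41): the absolute difference of the two linear signals,
    # scaled by a normalizing gain, is mapped between the motion-detection
    # thresholds and clipped to [0, 1.0].
    amount = abs(lin2 - lin3) * normalize_gain
    coef = (amount - mdet_th_low) / (mdet_th_high - mdet_th_low)
    return min(max(coef, 0.0), 1.0)

# A difference within the noise floor is not treated as motion...
print(second_motion_coefficient(1000.0, 995.0, 1.0, 10.0, 50.0))  # 0.0
# ...while a large difference saturates the motion coefficient.
print(second_motion_coefficient(1000.0, 900.0, 1.0, 10.0, 50.0))  # 1.0
```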
- the second synthesis coefficient modulation unit 153 modulates the second synthesis coefficient from the second synthesis coefficient calculation unit 151 in consideration of the second motion coefficient from the second motion detection unit 152, and calculates the second motion-compensated synthesis coefficient.
- the second synthesis coefficient modulation unit 153 supplies the calculated second motion-compensated synthesis coefficient to the second synthesis processing unit 154.
- the second motion-compensated synthesis coefficient is obtained by the following equation (43).
- the signal is clipped in the range of 0 to 1.0.
- Second motion-compensated synthesis coefficient = Second synthesis coefficient - Second motion coefficient ... (43)
- the second synthesis processing unit 154 synthesizes (alpha-blends) the synthesized signal BLD1 from the first synthesis processing unit 150 and the linear signal LIN3 from the third linearization processing unit 146 using the second motion-compensated synthesis coefficient from the second synthesis coefficient modulation unit 153, and outputs the resulting synthesized image signal as an HDR synthesized signal.
- the combined image signal is obtained by the following equation (44).
- Post-synthesis image signal = (LIN3 - BLD1) × Second motion-compensated synthesis coefficient + BLD1 ... (44)
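The two-stage cascade of equations (39), (43), and (44), together with the modulation of equations (38) and (43), can be sketched as follows; the scalar sample values are illustrative only:

```python
def clip01(v):
    return min(max(v, 0.0), 1.0)

def hdr_three_frame_blend(lin1, lin2, lin3,
                          coef1, motion1, coef2, motion2):
    """Sketch of the two-stage cascade: equations (38)/(39) produce BLD1,
    then equations (43)/(44) blend BLD1 with LIN3."""
    # Equation (38): first motion-compensated synthesis coefficient.
    a1 = clip01(coef1 - motion1)
    # Equation (39): first-stage alpha blend.
    bld1 = (lin2 - lin1) * a1 + lin1
    # Equation (43): second motion-compensated synthesis coefficient.
    a2 = clip01(coef2 - motion2)
    # Equation (44): final HDR synthesized signal.
    return (lin3 - bld1) * a2 + bld1

# Half-blend at both stages: BLD1 = 150.0, output = 275.0.
print(hdr_three_frame_blend(100.0, 200.0, 400.0, 0.5, 0.0, 0.5, 0.0))
```

Because each stage clips its coefficient independently, motion detected at either stage only ever pulls the result back toward the earlier (more reliable) signal.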
- In step S51, the first addition processing unit 141 performs upper-limit clip processing on the values of the image signal T1, the image signal T2, and the image signal T3 using predetermined clip values (CLIP_T1_1, CLIP_T2_1, CLIP_T3_1).
- In step S52, the first addition processing unit 141 calculates the above equation (18), adding the image signal T1, the image signal T2, and the image signal T3 after the upper-limit clip processing of step S51 to generate the addition signal SUM1.
- In step S53, the second addition processing unit 143 performs upper-limit clip processing using clip values (CLIP_T1_2, CLIP_T2_2, CLIP_T3_2) that differ from those of the first addition processing (S51, S52) at least in the value for the image signal T1.
- In step S54, the second addition processing unit 143 calculates the above equation (24), adding the image signal T1, the image signal T2, and the image signal T3 after the upper-limit clip processing of step S53 to generate the addition signal SUM2.
- In step S55, the third addition processing unit 145 performs upper-limit clip processing using clip values (CLIP_T1_3, CLIP_T2_3, CLIP_T3_3) that differ from those of the second addition processing (S53, S54) at least in the value for the image signal T2.
- In step S56, the third addition processing unit 145 calculates the above equation (30), adding the image signal T1, the image signal T2, and the image signal T3 after the upper-limit clip processing of step S55 to generate the addition signal SUM3.
- Here, the clip value (CLIP_T1_2) used in the second addition process (S53, S54) can be made lower than the clip value (CLIP_T1_1) used in the first addition process (S51, S52). Similarly, the clip value (CLIP_T2_3) used in the third addition process (S55, S56) can be made lower than the clip value (CLIP_T2_2) used in the second addition process (S53, S54).
- In step S57, the first linearization processing unit 142 calculates the above equations (19) to (23), linearizing the addition signal SUM1 obtained in step S52 to generate the linear signal LIN1.
- In step S58, the second linearization processing unit 144 calculates the above equations (25) to (29), linearizing the addition signal SUM2 obtained in step S54 to generate the linear signal LIN2.
- In step S59, the third linearization processing unit 146 calculates the above equations (31) to (35), linearizing the addition signal SUM3 obtained in step S56 to generate the linear signal LIN3.
- In step S60, the first synthesis coefficient calculation unit 147 calculates the first synthesis coefficient by calculating the above equation (36) with reference to the image signal T1.
- In step S61, the first motion detection unit 148 detects motion using the linear signal LIN1 obtained in step S57 and the linear signal LIN2 obtained in step S58, and calculates the first motion coefficient by calculating the above equation (37).
- In step S62, the first synthesis coefficient modulation unit 149 calculates the above equation (38), subtracting the first motion coefficient obtained in step S61 from the first synthesis coefficient obtained in step S60 to calculate the first motion-compensated synthesis coefficient.
- In step S63, the first synthesis processing unit 150 refers to the first motion-compensated synthesis coefficient obtained in step S62 and calculates the above equation (39), synthesizing the linear signal LIN1 obtained in step S57 and the linear signal LIN2 obtained in step S58 to generate the synthesized signal BLD1.
- By performing synthesis according to the first motion-compensated synthesis coefficient, the linear signal LIN1 and the linear signal LIN2 are synthesized while avoiding the vicinity of the knee point Kp where the histogram spike occurs.
- In step S64, the second synthesis coefficient calculation unit 151 calculates the second synthesis coefficient by calculating the above equation (40) with reference to the image signal T2.
- In step S65, the second motion detection unit 152 detects motion using the linear signal LIN2 obtained in step S58 and the linear signal LIN3 obtained in step S59, and calculates the second motion coefficient by calculating the above equations (41) and (42).
- In step S66, the second synthesis coefficient modulation unit 153 calculates the above equation (43), subtracting the second motion coefficient obtained in step S65 from the second synthesis coefficient obtained in step S64 to calculate the second motion-compensated synthesis coefficient.
- In step S67, the second synthesis processing unit 154 refers to the second motion-compensated synthesis coefficient obtained in step S66 and calculates the above equation (44), synthesizing the synthesized signal BLD1 obtained in step S63 and the linear signal LIN3 obtained in step S59 to generate a synthesized image signal.
- By performing synthesis according to the second motion-compensated synthesis coefficient, the synthesized signal BLD1 and the linear signal LIN3 are synthesized while avoiding the vicinity of the knee point Kp where the histogram spike occurs.
- In step S68, the second synthesis processing unit 154 outputs the synthesized image signal obtained in the process of step S67.
- FIG. 24 is a diagram for explaining the details of the first addition processing by the first addition processing unit 141 and the first linearization processing by the first linearization processing unit 142.
- clip processing using predetermined clip values is performed, and clip values CLIP_T1_1, CLIP_T2_1, and CLIP_T3_1 are set for the image signals T1, T2, and T3, respectively.
- the clip value of the image signal T1 is CLIP_T1_1.
- the clip value of the image signal T2 is CLIP_T2_1
- the clip value of the image signal T3 is CLIP_T3_1.
- the long accumulation, intermediate accumulation, and short accumulation image signals T1, T2, and T3 are clipped by independent clip values (CLIP_T1_1, CLIP_T2_1, CLIP_T3_1) according to the above equation (18). Then, the addition signal SUM1 is obtained by addition.
- the positions of the knee points Kp (KP1_1 and KP2_1 in FIG. 24), that is, the points where the slope of the addition signal SUM1 changes, are obtained by the above equations (19) and (20).
- In the first linearization process, a linear signal that is linear with respect to brightness (a linearly restored signal) is generated for each of the first to third regions by referring to the value of the addition signal SUM1.
- The first region (SUM1 ≤ KP1_1), below the level at which the image signal T1 (long accumulation) saturates, is a region in which the signal amounts of the image signal T1 (long accumulation), the image signal T2 (intermediate accumulation), and the image signal T3 (short accumulation) all change linearly with respect to the amount of light. In this first region, the addition signal SUM1 is used as the linear signal LIN1 as it is; that is, the linear signal LIN1 is obtained by the above equation (21).
- The second region (KP1_1 < SUM1 ≤ KP2_1), below the level at which the image signal T2 (intermediate accumulation) saturates, is a region in which the image signal T1 (long accumulation) is clipped so that its signal amount does not change even if the amount of light changes, while the signal amounts of the image signal T2 (intermediate accumulation) and the image signal T3 (short accumulation) change linearly with respect to the amount of light. In this second region, the linear signal LIN1 is obtained by the above equation (22).
- The third region (KP2_1 < SUM1), exceeding the level at which the image signal T2 (intermediate accumulation) saturates, is a region in which both the image signal T1 (long accumulation) and the image signal T2 (intermediate accumulation) are clipped so that their signal amounts do not change even if the amount of light changes, while the signal amount of the image signal T3 (short accumulation) changes linearly with respect to the amount of light. In this third region, the linear signal LIN1 is obtained by the above equation (23).
- the linear signal LIN1 that is a signal linear with respect to brightness is generated with reference to the addition signal SUM1 obtained in the first addition process.
- FIG. 25 is a diagram for explaining the details of the second addition processing by the second addition processing unit 143 and the second linearization processing by the second linearization processing unit 144.
- clip processing using predetermined clip values is performed, and clip values CLIP_T1_2, CLIP_T2_2, and CLIP_T3_2 are set for the image signals T1, T2, and T3, respectively.
- the clip value of the image signal T1 is CLIP_T1_2.
- the clip value of the image signal T2 is CLIP_T2_2
- the clip value of the image signal T3 is CLIP_T3_2.
- Compared with the clip values of FIG. 24, the clip values CLIP_T2_2 and CLIP_T3_2 set for the image signals T2 and T3 are the same as the clip values CLIP_T2_1 and CLIP_T3_1, while the clip value CLIP_T1_2 differs from the clip value CLIP_T1_1. Specifically, a clip value CLIP_T1_2 that is lower than the clip value CLIP_T1_1 is set as the clip value for the image signal T1 (long accumulation).
- the same value is set as the clip value of the image signal T2 (intermediate storage) and the image signal T3 (short storage).
- the long accumulation, intermediate accumulation, and short accumulation image signals T1, T2, and T3 are clipped with independent clip values (CLIP_T1_2, CLIP_T2_2, CLIP_T3_2) according to the above equation (24). Then, the addition signal SUM2 is obtained by addition.
- The first region (SUM2 ≤ KP1_2), below the level at which the image signal T1 (long accumulation) saturates, is a region in which the signal amounts of the image signal T1 (long accumulation), the image signal T2 (intermediate accumulation), and the image signal T3 (short accumulation) all change linearly with respect to the amount of light. In this first region, the addition signal SUM2 is used as the linear signal LIN2 as it is; that is, the linear signal LIN2 is obtained by the above equation (27).
- The second region (KP1_2 < SUM2 ≤ KP2_2) is a region in which the image signal T1 (long accumulation) is clipped so that its signal amount does not change even if the amount of light changes, while the signal amounts of the image signal T2 (intermediate accumulation) and the image signal T3 (short accumulation) change linearly with respect to the amount of light.
- The third region (KP2_2 < SUM2), exceeding the level at which the image signal T2 (intermediate accumulation) saturates, is a region in which both the image signal T1 (long accumulation) and the image signal T2 (intermediate accumulation) are clipped so that their signal amounts do not change even if the amount of light changes, while the signal amount of the image signal T3 (short accumulation) changes linearly with respect to the amount of light.
- the linear signal LIN2 that is a signal linear with respect to brightness is generated with reference to the addition signal SUM2 obtained by the second addition processing.
- In the third addition process, as in the first and second addition processes, the addition signal SUM3 is obtained by calculating the above equation (30).
- In the third linearization process, as in the first and second linearization processes, the knee points Kp (KP1_3, KP2_3) are obtained by the above equations (31) and (32), and the linear signal LIN3 is generated for each of the first to third regions by the above equations (33) to (35).
- FIG. 26 is a diagram illustrating details of suppression of histogram spikes according to the present technology.
- the horizontal axis in the figure represents brightness, and the change of the linear signal LIN1, the linear signal LIN2, and the combined signal BLD1 of these linear signals (LIN1, LIN2) is shown.
- the position of the signal at which the histogram spike occurs depends on the clip position of the signal before the addition signal SUM is generated (and on the knee point Kp obtained from it). Therefore, in the present technology, by setting a clip value in the second addition process different from the clip value used in the first addition process, the linear signal LIN1 and the linear signal LIN2 are generated with their histogram spike generation positions shifted from each other.
- That is, the clip values (CLIP_T1_1 and CLIP_T1_2) set for the image signal T1 (long accumulation) differ between the linear signal LIN1 shown in FIG. 24 and the linear signal LIN2 shown in FIG. 25, so the spike occurrence positions ("SP1" of LIN1 and "SP2" of LIN2) are shifted from each other.
- In other words, a signal (the linear signal LIN2) whose knee point Kp position is lowered according to the clip value is prepared in parallel, and the synthesis switches from the linear signal LIN1 side (dotted line A1 in the figure) to the linear signal LIN2 side (dotted line A3 in the figure) so as to avoid the vicinity of the knee points Kp ("SP1" of LIN1 and "SP2" of LIN2).
- In the synthesized signal BLD1 obtained by synthesizing the linear signal LIN1 and the linear signal LIN2, within the range of dotted lines B1 to B2, the synthesis ratio of the linear signal LIN2 changes from 0% to 100% while the synthesis ratio of the linear signal LIN1 changes from 100% to 0%. This synthesis ratio is determined by the first synthesis coefficient (the first motion-compensated synthesis coefficient).
- FIG. 27 shows details of such a synthesis coefficient.
- FIG. 27 is a diagram illustrating details of the synthesis coefficient used in the present technology.
- In the linear signal LIN1, since the histogram spike does not occur until the image signal T1 (long accumulation) is clipped by the clip value CLIP_T1_1, the synthesis ratio (first synthesis coefficient) between the linear signal LIN1 and the linear signal LIN2 is set by looking at the level of the image signal T1 (long accumulation).
- That is, the thresholds (BLD_TH_L_LOW, BLD_TH_L_HIGH) of the first synthesis coefficient are set. However, what value to set for the width of the synthesis region is arbitrary.
- Here, at BLD_TH_L_LOW, where the synthesis (blending) of the linear signal LIN1 and the linear signal LIN2 starts, the linear signal LIN2 side must already satisfy the condition that no histogram spike occurs. Therefore, in the present technology, a value obtained by subtracting from BLD_TH_L_LOW only the noise amount of the image signal T2 (intermediate accumulation) and the image signal T3 (short accumulation) in its vicinity is set as the clip value CLIP_T1_2.
- In this way, a value lower than the clip value (CLIP_T1_1) used in the first addition process is set as the clip value (CLIP_T1_2) of the image signal T1 used in the second addition process. Therefore, in the linear signal LIN2, the signal amount of the image signal T1 (long accumulation) is clipped earlier than in the linear signal LIN1.
- The signal amount lost by this clipping is estimated using the image signal T2 (intermediate accumulation) and the image signal T3 (short accumulation); however, a moving object or the like that appears bright only in the image signal T1 (long accumulation) may come out darker than in the linear signal LIN1, for which a higher clip value is set.
- Therefore, motion determination is performed between the linear signal LIN1 and the linear signal LIN2, and if there is motion, the first synthesis coefficient is controlled (modulated) so that the synthesis ratio on the safer (more reliable) linear signal LIN1 side is increased. Then, by synthesizing the linear signal LIN1 and the linear signal LIN2 using the first motion-compensated synthesis coefficient obtained in this way, it becomes possible to suppress, for example, the darkening of a moving object.
- For example, assume a case in which a signal region of the image signal T1 (long accumulation) that is used in the first addition process is not used in the second addition process. In that case, the more reliable information is not the linear signal LIN2 but the linear signal LIN1, so the first synthesis coefficient is controlled so that the synthesis ratio on the linear signal LIN1 side is increased.
- the linear signal LIN1 is compared with the linear signal LIN2, and if the difference is large, the linear signal LIN1 may be used. That is, when the linear signal LIN1 and the linear signal LIN2 are combined, the first combining coefficient is modulated so that the combining ratio of the signal with higher information reliability is increased.
- Although the first synthesis coefficient and the first motion-compensated synthesis coefficient for synthesizing the linear signal LIN1 and the linear signal LIN2 have been described here, the second synthesis coefficient and the second motion-compensated synthesis coefficient for synthesizing the synthesized signal BLD1 and the linear signal LIN3 can be controlled in the same way.
- Next, a case in which N images (N is an integer greater than or equal to 1) are synthesized will be described.
- the image signal T1 corresponds to a long accumulation image.
- the image signal T2 corresponds to a medium accumulation image
- the image signal T3 corresponds to a short accumulation image.
- the exposure time of the image signal T1 is S1
- the exposure time of the image signal T2 is S2
- the exposure time of the image signal T3 is S3. If the exposure time is similarly specified for the image signal T4 and thereafter, the exposure time of the image signal TN is SN.
- the clip value of the image signal T1 is CLIP_T1
- the clip value of the image signal T2 is CLIP_T2
- the clip value of the image signal T3 is CLIP_T3. If the clip value is designated in the same manner for the image signal T4 and thereafter, the clip value of the image signal TN is CLIP_TN.
- Regarding the knee points Kp, the point where the inclination of the addition signal SUM first changes when the image signal T1 is saturated is referred to as KP_1, and the point where the inclination of the addition signal SUM next changes when the image signal T2 is saturated is referred to as KP_2. Similarly, the points where the inclination of the addition signal SUM changes when the image signals T3, ..., T(N-1) are saturated are referred to as KP_3, ..., KP_(N-1).
- Regarding the linear signal LIN after linearization, the linear signal in the region SUM < KP_1 is referred to as LIN_1, the linear signal in the region KP_1 ≤ SUM < KP_2 as LIN_2, and the linear signal in the region KP_2 ≤ SUM < KP_3 as LIN_3. If the same relationship continues thereafter, the linear signal in the region KP_(N-1) ≤ SUM is referred to as LIN_N.
- Such a relationship can be illustrated as shown in FIG. 28, for example.
- clip values CLIP_T1, CLIP_T2, and CLIP_T3 are set for the image signals T1, T2, and T3 to be added.
- The addition signal SUM of the image signals T1, T2, and T3 changes its inclination at the knee point KP_1 corresponding to the clip value CLIP_T1 of the image signal T1 (the first change, C1 in the figure), and changes its inclination again at the knee point KP_2 corresponding to the clip value CLIP_T2 of the image signal T2 (the second change, C2 in the figure).
- When the addition signal SUM of the image signals T1, T2, and T3 is linearized, the linear signal LIN_1 is restored in the first region where SUM < KP_1, the linear signal LIN_2 is restored in the second region where KP_1 ≤ SUM < KP_2, and the linear signal LIN_3 is restored in the third region where KP_2 ≤ SUM.
- In FIG. 28, for simplicity of explanation, three image signals T1, T2, and T3, that is, the case of synthesizing three images, are shown as the image signals subjected to the addition process.
- Even when four or more image signals are added, the image signals are processed in the same manner, and a linear signal is restored from the addition signal SUM in accordance with the knee points Kp.
- The calculation formulas for converting the addition signal SUM into the linear signal LIN can be expressed as the following formulas (45) and (46).
- The following formula (45) is the calculation formula for obtaining the addition signal SUM.
- The linear signal LIN_m is expressed by the following formula (46) for 1 ≤ m ≤ N.
- the position of KP_m can be expressed as in the following equation (47) for 1 ⁇ m ⁇ N.
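Although formulas (45) to (47) themselves are given elsewhere in the specification, the structure they describe can be illustrated with a sketch. The following is a simplified model under assumed parameters: the exposure values S_n, the clip values CLIP_Tn, and the normalization of the restored linear signal are assumptions, and it presumes CLIP_T1/S1 < CLIP_T2/S2 < ..., so that T1 (long accumulation) saturates first.

```python
import bisect

def make_sum(light, S, CLIP):
    """Addition signal SUM for a scene light amount: each image signal
    T_n = min(light * S_n, CLIP_Tn) is clipped, then all are added."""
    return sum(min(light * s, c) for s, c in zip(S, CLIP))

def knee_points(S, CLIP):
    """KP_m: value of SUM at which image signal T_m saturates."""
    kps = []
    for m in range(len(S) - 1):          # KP_1 .. KP_(N-1)
        l_sat = CLIP[m] / S[m]           # light amount that saturates T_m
        kps.append(sum(CLIP[:m + 1]) + l_sat * sum(S[m + 1:]))
    return kps

def linearize(sum_val, S, CLIP):
    """Restore the linear signal LIN from SUM, piecewise by knee-point
    region: in region m, T_1..T_m are clipped, so the remaining slope
    with respect to the light amount is sum(S[m:])."""
    kps = knee_points(S, CLIP)
    m = bisect.bisect_right(kps, sum_val)        # region index 0 .. N-1
    light = (sum_val - sum(CLIP[:m])) / sum(S[m:])
    return light * sum(S)   # LIN on the scale of an unclipped addition
```

For example, with exposure ratios S = [16, 4, 1] and all clip values equal to 1000, a light amount of 100 clips T1, so `make_sum(100, S, CLIP)` returns 1500; `linearize(1500, S, CLIP)` then recovers 100 × (16 + 4 + 1) = 2100, the value an unclipped addition would have produced.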
- Because the signal amount of the long accumulation is estimated from the medium accumulation and the short accumulation, a moving object or the like that appears bright only in the long accumulation may become darker than in the case of simple addition. Therefore, in the present technology, motion correction processing is performed together.
- The present technology can be applied to any imaging apparatus, such as an in-vehicle camera or a surveillance camera.
- the subject to be photographed is not limited to the LED traffic light or the LED speed regulation sign, and the subject to be photographed can be a subject having a very large luminance difference or a blinking subject (for example, a light-emitting body that blinks rapidly).
- the present technology is particularly useful in a photographing apparatus that detects an obstacle using a histogram.
- the camera unit 10 shown in FIG. 13 can be configured as a stacked solid-state imaging device such as a back-illuminated CMOS image sensor.
- a semiconductor substrate 200A in which the pixel region 201 is formed and a semiconductor substrate 200B in which the signal processing circuit region 202 is formed can be stacked.
- the semiconductor substrate 200A and the semiconductor substrate 200B are electrically connected by, for example, a through via or a metal bond.
- FIG. 30 shows a detailed configuration of the pixel region 201 and the signal processing circuit region 202. In FIG. 30, the signal processing circuit region 202 includes a camera signal processing unit 211, signal processing units 212 to 214 that perform various signal processing, and the like.
- the camera signal processing unit 211 can be configured to include the above-described signal processing unit 104 (FIG. 13). That is, the camera signal processing unit 211 can perform the signal processing shown in the flowcharts of FIGS. 17 and 22 to 23.
- the camera signal processing unit 211 may include the delay line 103, the timing control unit 106, and the like.
- the pixel area 201 includes a pixel array portion of the image sensor 102 and the like.
- Alternatively, a semiconductor substrate 200C in which a memory region 203 is formed may be stacked between the semiconductor substrate 200A in which the pixel region 201 is formed and the semiconductor substrate 200B in which the signal processing circuit region 202 is formed.
- the signal processing circuit area 202 includes a camera signal processing unit 311 and signal processing units 312 to 314 that perform various signal processing.
- the memory region 203 includes memory units 321 and 322.
- the camera signal processing unit 311 includes the signal processing unit 104 (FIG. 13) and the like, similar to the camera signal processing unit 211 of FIG.
- For example, the delay line 103 is included in the memory region 203, so that the delay line 103 sequentially stores image data from the pixel region 201 (image sensor 102) and supplies it to the camera signal processing unit 311 (signal processing unit 104) as appropriate.
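As a rough sketch of this arrangement, the delay line can be thought of as a small ring buffer sitting between the pixel region and the signal processor. The buffer depth and the frame labels below are assumptions for illustration only.

```python
from collections import deque

class DelayLine:
    """Minimal sketch of the delay line 103: it buffers the most recent
    frames read out from the image sensor so that the signal processing
    unit can access the current frame together with the delayed frames
    (e.g. the long-, medium-, and short-accumulation images)."""

    def __init__(self, depth=3):
        self.frames = deque(maxlen=depth)  # oldest frames are evicted

    def push(self, frame):
        self.frames.append(frame)

    def snapshot(self):
        """Frames available to the signal processing unit, newest first."""
        return list(reversed(self.frames))

dl = DelayLine(depth=3)
for frame in ["T1", "T2", "T3", "next_T1"]:
    dl.push(frame)
```

After the fourth push the oldest frame has been evicted, and `dl.snapshot()` returns the three most recent frames, newest first, for the synthesis step.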
- the above-described series of processing can be executed by hardware or can be executed by software.
- a program constituting the software is installed in the computer.
- Here, the computer includes, for example, a computer incorporated in dedicated hardware, and a general-purpose personal computer capable of executing various functions by installing various programs.
- FIG. 33 is a block diagram illustrating a configuration example of hardware of a computer that executes the above-described series of processing by a program.
- In the computer 1000, a CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are connected to one another via a bus 1004.
- An input / output interface 1005 is further connected to the bus 1004.
- An input unit 1006, an output unit 1007, a recording unit 1008, a communication unit 1009, and a drive 1010 are connected to the input / output interface 1005.
- the input unit 1006 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 1007 includes a display, a speaker, and the like.
- the recording unit 1008 includes a hard disk, a nonvolatile memory, and the like.
- the communication unit 1009 includes a network interface.
- the drive 1010 drives a removable recording medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer 1000 configured as described above, the CPU 1001 loads, for example, the program stored in the recording unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executes it, whereby the above-described series of processing is performed.
- the program executed by the computer 1000 can be provided by being recorded on a removable recording medium 1011 as a package medium, for example.
- the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the recording unit 1008 via the input / output interface 1005 by attaching the removable recording medium 1011 to the drive 1010.
- the program can be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the recording unit 1008.
- the program can be installed in the ROM 1002 or the recording unit 1008 in advance.
- The program executed by the computer 1000 may be a program in which processing is performed in time series in the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
- The processing steps describing the program for causing the computer 1000 to perform various processes do not necessarily have to be processed in time series in the order described in the flowcharts, and may include processes executed in parallel or individually (for example, parallel processing or object-based processing).
- the program may be processed by one computer, or may be processed in a distributed manner by a plurality of computers. Furthermore, the program may be transferred to a remote computer and executed.
- In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), regardless of whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
- the embodiments of the present technology are not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
- the present technology can take a configuration of cloud computing in which one function is shared by a plurality of devices via a network and jointly processed.
- the technology according to the present disclosure can be applied to various products.
- The technology according to the present disclosure may be realized as a device mounted on any type of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, a robot, a construction machine, or an agricultural machine (tractor).
- FIG. 34 is a block diagram illustrating a schematic configuration example of a vehicle control system 7000 that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
- the vehicle control system 7000 includes a plurality of electronic control units connected via a communication network 7010.
- the vehicle control system 7000 includes a drive system control unit 7100, a body system control unit 7200, a battery control unit 7300, a vehicle exterior information detection unit 7400, a vehicle interior information detection unit 7500, and an integrated control unit 7600.
- The communication network 7010 connecting the plurality of control units may be, for example, an in-vehicle communication network conforming to an arbitrary standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).
- Each control unit includes a microcomputer that performs arithmetic processing according to various programs, a storage unit that stores the programs executed by the microcomputer and the parameters used for various calculations, and a drive circuit that drives the various devices to be controlled.
- Each control unit includes a network I/F for communicating with other control units via the communication network 7010, and a communication I/F for communicating with devices or sensors inside and outside the vehicle by wired or wireless communication.
- In FIG. 34, as the functional configuration of the integrated control unit 7600, a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning unit 7640, a beacon receiving unit 7650, an in-vehicle device I/F 7660, an audio/image output unit 7670, an in-vehicle network I/F 7680, and a storage unit 7690 are illustrated.
- other control units include a microcomputer, a communication I / F, a storage unit, and the like.
- the drive system control unit 7100 controls the operation of the device related to the drive system of the vehicle according to various programs.
- For example, the drive system control unit 7100 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, a braking device that generates the braking force of the vehicle, and the like.
- the drive system control unit 7100 may have a function as a control device such as ABS (Antilock Brake System) or ESC (Electronic Stability Control).
- a vehicle state detection unit 7110 is connected to the drive system control unit 7100.
- The vehicle state detection unit 7110 includes, for example, at least one of a gyro sensor that detects the angular velocity of the rotational motion of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting the operation amount of the accelerator pedal, the operation amount of the brake pedal, the steering angle of the steering wheel, the engine speed, the rotational speed of the wheels, and the like.
- the drive system control unit 7100 performs arithmetic processing using a signal input from the vehicle state detection unit 7110, and controls an internal combustion engine, a drive motor, an electric power steering device, a brake device, or the like.
- the body system control unit 7200 controls the operation of various devices mounted on the vehicle body according to various programs.
- the body system control unit 7200 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a headlamp, a back lamp, a brake lamp, a blinker, or a fog lamp.
- Radio waves transmitted from a portable device that substitutes for a key, or signals of various switches, can be input to the body system control unit 7200.
- the body system control unit 7200 receives input of these radio waves or signals, and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
- the battery control unit 7300 controls the secondary battery 7310 that is a power supply source of the drive motor according to various programs. For example, information such as battery temperature, battery output voltage, or remaining battery capacity is input to the battery control unit 7300 from a battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals, and controls the temperature adjustment of the secondary battery 7310 or the cooling device provided in the battery device.
- the outside information detection unit 7400 detects information outside the vehicle on which the vehicle control system 7000 is mounted.
- the outside information detection unit 7400 is connected to at least one of the imaging unit 7410 and the outside information detection unit 7420.
- the imaging unit 7410 includes at least one of a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras.
- The vehicle exterior information detection unit 7420 includes, for example, at least one of an environmental sensor for detecting the current weather or meteorological conditions, and a surrounding information detection sensor for detecting other vehicles, obstacles, pedestrians, and the like around the vehicle equipped with the vehicle control system 7000.
- the environmental sensor may be, for example, at least one of a raindrop sensor that detects rainy weather, a fog sensor that detects fog, a sunshine sensor that detects sunlight intensity, and a snow sensor that detects snowfall.
- the ambient information detection sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) device.
- the imaging unit 7410 and the outside information detection unit 7420 may be provided as independent sensors or devices, or may be provided as a device in which a plurality of sensors or devices are integrated.
- FIG. 35 shows an example of installation positions of the imaging unit 7410 and the vehicle outside information detection unit 7420.
- the imaging units 7910, 7912, 7914, 7916, and 7918 are provided at, for example, at least one of the front nose, the side mirror, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of the vehicle 7900.
- An imaging unit 7910 provided in the front nose and an imaging unit 7918 provided in the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 7900.
- Imaging units 7912 and 7914 provided in the side mirror mainly acquire an image of the side of the vehicle 7900.
- An imaging unit 7916 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 7900.
- The imaging unit 7918 provided on the upper part of the windshield in the vehicle interior is mainly used to detect a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
- FIG. 35 shows an example of shooting ranges of the respective imaging units 7910, 7912, 7914, and 7916.
- the imaging range a indicates the imaging range of the imaging unit 7910 provided in the front nose
- the imaging ranges b and c indicate the imaging ranges of the imaging units 7912 and 7914 provided in the side mirrors, respectively
- the imaging range d indicates the imaging range of the imaging unit 7916 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 7910, 7912, 7914, and 7916, an overhead image of the vehicle 7900 viewed from above is obtained.
- the vehicle outside information detection units 7920, 7922, 7924, 7926, 7928, and 7930 provided on the front, rear, sides, corners of the vehicle 7900 and the upper part of the windshield in the vehicle interior may be, for example, an ultrasonic sensor or a radar device.
- the vehicle outside information detection units 7920, 7926, and 7930 provided on the front nose, the rear bumper, the back door, and the windshield in the vehicle interior of the vehicle 7900 may be, for example, LIDAR devices.
- These outside information detection units 7920 to 7930 are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, and the like.
- The vehicle exterior information detection unit 7400 causes the imaging unit 7410 to capture an image outside the vehicle and receives the captured image data. The vehicle exterior information detection unit 7400 also receives detection information from the connected vehicle exterior information detection unit 7420. When the vehicle exterior information detection unit 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle exterior information detection unit 7400 causes it to transmit ultrasonic waves, electromagnetic waves, or the like, and receives information on the received reflected waves.
- The vehicle exterior information detection unit 7400 may perform object detection processing or distance detection processing for a person, a car, an obstacle, a sign, characters on a road surface, or the like based on the received information.
- the vehicle exterior information detection unit 7400 may perform environment recognition processing for recognizing rainfall, fog, road surface conditions, or the like based on the received information.
- the vehicle outside information detection unit 7400 may calculate a distance to an object outside the vehicle based on the received information.
- the outside information detection unit 7400 may perform image recognition processing or distance detection processing for recognizing a person, a car, an obstacle, a sign, a character on a road surface, or the like based on the received image data.
- The vehicle exterior information detection unit 7400 may perform processing such as distortion correction or alignment on the received image data, and may combine image data captured by different imaging units 7410 to generate an overhead image or a panoramic image.
- the vehicle exterior information detection unit 7400 may perform viewpoint conversion processing using image data captured by different imaging units 7410.
- the vehicle interior information detection unit 7500 detects vehicle interior information.
- a driver state detection unit 7510 that detects the driver's state is connected to the in-vehicle information detection unit 7500.
- Driver state detection unit 7510 may include a camera that captures an image of the driver, a biosensor that detects biometric information of the driver, a microphone that collects sound in the passenger compartment, and the like.
- the biometric sensor is provided, for example, on a seat surface or a steering wheel, and detects biometric information of an occupant sitting on the seat or a driver holding the steering wheel.
- The vehicle interior information detection unit 7500 may calculate the degree of fatigue or the degree of concentration of the driver based on the detection information input from the driver state detection unit 7510, or may determine whether the driver is dozing off.
- the vehicle interior information detection unit 7500 may perform a process such as a noise canceling process on the collected audio signal.
- the integrated control unit 7600 controls the overall operation in the vehicle control system 7000 according to various programs.
- An input unit 7800 is connected to the integrated control unit 7600.
- the input unit 7800 is realized by a device that can be input by a passenger, such as a touch panel, a button, a microphone, a switch, or a lever.
- For example, data obtained by voice recognition of speech input through the microphone may be input to the integrated control unit 7600.
- The input unit 7800 may be, for example, a remote control device using infrared rays or other radio waves, or may be an externally connected device, such as a mobile phone or a PDA (Personal Digital Assistant), compatible with the operation of the vehicle control system 7000.
- the input unit 7800 may be, for example, a camera.
- the passenger can input information using a gesture.
- data obtained by detecting the movement of the wearable device worn by the passenger may be input.
- the input unit 7800 may include, for example, an input control circuit that generates an input signal based on information input by a passenger or the like using the input unit 7800 and outputs the input signal to the integrated control unit 7600.
- a passenger or the like operates the input unit 7800 to input various data or instruct a processing operation to the vehicle control system 7000.
- the storage unit 7690 may include a ROM (Read Only Memory) that stores various programs executed by the microcomputer, and a RAM (Random Access Memory) that stores various parameters, calculation results, sensor values, and the like.
- the storage unit 7690 may be realized by a magnetic storage device such as an HDD (Hard Disc Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
- General-purpose communication I / F 7620 is a general-purpose communication I / F that mediates communication with various devices existing in the external environment 7750.
- The general-purpose communication I/F 7620 may implement a cellular communication protocol such as GSM (Global System for Mobile communications), WiMAX, LTE (Long Term Evolution), or LTE-A (LTE-Advanced), or another wireless communication protocol such as wireless LAN (also referred to as Wi-Fi (registered trademark)) or Bluetooth (registered trademark).
- the general-purpose communication I / F 7620 is connected to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or an operator-specific network) via, for example, a base station or an access point.
- The general-purpose communication I/F 7620 may also connect to a terminal existing in the vicinity of the vehicle (for example, a terminal of a driver, a pedestrian, or a store, or an MTC (Machine Type Communication) terminal) using, for example, P2P (Peer To Peer) technology.
- the dedicated communication I / F 7630 is a communication I / F that supports a communication protocol formulated for use in vehicles.
- The dedicated communication I/F 7630 may implement a standard protocol such as WAVE (Wireless Access in Vehicle Environment), which is a combination of the lower-layer IEEE 802.11p and the upper-layer IEEE 1609, DSRC (Dedicated Short Range Communications), or a cellular communication protocol.
- The dedicated communication I/F 7630 typically carries out V2X communication, a concept including one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, vehicle-to-home communication, and vehicle-to-pedestrian communication.
- The positioning unit 7640 receives, for example, a GNSS signal from a GNSS (Global Navigation Satellite System) satellite (for example, a GPS signal from a GPS (Global Positioning System) satellite), executes positioning, and generates position information including the latitude, longitude, and altitude of the vehicle.
- the positioning unit 7640 may specify the current position by exchanging signals with the wireless access point, or may acquire position information from a terminal such as a mobile phone, PHS, or smartphone having a positioning function.
- the beacon receiving unit 7650 receives, for example, radio waves or electromagnetic waves transmitted from a radio station installed on the road, and acquires information such as the current position, traffic jam, closed road, or required time. Note that the function of the beacon receiving unit 7650 may be included in the dedicated communication I / F 7630 described above.
- the in-vehicle device I / F 7660 is a communication interface that mediates the connection between the microcomputer 7610 and various in-vehicle devices 7760 present in the vehicle.
- the in-vehicle device I / F 7660 may establish a wireless connection using a wireless communication protocol such as a wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB).
- In addition, the in-vehicle device I/F 7660 may establish a wired connection such as USB (Universal Serial Bus), HDMI (High-Definition Multimedia Interface), or MHL (Mobile High-definition Link) via a connection terminal (and, if necessary, a cable).
- the in-vehicle device 7760 may include, for example, at least one of a mobile device or a wearable device that a passenger has, or an information device that is carried into or attached to the vehicle.
- In-vehicle device 7760 may include a navigation device that searches for a route to an arbitrary destination.
- In-vehicle device I / F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
- the in-vehicle network I / F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010.
- the in-vehicle network I / F 7680 transmits and receives signals and the like in accordance with a predetermined protocol supported by the communication network 7010.
- The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 according to various programs based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon receiving unit 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680.
- For example, the microcomputer 7610 may calculate a control target value of the driving force generation device, the steering mechanism, or the braking device based on the acquired information inside and outside the vehicle, and may output a control command to the drive system control unit 7100.
- For example, the microcomputer 7610 may perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions including collision avoidance or impact mitigation of the vehicle, following traveling based on the inter-vehicle distance, vehicle-speed-maintaining traveling, vehicle collision warning, vehicle lane departure warning, and the like. In addition, the microcomputer 7610 may perform cooperative control for the purpose of automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on the acquired information on the surroundings of the vehicle.
- The microcomputer 7610 may generate three-dimensional distance information between the vehicle and objects such as surrounding structures and persons based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon receiving unit 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680, and may create local map information including peripheral information on the current position of the vehicle.
- The microcomputer 7610 may also predict a danger such as a vehicle collision, the approach of a pedestrian or the like, or entry into a closed road based on the acquired information, and generate a warning signal.
- the warning signal may be, for example, a signal for generating a warning sound or lighting a warning lamp.
- the audio image output unit 7670 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying information to a vehicle occupant or the outside of the vehicle.
- an audio speaker 7710, a display unit 7720, and an instrument panel 7730 are illustrated as output devices.
- Display unit 7720 may include at least one of an on-board display and a head-up display, for example.
- the display portion 7720 may have an AR (Augmented Reality) display function.
- The output device may be another device such as headphones, a wearable device such as a glasses-type display worn by a passenger, a projector, or a lamp.
- When the output device is a display device, the display device visually displays results obtained by the various processes performed by the microcomputer 7610, or information received from other control units, in various formats such as text, images, tables, and graphs. When the output device is an audio output device, the audio output device converts an audio signal composed of reproduced audio data, acoustic data, or the like into an analog signal and outputs it audibly.
- At least two control units connected via the communication network 7010 may be integrated as one control unit.
- each control unit may be configured by a plurality of control units.
- the vehicle control system 7000 may include another control unit not shown.
- some or all of the functions of any of the control units may be given to other control units. That is, as long as information is transmitted and received via the communication network 7010, the predetermined arithmetic processing may be performed by any one of the control units.
- A sensor or device connected to one of the control units may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010.
- a computer program for realizing each function of the camera unit 10 according to the present embodiment described with reference to FIG. 13 can be implemented in any of the control units. A computer-readable recording medium storing such a computer program can also be provided.
- the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. Further, the above computer program may be distributed via a network, for example, without using a recording medium.
- the camera unit 10 can be applied to the integrated control unit 7600 of the application example illustrated in FIG.
- the signal processing unit 104 and the timing control unit 106 of the camera unit 10 correspond to the microcomputer 7610 of the integrated control unit 7600.
- by providing different clip values for long accumulation and short accumulation in order to suppress histogram spikes, the integrated control unit 7600 can reliably recognize blinking LED traffic lights and road signs, whose blinking response speed is high, even in scenes with a very large luminance difference such as a tunnel exit, and can accurately recognize obstacles such as preceding vehicles and pedestrians.
- at least some of the components of the camera unit 10 described with reference to FIG. 13 may be realized in a module (for example, an integrated circuit module configured on a single die) for the integrated control unit 7600 illustrated in FIG.
- the camera unit 10 described with reference to FIG. 13 may be realized by a plurality of control units of the vehicle control system 7000 illustrated in FIG.
- the present technology can also be configured as follows.
- (1) A signal processing device including: an addition unit that adds the signals of a plurality of images captured with different exposure times, using different saturation signal amounts; and a synthesis unit that synthesizes the signals of the plurality of images obtained as a result of the addition.
- (2) The signal processing device according to (1), in which the synthesis unit synthesizes the signals of the plurality of images obtained as a result of linearization in a region different from the neighborhood of the signal amount at which the slope of the addition-result image signal with respect to the light amount changes.
- (7) The signal processing device according to (6), in which, when synthesizing the signal of a first image obtained as a result of addition and linearization using a first saturation signal amount with the signal of a second image obtained as a result of addition and linearization using a second saturation signal amount lower than the first saturation signal amount, the synthesis coefficient calculation unit calculates the synthesis coefficient for synthesizing the signal of the first image and the signal of the second image according to the level of the signal of the set image for which the first saturation signal amount is set.
- (8) The signal processing device according to (7), in which the synthesis coefficient calculation unit calculates the synthesis coefficient so that, by the time the level of the signal of the set image reaches the first saturation signal amount, the synthesis ratio of the signal of the second image in the synthesized image signal obtained by synthesizing the signal of the first image and the signal of the second image becomes 100%.
- (9) The signal processing device according to (8), in which the slope of the addition-result image signal changes when the level of the signal of the set image reaches the first saturation signal amount.
- (10) The signal processing device according to (6), further including a synthesis coefficient modulation unit that modulates the synthesis coefficient on the basis of a motion detection result between the signals of the plurality of images, in which the synthesis unit synthesizes the signals of the plurality of images on the basis of the motion-compensated synthesis coefficient obtained as a result of the modulation.
- (11) The signal processing device according to (10), in which, when motion is detected between the signals of the plurality of images, the synthesis coefficient modulation unit modulates the synthesis coefficient so that the synthesis ratio of the signal of the image with higher information reliability among the signals of the plurality of images is increased.
- (12) The signal processing device according to (11), in which, when motion is detected between the signal of a first image obtained as a result of addition and linearization using a first saturation signal amount and the signal of a second image obtained using a second saturation signal amount lower than the first saturation signal amount, the synthesis coefficient modulation unit modulates the synthesis coefficient for synthesizing the two signals so that the synthesis ratio of the signal of the first image in the synthesized image signal is increased.
- (13) The signal processing device according to any one of (1) to (12), in which the plurality of images include a first exposure image having a first exposure time and a second exposure image having a second exposure time different from the first exposure time, and a control unit performs control so that the second exposure image is captured following the first exposure image and minimizes the interval between the end of exposure of the first exposure image and the start of exposure of the second exposure image.
- (14) An imaging device including a synthesis unit that synthesizes the signals of a plurality of images obtained as a result of the addition.
- (15) A signal processing method including a step of synthesizing the signals of a plurality of images obtained as a result of the addition.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Computer Hardware Design (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
2. Embodiment of the present technology
3. Modification of the embodiment of the present technology
4. Details of the signal processing of the present technology
5. Calculation formulas for N-frame synthesis
6. Configuration example of the solid-state imaging device
7. Configuration example of the computer
8. Application examples
In recent years, in-vehicle cameras have increasingly been installed in automobiles for the purpose of realizing advanced driving control such as autonomous driving. To ensure safety, however, an in-vehicle camera must maintain visibility even under conditions with a very large luminance difference, such as a tunnel exit, so a technique for widening the dynamic range while suppressing blown-out highlights is required.
Also in recent years, the light sources of traffic lights and road signs have been shifting from light bulbs to LEDs. Because an LED blinks far faster than a conventional bulb, capturing an LED traffic light or sign with an imaging device causes flicker, and the light can appear to be off. This is a major issue both for the evidentiary value of drive recorders and for driving automation.
Furthermore, realizing autonomous driving requires technology for recognizing obstacles (objects) such as a preceding vehicle in the direction of travel or a pedestrian crossing the road. For example, if detection of an obstacle ahead of the vehicle is delayed, the operation of automatic braking may be delayed.
Patent Literature 1 mentioned above proposes a method of suppressing blown-out highlights and expanding the apparent dynamic range by combining images captured with different exposure amounts. In this method, as shown in Fig. 4, the luminance of the long-exposure image (long-accumulation image) is referenced: if the brightness is below a predetermined threshold, the long-exposure image (long-accumulation image) is output, and if it is above the threshold, the short-exposure image (short-accumulation image) is output, making it possible to generate an image with a wide dynamic range.
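The threshold-based switching described for Patent Literature 1 can be sketched per pixel as follows. This is a minimal illustration, not the patent's implementation; the function and parameter names are assumptions.

```python
def hdr_select(long_px, short_px, threshold, exposure_ratio):
    """Per-pixel sketch of the Patent Literature 1 approach (Fig. 4):
    reference the long-accumulation pixel's brightness, output the
    long-accumulation value when it is below the threshold, and otherwise
    output the short-accumulation value scaled to the same linear scale."""
    if long_px < threshold:
        return float(long_px)
    # exposure_ratio = long exposure time / short exposure time (assumed)
    return float(short_px) * exposure_ratio
```

Scaling the short-accumulation branch by the exposure ratio puts both branches on a common linear scale, which is what yields the widened dynamic range.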
As a method for simultaneously solving the blown-out highlights shown in Fig. 1 and the LED flicker shown in Fig. 2, there is a method (hereinafter, the current-technique method) that generates a dynamic-range-expanded signal using the addition signal of multiple images captured with different exposure times.
P2 = FULLSCALE × 1 / g1 ... (2)
P = Plo ... (4)
P = Kp1 + (Plo - Kp1) × (1 + g1) ... (5)
(2) For example, for only the long-accumulation image signal among the multiple image signals, the clip value is lowered to prepare, in parallel, a signal in which the position of the knee point Kp — the point where the slope of the addition signal changes — is pulled down. By switching between the signals while avoiding the neighborhood of the knee point Kp, where histogram spikes occur, the steep change in characteristics at the knee point Kp is suppressed.
(3) At that time, motion compensation is also performed, which suppresses the dimming of fast-blinking subjects that could otherwise occur when the position of the knee point Kp is pulled down.
Fig. 13 is a block diagram showing a configuration example of an embodiment of a camera unit as an imaging device to which the present technology is applied.
Next, shutter control by the timing control unit 106 of Fig. 13 will be described with reference to Figs. 14 and 15.
Fig. 16 is a diagram showing a configuration example of the signal processing unit 104 of Fig. 13.
LIN1 = SUM1 ... (8)
LIN1 = KP1_1 + (SUM1 - KP1_1) × (1 + G1) ... (9)
LIN2 = SUM2 ... (12)
LIN2 = KP1_2 + (SUM2 - KP1_2) × (1 + G1) ... (13)
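Equations (8)/(9) and (12)/(13) describe the same two-segment linearization applied with different knee points (KP1_1 for the normal clip, KP1_2 for the lowered clip). A minimal sketch, with names assumed from the formulas above:

```python
def linearize(sum_val, kp, g1):
    """Two-segment linearization per equations (8)/(9) (and (12)/(13)
    with the lowered knee point KP1_2): below the knee point, the addition
    signal SUM passes through unchanged; above it, the slope reduction
    caused by the clipped contribution is undone by re-expanding the
    excess over the knee point with (1 + G1)."""
    if sum_val <= kp:
        return float(sum_val)                  # eq. (8) / (12)
    return kp + (sum_val - kp) * (1.0 + g1)    # eq. (9) / (13)
```

Calling `linearize(SUM1, KP1_1, G1)` yields LIN1 and `linearize(SUM2, KP1_2, G1)` yields LIN2, so that both candidate signals are linear in light amount before synthesis.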
Next, with reference to the flowchart of Fig. 17, the flow of the signal processing for two-frame synthesis executed by the signal processing unit 104 of Fig. 16 will be described.
Next, the processing results of the signal processing (Fig. 17) by the signal processing unit 104 of Fig. 16 will be described with reference to Figs. 18 to 20.
Fig. 21 is a diagram showing a configuration example of the signal processing unit 104 for three-frame synthesis.
KP2_1 = CLIP_T1_1 + CLIP_T2_1 × (1 + 1 / G2) ... (20)
LIN1 = SUM1 ... (21)
LIN1 = KP1_1 + (SUM1 - KP1_1) × (1 + G1 × G2 / (1 + G2)) ... (22)
LIN1 = KP1_1 + (KP2_1 - KP1_1) × (1 + G1 × G2 / (1 + G2)) + (SUM1 - KP2_1) × (1 + G2 + G1 × G2) ... (23)
KP2_2 = CLIP_T1_2 + CLIP_T2_2 × (1 + 1 / G2) ... (26)
LIN2 = SUM2 ... (27)
LIN2 = KP1_2 + (SUM2 - KP1_2) × (1 + G1 × G2 / (1 + G2)) ... (28)
LIN2 = KP1_2 + (KP2_2 - KP1_2) × (1 + G1 × G2 / (1 + G2)) + (SUM2 - KP2_2) × (1 + G2 + G1 × G2) ... (29)
KP2_3 = CLIP_T1_3 + CLIP_T2_3 × (1 + 1 / G2) ... (32)
LIN3 = SUM3 ... (33)
LIN3 = KP1_3 + (SUM3 - KP1_3) × (1 + G1 × G2 / (1 + G2)) ... (34)
LIN3 = KP1_3 + (KP2_3 - KP1_3) × (1 + G1 × G2 / (1 + G2)) + (SUM3 - KP2_3) × (1 + G2 + G1 × G2) ... (35)
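For three-frame synthesis, equations (20) through (35) repeat the same pattern with two knee points per signal. The sketch below (names assumed from the formulas) writes the top segment so that it joins the middle segment continuously at the second knee point, consistent with equations (21) and (22):

```python
def knee_point2(clip_t1, clip_t2, g2):
    """Second knee point per equations (20)/(26)/(32)."""
    return clip_t1 + clip_t2 * (1.0 + 1.0 / g2)

def linearize3(sum_val, kp1, kp2, g1, g2):
    """Three-segment linearization per equations (21)-(23) (and likewise
    (27)-(29) and (33)-(35)), using the segment gains from the formulas:
    unity slope below KP1, a re-expanded slope between KP1 and KP2, and a
    steeper re-expansion above KP2."""
    slope_mid = 1.0 + g1 * g2 / (1.0 + g2)     # slope factor of eq. (22)
    slope_hi = 1.0 + g2 + g1 * g2              # slope factor of eq. (23)
    if sum_val <= kp1:
        return float(sum_val)                  # eq. (21)
    if sum_val <= kp2:
        return kp1 + (sum_val - kp1) * slope_mid
    # top segment, anchored so the curve is continuous at KP2
    return kp1 + (kp2 - kp1) * slope_mid + (sum_val - kp2) * slope_hi
```

Each of the three addition signals SUM1 to SUM3 is passed through this map with its own knee points (KP1_n, KP2_n), producing the linearized signals LIN1 to LIN3 used in the subsequent synthesis.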
Next, with reference to the flowcharts of Figs. 22 and 23, the flow of the signal processing for three-frame synthesis executed by the signal processing unit 104 of Fig. 21 will be described.
Fig. 24 is a diagram explaining the details of the first addition processing by the first addition processing unit 141 and the first linearization processing by the first linearization processing unit 142.
Fig. 25 is a diagram explaining the details of the second addition processing by the second addition processing unit 143 and the second linearization processing by the second linearization processing unit 144.
Fig. 26 is a diagram explaining the details of histogram spike suppression by the present technology.
Fig. 27 is a diagram explaining the details of the synthesis coefficients used in the present technology.
Next, the details of the motion-compensated synthesis coefficients used in the present technology will be described.
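The synthesis and its motion compensation can be summarized as: a synthesis coefficient blends the two linearized signals, and detected motion between them shifts the coefficient toward the more reliable signal. The following is an illustrative sketch only; the hard all-or-nothing fallback and all names are assumptions, since the patent only requires that the first image's ratio be increased:

```python
def blend(lin1, lin2, alpha):
    """Synthesize two linearized signals; alpha is the synthesis ratio of
    the second image's signal (the one with the lowered saturation
    signal amount), and (1 - alpha) that of the first image's signal."""
    return alpha * lin2 + (1.0 - alpha) * lin1

def motion_compensated_alpha(alpha, motion_detected):
    """Modulate the synthesis coefficient from the motion detection
    result: on motion, fall back entirely to the first image's signal
    (a simplification of increasing its synthesis ratio)."""
    return 0.0 if motion_detected else alpha
```

With this modulation, the neighborhood of the knee point is avoided in static regions, while fast-blinking subjects in moving regions are not dimmed by the lowered knee point.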
An addition unit that adds the signals of a plurality of images captured with different exposure times, using different saturation signal amounts; and
a synthesis unit that synthesizes the signals of the plurality of images obtained as a result of the addition.
A signal processing device including the above.
(2)
The signal processing device according to (1), further including a linearization unit that linearizes the image signal obtained as a result of the addition,
in which the synthesis unit synthesizes the signals of the plurality of images obtained as a result of the linearization in a region different from the neighborhood of the signal amount at which the slope of the addition-result image signal with respect to the light amount changes.
(3)
The signal processing device according to (2), in which the signal amount at which the slope changes varies according to the saturation signal amount.
(4)
The signal processing device according to any one of (1) to (3), in which, for each of the signals of the plurality of images to be added, the saturation signal amount is set so as to differ for at least one image signal.
(5)
The signal processing device according to (4), in which, among the signals of the plurality of images, the signal of the image having the longer exposure time is set to have the different saturation signal amount.
(6)
The signal processing device according to any one of (2) to (5), further including a synthesis coefficient calculation unit that calculates, on the basis of a reference image signal among the signals of the plurality of images, a synthesis coefficient indicating the synthesis ratio of the signals of the plurality of images obtained as a result of the linearization,
in which the synthesis unit synthesizes the signals of the plurality of images on the basis of the synthesis coefficient.
(7)
The signal processing device according to (6), in which, when synthesizing the signal of a first image obtained as a result of addition and linearization using a first saturation signal amount with the signal of a second image obtained as a result of addition and linearization using a second saturation signal amount lower than the first saturation signal amount, the synthesis coefficient calculation unit calculates the synthesis coefficient for synthesizing the signal of the first image and the signal of the second image according to the level of the signal of the set image for which the first saturation signal amount is set.
(8)
The signal processing device according to (7), in which the synthesis coefficient calculation unit calculates the synthesis coefficient so that, by the time the level of the signal of the set image reaches the first saturation signal amount, the synthesis ratio of the signal of the second image in the synthesized image signal obtained by synthesizing the signal of the first image and the signal of the second image becomes 100%.
(9)
The signal processing device according to (8), in which the slope of the addition-result image signal changes when the level of the signal of the set image reaches the first saturation signal amount.
(10)
The signal processing device according to (6), further including a synthesis coefficient modulation unit that modulates the synthesis coefficient on the basis of a motion detection result between the signals of the plurality of images,
in which the synthesis unit synthesizes the signals of the plurality of images on the basis of the motion-compensated synthesis coefficient obtained as a result of the modulation.
(11)
The signal processing device according to (10), in which, when motion is detected between the signals of the plurality of images, the synthesis coefficient modulation unit modulates the synthesis coefficient so that the synthesis ratio of the signal of the image with higher information reliability among the signals of the plurality of images is increased.
(12)
The signal processing device according to (11), in which, when motion is detected between the signal of a first image obtained as a result of addition and linearization using a first saturation signal amount and the signal of a second image obtained as a result of addition and linearization using a second saturation signal amount lower than the first saturation signal amount, the synthesis coefficient modulation unit modulates the synthesis coefficient for synthesizing the signal of the first image and the signal of the second image so that the synthesis ratio of the signal of the first image in the synthesized image signal obtained by synthesizing them is increased.
(13)
The signal processing device according to any one of (1) to (12), further including a control unit that controls the exposure times of the plurality of images,
in which the plurality of images include a first exposure image having a first exposure time and a second exposure image having a second exposure time different from the first exposure time, and
the control unit performs control so that the second exposure image is captured following the first exposure image, and minimizes the interval between the end of exposure of the first exposure image and the start of exposure of the second exposure image.
(14)
An imaging device including:
an image generation unit that generates a plurality of images captured with different exposure times;
an addition unit that adds the signals of the plurality of images using different saturation signal amounts; and
a synthesis unit that synthesizes the signals of the plurality of images obtained as a result of the addition.
(15)
A signal processing method including the steps of:
adding the signals of a plurality of images captured with different exposure times, using different saturation signal amounts; and
synthesizing the signals of the plurality of images obtained as a result of the addition.
Claims (15)
- A signal processing device including: an addition unit that adds the signals of a plurality of images captured with different exposure times, using different saturation signal amounts; and a synthesis unit that synthesizes the signals of the plurality of images obtained as a result of the addition.
- The signal processing device according to claim 1, further including a linearization unit that linearizes the image signal obtained as a result of the addition, in which the synthesis unit synthesizes the signals of the plurality of images obtained as a result of the linearization in a region different from the neighborhood of the signal amount at which the slope of the addition-result image signal with respect to the light amount changes.
- The signal processing device according to claim 2, in which the signal amount at which the slope changes varies according to the saturation signal amount.
- The signal processing device according to claim 2, in which, for each of the signals of the plurality of images to be added, the saturation signal amount is set so as to differ for at least one image signal.
- The signal processing device according to claim 4, in which, among the signals of the plurality of images, the signal of the image having the longer exposure time is set to have the different saturation signal amount.
- The signal processing device according to claim 2, further including a synthesis coefficient calculation unit that calculates, on the basis of a reference image signal among the signals of the plurality of images, a synthesis coefficient indicating the synthesis ratio of the signals of the plurality of images obtained as a result of the linearization, in which the synthesis unit synthesizes the signals of the plurality of images on the basis of the synthesis coefficient.
- The signal processing device according to claim 6, in which, when synthesizing the signal of a first image obtained as a result of addition and linearization using a first saturation signal amount with the signal of a second image obtained as a result of addition and linearization using a second saturation signal amount lower than the first saturation signal amount, the synthesis coefficient calculation unit calculates the synthesis coefficient for synthesizing the signal of the first image and the signal of the second image according to the level of the signal of the set image for which the first saturation signal amount is set.
- The signal processing device according to claim 7, in which the synthesis coefficient calculation unit calculates the synthesis coefficient so that, by the time the level of the signal of the set image reaches the first saturation signal amount, the synthesis ratio of the signal of the second image in the synthesized image signal obtained by synthesizing the signal of the first image and the signal of the second image becomes 100%.
- The signal processing device according to claim 8, in which the slope of the addition-result image signal changes when the level of the signal of the set image reaches the first saturation signal amount.
- The signal processing device according to claim 6, further including a synthesis coefficient modulation unit that modulates the synthesis coefficient on the basis of a motion detection result between the signals of the plurality of images, in which the synthesis unit synthesizes the signals of the plurality of images on the basis of the motion-compensated synthesis coefficient obtained as a result of the modulation.
- The signal processing device according to claim 10, in which, when motion is detected between the signals of the plurality of images, the synthesis coefficient modulation unit modulates the synthesis coefficient so that the synthesis ratio of the signal of the image with higher information reliability among the signals of the plurality of images is increased.
- The signal processing device according to claim 11, in which, when motion is detected between the signal of a first image obtained as a result of addition and linearization using a first saturation signal amount and the signal of a second image obtained as a result of addition and linearization using a second saturation signal amount lower than the first saturation signal amount, the synthesis coefficient modulation unit modulates the synthesis coefficient for synthesizing the signal of the first image and the signal of the second image so that the synthesis ratio of the signal of the first image in the synthesized image signal obtained by synthesizing them is increased.
- The signal processing device according to claim 1, further including a control unit that controls the exposure times of the plurality of images, in which the plurality of images include a first exposure image having a first exposure time and a second exposure image having a second exposure time different from the first exposure time, and the control unit performs control so that the second exposure image is captured following the first exposure image, and minimizes the interval between the end of exposure of the first exposure image and the start of exposure of the second exposure image.
- An imaging device including: an image generation unit that generates a plurality of images captured with different exposure times; an addition unit that adds the signals of the plurality of images using different saturation signal amounts; and a synthesis unit that synthesizes the signals of the plurality of images obtained as a result of the addition.
- A signal processing method including the steps of: adding the signals of a plurality of images captured with different exposure times, using different saturation signal amounts; and synthesizing the signals of the plurality of images obtained as a result of the addition.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP17852847.7A EP3518524A4 (en) | 2016-09-23 | 2017-09-08 | SIGNAL PROCESSING DEVICE, IMAGE CAPTURE DEVICE, AND SIGNAL PROCESSING METHOD |
US16/328,506 US20210281732A1 (en) | 2016-09-23 | 2017-09-08 | Signal processing device, imaging device, and signal processing method |
KR1020197007316A KR102317752B1 (ko) | 2016-09-23 | 2017-09-08 | 신호 처리 장치, 촬영 장치 및 신호 처리 방법 |
JP2018540959A JP7030703B2 (ja) | 2016-09-23 | 2017-09-08 | 信号処理装置、撮影装置、及び、信号処理方法 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016185872 | 2016-09-23 | ||
JP2016-185872 | 2016-09-23 | ||
JP2016-236016 | 2016-12-05 | ||
JP2016236016 | 2016-12-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018056070A1 true WO2018056070A1 (ja) | 2018-03-29 |
Family
ID=61690341
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/032393 WO2018056070A1 (ja) | 2016-09-23 | 2017-09-08 | 信号処理装置、撮影装置、及び、信号処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20210281732A1 (ja) |
EP (1) | EP3518524A4 (ja) |
JP (1) | JP7030703B2 (ja) |
KR (1) | KR102317752B1 (ja) |
WO (1) | WO2018056070A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11490023B2 (en) | 2020-10-30 | 2022-11-01 | Ford Global Technologies, Llc | Systems and methods for mitigating light-emitting diode (LED) imaging artifacts in an imaging system of a vehicle |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11172142B2 (en) * | 2018-09-25 | 2021-11-09 | Taiwan Semiconductor Manufacturing Co., Ltd. | Image sensor for sensing LED light with reduced flickering |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04207581A (ja) * | 1990-11-30 | 1992-07-29 | Canon Inc | 撮像装置 |
JP2002135787A (ja) * | 2000-10-18 | 2002-05-10 | Hitachi Ltd | 撮像装置 |
JP2003189174A (ja) * | 2001-12-20 | 2003-07-04 | Acutelogic Corp | 撮影装置及び撮影方法 |
JP2008022485A (ja) * | 2006-07-14 | 2008-01-31 | Canon Inc | 画像処理装置及び画像処理方法 |
JP2009152669A (ja) * | 2007-12-18 | 2009-07-09 | Sony Corp | 撮像装置、撮像処理方法及び撮像制御プログラム |
JP2013175897A (ja) * | 2012-02-24 | 2013-09-05 | Toshiba Corp | 画像処理装置及び固体撮像装置 |
JP2014036401A (ja) * | 2012-08-10 | 2014-02-24 | Sony Corp | 撮像装置、画像信号処理方法及びプログラム |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3082971B2 (ja) | 1991-08-30 | 2000-09-04 | 富士写真フイルム株式会社 | ビデオ・カメラ,それを用いた撮影方法およびその動作方法,ならびに画像処理装置および方法 |
JP2004179953A (ja) | 2002-11-27 | 2004-06-24 | Matsushita Electric Ind Co Ltd | 画像サーバと画像サーバシステム、カメラ画像のネットワーク伝送及び表示方法 |
JP2005267030A (ja) * | 2004-03-17 | 2005-09-29 | Daihatsu Motor Co Ltd | 歩行者輪郭抽出方法及び歩行者輪郭抽出装置 |
JP4979933B2 (ja) | 2005-12-16 | 2012-07-18 | 株式会社オートネットワーク技術研究所 | 車載カメラ及びドライブレコーダ |
US8704943B2 (en) * | 2011-01-21 | 2014-04-22 | Aptina Imaging Corporation | Systems for multi-exposure imaging |
JP2013066142A (ja) * | 2011-08-31 | 2013-04-11 | Sony Corp | 画像処理装置、および画像処理方法、並びにプログラム |
JP6412364B2 (ja) * | 2014-08-04 | 2018-10-24 | 日本放送協会 | 撮像装置および撮像方法 |
- 2017
- 2017-09-08 EP EP17852847.7A patent/EP3518524A4/en not_active Withdrawn
- 2017-09-08 KR KR1020197007316A patent/KR102317752B1/ko active IP Right Grant
- 2017-09-08 JP JP2018540959A patent/JP7030703B2/ja active Active
- 2017-09-08 US US16/328,506 patent/US20210281732A1/en not_active Abandoned
- 2017-09-08 WO PCT/JP2017/032393 patent/WO2018056070A1/ja unknown
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04207581A (ja) * | 1990-11-30 | 1992-07-29 | Canon Inc | 撮像装置 |
JP2002135787A (ja) * | 2000-10-18 | 2002-05-10 | Hitachi Ltd | 撮像装置 |
JP2003189174A (ja) * | 2001-12-20 | 2003-07-04 | Acutelogic Corp | 撮影装置及び撮影方法 |
JP2008022485A (ja) * | 2006-07-14 | 2008-01-31 | Canon Inc | 画像処理装置及び画像処理方法 |
JP2009152669A (ja) * | 2007-12-18 | 2009-07-09 | Sony Corp | 撮像装置、撮像処理方法及び撮像制御プログラム |
JP2013175897A (ja) * | 2012-02-24 | 2013-09-05 | Toshiba Corp | 画像処理装置及び固体撮像装置 |
JP2014036401A (ja) * | 2012-08-10 | 2014-02-24 | Sony Corp | 撮像装置、画像信号処理方法及びプログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP3518524A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11490023B2 (en) | 2020-10-30 | 2022-11-01 | Ford Global Technologies, Llc | Systems and methods for mitigating light-emitting diode (LED) imaging artifacts in an imaging system of a vehicle |
Also Published As
Publication number | Publication date |
---|---|
EP3518524A4 (en) | 2019-09-25 |
JP7030703B2 (ja) | 2022-03-07 |
EP3518524A1 (en) | 2019-07-31 |
US20210281732A1 (en) | 2021-09-09 |
JPWO2018056070A1 (ja) | 2019-07-04 |
KR20190054069A (ko) | 2019-05-21 |
KR102317752B1 (ko) | 2021-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10805548B2 (en) | Signal processing apparatus, imaging apparatus, and signal processing method | |
WO2018092379A1 (ja) | 画像処理装置と画像処理方法 | |
US10704957B2 (en) | Imaging device and imaging method | |
EP3474534B1 (en) | Image processing apparatus, imaging apparatus, and image processing system | |
US11272115B2 (en) | Control apparatus for controlling multiple camera, and associated control method | |
JP7226440B2 (ja) | 情報処理装置、情報処理方法、撮影装置、照明装置、及び、移動体 | |
JP6816768B2 (ja) | 画像処理装置と画像処理方法 | |
WO2018016151A1 (ja) | 画像処理装置と画像処理方法 | |
JP7030703B2 (ja) | 信号処理装置、撮影装置、及び、信号処理方法 | |
WO2018012317A1 (ja) | 信号処理装置、撮影装置、及び、信号処理方法 | |
WO2021229983A1 (ja) | 撮像装置及びプログラム | |
US11438517B2 (en) | Recognition device, a recognition method, and a program that easily and accurately recognize a subject included in a captured image | |
US20230412923A1 (en) | Signal processing device, imaging device, and signal processing method | |
WO2019111651A1 (ja) | 撮像システム、画像処理装置、及び、画像処理方法 | |
WO2018135208A1 (ja) | 撮像装置と撮像システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17852847 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2018540959 Country of ref document: JP Kind code of ref document: A |
ENP | Entry into the national phase |
Ref document number: 20197007316 Country of ref document: KR Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2017852847 Country of ref document: EP Effective date: 20190423 |