WO2022128014A1 - Korrektur von Bildern eines Rundumsichtkamerasystems bei Regen, Lichteinfall und Verschmutzung - Google Patents
Correction of images from a surround-view camera system in the case of rain, incident light and contamination
- Publication number: WO2022128014A1 (PCT/DE2021/200236)
- Authority: WO (WIPO PCT)
- Prior art keywords: image data, image, cameras, neural network, output
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/404—Characteristics
- B60W2554/4048—Field of view, e.g. obstructed view or direction of gaze
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2555/00—Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
- B60W2555/20—Ambient conditions, e.g. wind or rain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Definitions
- The invention relates to a machine learning method, a method and a device for correcting image data from a plurality of vehicle cameras of a surround-view system in the event of rain, incident light or dirt, for example in a vehicle-mounted surround-view camera system.
- Camera-based assistance systems are used to recognize objects in order to avoid collisions and to recognize road boundaries in order to keep the vehicle in its lane. Forward-looking cameras are typically used for this purpose. In addition to forward-looking cameras, surround-view or satellite cameras are also used which, in various arrangements on a vehicle, implement detection functions for driving, parking or visualization in the near and far 360° environment (or parts of it) around the vehicle.
- A camera can also be used to implement what is known as a rain-light detector, which detects rain on the windshield and, for example, activates the windshield wipers.
- The camera systems mentioned above show degradation both in the recognition of objects and in the representation of the environment as soon as visibility is impaired by rain, incident light or dirt. If the view of a front camera installed, for example, in the mirror base of the interior mirror is restricted by water droplets or dirt on the windscreen, the view can be restored by operating the windscreen wipers. This requires the camera to be installed within the wiping range of the windscreen wipers.
- Cameras attached to the sides of a vehicle are increasingly common; in addition to displaying the surroundings, they are also used to detect objects to the side.
- These cameras are often installed on the outside of the vehicle, e.g. in the area of the exterior mirrors. If the (outer) lenses of the cameras are wetted with water droplets or dirt, the display or detection functionality can be severely limited here as well. Due to the lack of cleaning options such as windscreen wipers, this leads to degradation or failure of the system.
- A final example is reversing cameras, which are usually installed above the license plate and get dirty very quickly. Here, too, rain or dust can cause fogging, which makes a clean display difficult.
- While CNN-based methods for object recognition are largely able to compensate for contamination or wetting of the lenses by water droplets, at least to a certain extent, object recognition methods that are based on image characteristics (features), such as optical flow or structure-from-motion, experience severe degradation due to contamination.
- Algorithmic methods are known for detecting dirt or precipitation on the outer lens of a camera or on the windshield of a vehicle by means of image processing.
- WO 2013/083120 A1 shows a method for evaluating image data from a vehicle camera, in which information about raindrops on a pane in the field of view of the vehicle camera is taken into account when evaluating the image data.
- The information about raindrops can in turn be determined from the image data.
- An example of the evaluation of the image data is the recognition of objects, which then takes this information into account in a targeted manner. For example, the influence on the edges seen by the camera (light/dark or color transitions) can be estimated from a recognized rain intensity.
- Edge-based evaluation methods can have their threshold values adjusted accordingly.
- A quality criterion for the image data can be derived from the information, which is then taken into account when evaluating the image data.
- A system would be desirable that algorithmically enhances the images despite dirt, incident light or water droplets, for example to improve downstream object detection, and that also enables a function for rain and light detection.
- A machine learning method relates to learning an image correction, by means of an artificial neural network, of input image data from a plurality of cameras of a surround-view system, which are impaired by rain, incident light and/or dirt, into corrected output image data.
- Learning takes place with a large number of training image pairs: at the input of the artificial neural network, a first image (or a set of simultaneously recorded first images) with rain, light and/or dirt impairment is provided, and a second image (or a set of second images) of the same scene without impairment serves as the target output (a training sketch follows below).
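- Purely as an illustration of this paired learning, a minimal training sketch could look as follows; PyTorch, the tiny stand-in network and the L1 reconstruction loss are illustrative assumptions, not the patent's specification:

```python
# Illustrative sketch: learn the impaired -> clean mapping from image pairs.
import torch
import torch.nn as nn

class TinyCorrectionNet(nn.Module):
    """Minimal stand-in for the correction network CNN1."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual so the net only has to model the disturbance.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

def train_step(model, optimizer, impaired, clean):
    """One update on a batch of (impaired input, clean target) pairs."""
    optimizer.zero_grad()
    corrected = model(impaired)
    loss = nn.functional.l1_loss(corrected, clean)  # pixel-wise reconstruction
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyCorrectionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
impaired = torch.rand(4, 3, 128, 128)  # placeholder batch (In1, In2, ...)
clean = torch.rand(4, 3, 128, 128)     # matching targets (Out1, Out2, ...)
print(train_step(model, opt, impaired, clean))
```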
- The artificial neural network is designed in such a way that it determines, for an input image, a confidence measure c that depends on the degree of wetting by water, incident light and/or soiling.
- This can be achieved, for example, by a corresponding design or architecture of the artificial neural network.
- After training, the artificial neural network can determine and output the confidence measure c for a new input image (or for each of the simultaneously captured input images from the multiple cameras).
- The confidence measure c thus depends on the degree of impairment caused by wetting with rain or water, by incident light and/or by dirt and, when the trained network is used, characterizes the certainty that an image correction is correct.
- In other words, the confidence measure c characterizes the "(un)certainty" with which an image correction is carried out by the trained neural network.
- The confidence measure c is in effect a measure of the network's confidence in its computed output, i.e. in the image correction performed by the network (see the sketch below).
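- One conceivable way to obtain such a confidence output is an additional network head; the following sketch is an assumption about a possible architecture (a per-pixel confidence via a sigmoid head, averaged to a per-image value), not the patented design:

```python
# Illustrative sketch: a correction network with an extra confidence head c.
import torch
import torch.nn as nn

class CorrectionWithConfidence(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.image_head = nn.Conv2d(32, 3, 3, padding=1)  # corrected image
        self.conf_head = nn.Conv2d(32, 1, 3, padding=1)   # confidence map

    def forward(self, x):
        feats = self.encoder(x)
        corrected = torch.clamp(x + self.image_head(feats), 0.0, 1.0)
        c = torch.sigmoid(self.conf_head(feats))          # c in [0, 1]
        return corrected, c

net = CorrectionWithConfidence()
corrected, c = net(torch.rand(1, 3, 128, 128))
print(corrected.shape, c.mean().item())  # per-image c as the mean of the map
```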
- The artificial neural network can be, for example, a convolutional neural network (CNN).
- "Conversion to output image data without impairment" typically includes conversion to output image data with reduced impairment.
- The camera can be, for example, a (monocular) camera that is mounted in or on a vehicle and that captures the surroundings of the vehicle.
- An example of such a vehicle-mounted camera is a camera arranged behind the windshield in the interior of the vehicle, which can capture and image the area of the vehicle environment in front of the vehicle through the windshield.
- At least one factor d is determined as a measure of the difference between the corrected output image and the impaired input image and is made available to the artificial neural network as part of the training.
- The factor d is taken into account by the artificial neural network during learning, for example in such a way that the neural network trains the association of input image, output image and factor d.
- The trained network can later estimate or determine a factor d for a currently recorded impaired camera image and generate (or reconstruct) an output image corrected to the corresponding degree.
- Conversely, a factor d can be specified for the trained neural network, thereby controlling the degree of correction of the currently captured camera image.
- The factor d can be determined, for example, by means of a local comparison of an undisturbed image with an image affected by rain or dirt.
- The factor d can be determined with the aid of 2D filters, which can be mapped, for example, in the input layers of an artificial neural network.
- In the simplest case, the factor d can be represented as the variance of a 2D low-pass filter.
- More complex contrast measures such as structural similarity (SSIM) can also be used.
- Likewise, correlation measures can be used: sum of absolute distances (SAD), sum of squared distances (SSD) or zero-mean normalized cross-correlation (ZNCC).
- A factor d can be determined from a comparison of the target output image and the associated impaired input image. This determination can be made in advance, i.e. a factor d is then already available for each training image pair (a sketch follows below).
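- A minimal sketch of such a pre-computed factor d, assuming a simple box-filter comparison; the exact metric is an assumption, and SSIM, SAD, SSD or ZNCC could be substituted as described above:

```python
# Illustrative sketch: derive a scalar factor d from a (clean, impaired) pair.
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_factor_d(clean, impaired, size=9):
    """clean, impaired: float grayscale images in [0, 1] of shape (H, W)."""
    lp_clean = uniform_filter(clean, size=size)       # 2D low-pass (box) filter
    lp_impaired = uniform_filter(impaired, size=size)
    diff = np.abs(lp_clean - lp_impaired)
    return float(np.clip(diff.mean(), 0.0, 1.0))      # 0 = identical, 1 = maximal

rng = np.random.default_rng(0)
clean = rng.random((128, 128))
impaired = np.clip(clean + 0.2 * rng.random((128, 128)), 0.0, 1.0)  # synthetic
print(estimate_factor_d(clean, impaired))
```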
- Alternatively, the factor d can be determined purely from the training image pairs as part of the learning process.
- The factor d can thus provide a value which indicates the degree of possible reconstruction of the corrected image and which is passed on to subsequent image processing or image display functions.
- A low value can, for example, indicate a strong correction and a high value a weak correction for the further processing stages, and, just like the confidence measure c, it can be taken into account when determining the quality of the generated object data.
- The training image pairs are generated by capturing with the cameras a first image with rain, light and/or dirt impairment (in the optical path of the camera) and a second image of the same scene without impairment, either simultaneously or in immediate succession, possibly with different exposure times.
- One artificial neural network is trained jointly or simultaneously for all vehicle cameras.
- A sequence of consecutive images from each individual camera can be used for joint training.
- The temporal correlation of images can be exploited during training and/or when using the trained network.
- Information about image features and their target output image data can be used which are captured at a time t by a front camera and at a later time by a side camera or the rear camera. This can be used to train the network so that an object with certain image features has identical brightness and color in the output images of all individual cameras (see the sketch below).
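- A hedged sketch of such a cross-camera constraint; the patch matching between cameras is assumed to be given (e.g. from tracked features), and the mean-color comparison is an illustrative choice:

```python
# Illustrative sketch: penalize brightness/color differences of the same
# object patch as corrected from two cameras (e.g. front at t, side at t+n).
import torch

def cross_camera_consistency(patch_cam_a, patch_cam_b):
    """Patches of shape (N, 3, h, w) from the corrected outputs of two cameras.
    Compares per-channel means, i.e. brightness and color, not fine texture."""
    mean_a = patch_cam_a.mean(dim=(2, 3))  # (N, 3) mean color per patch
    mean_b = patch_cam_b.mean(dim=(2, 3))
    return torch.nn.functional.mse_loss(mean_a, mean_b)

# During joint training this term would be added to the reconstruction loss:
# loss = l1_loss(corrected, clean) + lam * cross_camera_consistency(a, b)
print(cross_camera_consistency(torch.rand(2, 3, 16, 16), torch.rand(2, 3, 16, 16)))
```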
- The training image pairs contain at least one sequence of consecutive input and output images (image or video sequences).
- In other words, at least one input video sequence and one target video sequence are required for machine learning.
- Temporal aspects or relationships can advantageously be taken into account in the reconstruction. Examples include raindrops or dirt particles that move over time. This creates areas in the image that had a clear view at a time t and a view disturbed by rain at a time t+1.
- Information in the clear image areas can be used to reconstruct the areas disturbed by rain or dirt.
- The temporal aspect can help to reconstruct a clear image, especially in areas covered by dirt.
- For example, some areas of the lens may be covered by dirt while other areas are clear; the dirt then prevents an object from being fully captured in a single frame.
- The artificial neural network has a common input interface feeding two separate output interfaces.
- The common input interface comprises shared feature representation layers. Corrected (i.e. converted) image data are output at the first output interface.
- ADAS-relevant detections of at least one ADAS detection function are output at the second output interface.
- ADAS stands for Advanced Driver Assistance Systems, i.e. systems for assisted or automated driving.
- ADAS-relevant detections are, for example, objects, obstacles and road users that represent important input variables for ADAS/AD systems.
- The artificial neural network thus includes ADAS detection functions, for example lane detection, object detection, depth detection (3D estimation of the image components), semantic detection, or the like. The outputs of both output interfaces are optimized as part of the training (see the sketch below).
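- The two-output-interface idea can be sketched as follows; the layer sizes are assumptions, and a coarse per-pixel segmentation stands in for the ADAS detections:

```python
# Illustrative sketch: shared feature layers with two output interfaces.
import torch
import torch.nn as nn

class SharedCorrectionDetectionNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.shared = nn.Sequential(  # shared feature representation layers
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.correction_head = nn.Conv2d(32, 3, 3, padding=1)  # "CNN11"
        self.detection_head = nn.Conv2d(32, num_classes, 1)    # "CNN12"

    def forward(self, x):
        feats = self.shared(x)
        corrected = torch.clamp(x + self.correction_head(feats), 0.0, 1.0)
        detections = self.detection_head(feats)  # per-pixel class logits
        return corrected, detections

net = SharedCorrectionDetectionNet()
corrected, detections = net(torch.rand(1, 3, 128, 128))
print(corrected.shape, detections.shape)
```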
- A method for correcting input image data from a plurality of cameras of a surround-view system, which are impaired by rain, incident light and/or dirt, comprises the steps: a) input image data captured by the cameras and impaired by rain, incident light and/or dirt are provided to a trained artificial neural network; b) the trained artificial neural network converts the input image data with rain, light and/or dirt impairment into output image data without impairment and determines a confidence measure c, which depends on the degree of wetting by water, incident light and/or contamination for an image (or each image) of the input image data and characterizes the certainty that the image correction by the network is correct; and c) the trained artificial neural network outputs the output image data and the determined confidence measure c.
- The corrected output image data advantageously enable better automatic object recognition on the output image data, e.g. conventional lane, object or traffic sign detection, or improved stitching (combining the simultaneously captured images from the cameras) and display of the composite image data.
- The input image data contain at least one sequence (video sequence) of consecutively captured input images from the cameras.
- The cameras are vehicle-mounted environment detection cameras.
- Optionally, a factor d is additionally provided to the trained artificial neural network, and in step b) the strength or degree of the image correction or conversion is controlled as a function of the factor d.
- The factor d is estimated, taking into account the impairment of the currently captured input image data. Cumulatively or alternatively, the estimation of the factor d for the currently captured input image data can take into account the factor(s) d of previously acquired image data.
- A temporal development of the factor d can be taken into account when determining or estimating the factor d.
- In this case, the temporal development of the factor d and a sequence of input images are included in the estimation.
- Information about the development of brightness over time can also be used for image regions with different factors d.
- A separate factor d can be estimated or determined for each of the cameras of the surround-view system. This enables individual conversion of the image data from the individual (vehicle) cameras, in particular as a function of the current impairment of the image from the respective camera.
- Information about the current environmental situation of the vehicle can be taken into account when determining the factor d.
- Information about the current environmental situation can include, for example, rain sensor data, external spatially resolved weather and/or sun position information (V2X data or data from a navigation system, e.g. a GPS receiver with a digital map), and driving situation information (country road, city, motorway, tunnel, underpass). This information can (at least partially) also be obtained from the camera image data via image processing.
- The current factor d can be estimated based on environmental situation information, from the temporal sequence of images and from the history of the factor d.
- The factor d can thus be estimated dynamically when using a trained artificial neural network (a sketch follows below).
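- One simple, assumed realization of this dynamic estimation is to smooth per-camera raw estimates over time; the exponential moving average below is an illustrative choice, not the patent's method:

```python
# Illustrative sketch: smooth per-camera estimates of the factor d over time
# so that single-frame misestimates do not cause visible correction jumps.
class FactorDTracker:
    def __init__(self, alpha=0.1):
        self.alpha = alpha  # weight of the newest raw estimate
        self.d = None       # smoothed factor d for this camera

    def update(self, d_raw):
        d_raw = min(max(d_raw, 0.0), 1.0)
        if self.d is None:
            self.d = d_raw
        else:
            self.d = (1.0 - self.alpha) * self.d + self.alpha * d_raw
        return self.d

trackers = {cam: FactorDTracker() for cam in ("front", "rear", "left", "right")}
print(trackers["rear"].update(0.6))  # raw frame estimate for the rear camera
print(trackers["rear"].update(0.2))  # an outlier is damped by the history
```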
- The corrected image data from the vehicle-mounted environment detection cameras and the determined confidence measure(s) c, and optionally also the factor d, are output to at least one ADAS detection function, which determines and outputs ADAS-relevant detections.
- ADAS detection functions can include known edge-based or pattern recognition methods as well as recognition methods that detect and optionally classify relevant image objects by means of an artificial neural network.
- The approach can be extended by combining the artificial neural network for correcting the image data with a neural network for ADAS detection functions, e.g. lane detection, object detection, depth detection or semantic detection.
- The learned method can also be used in reverse: instead of reconstructing unclear or impaired image data, rain or dirt can be artificially added to recorded image data from the learned reconstruction profile, for example to generate simulation data for validation.
- The learned reconstruction profile can also be used to evaluate the quality of an artificial rain simulation in recorded image data.
- The method can also be used in augmented reality and in the area of dash cams and accident recordings.
- The invention further relates to a device with at least one data processing unit configured to correct input image data from a plurality of cameras of a surround-view system, which are impaired by rain, incident light and/or dirt, into output image data.
- The device comprises an input interface, a trained artificial neural network and a (first) output interface.
- The input interface is configured to receive input image data captured by the cameras and affected by rain, light and/or dirt.
- The trained artificial neural network is configured to convert the degraded input image data into output image data without degradation and to determine a confidence measure c, which depends on the degree of wetting by water, incident light and/or contamination for an image (or each image) of the input image data and which is a measure of the network's confidence in its calculated output, i.e. it characterizes the certainty that the image correction by the network is correct.
- The (first) output interface is configured to output the converted (corrected) image data and the determined confidence measure(s) c.
- Optionally, the input image data contain at least one sequence of successively acquired input images.
- In this case, the artificial neural network has been trained using at least one sequence of successive input and output images as image data.
- The device or the data processing unit can in particular comprise a microcontroller or processor, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) and the like, as well as software for performing the corresponding method steps.
- In one embodiment, the data processing unit is implemented in a hardware-based image pre-processing stage (image signal processor, ISP).
- The trained artificial neural network for image correction can be part of an on-board ADAS detection neural network, e.g. for semantic segmentation, lane detection or object detection, with a shared input interface (input or feature representation layers) and two separate output interfaces (output layers), wherein the first output interface is configured to output the converted output image data and the second output interface to output the ADAS detections (image recognition data).
- The invention further relates to a computer program element which, when a data processing unit is programmed with it, instructs the data processing unit to carry out a method for correcting input image data from a plurality of cameras of a surround-view system into output image data.
- The invention further relates to a computer-readable storage medium on which such a program element is stored.
- The invention further relates to the use of a method for machine learning of an image correction of input image data from a plurality of cameras of a surround-view system into output image data for training an artificial neural network of a device with at least one data processing unit.
- The present invention can thus be implemented in digital electronic circuitry, computer hardware, firmware or software.
- Assistance systems that are based on optical flow for feature search also benefit from the corrected images.
- FIG. 1 shows a first schematic representation of a device according to the invention in one embodiment
- FIG. 2 shows a second schematic representation of a device according to the invention in an embodiment in a vehicle
- Fig. 6 shows a modified system in which the image correction is only part of the training.
- A device 1 according to the invention for image correction of input image data from a number of cameras of a surround-view system can have a number of units or circuit components.
- The device 1 for image correction has a number of vehicle cameras 2-i, which each generate camera images or video data.
- In the illustrated embodiment, the device 1 has four vehicle cameras 2-i for generating camera images.
- The number of vehicle cameras 2-i can vary for different applications.
- The device 1 according to the invention has at least two vehicle cameras for generating camera images.
- The camera images from neighboring vehicle cameras 2-i typically have overlapping image areas.
- The device 1 contains a data processing unit 3, which assembles the camera images generated by the vehicle cameras 2-i to form a composite overall image.
- The data processing unit 3 has a system for image correction or image conversion 4.
- The system for image conversion 4 generates corrected output image data (Opti) without impairment from the input image data (Ini) of the vehicle cameras 2-i, which are at least partially impaired by rain, incident light and/or dirt.
- The optimized output image data from the individual vehicle cameras 2-i are put together to form a composite overall image (so-called stitching).
- The overall image composed by the data processing unit 3 from the corrected image data (Opti) is then displayed to the user by a display unit 5.
- In one embodiment, the system for image correction 4 is formed by an independent hardware circuit that carries out the image correction.
- Alternatively, the system executes program instructions when performing an image correction method.
- The data processing unit 3 can have one or more image processing processors, in which case it converts the camera images or video data received from the various vehicle cameras 2-i and then assembles them into a composite overall image (stitching; a sketch follows below).
- In one variant, the system for image conversion 4 is formed by a dedicated processor, which carries out the image correction in parallel with the one or more other processors of the data processing unit 3.
- The parallel data processing reduces the time required to process the image data.
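- For orientation only, a strongly simplified stitching sketch is given below; real surround-view stitching (fisheye undistortion, seam blending) is considerably more involved, and the calibration homographies are assumed to be given:

```python
# Illustrative sketch: warp corrected camera images into a common top-down
# frame and blend overlapping areas by averaging.
import cv2
import numpy as np

def stitch(corrected_images, homographies, out_size=(400, 400)):
    """corrected_images: list of HxWx3 float images; homographies: 3x3 arrays."""
    acc = np.zeros((out_size[1], out_size[0], 3), np.float32)
    weight = np.zeros((out_size[1], out_size[0], 1), np.float32)
    for img, H in zip(corrected_images, homographies):
        warped = cv2.warpPerspective(img.astype(np.float32), H, out_size)
        ones = np.ones(img.shape[:2], np.float32)
        mask = cv2.warpPerspective(ones, H, out_size) > 0  # valid warped area
        acc[mask] += warped[mask]
        weight[mask] += 1.0
    return acc / np.maximum(weight, 1.0)  # average where cameras overlap
```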
- FIG. 2 shows a further schematic representation of a device 1 according to the invention in one embodiment.
- The device 1 shown in FIG. 2 is used in a surround-view system of a vehicle 10, in particular a passenger car or a truck.
- The four different vehicle cameras 2-1, 2-2, 2-3, 2-4 can be located on different sides of the vehicle 10 and have corresponding viewing areas (dashed lines) in front of (V), behind (H), to the left of (L) and to the right of (R) the vehicle 10.
- The first vehicle camera 2-1 is located at the front of the vehicle 10, the second vehicle camera 2-2 at the rear of the vehicle 10, the third vehicle camera 2-3 on the left side of the vehicle 10, and the fourth vehicle camera 2-4 on the right side of the vehicle 10.
- The camera images from two adjacent vehicle cameras 2-i have overlapping image areas VL, VR, HL, HR.
- The vehicle cameras 2-i are what are known as fish-eye cameras, which have a viewing angle of at least 185°.
- The vehicle cameras 2-i can transmit the camera images or camera image frames or video data to the data processing unit 3 via an Ethernet connection.
- The data processing unit 3 calculates a composite surround-view camera image, which is displayed to the driver and/or a passenger on the display 5 of the vehicle 10.
- The images from one camera, such as the rear vehicle camera 2-2, can deviate from those of the other cameras 2-1, 2-3, 2-4, for example because the lens of the rear vehicle camera 2-2 is wetted with raindrops or dirty.
- The neural network learns optimal parameters for the image correction in this situation.
- Joint training can combine impaired images (e.g. from the rear view camera 2-2) with unimpaired images (e.g. from the front camera 2-1 and the side cameras 2-3, 2-4).
- Ground truth data with an image quality applicable to all target cameras 2-1, 2-2, 2-3, 2-4, without impairment by rain, light or dirt, is preferably used in a first application.
- A neural network CNN1, CNN10, CNN11, CNN12 is trained with regard to an optimal parameter set for the network.
- The neural network for the cameras 2-i can be trained jointly in such a way that, even if training data and ground truth data are missing for one camera, for example a side camera 2-3 or 2-4, the network's parameters for this camera 2-3 or 2-4 are trained and optimized on the basis of the training data from the other cameras 2-1, 2-2 and 2-4 or 2-3.
- The neural network can use temporally offset training and ground truth data correlated across the individual cameras 2-i, i.e. data that were captured or recorded by the different cameras 2-i at different points in time.
- For example, information from features or objects and their ground truth data can be used which were recorded at a point in time t by the front camera 2-1 and at a point in time t+n by the side cameras 2-3, 2-4.
- When these features or objects appear in the images of the other cameras 2-i, they and their ground truth data can be used by the network as training data to replace missing information in the training and ground truth data of those cameras.
- In this way, the network can optimize the parameters for all side cameras 2-3, 2-4 and, if necessary, compensate for missing information in the training data.
- An essential component is an artificial neural network CNN1 which, in a training phase, learns to assign a set of corresponding corrected training (target) output images Out (Out1, Out2, Out3, ...) to a set of training input images In (In1, In2, In3, ...).
- Assignment here means that the neural network CNN1 learns to generate a corrected image.
- An input image (In1, In2, In3, ...) can contain, for example, a street scene in the rain in which the human eye can only make out blurred larger objects, such as a large lane marking depicting a bicycle, and the sky.
- Since there is a plurality of cameras 2-i, an input image can mean the input images captured simultaneously by several or all individual cameras 2-i.
- Correspondingly, an output image can contain the target output images for several or all individual cameras 2-i.
- A factor d optionally serves as an additional input variable for the neural network CNN1.
- The factor d is a control parameter that controls the degree of correction for the impairment (rain, light or dirt) of the image.
- The factor d for an image pair consisting of a training image and a corrected image can be calculated in advance or determined as part of the training from the image pair (In1, Out1; In2, Out2; In3, Out3; ...) and made available to the neural network CNN1. This means that the factor d can also be learned.
- By specifying a factor d, the extent to which the neural network CNN1 corrects a currently recorded image can be controlled; the factor d can also be thought of as an external regression parameter (with arbitrary gradation). Since the factor d can be subject to fluctuations in the range of +/- 10%, this is taken into account during training.
- The factor d can be perturbed by about +/- 10% during training (e.g. across the different training epochs of the neural network) in order to be robust against misestimates of the factor d in the range of about +/- 10% during inference in the vehicle (a sketch follows below).
- The required accuracy of the factor d is thus in the range of +/- 10%, and the neural network CNN1 is robust to deviations in estimates of this parameter.
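- A minimal sketch of this training-time perturbation; uniform noise is an assumption, while the +/- 10% range is taken from the text above:

```python
# Illustrative sketch: jitter the conditioning factor d during training so
# the network tolerates comparable misestimates at inference time.
import random

def jitter_factor_d(d, rel_noise=0.10):
    """Perturb d multiplicatively by up to +/- rel_noise and clamp to [0, 1]."""
    d_noisy = d * (1.0 + random.uniform(-rel_noise, rel_noise))
    return min(max(d_noisy, 0.0), 1.0)

# In a training loop: d_in = jitter_factor_d(d_ground_truth) per sample/epoch.
print(jitter_factor_d(0.5))
```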
- The factor d can be output by the trained neural network CNN1 for an image correction that has taken place.
- Downstream image recognition or image display functions thereby receive information about the extent to which the originally captured image was corrected.
- The artificial neural network CNN1 is designed in such a way that it determines, for an input image, a confidence measure c that depends on the degree of wetting by water, incident light and/or soiling.
- This can be achieved, for example, by appropriately designing the architecture of the artificial neural network CNN1.
- After training, the artificial neural network CNN1 can determine and output the confidence measure c for a new input image.
- The confidence measure c thus depends on the degree of impairment caused by wetting with rain or water, by incident light and/or by dirt and, when the trained network is used, characterizes the certainty that an image correction is correct.
- Here, too, an input image can mean the simultaneously recorded input images from several or all individual cameras 2-i.
- The term "a confidence measure c" can also mean that a separate (possibly different) confidence measure c is determined for each of the different simultaneously recorded input images.
- In Fig. 3, three pairs of images In1 + Out1, In2 + Out2, In3 + Out3 are shown schematically.
- The neural network CNN1 is trained or designed in such a way that it can determine and output a confidence measure c1, c2 or c3 for each input image of an image pair.
- This confidence measure c can include one of the following forms of implementation or a combination of them:
- A probability-like confidence measure c_Prob: the output of the network is calibrated in such a way that it can be interpreted probabilistically as the probability with which the network makes the correct decision. Values are normalized to the range [0,1], corresponding to the spectrum from 0% to 100% probability that the network has computed a correct correction of an image.
- This calibration can be carried out after the actual machine learning method has been completed on a training image data set, by subsequently checking the quality of the learning using a validation image data set.
- The validation image data set also contains image pairs of a first image affected by rain, light and/or dirt and a second image of the same scene without impairment as the corresponding target output image. In practice, part of the input and target output images can be held back, i.e. not used for the machine learning process, and then used for validation (a sketch follows below).
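- A sketch of such a calibration check on held-back pairs; what counts as a "correct" correction (here: a reconstruction error below a threshold) is an illustrative assumption:

```python
# Illustrative sketch: bin predicted confidences c_Prob and compare each
# bin's mean confidence with the empirical rate of correct corrections.
import numpy as np

def reliability(c_prob, errors, err_thresh=0.05, n_bins=10):
    """c_prob: predicted confidences in [0,1]; errors: per-image L1 errors."""
    correct = errors < err_thresh
    bins = np.clip((c_prob * n_bins).astype(int), 0, n_bins - 1)
    for b in range(n_bins):
        sel = bins == b
        if sel.any():
            print(f"bin {b}: mean c={c_prob[sel].mean():.2f}, "
                  f"empirical accuracy={correct[sel].mean():.2f}")

rng = np.random.default_rng(1)
c = rng.random(1000)
err = 0.1 * (1.0 - c) + 0.01 * rng.random(1000)  # synthetic: high c, low error
reliability(c, err)
```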
- A dispersion measure c_Dev similar to a standard deviation: here, an uncertainty of the network output is estimated in such a way that it describes the dispersion of the network output. This can be implemented in different ways; one possibility is the subdivision into measurement and model uncertainties.
- The measurement uncertainty refers to uncertainties caused by the input data, e.g. slight noise. It can be added to the network via an additional output and trained by modifying the error function.
- The model uncertainty refers to uncertainties caused by the limited mapping accuracy and generalizability of a network. It relates to factors such as the size of the training data set and the architecture of the network design.
- The model uncertainty can be estimated, e.g., by Monte Carlo dropout or network ensembles (see the sketch below). The model uncertainty and the measurement uncertainty can be added together.
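- Monte Carlo dropout can be sketched as follows; the stand-in network and the number of samples are assumptions:

```python
# Illustrative sketch: keep dropout active at inference, run several
# stochastic forward passes and take the per-pixel standard deviation of the
# corrected images as the dispersion measure c_Dev.
import torch
import torch.nn as nn

net = nn.Sequential(  # stand-in correction net with dropout
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.2),
    nn.Conv2d(32, 3, 3, padding=1),
)

def mc_dropout(x, n_samples=20):
    net.train()  # keep dropout stochastic on purpose
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)  # corrected estimate, c_Dev map

corrected, c_dev = mc_dropout(torch.rand(1, 3, 64, 64))
print(c_dev.mean().item())  # overall model-uncertainty level
```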
- The confidence measure c can be calculated for the entire image, for image areas or for the individual pixels of the image.
- c_Prob low: the network has low confidence in its estimate; incorrect estimates occur more frequently.
- c_Prob high: the network has high confidence in its estimate; the image correction is correct in most cases.
- c_Dev low: the dispersion of the network's image correction is low, so the network predicts a very precise image correction.
- c_Dev high: the estimated dispersion of the image correction, similar to a standard deviation, is high and the output of the network is less precise/less sharp; small changes in the input data or in the modeling of the network would cause deviations in the image correction.
- c_Prob high and c_Dev low: a very reliable and accurate image correction that can be accepted with a high degree of certainty.
- c_Prob low and c_Dev high: a very uncertain and imprecise image correction that would rather be rejected.
- c_Prob high and c_Dev high, or c_Prob low and c_Dev low: these corrections are associated with uncertainties, and careful use of the image corrections is recommended (a decision sketch follows below).
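- The acceptance logic above can be written down directly; the 0.5 thresholds on c_Prob and c_Dev are placeholder assumptions:

```python
# Illustrative sketch of the accept/reject/caution rule for corrections.
def assess_correction(c_prob, c_dev, p_thresh=0.5, d_thresh=0.5):
    if c_prob >= p_thresh and c_dev < d_thresh:
        return "accept"           # reliable and precise
    if c_prob < p_thresh and c_dev >= d_thresh:
        return "reject"           # uncertain and imprecise
    return "use with caution"     # mixed indications

print(assess_correction(0.9, 0.1), assess_correction(0.2, 0.8))
```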
- One way to generate the training data is to acquire image data with a "stereo camera setup" as described in Porav et al. with reference to Fig. 8 therein: a two-part chamber with transparent panes is arranged at a small distance in front of two identical camera modules; the chamber in front of, for example, the right-hand stereo camera module is sprayed with water droplets, while the chamber in front of the left-hand stereo camera module is kept free of impairments.
- For incident light, a light source can be directed at only one chamber; in the case of dirt, it can likewise be applied to only one chamber.
- In operation, the trained neural network CNN1 receives the original input image data (Ini) from the multiple cameras 2-i as input.
- A factor d can optionally be specified or determined by the neural network CNN1 on the basis of the input image data (Ini); this factor specifies (controls) how strongly the input image data should be corrected.
- The neural network calculates corrected image data (Opti) for the multiple cameras 2-i without impairments as well as one or more confidence measures c.
- The corrected image data (Opti) from the multiple cameras 2-i and the at least one confidence measure c are output.
- Figs. 5 and 6 show exemplary embodiments of possible combinations of a first network for image correction with one or more networks for (detection) functions for driver assistance systems and/or automated driving.
- FIG. 5 shows a neural network CNN10 for the image correction of an input image (Ini), optionally controlled by a factor d, which shares feature representation layers (as input or lower layers) with a network for detection functions (fn1, fn2, fn3, fn4).
- The detection functions (fn1, fn2, fn3, fn4) are image processing functions that detect objects, structures or properties (generally: features) relevant to ADAS or AD functions in the image data.
- Many such machine-learning-based detection functions have already been developed or are the subject of current development (e.g. traffic sign classification, object classification, semantic segmentation, depth estimation, lane marker detection and localization).
- Detection functions (fn1, fn2, fn3, fn4) of the second neural network CNN2 deliver better results on corrected images (Opti) than on the original impaired input image data (Ini). Common features for the image correction and for the detection functions are learned in the feature representation layers of the neural network CNN10.
- The neural network CNN10 with shared input layers and two separate outputs has a first output CNN11 for outputting the corrected output image data (Opti) and a second output CNN12 for outputting the detections: objects, depth, lane, semantics, etc.
- Since the feature representation layers are optimized during training with regard to both the image correction and the detection functions (fn1, fn2, fn3, fn4), optimizing the image correction also improves the detection functions (fn1, fn2, fn3, fn4). A sketch of such a joint objective follows below.
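- A hedged sketch of such a joint objective; the loss types and weights are assumptions to be tuned:

```python
# Illustrative sketch: weighted sum of the reconstruction loss on the
# corrected image (output CNN11) and the detection loss (output CNN12), so
# the shared layers are optimized for both tasks.
import torch.nn.functional as F

def joint_loss(corrected, clean_target, det_logits, det_target,
               w_img=1.0, w_det=1.0):
    loss_img = F.l1_loss(corrected, clean_target)       # image correction
    loss_det = F.cross_entropy(det_logits, det_target)  # e.g. segmentation
    return w_img * loss_img + w_det * loss_det

# Usage with the two-headed sketch shown earlier:
# corrected, logits = net(impaired)
# loss = joint_loss(corrected, clean, logits, labels)
```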
- FIG. 6 shows an approach, based on the system of FIG. 5, for neural network-based image correction by optimization of features.
- Here, the features for the detection functions (fn1, fn2, fn3, fn4) are optimized during training with regard to both the image correction and the detection functions (fn1, fn2, fn3, fn4).
- During operation, no corrected images are calculated.
- The detection functions (fn1, fn2, fn3, fn4) are, as already explained, improved by the joint training of image correction and detection functions compared to a system with only one neural network CNN2 for detection functions, in which only the detection functions (fn1, fn2, fn3, fn4) have been optimized during training.
- During training, the corrected image (Opti) is output through an additional output interface (CNN11) and compared with the ground truth (the corresponding corrected training image).
- After training, this output (CNN11) can either continue to be used or be cut off in order to save computing time.
- The weights for the detection functions (fn1, fn2, fn3, fn4) are modified in such a way that they take the image corrections into account.
- The weights of the detection functions (fn1, fn2, fn3, fn4) thus implicitly learn the information about the image improvement.
- An assistance system is presented below which algorithmically converts the image data of the underlying camera system, despite impairments from rain, incident light or dirt, into a representation that corresponds to a recording without these impairments.
- The converted image can then be used either purely for display purposes or as input for feature-based recognition algorithms.
- The calculation in such a system is based, for example, on a neural network which converts an input image with condensation, dirt or water droplets and little contrast and color information into a cleaned representation upstream of a detection or display unit.
- The neural network was trained with a data set consisting of "fogged input images" and the associated "cleaned images".
- Through the use of cleaned images, the neural network is trained in such a way that features occurring in the image pairs to be improved are preserved for a later correspondence search or object recognition despite condensation or dirt and, at best, even amplified.
- The method for image improvement or correction can be integrated in a hardware-based image pre-processing stage, the ISP.
- This ISP is supplemented on the hardware side by a neural network which performs the conversion and makes the processed information, together with the original data, available to possible detection or display methods.
- The neural network system can be trained to use additional information from non-fogged cameras, such as the side cameras, to further improve the conversion for the fogged areas.
- The network is then trained not so much with individual images for each camera, but as an overall system consisting of several camera systems.
- In addition to information on soiling or condensation, information on the image quality can be made available to the network for training.
- The system and the method can be optimized in such a way that they calculate image data optimized both for object recognition and for human vision.
- In practice, the degree of soiling of the cameras varies.
- For example, a satellite camera attached to the side of the vehicle may be more heavily soiled than a satellite camera attached to the front of the vehicle.
- The artificial neural network is designed, trained and optimized in such a way that it uses, for example, the image information and image properties of satellite cameras without fogging to calculate a fog-free representation in images from cameras with fogging. The image calculated in this way can then be used for display purposes, but also for recognizing features.
- The corrected images from fogged cameras are used both for feature detection via optical flow or structure-from-motion and for display purposes.
- The method is designed in such a way that, with joint training of an artificial neural network on images with different degrees of soiling (e.g. condensation on the side cameras) and clear images (e.g. from the front and rear view cameras), optimal parameters are learned for all satellite cameras at the same time.
- Ground truth data with an image quality applicable to all target cameras are preferably used in a first application.
- The ground truth data for all target cameras are balanced in such a way that, for example in a surround-view application, no differences in brightness can be seen in the ground truth data.
- A neural network is trained with regard to an optimal parameter set for the network. Data with differently illuminated side areas are also conceivable, for example when the vehicle is next to a street lamp or has an additional light source on one side.
- The network for the cameras can be trained jointly in such a way that, even if training data and ground truth data are missing for one camera, for example a side camera, the network's parameters for the camera with the missing data are trained and optimized on the basis of the training data from the other cameras. This can be achieved, for example, as a constraint in the training of the network, e.g. as the assumption that the correction must always be the same for both side cameras due to their similar image quality.
- The network can use temporally offset training and ground truth data correlated with the cameras, which were recorded by the different cameras at different times.
- For example, information from features and their ground truth data can be used which were recorded at a point in time t by the front camera and at a point in time t+n by the side cameras.
- When these features and their ground truth data appear in the images of the other cameras, they can be used by the network as training data to replace missing information in the training and ground truth data of those cameras. In this way, the network can optimize the parameters for all side cameras and, if necessary, compensate for missing information in the training data.
- Finally, the system can detect water droplets or dirt as such, for example in order to activate a windscreen wiper or to display a request to clean a satellite camera. In this way, together with brightness detection, a rain-light detection function can be implemented in addition to the correction of the images.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180082534.1A CN116648734A (zh) | 2020-12-15 | 2021-12-03 | 下雨、光线入射和脏污时环视摄像***图像的修正 |
JP2023530226A JP2023549914A (ja) | 2020-12-15 | 2021-12-03 | 雨天、逆光、汚れ時におけるサラウンドビューカメラシステムの画像の修正 |
US18/257,659 US20240029444A1 (en) | 2020-12-15 | 2021-12-03 | Correction of images from a panoramic-view camera system in the case of rain, incident light and contamination |
KR1020237017548A KR20230093471A (ko) | 2020-12-15 | 2021-12-03 | 빗물, 빛 번짐 및 먼지가 있는 전방위 카메라 시스템 이미지의 보정 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102020215860.6 | 2020-12-15 | ||
DE102020215860.6A DE102020215860A1 (de) | 2020-12-15 | 2020-12-15 | Korrektur von Bildern eines Rundumsichtkamerasystems bei Regen, Lichteinfall und Verschmutzung |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022128014A1 true WO2022128014A1 (de) | 2022-06-23 |
Family
ID=80123092
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/DE2021/200236 WO2022128014A1 (de) | 2020-12-15 | 2021-12-03 | Korrektur von bildern eines rundumsichtkamerasystems bei regen, lichteinfall und verschmutzung |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240029444A1 (de) |
JP (1) | JP2023549914A (de) |
KR (1) | KR20230093471A (de) |
CN (1) | CN116648734A (de) |
DE (1) | DE102020215860A1 (de) |
WO (1) | WO2022128014A1 (de) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230186446A1 (en) * | 2021-12-15 | 2023-06-15 | 7 Sensing Software | Image processing methods and systems for low-light image enhancement using machine learning models |
CN115578631B (zh) * | 2022-11-15 | 2023-08-18 | 山东省人工智能研究院 | 基于多尺度交互和跨特征对比学习的图像篡改检测方法 |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013083120A1 (de) | 2011-12-05 | 2013-06-13 | Conti Temic Microelectronic Gmbh | Verfahren zur auswertung von bilddaten einer fahrzeugkamera unter berücksichtigung von informationen über regen |
US20200051217A1 (en) * | 2018-08-07 | 2020-02-13 | BlinkAI Technologies, Inc. | Artificial intelligence techniques for image enhancement |
US20200204732A1 (en) * | 2018-12-24 | 2020-06-25 | Wipro Limited | Method and system for handling occluded regions in image frame to generate a surround view |
DE102019205962A1 (de) * | 2019-04-25 | 2020-10-29 | Robert Bosch Gmbh | Verfahren zur Generierung von digitalen Bildpaaren als Trainingsdaten für Neuronale Netze |
Non-Patent Citations (5)
Title |
---|
ALLETTO STEFANO ET AL: "Adherent Raindrop Removal with Self-Supervised Attention Maps and Spatio-Temporal Generative Adversarial Networks", 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOP (ICCVW), IEEE, 27 October 2019 (2019-10-27), pages 2329 - 2338, XP033732390, DOI: 10.1109/ICCVW.2019.00286 * |
H. PORAV ET AL.: "I Can See Clearly Now: Image Restoration via De-Raining", 2019 IEEE INT. CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), MONTREAL, CANADA, 13 July 2020 (2020-07-13), pages 7087 - 7093 |
HORIA PORAV ET AL: "Rainy screens: Collecting rainy datasets, indoors", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 10 March 2020 (2020-03-10), XP081618496 * |
LIU XING ET AL: "Dual Residual Networks Leveraging the Potential of Paired Operations for Image Restoration", 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), IEEE, 15 June 2019 (2019-06-15), pages 7000 - 7009, XP033686685, DOI: 10.1109/CVPR.2019.00717 * |
PORAV HORIA ET AL: "I Can See Clearly Now: Image Restoration via De-Raining", 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE, 20 May 2019 (2019-05-20), pages 7087 - 7093, XP033593449, DOI: 10.1109/ICRA.2019.8793486 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116883275A (zh) * | 2023-07-07 | 2023-10-13 | 广州工程技术职业学院 | 基于边界引导的图像去雨方法、***、装置及介质 |
CN116883275B (zh) * | 2023-07-07 | 2023-12-29 | 广州工程技术职业学院 | 基于边界引导的图像去雨方法、***、装置及介质 |
Also Published As
Publication number | Publication date |
---|---|
US20240029444A1 (en) | 2024-01-25 |
CN116648734A (zh) | 2023-08-25 |
KR20230093471A (ko) | 2023-06-27 |
JP2023549914A (ja) | 2023-11-29 |
DE102020215860A1 (de) | 2022-06-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21851801; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2023530226; Country of ref document: JP |
| ENP | Entry into the national phase | Ref document number: 20237017548; Country of ref document: KR; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 202180082534.1; Country of ref document: CN |
| WWE | Wipo information: entry into national phase | Ref document number: 18257659; Country of ref document: US |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21851801; Country of ref document: EP; Kind code of ref document: A1 |