CN111275068A - Evaluating measurement parameters with a KI module taking into account measurement uncertainties - Google Patents

Evaluating measurement parameters with a KI module taking into account measurement uncertainties

Info

Publication number
CN111275068A
CN111275068A
Authority
CN
China
Prior art keywords
module
prediction
output
uncertainty
internal processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911219144.5A
Other languages
Chinese (zh)
Inventor
Di Feng
L. Rosenbaum
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH
Publication of CN111275068A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

Method, data set, KI module, application and computer program for training a KI module, the KI module being configured to convert a set of input variables into at least a prediction of a continuous output variable by means of an internal processing chain, wherein the behavior of the internal processing chain is specified by parameters. The method has the following steps: a predefined set of learning groups of input variables is input into the KI module; for each learning group, a prediction of the continuous output variable is obtained with the KI module; the uncertainty of the prediction is additionally determined with the KI module; the prediction provided by the KI module is compared with learned values of the prediction, wherein the learned values are assigned to the respective learning groups of input variables; an error function is evaluated, which depends not only on the deviation of the prediction from the learned value determined in the comparison but also on the uncertainty of the prediction; and the parameters are optimized such that the value of the error function is reduced when the learning groups are input again.

Description

Evaluating measurement parameters with a KI module taking into account measurement uncertainties
Technical Field
The invention relates to physical measurement technology in which a variable of interest is evaluated from measurement data by a KI module and is thus measured indirectly.
Background
In many fields of measurement technology, the variables that are ultimately of interest are not accessible to direct measurement, or the outlay for such a direct measurement is too great.
Neural networks are well suited to evaluating output variables of much lower dimension, such as the position, size or type of an object, from measurement data of very high dimension, such as image data. If the network has been trained with a sufficient number of different situations, the evaluation also works in completely new, untrained situations. This is important, for example, for at least partially automated driving, since it is far from possible to anticipate all traffic situations that may occur.
US 6957203 B2 discloses a method with which unavoidable measurement uncertainties in real measurement runs can be taken into account when evaluating measurement data with a neural network.
US 2016/019459 A1 discloses making the evaluation more robust with respect to measurement uncertainty by deliberately adding noise during training.
EP 3171297 A1 is directed to classification based on labeled data and takes into account human error in the "ground truth" labels.
EP 1438603 B1 discloses fusing data from multiple sensors in order to locate an object, such as an aircraft, more accurately.
Disclosure of Invention
Within the scope of the present invention, a method for training a KI module has been developed. The KI module is designed to convert a set of input variables into at least a prediction of a continuous output variable by means of an internal processing chain; this does not exclude that the KI module also determines further output variables from the same input variables. The KI module can, for example, be designed to determine, as a continuous output variable, the distance to an object from image data or other physical measurement data obtained by observing a spatial detection region, and at the same time to classify the type of this object.
The behavior of the internal processing chain is specified by parameters. These parameters are learned during training, and the KI module later operates on the basis of the learned parameters.
In the method, a predefined set of learning groups of input variables is input into the KI module.
The prediction provided by the KI module is compared with learned values of the prediction that are assigned to the respective learning groups of input variables. The learning data thus comprise pairs consisting of a learning group of input variables, which may for example be an image, and the associated learned value of the prediction, which may for example be the distance to an object visible in the image.
An error function is evaluated, which depends not only on the deviation of the prediction from the learned value determined in the comparison but also on the uncertainty of the prediction. The error function thus contains some measure of the uncertainty. The parameters of the internal processing chain of the KI module are optimized with the goal of reducing the value of the error function when the learning groups of input variables are input again. This optimization may use any termination criterion. For example, the optimization may be terminated when the value of the error function falls below a defined threshold. The optimization can also be terminated, for example, when the change in the error function from one pass to the next falls below a predefined threshold.
The parameters of the internal processing chain are thus optimized with the aim of minimizing the value of the error function when the predefined set of learning groups of input variables is input.
To this end, the KI module merely has to be able, at least during this training, to determine from the set of input variables not only the prediction of the continuous output variable but also the uncertainty of this prediction. A KI module that determines the uncertainty of the prediction can also be used in this form later on, not only during training; this is, however, not mandatory.
Alternatively, the KI module can, for example, be extended for training purposes only by an additional output of the uncertainty, and this functionality can be removed again for later operation of the KI module.
The training may, for example, start from standard values for the parameters, such as random values. The error function then initially takes a high value. The parameters can subsequently be optimized with a multivariate optimization method, for example a gradient descent method, such that the value of the error function becomes smaller step by step.
In particular, in each training step the average of the values of the error function obtained for all learning groups of input variables from the predefined set can, for example, serve as the quality measure for the optimization. The quality measure can also be refined as desired, for example by additionally taking into account the maximum or minimum of the error function over the predefined set of learning groups.
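Purely as an illustrative sketch, and not as the patent's own implementation, such a training run could be organized as follows. The names `model` and `learning_set`, the use of PyTorch, and the concrete Gaussian negative-log-likelihood form of the error function are assumptions chosen for illustration; the loss form is discussed further below.

```python
# Illustrative sketch only (assumptions, not the patent's implementation):
# average an uncertainty-aware error function over all learning groups and
# reduce it step by step with gradient descent.
import torch

def error_function(y_pred, log_sigma, y_true):
    # Depends on the deviation from the learned value AND on the stated
    # uncertainty (additive log-sigma term, cf. the loss discussed below).
    return ((y_pred - y_true) ** 2 / (2.0 * torch.exp(2.0 * log_sigma))
            + log_sigma).mean()

def train(model, learning_set, epochs=100, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y_true in learning_set:       # learning group + learned value
            y_pred, log_sigma = model(x)     # prediction and its uncertainty
            loss = error_function(y_pred, log_sigma, y_true)
            optimizer.zero_grad()
            loss.backward()                  # step-by-step reduction of the error
            optimizer.step()
    return model
```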
It has been recognized that also taking a measure of the uncertainty into account in the error function can harden the training of the KI module specifically against those measurement uncertainties that actually occur physically during the measurement.
In particular, the training can then also take into account that this uncertainty is not constant for all measurements performed with the respective physical instrument. For example, the uncertainty with which the distance to an object can be determined may itself depend on that distance. If a camera image is used, the farther an object is from the camera, the smaller it appears in the image and the fewer pixels represent it. A greater distance from the camera also increases the probability that the object is partially occluded by other objects. The uncertainty may furthermore depend on the situation: a camera image recorded in heavy rain, for example, may be evaluated less reliably than one recorded in sunshine.
The trained KI module can furthermore be used to continuously provide, also during later operation, information about the uncertainty with which the prediction of the output variable is made. In this way, the measurement uncertainty of the sensor used can be studied in depth and corrected for, for example in order to achieve better accuracy when tracking an object. The statements about the uncertainty can also be used, for example, to systematically study the behavior of the sensors used and of the downstream evaluation under different ambient conditions. The dependency of the uncertainty on the distance from the sensor mentioned above can, for example, be analyzed further in this way.
The statements about the uncertainty can also simplify the removal of artifacts from object recognition, not only during training but also during later operation of the KI module. Depending on the algorithm used, several apparent objects lying close together may, for example, be detected with comparable confidence at the location of one "real" object. If these objects lie so close to each other that the uncertainties of their positions overlap, it is plausible to combine the multiple detections into a single detection.
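For illustration only, such a merging step could look like the following sketch, assuming one-dimensional positions, the predicted standard deviation as the uncertainty measure, and an overlap criterion on the 1-sigma intervals; none of these choices are taken from the patent.

```python
# Illustrative sketch (assumptions, not from the patent): combine detections
# whose position uncertainties overlap into a single detection. Each detection
# is a (position, sigma) pair; detections whose 1-sigma intervals overlap are
# fused with inverse-variance weights.
def merge_detections(detections):
    merged = []
    for pos, sigma in sorted(detections):             # process left to right
        if merged and abs(pos - merged[-1][0]) < sigma + merged[-1][1]:
            p0, s0 = merged[-1]
            w0, w1 = 1.0 / s0 ** 2, 1.0 / sigma ** 2  # inverse-variance weights
            p = (w0 * p0 + w1 * pos) / (w0 + w1)      # fused position
            s = (1.0 / (w0 + w1)) ** 0.5              # fused (smaller) sigma
            merged[-1] = (p, s)
        else:
            merged.append((pos, sigma))
    return merged

# Example: two apparent objects at 10.0 and 10.4 with sigma 0.3 collapse into one.
print(merge_detections([(10.0, 0.3), (10.4, 0.3), (25.0, 1.0)]))
```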
The uncertainty considered here can have its cause, for example, in the mechanism of the physical observation or in the environment at the time of the observation. Such uncertainty is also called aleatoric uncertainty, in contrast to epistemic uncertainty, which arises from the modeling with the KI module itself.
In a particularly advantageous embodiment, a KI module is selected which contains at least one artificial neural network (KNN) as its internal processing chain. Such a network can, for example, be constructed as a stack of successive layers of different types, wherein the set of input variables is fed as input to the first layer and the output of each layer is fed as input to the next layer. The KNN can contain layers which each recognize certain features in their input; examples of such layers are convolutional layers. The KNN can additionally contain layers whose output has a lower dimension than their input; examples of such layers are pooling layers. The dimension of the data can thus be reduced step by step while traversing the network. In this way, a large gradient in dimension can be bridged: an image with 262,144 pixels, for example, can ultimately be reduced to a single continuous output variable such as the distance to an object.
The parameters which specify the behavior of the KNN can, for example, be the weights with which the activations of the neurons are weighted relative to one another.
In a further advantageous embodiment, a KI module is selected which is designed to determine the prediction and/or the uncertainty of the prediction by means of a regression. In this way, the prediction can be determined more robustly with respect to noise in the input variables, the suppression of the noise being learned specifically for the respective physical application when the KI module is trained.
In a particularly advantageous embodiment, the uncertainty of the prediction is modeled as noise, and the standard deviation σ of the noise is included in the determined uncertainty of the prediction. The standard deviation σ can then be evaluated, for example, by a regression analogous to the prediction itself; only one additional regressor is then required to account for this uncertainty. The assumed distribution of the noise may, for example, be a Gaussian distribution, whose covariance matrix may be occupied diagonally or isotropically, but may also be fully occupied. The assumed distribution of the noise may, for example, also be a Laplace distribution.
The determination of the standard deviation σ can be simplified by determining not the standard deviation σ directly, but rather its logarithm. This logarithm can be taken into account directly in the error function as a measure of the uncertainty. Advantageously, an error function is therefore selected which contains the logarithm of the standard deviation σ in an additive term.
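The patent's own formula is not reproduced in this text. The following form, common for a Gaussian noise model and consistent with the description above, is given purely as an assumed illustration; the Laplace variant with scale parameter b is included for comparison.

```latex
% Assumed illustration, not the literal patent formula:
% deviation weighted by the noise variance plus an additive log-sigma term
L(y, y') = \frac{\lVert y - y' \rVert^{2}}{2\sigma^{2}} + \log\sigma
% Laplace noise model with scale parameter b
L_{\mathrm{Laplace}}(y, y') = \frac{\lvert y - y' \rvert}{b} + \log b
```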
In a further advantageous embodiment, at least one learning group of input variables and/or at least one learned value of the prediction includes at least one measured value of a physical measured variable. The measured value can, in particular, be determined by means of a sensor which records a physical effect whose type and/or intensity is characterized by the physical measured variable. The KI module then also learns the unavoidable variability in the physical detection of the measured variable.
The invention also relates to a method for operating a KI module. In a first phase, the KI module is trained according to the described training method. In a second phase, physical measurement data are detected by means of at least one sensor. The physical measurement data are input into the KI module as input variables. At least one actuator which causes at least one mechanical movement is actuated as a function of the prediction of the output variable provided by the KI module and/or as a function of the uncertainty of this prediction. The resulting mechanical movement is then more appropriate to the situation characterized by the input variables.
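Purely for illustration, the second phase could be organized as in the following sketch; the sensor and actuator interfaces (`sensor.read`, `actuator.apply`, `actuator.apply_conservative`) and the threshold-based decision rule are assumptions, not part of the patent.

```python
# Illustrative run-time sketch (hypothetical sensor/actuator interfaces):
# measurement data -> trained KI module -> actuation, where the uncertainty
# of the prediction is also taken into account.
def operate(model, sensor, actuator, sigma_threshold=0.5):
    while True:
        x = sensor.read()                    # physical measurement data
        y_pred, log_sigma = model(x)         # prediction and its uncertainty
        sigma = float(log_sigma.exp())
        if sigma < sigma_threshold:
            actuator.apply(y_pred)           # act on a confident prediction
        else:
            actuator.apply_conservative()    # fall back to cautious behaviour
```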
During training according to the described training method, a data set of parameters is generated with which the behavior of the internal processing chain of the KI module can be specified such that, in normal operation of the specific application that was the subject of the training, the value of the continuous output variable of interest is predicted more accurately. An existing KI module can also be upgraded later with such a data set. This data set is therefore a stand-alone product with its own customer benefit. For example, it can be offered as an external service to derive, from a set of physically measured input variables, a data set of parameters for the KI module that enables the KI module to predict more accurately. The invention therefore also relates to a data set of parameters which specify the behavior of the internal processing chain of a KI module and which have been obtained using the described training method.
The invention also relates to a KI module having an internal processing chain whose behavior is specified by parameters. The KI module is designed to convert a set of input variables into at least a prediction of a continuous output variable by means of the internal processing chain. A first output module is provided, which is configured to output the prediction. Additionally, a second output module is provided, which is configured to output the uncertainty of the prediction.
The first output module and the second output module are connected in parallel. This is to be understood at least in the sense that the two output modules are supplied with the same information by the part of the internal processing chain of the KI module connected upstream of them and operate independently of each other.
In a particularly advantageous embodiment, the internal processing chain of the KI module comprises an artificial neural network (KNN) having a plurality of successive layers, wherein the first and second output modules each take their input from the same layer of the KNN. As described above, the inputs of the two output modules can then already be strongly reduced in dimension compared with the original set of input variables. The output modules no longer need to bridge a large gradient in dimension. If one starts from data whose dimension is not too large, the regression to the final result of, for example, one to three dimensions is also better motivated.
In a further particularly advantageous embodiment, the first output module and the second output module each comprise one or more layers of a KNN. These output modules may, for example, each contain a KNN of their own, but they may also be part of the KNN in the internal processing chain of the KI module.
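One way such a structure could look in code is sketched below; the layer sizes, the convolutional trunk and the names (`UncertaintyAwareRegressor`, `head_log_sigma`) are assumptions chosen for illustration, not the architecture of the patent.

```python
# Illustrative sketch (assumptions, not the patent's architecture): a KNN with
# a shared internal processing chain whose last layer feeds two output modules
# connected in parallel - one for the prediction, one for log(sigma).
import torch.nn as nn

class UncertaintyAwareRegressor(nn.Module):
    def __init__(self, out_dim: int = 1):
        super().__init__()
        # Internal processing chain: convolution and pooling layers reduce the
        # dimension of the input step by step.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # First output module: prediction of the continuous output variable.
        self.head_prediction = nn.Linear(32, out_dim)
        # Second output module: log standard deviation of that prediction,
        # fed from the same layer of the trunk as the first output module.
        self.head_log_sigma = nn.Linear(32, out_dim)

    def forward(self, x):
        features = self.trunk(x)   # same information for both output modules
        return self.head_prediction(features), self.head_log_sigma(features)
```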
As described above, the KI module can be employed particularly advantageously in applications in which the input variables include measurement data obtained from physical observation of a spatial detection region by at least one sensor. The continuous output variable whose value is predicted by the KI module then comprises the position and/or size of an object, and/or of a region preselected for the localization or classification of an object. As explained above, the training makes the KI module more robust not only with respect to situation-independent sensor noise but also with respect to uncertainty that depends on the respective situation.
The sensor may in particular be mounted on a vehicle. The KI module can then be used, for example, to determine the distance of recognized objects from the vehicle for the purposes of a system for at least partially automated driving or of a driving assistance system. This information can, for example, be used to track the trajectory of such objects and to assess whether the vehicle has to change its behavior in order not to collide with that trajectory.
As explained above, the measurement data may comprise, in particular, image recordings, radar data and/or LIDAR data. These measurement methods have in common that, with increasing distance to the object, generally less measurement data is available from which the distance can be deduced. The statistics of the measurement data accordingly become worse, which in turn reduces the accuracy of the distance determination. This degradation can be monitored if the described KI module is used. For this, the KI module does not necessarily have to have been trained with the described training method: even without it, at least a measure of the uncertainty can be obtained. If, however, the KI module has been trained with the described training method, this uncertainty becomes significantly lower.
The method for training, just like the method for operating, can in particular be implemented in computer-supported form, that is, as software, which represents a stand-alone product with its own customer benefit. Through the training it implements, the software upgrades the KI module so that a more accurate prediction of the continuous output variable of interest can be obtained from given physical measurement data. The invention therefore also relates to a computer program containing machine-readable instructions which, when executed on a computer and/or on a control device, cause the computer or the control device to carry out one of the described methods. The invention likewise relates to a machine-readable data carrier and/or a download product containing the computer program.
The invention also relates to a control device with the computer program and/or with the machine-readable data carrier and/or download product. Alternatively or in addition, the control device can be specifically designed in any other way to carry out one of the described methods, for example by implementing the functionality of the method in an application-specific integrated circuit (ASIC).
Drawings
In the following, further measures improving the invention are presented together with the description of preferred embodiments of the invention with reference to the figures. In the figures:
FIG. 1 illustrates an embodiment of a method 100;
FIG. 2 illustrates an embodiment of the KI module 1;
FIG. 3 illustrates an exemplary scenario in which the KI module 1 can be used;
FIG. 4 shows an exemplary situation with a plurality of objects 62, 62', 62'', about which statements can be made with different degrees of certainty.
Detailed Description
According to FIG. 1, in step 110 of the method 100 a predefined set of learning groups 2a of input variables 2 is input into the KI module 1. In the internal processing chain 3 of the KI module 1, a prediction 41 of the continuous output variable of interest is determined in step 120 for each learning group 2a of input variables 2. With the KI module 1, the uncertainty 42 of the prediction 41 is additionally determined in step 130.
This can likewise take place within the internal processing chain 3 of the KI module 1, as depicted by way of example in FIG. 1, and is then trained along when the KI module 1 is trained. However, it is not mandatory that the uncertainty is evaluated in the same internal processing chain 3 that is already responsible for evaluating the prediction 41.
When determining the uncertainty 42 in step 130, the learning group 2a and/or the prediction 41 obtained from it can be taken into account individually or in an arbitrarily weighted combination, as depicted by way of example in FIG. 1.
In the example shown in FIG. 1, the uncertainty is modeled as noise according to block 131, and the standard deviation σ of the noise is included in the evaluated uncertainty 42 according to block 132.
In step 140, the prediction 41 of the continuous output variable is compared with the learned values 41a, which are assigned to the respective learning group 2a of input variables 2. The deviation 41' is determined in the process.
In step 150, an error function 5 is evaluated, which depends not only on the deviation 41' but also on the uncertainty 42. The error function assigns to the group, here denoted M, with the parameters 31 of the internal processing chain 3 of the KI module 1, and to the prediction 41, here denoted y, a value corresponding to the deviation 41' of the prediction 41 from the learned value 41a, here denoted y'. Using the standard deviation σ of the noise, this expression can be extended to a new error function 5. The error function 5 may, as preferred, additionally be augmented with a regularizer, for example an L2 regularizer or dropout. The error function 5 can furthermore be extended to an entire batch Y of n predictions 41: the network output matrix Y collects all predictions 41 = y1, …, yn, and the extended error function 5 can then be written for the whole batch. (The formula images of the original publication are not reproduced here.)
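Based solely on the surrounding description (a deviation term, an extension by σ with an additive log term, and a batch of n predictions), a loss of the following form would be consistent with it; this is an assumed reconstruction, not the literal formula of the patent.

```latex
% Assumed reconstruction (not the literal formulas of the patent):
% per-prediction error function with prediction y_i, learned value y_i'
% and standard deviation sigma_i
L(M, y_i) = \frac{\lVert y_i - y_i' \rVert^{2}}{2\sigma_i^{2}} + \log\sigma_i
% extension to the whole batch Y = (y_1, \ldots, y_n) of n predictions
L(M, Y) = \frac{1}{n}\sum_{i=1}^{n}
          \left(\frac{\lVert y_i - y_i' \rVert^{2}}{2\sigma_i^{2}} + \log\sigma_i\right)
```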
in step 160, new values for the parameters 31 of the internal processing chain 3 for the KI module 1 are determined within the scope of the optimization with the following objectives: the value of the error function 5 is reduced in the case of the new input 110 of the learning group 2 a.
FIG. 2 illustrates an embodiment of the KI module 1. The internal processing chain 3 of the KI module 1, whose behavior is specified by parameters 31, comprises an artificial neural network (KNN) having a plurality of successive layers, of which only three layers 3a-3c are depicted by way of example in FIG. 2.
A first output module 4a is provided, which is designed to output the prediction 41. A second output module 4b is provided, which is designed to output the uncertainty 42. The two output modules 4a and 4b are connected in parallel in the sense that they take their inputs from the same layer 3c of the internal processing chain 3.
In addition to the output modules 4a and 4b, further output modules can be provided which, for example, carry out a classification. These further output modules can likewise take their inputs from the same layer 3c of the internal processing chain 3, but also from the other layers 3a, 3b.
FIG. 3 illustrates an exemplary scenario in which a trained KI module 1 may be used. The KI module 1 is here mounted in a vehicle 7, which also carries a sensor 61. The sensor may, for example, be a camera, a radar sensor or a LIDAR sensor. LIDAR provides spatial depth information and is therefore particularly well suited to recognizing three-dimensional objects. The sensor 61 monitors the detection region 6 in front of the vehicle 7 in the direction of travel and forwards its measurement data, here the intensity values of image pixels, as input variables 2 to the KI module 1. From these, the KI module 1 determines a prediction 41 of the output variable of interest and the associated uncertainty 42 of this prediction.
In the example shown in FIG. 3, the position 62a and/or the size 62b of an object 62 in the detection region 6 may, for example, be output variables of interest. However, the position 63a and/or the size 63b of a region 63 preselected for further classification and/or localization of the object 62 may, for example, also be output variables of interest.
FIG. 4 shows an exemplary situation in which the uncertainty 42 with which a prediction 41 can be made about the properties of the different objects 62, 62', 62'' differs markedly between these objects. In this example, the objects 62, 62', 62'' are vehicles on the roadway 8, seen from the perspective of a following vehicle not depicted in FIG. 4.
For the vehicle 62 traveling immediately in front of the following vehicle, a prediction 41 with minimal uncertainty 42 is possible, because this vehicle 62 is clearly visible. For the vehicle 62', which is farther away, the uncertainty 42 is already greater, since this vehicle 62' appears smaller, for example in the camera image, and is therefore represented by fewer pixels. For the vehicle 62'', the uncertainty is greatest, because this vehicle is not only farthest away but is also covered by the vehicle 62' to about half, which is indicated in FIG. 4 by the partially dashed outline of the vehicle 62''.

Claims (17)

1. A method (100) for training a KI module (1) which is designed to convert a set of input variables (2) into at least a prediction (41) of a continuous output variable by means of an internal processing chain (3), wherein the behavior of the internal processing chain (3) is specified by parameters (31), the method having the following steps:
inputting (110) a predefined set of learning groups (2 a) of input variables (2) into the KI module (1);
determining (120), for each learning group (2 a), a prediction (41) of the continuous output variable with the KI module (1);
additionally determining (130) an uncertainty (42) of the prediction (41) with the KI module (1);
comparing (140) the prediction (41) provided by the KI module (1) with learned values (41 a) of the prediction, wherein the learned values are assigned to the respective learning groups (2 a) of input variables (2);
evaluating (150) an error function (5) which depends not only on the deviation (41') of the prediction (41) from the learned value (41 a) determined in the comparison (140) but also on the uncertainty (42) of the prediction (41);
optimizing (160) the parameters (31) in such a way that the value of the error function (5) is reduced when the learning groups (2 a) are input (110) again.
2. The method (100) according to claim 1, wherein a KI module (1) is selected which contains at least one artificial neural network, KNN, as its internal processing chain (3).
3. The method (100) according to one of claims 1 to 2, wherein a KI module (1) is selected which is designed to determine the prediction (41) and/or the uncertainty (42) of the prediction (41) by means of a regression.
4. Method (100) according to one of claims 1 to 3, wherein the uncertainty (42) of the prediction (41) is modeled (131) as noise, and wherein the standard deviation σ of the noise is included (132) in the determined uncertainty (42).
5. The method (100) according to claim 4, wherein an error function (5) is selected which contains the logarithm of the standard deviation σ in an additive term.
6. The method (100) according to one of claims 1 to 5, wherein at least one learning group (2 a) of input variables (2) and/or at least one learned value (41 a) of the prediction comprises at least one measured value of a physical measured variable.
7. Method for running a KI module (1), wherein the KI module (1) is trained in a first phase using the method (100) according to one of claims 1 to 6 and wherein in a second phase
Detecting physical measurement data with at least one sensor;
inputting the physical measurement data as input variables (2) into the KI module (1); and
at least one actuator causing at least one mechanical movement is actuated as a function of the prediction (41) of the output variable provided by the KI module (1) and/or as a function of the uncertainty (42) of the prediction (41).
8. Data set of parameters (31) specifying the behavior of the internal processing chain (3) of a KI module (1), obtained with the method (100) according to one of claims 1 to 6.
9. A KI module (1) having an internal processing chain (3) whose behavior is specified by parameters (31), wherein the KI module (1) is designed to convert a set of input variables (2) into at least a prediction (41) of a continuous output variable by means of the internal processing chain (3), wherein a first output module (4 a) is provided, which is designed to output the prediction (41), wherein a second output module (4 b) is provided, which is designed to output an uncertainty (42) of the prediction (41), and wherein the first output module (4 a) and the second output module (4 b) are connected in parallel at least in the sense that the two output modules (4 a, 4 b) are supplied with the same information by the part of the internal processing chain (3) of the KI module (1) connected upstream of them and work independently of each other.
10. The KI module (1) according to claim 9, wherein the internal processing chain (3) of the KI module (1) comprises an artificial neural network, KNN, having a plurality of successive layers (3 a-3 c), wherein the first output module (4 a) and the second output module (4 b) each take their input from the same layer (3 c) of the KNN.
11. The KI module (1) according to claim 10, wherein the first output module (4 a) and the second output module (4 b) each comprise one or more layers of KNN.
12. Use of the KI module (1) according to one of claims 9 to 11, wherein the input variables (2) comprise measurement data which have been obtained from physical observation of a spatial detection region (6) by at least one sensor (61), wherein the continuous output variable comprises: a position (62 a) and/or a size (62 b) of an object (62), and/or a position (63 a) and/or a size (63 b) of a region (63) preselected for a localization or classification of the object (62).
13. Use according to claim 12, wherein the sensor (61) is fitted on a vehicle (7).
14. Use according to one of claims 12 to 13, wherein the measurement data comprises image recordings, radar data and/or LIDAR data.
15. Computer program comprising machine-readable instructions which, when executed on a computer and/or on a control device, cause the computer or the control device to carry out the method according to one of claims 1 to 7.
16. Machine-readable data carrier and/or download product with a computer program according to claim 15.
17. A control device and/or computer, wherein the control device and/or computer: having a computer program according to claim 15; and/or with a machine-readable data carrier and/or download product according to claim 16; and/or otherwise specifically configured for carrying out the method according to one of claims 1 to 7.
CN201911219144.5A 2018-12-04 2019-12-03 Evaluating measurement parameters with a KI module taking into account measurement uncertainties Pending CN111275068A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102018220941.3A DE102018220941A1 (en) 2018-12-04 2018-12-04 Evaluation of measured variables with AI modules taking into account measurement uncertainties
DE102018220941.3 2018-12-04

Publications (1)

Publication Number Publication Date
CN111275068A true CN111275068A (en) 2020-06-12

Family

ID=70680882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911219144.5A Pending CN111275068A (en) 2018-12-04 2019-12-03 Evaluating measurement parameters with a KI module taking into account measurement uncertainties

Country Status (2)

Country Link
CN (1) CN111275068A (en)
DE (1) DE102018220941A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019217300A1 (en) 2019-11-08 2021-05-12 Robert Bosch Gmbh Method for training an artificial neural network, computer program, storage medium, device, artificial neural network and application of the artificial neural network
DE102020207564A1 (en) 2020-06-18 2021-12-23 Robert Bosch Gesellschaft mit beschränkter Haftung Method and apparatus for training an image classifier
DE102020215539A1 (en) 2020-12-09 2022-06-09 Robert Bosch Gesellschaft mit beschränkter Haftung Determining the robustness of an object detector and/or classifier for image data
DE102021200030A1 (en) * 2021-01-05 2022-07-07 Volkswagen Aktiengesellschaft Method, computer program and device for operating an AI module

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6314414B1 (en) 1998-10-06 2001-11-06 Pavilion Technologies, Inc. Method for training and/or testing a neural network with missing and/or incomplete data
US7099796B2 (en) 2001-10-22 2006-08-29 Honeywell International Inc. Multi-sensor information fusion technique
US11256982B2 (en) 2014-07-18 2022-02-22 University Of Southern California Noise-enhanced convolutional neural networks
EP3171297A1 (en) 2015-11-18 2017-05-24 CentraleSupélec Joint boundary detection image segmentation and object recognition using deep learning

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378468A (en) * 2021-06-18 2021-09-10 中国科学院地理科学与资源研究所 Weight optimization method and system of multidimensional geoscience parameters
CN113378468B (en) * 2021-06-18 2024-03-29 中国科学院地理科学与资源研究所 Weight optimization method and system for multidimensional geoscience parameters

Also Published As

Publication number Publication date
DE102018220941A1 (en) 2020-06-04

Similar Documents

Publication Publication Date Title
CN111275068A (en) Evaluating measurement parameters with a KI module taking into account measurement uncertainties
US11823429B2 (en) Method, system and device for difference automatic calibration in cross modal target detection
US20210004966A1 (en) Method for the Assessment of Possible Trajectories
US11392804B2 (en) Device and method for generating label objects for the surroundings of a vehicle
Bhatt et al. An analysis of the performance of Artificial Neural Network technique for apple classification
US20210097344A1 (en) Target identification in large image data
WO2020202505A1 (en) Image processing apparatus, image processing method and non-transitoty computer readable medium
CN112149491A (en) Method for determining a trust value of a detected object
US20220026557A1 (en) Spatial sensor system with background scene subtraction
Agarwal et al. Efficient NetB3 for Automated Pest Detection in Agriculture
US20230267549A1 (en) Method of predicting the future accident risk rate of the drivers using artificial intelligence and its device
JP2023010697A (en) Contrastive predictive coding for anomaly detection and segmentation
US20240046614A1 (en) Computer-implemented method for generating reliability indications for computer vision
Sonka et al. Dual approach for maneuver classification in vehicle environment data
CN112444787A (en) Processing of radar signals with suppression of motion artifacts
CN112150344A (en) Method for determining a confidence value of an object of a class
Jaafer et al. Data augmentation of IMU signals and evaluation via a semi-supervised classification of driving behavior
US20220262103A1 (en) Computer-implemented method for testing conformance between real and synthetic images for machine learning
CN113869100A (en) Identifying objects in images under constant or unchanging motion relative to object size
JP2021197184A (en) Device and method for training and testing classifier
Daya Sagar et al. Smart agricultural solutions through machine learning
CN113065428A (en) Automatic driving target identification method based on feature selection
Tronchin et al. Explainable ai for car crash detection using multivariate time series
Nordenmark et al. Radar-detection based classification of moving objects using machine learning methods
Hendrix et al. On Training Set Selection in Spatial Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination