WO2020152060A1 - Dispositif et procédé d'apprentissage d'un réseau neuronal - Google Patents

Dispositif et procédé d'apprentissage d'un réseau neuronal

Info

Publication number
WO2020152060A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
evaluation device
output data
data
input
Prior art date
Application number
PCT/EP2020/051170
Other languages
German (de)
English (en)
Inventor
Michael Feigenbutz
Original Assignee
Rockwell Collins Deutschland Gmbh
Priority date
Filing date
Publication date
Application filed by Rockwell Collins Deutschland Gmbh filed Critical Rockwell Collins Deutschland Gmbh
Priority to US 17/424,551 (published as US20220121933A1)
Priority to EP 20701932.4 (published as EP3915054A1)
Publication of WO2020152060A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Definitions

  • The invention relates to a device and a method for training a neural network.
  • Neural networks are known. They are implemented for technical purposes in particular as artificial neural networks and serve, for example, for information processing in applications in which there is little or no explicit or systematic knowledge of the problem to be solved. These are, for example, recognition methods, such as text recognition, image recognition, object recognition and face recognition, in which a few hundred thousand to a few million pixels have to be converted into a comparatively small number of permitted results. (Artificial) neural networks are also used in control engineering to replace conventional controllers or to give them setpoints that the network has determined from a self-developed prognosis of the process. The possible applications are not limited to technical or technology-related areas. When predicting changes in complex systems, neural networks are often used as a support, for example for the early detection of emerging tornadoes or for estimating the further development of economic processes.
  • To achieve the desired functionality of a neural network, it is necessary that the neural network be taught or trained. Accordingly, learning methods are known which serve to cause a neural network to generate associated output patterns for certain input patterns. The learning processes can be classified into supervised learning, unsupervised learning and reinforcement learning.
  • Neural networks can be relatively inexpensive to set up, implement and operate, so that they can be used to replace conventional (expensive) systems.
  • The invention is therefore based on the object of specifying a device and a method with which a neural network can be trained efficiently.
  • An apparatus for training a neural network is specified, with a neural network to be trained for providing a predetermined functionality for processing input data, having an input for supplying the input data and an output for outputting output data serving as results, and with an evaluation device for providing a predetermined functionality, having an input for supplying input data and an output for outputting output data serving as results.
  • The evaluation device and the neural network are arranged in parallel with one another, a comparison device being provided for comparing the output data of the evaluation device with the output data of the neural network and for determining the quality of the output data of the neural network in relation to the output data of the evaluation device. Furthermore, a feedback device is provided for reporting the quality of the output data determined by the comparison device back to the neural network.
  • A method for training a neural network is also specified, with the steps of: providing a neural network to be trained for providing a predetermined functionality for processing input data, having an input for supplying the input data and an output for outputting output data serving as results;
  • providing an evaluation device for providing a predetermined functionality, with an input for supplying input data and an output for outputting output data serving as results; operating the evaluation device and the neural network in parallel; comparing the output data of the evaluation device with the output data of the neural network and determining the quality of the output data of the neural network in relation to the output data of the evaluation device; and reporting the determined quality of the output data back to the neural network.
  • The evaluation device and the neural network can accordingly be arranged or connected in parallel.
  • The evaluation device can be a known, conventional system which, possibly in conformity with specified technical rules, already reliably fulfills the desired functionality.
  • The evaluation device can be relatively expensive, so that the aim is to replace it with the neural network, which must first be trained.
  • The evaluation device takes over the training of the neural network by being operated in parallel with it.
  • The evaluation device can also be referred to as a training device.
  • The evaluation device can be a conventional device that is not based on a neural network. It is also possible, however, that the evaluation device itself has a neural network, which, however, is then already fully trained.
  • The term "functionality" can be understood to mean any applications, tasks or goals which are to be provided by the evaluation device on the one hand and the neural network to be trained on the other hand.
  • Both systems are therefore confronted with input situations that are as identical as possible. Only then is it possible to transfer the behavior and knowledge of one system (the proven evaluation device) to the other system (the neural network to be trained). The more identical the input situations, and thus the resulting input data, the more reliably the neural network can be trained.
  • The comparison device is used to compare the (essentially correct and proven) output data of the evaluation device with the output data of the neural network and thereby to determine the quality of the output data of the neural network in relation to the output data of the evaluation device.
  • In particular, the data are evaluated so that the feedback described below can take place with the aid of the feedback device.
  • The feedback device is designed to report the quality of the output data determined by the comparison device back to the neural network in order to achieve a training effect and thus subsequently to improve the quality of the output data of the neural network.
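  • As an illustration only, the following is a minimal sketch of such a parallel arrangement in Python/PyTorch. All names (the stand-in `evaluation_device`, the `sensor_stream` generator, the network size and the class count) are illustrative assumptions and not prescribed by the description; the loss function stands in for the comparison device and the optimizer step for the feedback device.

```python
import torch
import torch.nn as nn

# Neural network to be trained (cf. neural network 8); the architecture is an
# arbitrary placeholder.
student = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 3),                      # e.g. three object classes
)
optimizer = torch.optim.SGD(student.parameters(), lr=0.01)
comparison = nn.CrossEntropyLoss()          # stands in for comparison device 12

def sensor_stream(n=1000):
    """Stand-in for the two sensor devices observing the same real situation;
    random tensors replace real video frames and LIDAR scans here."""
    for _ in range(n):
        yield torch.rand(1, 64, 64), torch.rand(1, 64, 64)

def evaluation_device(frame):
    """Stand-in for the proven conventional system (evaluation device 1);
    a real implementation would return its recognized object class."""
    return torch.randint(0, 3, (1,))

for frame, lidar_scan in sensor_stream():
    teacher_result = evaluation_device(frame)    # output data of the evaluation device
    student_logits = student(lidar_scan)         # output data of the neural network

    # Quality of the network output relative to the evaluation device's output
    loss = comparison(student_logits, teacher_result)

    # Error feedback to the neural network (backpropagation)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```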
  • Quality of the output data can be understood, for example, to mean the correctness of the probabilities (also referred to as the "prediction") supplied by the neural network.
  • Neural networks are usually trained to determine certain probabilities and to draw conclusions from them. These probabilities can be compared with the much more exact data of the (conventional) evaluation device, the result being reported back to the neural network.
  • For example, a dog is presented to a neural network via a connected video camera. Based on its training status, the neural network states that the object presented is 80% likely to be a dog, 10% a cat or 10% a fish. The quality of these results is then determined and fed back into the neural network as feedback.
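  • Purely as an illustration of how such a quality value could be computed, the following hedged sketch compares the prediction above with the result of the evaluation device; the use of a cross-entropy term as the quality measure is an assumption, since the description only requires that the quality of the output data be determined.

```python
import torch
import torch.nn.functional as F

# Prediction of the partially trained network for the presented object,
# following the example above (classes: dog, cat, fish).
prediction = torch.tensor([[0.80, 0.10, 0.10]])

# Result of the proven evaluation device: the object is a dog (class 0).
teacher_result = torch.tensor([0])

# One possible quality measure (an assumption): the cross-entropy between the
# network's probabilities and the evaluation device's class; a smaller value
# means a higher quality of the network's output data.
quality_error = F.nll_loss(torch.log(prediction), teacher_result)
print(quality_error.item())  # ~0.22 for the values above
```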
  • Conventional methods can be used to feed the quality information on the output data back to the neural network, such as, for example, backpropagation, feedback or error feedback.
  • The neural network learns on the basis of the given learning material. Accordingly, the weights between the individual neurons are usually modified. Learning rules specify the way in which the neural network makes these changes.
  • The device according to the invention and the method according to the invention make it possible to train neural networks automatically and to dispense with the prior generation of test data sets. Instead, real data records, which can be used to train the neural network, are generated during operation of the (conventional) evaluation device.
  • An already existing solution (the conventional evaluation device) is combined in the learning phase (training phase) with a new solution based on a neural network. Different sensor systems may be used for the two combined systems.
  • The existing solution takes over the training of the new solution (the neural network).
  • The input and output parameters of the existing solution are used to generate the feedback (for example via so-called backpropagation) for the neural network.
  • The input data for the evaluation device and the input data for the neural network can each be provided by a sensor device.
  • Any type of measured-value acquisition is suitable as a sensor device, such as a (video) camera, a video sensor (e.g. IR), an imaging RADAR sensor, a LIDAR sensor, a 2D camera, a 3D camera or a microphone.
  • The sensor device is designed to generate the input data in such a way that they can be processed by the neural network or the evaluation device.
  • Other components can also be interposed in order to prepare the data accordingly, if this is necessary due to the characteristics of the sensor device.
  • The input data for the evaluation device and the input data for the neural network can also be provided by different sensor devices. It is therefore not absolutely necessary that both sensor devices be constructed identically. For example, it is possible to couple the evaluation device to a video sensor while the neural network receives input data from a LIDAR sensor; a sketch of how such data could be paired follows below.
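  • As a hedged sketch only: one way to ensure that both systems face input situations that are as identical as possible when two different sensor devices are used is to pair their samples by capture time. The timestamp-based pairing and the tolerance value are assumptions, not part of the description.

```python
from dataclasses import dataclass
from typing import Any, Iterable, Iterator, Tuple

@dataclass
class Sample:
    timestamp: float   # capture time in seconds
    data: Any          # e.g. a video frame or a LIDAR scan

def pair_by_time(video_samples: Iterable[Sample],
                 lidar_samples: Iterable[Sample],
                 max_skew: float = 0.05) -> Iterator[Tuple[Sample, Sample]]:
    """Yield (video, lidar) pairs whose capture times differ by at most
    max_skew seconds, so that the evaluation device and the neural network
    see essentially the same real situation."""
    lidar_iter = iter(lidar_samples)
    lidar = next(lidar_iter, None)
    for video in video_samples:
        # Skip LIDAR samples that are too old for the current video frame.
        while lidar is not None and lidar.timestamp < video.timestamp - max_skew:
            lidar = next(lidar_iter, None)
        if lidar is not None and abs(lidar.timestamp - video.timestamp) <= max_skew:
            yield video, lidar
```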
  • The evaluation device and the associated sensor device can deliver output data of satisfactory quality.
  • The evaluation device is a conventional or established system that has been able in the past to supply output data of a quality that meets the respective requirements.
  • The requirements can be specified, for example, by technical regulations or by the manufacturer.
  • "Satisfactory quality" means that the quality is sufficient to fulfill the intended purpose or the desired functionality.
  • The quality can also be regarded as satisfactory if a defined minimum recognition rate or, conversely, a defined maximum error rate with respect to the "objects" to be recognized is achieved.
  • The training phase can then be concluded.
  • The training of the neural network should be carried out as efficiently as possible and can therefore be ended when the neural network delivers results of satisfactory quality. In particular, this state can be reached when only slight or negligible deviations of the results of the neural network from those of the evaluation device are determined. It is also possible to count the number of iterations or feedback cycles and to assume that the neural network has been adequately trained when a predetermined number of data records (e.g. hundreds of thousands or millions) has been reached. In these cases, the completion of the training phase is established, so that the neural network can subsequently also be operated independently; one possible termination criterion is sketched below.
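  • The following is a hedged sketch of how such a termination criterion might be formalized; the agreement window, the thresholds and the record limit are illustrative assumptions, since the description only states that training can end once the quality is satisfactory or a predetermined number of data records has been processed.

```python
def training_finished(agreement_history, records_seen,
                      min_agreement=0.99, window=10_000,
                      max_records=1_000_000):
    """Return True when the neural network may be operated without the
    evaluation device.

    agreement_history: list of 0/1 flags, 1 if the network's result matched
                       the evaluation device's result for a data record.
    records_seen:      total number of data records processed so far.
    All threshold values are illustrative assumptions."""
    if records_seen >= max_records:
        return True                        # predetermined number of records reached
    recent = agreement_history[-window:]
    if len(recent) == window and sum(recent) / window >= min_agreement:
        return True                        # only negligible deviations remain
    return False

# Usage sketch: stop the parallel operation once the criterion is met,
# then separate the evaluation device and operate the network alone.
```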
  • The evaluation device can then be separated from the neural network, whereby the neural network can be operated autonomously without the evaluation device having to continue to be operated in parallel.
  • The neural network can then be operated in isolation, without the evaluation device.
  • The evaluation device can thus be removed from the arrangement.
  • Alternatively, the evaluation device and the neural network can continue to be operated in parallel.
  • The evaluation device and the neural network can then complement each other, so that the working quality of the overall system consisting of both can be improved. It is also possible to provide additional properties.
  • For example, a system with an evaluation device and a video sensor can be supplemented by a neural network with an imaging RADAR, IR or LIDAR sensor in order to recognize the objects of an input image with high precision from the sum of the findings.
  • Neural networks can be used for a wide variety of functionalities; the predetermined functionality can be selected from the group comprising: recognition of one or more objects; recognition of text, writing, images, patterns, vehicles, people or faces; recognition of spatial correlations; optimization processes; control and analysis of complex processes; early warning systems; optimization; time series analysis; language generation; data mining; machine translation; medical diagnostics; epidemiology; biometrics; sound systems; navigation with imaging sensors; recognition of temporal sequences; predictive maintenance; etc.
  • FIG. 1 shows a schematic representation of a conventional system with a conventional evaluation device
  • Figure 2 shows an inventive device for training a neural network
  • Figure 3 shows an example of an application of an already trained neural network.
  • Figure 1 shows a schematic representation of the structure of a conventional system with a conventional evaluation device 1, which is coupled to a video sensor 2.
  • The evaluation device 1, which is known per se, is designed to carry out video-based object recognition. It is thus able to recognize objects on the basis of input data 3 generated by the video sensor 2 and to supply information about identified (classified) objects 5 as output data 4.
  • A real situation 6 with real objects 7 is recorded by the video sensor 2 and supplied to the evaluation device 1 in the form of input data 3.
  • The functionality of the evaluation device 1 enables information about the real objects 7 to be determined from the input data 3 and output in the form of the output data 4, so that the identified objects 5 (recognized by the evaluation device 1) are determined as the results of the evaluation device 1.
  • Such proven systems for object recognition can be used, for example, for the recognition of traffic signs by cars or the recognition of pedestrians by autonomous vehicles. These systems can also be used, for example, to identify objects on conveyor belts.
  • FIG. 2 shows an example of a device according to the invention for training a neural network. Part of this device is the evaluation device 1 with the video sensor 2 already explained in connection with FIG. 1. The real objects 7 in the real situation 6 can thus be output by the evaluation device as output data 4 with the identified objects 5 as a result.
  • In parallel with the evaluation device 1, a neural network 8 (also referred to as a neural net) is arranged, which is to be trained by the evaluation device 1.
  • The neural network 8 is thus initially in an initial state in which it is not yet able to deliver satisfactory results.
  • In principle, the neural network 8 could also be coupled to a video sensor 2. In the specific example, however, the neural network 8 is coupled to a LIDAR sensor 9.
  • LIDAR is also referred to as LADAR.
  • LIDAR sensors have proven particularly useful for the detection of three-dimensional situations.
  • The LIDAR sensor 9 is confronted with the same real situation 6, and thus with the same real objects 7, as the video sensor 2 or the evaluation device 1.
  • The LIDAR sensor 9 thus supplies its own input data 10, which are processed in the neural network 8.
  • The results of the neural network 8 are provided as output data 11 and consist in particular of weightings. Corresponding findings regarding the identified objects 5 result from these results or weightings.
  • The results are usually also given as probabilities ("prediction").
  • Other sensors can also be coupled to the neural network 8 if this makes sense for the planned application.
  • The neural network 8 is still incompletely trained and, of the two real objects 7, has so far only recognized the square as an identified object 5, but not the triangle as the other real object 7. For reliable recognition of the triangle, the neural network 8 must therefore be trained further.
  • The results of the evaluation device 1 and of the neural network 8, in the form of the output data 4, 11, are fed to a comparison device 12, which carries out an evaluation of the results of the neural network 8, in particular in comparison with the results of the evaluation device 1.
  • The output data 4, 11 can be compared with one another in this way in order to determine the quality of the output data 11 of the neural network 8 in relation to the output data 4 of the evaluation device 1.
  • The findings of the comparison device 12 are fed back to the neural network 8 with the aid of a feedback device 13.
  • The feedback serves in particular as error feedback in order to correct errors in the neural network.
  • The feedback is often also referred to as "backpropagation" and can be implemented by known methods.
  • A proven method is, for example, the gradient descent method, which starts with a randomly selected weight combination; for this combination the gradient is determined and followed downhill by a predetermined step length, the learning rate, which changes the weights accordingly. The gradient is then determined again for the newly obtained weight combination and the weights are modified once more. This process is repeated until a local or global minimum is reached, or until a predetermined maximum number of repetitions has been reached; a sketch of this procedure follows below.
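  • Purely as an illustration, the gradient descent procedure described above might look as follows; the quadratic error surface in the usage example and all parameter values are assumptions, not part of the description.

```python
import numpy as np

def gradient_descent(grad_fn, n_weights, learning_rate=0.1,
                     max_iterations=10_000, tolerance=1e-6, seed=0):
    """Plain gradient descent as described above: start from a randomly
    selected weight combination, determine the gradient, descend by the
    learning rate, and repeat until a (local) minimum or the maximum
    number of repetitions is reached."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=n_weights)        # randomly selected weight combination
    for _ in range(max_iterations):
        gradient = grad_fn(weights)
        step = learning_rate * gradient
        weights = weights - step                # modify the weights accordingly
        if np.linalg.norm(step) < tolerance:    # (local) minimum reached
            break
    return weights

# Usage example with an assumed quadratic error surface E(w) = (w - 3)^2,
# whose gradient is 2 * (w - 3); the result converges towards w = 3.
minimum = gradient_descent(lambda w: 2 * (w - 3.0), n_weights=1)
```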
  • The neural network 8 is thus trained so that the results it generates increasingly approach the results of the proven, conventional evaluation device 1.
  • The evaluation device 1 is then no longer required.
  • The neural network 8 can be operated independently according to the structure of FIG. 3 and delivers results of sufficient quality.
  • The neural network 8 is then capable of identifying both the square and the triangle as identified objects 5.
  • Since the neural network 8 can be operated in parallel with the conventional evaluation device 1, it is readily possible to train the neural network 8 in real operation, that is to say during real use of the evaluation device 1.
  • the "test data” generated by the evaluation device 1 are ret usable data that can be used as a "by-product" for training the neural network 8. It is therefore not necessary to set up an independent training phase. Rather, the training could take place in normal normal operation of the evaluation device 1.
  • The use of new sensors with evaluation by a neural network is simplified in particular where little or no training data are available.
  • The method can also make it possible to continue training the neural network over a longer period of time in order to increase its reliability further and further.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A method for training a neural network comprises the following steps: providing a neural network (8) to be trained for providing a predetermined functionality for processing input data (10), having an input for supplying the input data (10) and an output for outputting output data (11) serving as results; providing an evaluation device (1) for providing a predetermined functionality, having an input for supplying input data (3) and an output for outputting output data (4) serving as results; operating the evaluation device (1) and the neural network (8) in parallel; comparing the output data (4) of the evaluation device (1) with the output data (11) of the neural network (8) and determining the quality of the output data (11) of the neural network (8) in relation to the output data (4) of the evaluation device (1); and feeding the quality of the output data (11) back to the neural network (8).
PCT/EP2020/051170 2019-01-23 2020-01-17 Dispositif et procédé d'apprentissage d'un réseau neuronal WO2020152060A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/424,551 US20220121933A1 (en) 2019-01-23 2020-01-17 Device and Method for Training a Neural Network
EP20701932.4A EP3915054A1 (fr) 2019-01-23 2020-01-17 Dispositif et procédé d'apprentissage d'un réseau neuronal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019101617.7 2019-01-23
DE102019101617.7A DE102019101617A1 (de) 2019-01-23 2019-01-23 Vorrichtung und Verfahren zum Trainieren eines Neuronalen Netzwerks

Publications (1)

Publication Number Publication Date
WO2020152060A1 true WO2020152060A1 (fr) 2020-07-30

Family

ID=69192035

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/051170 WO2020152060A1 (fr) 2019-01-23 2020-01-17 Dispositif et procédé d'apprentissage d'un réseau neuronal

Country Status (4)

Country Link
US (1) US20220121933A1 (fr)
EP (1) EP3915054A1 (fr)
DE (1) DE102019101617A1 (fr)
WO (1) WO2020152060A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160078339A1 (en) * 2014-09-12 2016-03-17 Microsoft Technology Licensing, Llc Learning Student DNN Via Output Distribution
US20170011738A1 (en) * 2015-07-09 2017-01-12 Google Inc. Generating acoustic models

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ASAMI TAICHI ET AL: "Domain adaptation of DNN acoustic models using knowledge distillation", 2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE, 5 March 2017 (2017-03-05), pages 5185 - 5189, XP033259399, DOI: 10.1109/ICASSP.2017.7953145 *
ASIT MISHRA ET AL: "Apprentice: Using KD Techniques to Improve Low-Precision Network Accuracy", 15 November 2017 (2017-11-15), XP055683717, Retrieved from the Internet <URL:https://arxiv.org/pdf/1711.05852.pdf> [retrieved on 20200407] *
GEOFFREY HINTON ET AL: "Distilling the Knowledge in a Neural Network", CORR (ARXIV), vol. 1503.02531v1, 9 March 2015 (2015-03-09), pages 1 - 9, XP055549014 *
JAISWAL BHAVESH ET AL: "Deep neural network compression via knowledge distillation for embedded applications", 2017 NIRMA UNIVERSITY INTERNATIONAL CONFERENCE ON ENGINEERING (NUICONE), IEEE, 23 November 2017 (2017-11-23), pages 1 - 4, XP033341337, DOI: 10.1109/NUICONE.2017.8325620 *
JONG-CHYI SU ET AL: "Adapting Models to Signal Degradation using Distillation", 29 August 2017 (2017-08-29), XP055674483, Retrieved from the Internet <URL:https://arxiv.org/pdf/1604.00433.pdf> [retrieved on 20200306] *
SEBASTIAN RUDER ET AL: "Knowledge Adaptation: Teaching to Adapt", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 7 February 2017 (2017-02-07), XP080746999 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3121250A1 (fr) 2021-03-25 2022-09-30 Airbus Helicopters Procédé d’apprentissage d’une intelligence artificielle supervisée destinée à identifier un objet prédéterminé dans l’environnement d’un aéronef
EP4086819A1 (fr) 2021-03-25 2022-11-09 Airbus Helicopters Procédé d'apprentissage d'une intelligence artificielle supervisée destinée à identifier un objet prédeterminé dans l'environnement d'un aéronef

Also Published As

Publication number Publication date
US20220121933A1 (en) 2022-04-21
EP3915054A1 (fr) 2021-12-01
DE102019101617A1 (de) 2020-07-23

Similar Documents

Publication Publication Date Title
EP3282399B1 Procede de reconnaissance ameliore d'anomalies de processus d'une installation technique et systeme de diagnostic correspondant
DE102017000536A1 (de) Zellsteuereinheit zum Feststellen einer Ursache einer Anomalie bei einer Fertigungsmaschine
DE102017000287A1 (de) Zellensteuerung und produktionssystem zum verwalten der arbeitssituation einer vielzahl von fertigungsmaschinen in einer fertigungszelle
EP2402827A1 Procédé et dispositif pour un contrôle de fonction d'un dispositif de reconnaissance d'objet d'un véhicule automobile
DE102019124018A1 (de) Verfahren zum Optimieren von Tests von Regelsystemen für automatisierte Fahrdynamiksysteme
DE102017006599A1 (de) Verfahren zum Betrieb eines Fahrzeugs
DE112020001369T5 (de) Gepulste synaptische elemente für gepulste neuronale netze
EP3825796A1 Procédé et dispositif de fonctionnement basé sur l'ia d'un système d'automatisation
EP4013574A1 Système et procédé d'automatisation destinés à la manipulation de produits
WO2020152060A1 Dispositif et procédé d'apprentissage d'un réseau neuronal
DE102018209108A1 (de) Schnelle Fehleranalyse für technische Vorrichtungen mit maschinellem Lernen
WO2020216621A1 Apprentissage de modules aptes à l'apprentissage avec des données d'apprentissage dont les étiquettes sont bruitées
DE102019215016A1 (de) Messanordnung, Verfahren zum Einrichten einer Messanordnung und Verfahren zum Betreiben einer Messanordnung
DE102017116016A1 (de) Kraftfahrzeug-Sensorvorrichtung mit mehreren Sensoreinheiten und einem neuronalen Netz zum Erzeugen einer integrierten Repräsentation einer Umgebung
WO2020057868A1 (fr) Procédé et dispositif servant à faire fonctionner un système de commande
AT519777B1 (de) Verfahren zur Erkennung des normalen Betriebszustands eines Arbeitsprozesses
EP3650964B1 Procédé de commande ou de régulation d'un système technique
EP3629242B1 Procédé de configuration d'un dispositif d'évaluation d'image ainsi que procédé d'évaluation d'image dispositif d'évaluation d'image
DE102020207564A1 (de) Verfahren und Vorrichtung zum Trainieren eines Bildklassifikators
EP3655934B1 Concept de surveillance d'un espace de stationnement
DE102019208922A1 (de) Verfahren und Vorrichtung zum Kontrollieren eines Produktionsprozesses
EP4246268B1 Procédé de détermination sécurisée d'un trajet de vol d'un véhicule aérien sans pilote et véhicule aérien sans pilote
EP3866135B1 Procédé de commande d'une installation de signalisation lumineuse
EP4111279A1 (fr) Transmission de bord à nuage à réduction de données basée sur des modèles de prédiction
DE102020209985A1 (de) Vorrichtung und Verfahren zum Ermitteln einer Umfeldinformation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20701932

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020701932

Country of ref document: EP

Effective date: 20210823