EP4346558A1 - Software-based, voice-controlled objective diagnostic tool for use in diagnosing a chronic neurological disorder - Google Patents

Software-based, voice-controlled objective diagnostic tool for use in diagnosing a chronic neurological disorder

Info

Publication number
EP4346558A1
Authority
EP
European Patent Office
Prior art keywords
analysis module
emotion
biomarker
diagnostic tool
test person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22732938.0A
Other languages
German (de)
English (en)
Inventor
Peter O. Owotoki
Leah W. Owotoki
David Lehmann
Diana Wanjiku
Moriah-Jane Lorentz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VitafluenceAi GmbH
Original Assignee
VitafluenceAi GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VitafluenceAi GmbH filed Critical VitafluenceAi GmbH
Publication of EP4346558A1


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/40Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4088Diagnosing of monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022Monitoring a patient using a global network, e.g. telephone networks, internet
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/162Testing reaction times
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/163Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4803Speech analysis specially adapted for diagnostic purposes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
    • A61B5/6898Portable consumer electronic devices, e.g. music players, telephones, tablet computers
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/725Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7253Details of waveform analysis characterised by using transforms
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/742Details of notification to user or communication with user or patient ; user input means using visual displays
    • A61B5/7425Displaying combinations of multiple images regardless of image source, e.g. displaying a reference anatomical image with a live image
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00Evaluating a particular growth phase or type of persons or animals
    • A61B2503/06Children, e.g. for attention deficit diagnosis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0204Acoustic sensors
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/117Identification of persons
    • A61B5/1171Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • A61B5/1176Recognition of faces
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/7475User input or interface means, e.g. keyboard, pointing device, joystick
    • A61B5/749Voice-controlled interfaces

Definitions

  • the invention relates to a software-based diagnostic tool for use in diagnosing a chronic, neurological disorder in a human using artificial intelligence, as well as a method for operating the diagnostic tool and a system comprising the diagnostic tool.
  • Chronic neurological disorders are common in humans. They manifest themselves in atypical intellectual development and/or atypical social behavior. Examples of such disorders are autism, attention deficit hyperactivity disorder (ADHD), schizophrenia, Alzheimer's disease, psychosis, etc. Autism is one of the best-known chronic neurological disorders, which is why it is considered below as an example, but representative of all chronic neurological disorders, as the starting point for the invention.
  • the full clinical term is autism spectrum disorder, abbreviated ASD (Autism Spectrum Disorder; ASS in the German original).
  • Autism manifests itself outwardly, especially in behavior and communication.
  • characteristic of this developmental disorder are, on the one hand, impaired social interaction, i.e. dealing and exchanging ideas with other people, and a restricted interest in repetitive, identical or similar processes, and, on the other hand, the verbal and non-verbal language of the autistic person, i.e. the voice and body language such as facial expressions, eye contact and gestures.
  • a reduction in intelligence can also often be observed, but there are also forms of autism in which the affected person is of average or even above-average intelligence.
  • Autism is classically diagnosed by a specialized physician, neurologist or therapist, who asks the potentially autistic patient a larger or smaller number of specially developed questions from a questionnaire and then observes and evaluates the answers and reactions.
  • diagnosis using a questionnaire is also disadvantageous because working through the questions takes a long time, for example between one and three hours, and because the questions and observations have to be adapted to the patient's age, regional language and ethnic background. The latter requires that the medical professional be familiar with the ethnic characteristics of the patient, because behavior as well as verbal and non-verbal communication differ from one people to another.
  • the object of the present invention is to provide a device, a system and an operating method that overcome the disadvantages mentioned and enable an objective, at least assistive diagnosis of a chronic neurological disorder, in particular autism and its associated neurological diseases, and that are preferably accessible at any time and from anywhere in the world, regardless of the language and ethnic origin of the person concerned.
  • the diagnostic tool according to the invention and the method used and executed by it are based on improvements in the state of the art and innovations in the field of artificial intelligence.
  • a cost-effective, user-friendly and rapid diagnosis is made with the aid of the diagnostic tool according to the invention and its operating method.
  • a biomarker is a measurable and therefore analyzable variable of a biological characteristic of a person, more precisely a variable that enables a qualitative or quantitative assessment of a physical, physiological or behavioral characteristic of a person.
  • a software-based diagnostic tool for use in diagnosing a chronic neurological disorder in a human subject using artificial intelligence comprising
  • a speech analysis module for determining characteristic values of a first biomarker, namely a vocal biomarker, from a speech signal of the test person
  • the operating software is set up to trigger the speech analysis module and the at least one further module one after the other and to feed their determined characteristic values to the overall result evaluation unit.
  • the speech analysis module includes
  • a voice signal trigger control which is set up to display individual images and/or individual videos or a text on an image display device for the test person in order to elicit at least one voice signal from the test person, in the form of a naming of an object contained in the respective individual image or individual video or in the form of reading the text aloud,
  • a voice recording unit which is set up to record the voice signal in an audio recording with the aid of a voice input device
  • a speech signal analyzer which is set up to first evaluate the speech signal in the audio recording as to which pitch occurs at which point in time, and then to determine a frequency distribution of the pitches over a number of frequency bands of a frequency spectrum under consideration, this frequency distribution forming the characteristic values of the first biomarker.
  • the overall result evaluation unit is set up to determine whether the test person has the chronic, neurological disorder on the basis of the characteristic values of the test person's biomarkers using a machine learning algorithm based on artificial intelligence by comparison with a multidimensional interface.
  • the interface can be understood as a mathematical hyperplane in a multidimensional space whose dimensions are defined by the number of characteristic values of all biomarkers.
  • the interface represents a mathematical boundary between the biomarker values of people with the chronic, neurological disorder and people without such a disorder.
  • the overall result evaluation unit is a classification model trained with biomarker values from comparison persons, which determines whether and with what degree of probability the determined biomarker values of the subject lie on the side of the interface associated with the comparison persons with the chronic neurological disorder or on the side of the interface associated with the comparison persons without the chronic neurological disorder.
  • the learning algorithm is preferably a support vector machine (SVM), a so-called random forest or a deep convolutional neural network algorithm, the learning algorithm having been trained with a number of first and second comparison data sets of characteristic values of the biomarkers, the first comparison data sets being assigned to a group of reference persons who have the chronic neurological disorder and the second comparison data sets being assigned to a group of reference persons who do not have the chronic neurological disorder.
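As a purely illustrative sketch of such a trained classification model (the feature layout, the file names and the use of scikit-learn are assumptions made here, not details from the patent), the interface between the two groups of reference persons could be learned and applied as follows:

```python
# Minimal sketch (assumption): training a classifier on biomarker feature vectors.
# Each row concatenates the characteristic values of all biomarkers of one
# reference person; the label says whether that person has the disorder.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_classifier(n_training_sets: int):
    """Pick a model family roughly along the lines of the surrounding paragraphs:
    an SVM for small data sets, a random forest for a few hundred to a few
    thousand comparison data sets (a deep CNN would be the choice beyond ~5000
    sets and is omitted here)."""
    if n_training_sets < 300:
        model = SVC(kernel="rbf", probability=True)      # separating hyperplane in feature space
    else:
        model = RandomForestClassifier(n_estimators=200)
    return make_pipeline(StandardScaler(), model)

# X: characteristic values of the biomarkers of the reference persons
# y: 1 = has the chronic neurological disorder, 0 = does not (hypothetical files)
X = np.load("comparison_biomarker_values.npy")
y = np.load("comparison_labels.npy")

clf = build_classifier(len(X)).fit(X, y)

# Classify a new test person and report the probability of the disorder.
x_subject = np.load("subject_biomarker_values.npy")      # hypothetical file
p = clf.predict_proba(x_subject.reshape(1, -1))[0, 1]
print(f"probability of disorder: {p:.2%}")
```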
  • a special feature of the learning algorithm is that it can be continuously optimized or retrained with new comparison data sets in order to classify the biomarker characteristic values as precisely as possible, so that it becomes better and better at differentiating the biomarker values of people with and without chronic neurological disorders, i.e. at defining the interface.
  • a random forest is described, for example, in A. Paul, D. P. Mukherjee, P. Das, A. Gangopadhyay, A. R. Chintha and S. Kundu, "Improved Random Forest for Classification," IEEE Transactions on Image Processing, vol. 27, no. 8, pp. 4012-4024, Aug. 2018. It represents a particularly good choice for the learning algorithm when the training data, i.e. the number of comparison data sets used to create the classification model, grows to between a few hundred and a few thousand comparison data sets. Furthermore, a deep convolutional neural network algorithm is particularly suitable if the training data, i.e. the number of comparison data sets used to create the classification model, is particularly large, in particular over 5000; such a model can even achieve a classification accuracy of close to 99%.
  • the diagnostic tool thus evaluates at least two biomarkers, with the first biomarker (vocal biomarker) being of particular importance and characterizing a property of the test person's voice. More specifically, the first biomarker identifies the tone spectrum used by the subject as a first criterion for assessing the presence of a chronic neurological disorder. With the help of this vocal biomarker, it can be determined with 95% certainty whether the test person has a specific chronic neurological disorder. In order to improve the accuracy of the diagnosis, at least one second biomarker is used, the characteristic values of which are determined by the at least one further module.
  • the further module can be an emotion analysis module for evaluating the reaction of the test person to an emotional stimulus as a second biomarker and can include at least the following:
  • an emotion-triggering control which is set up to display a set of individual images and/or individual videos or at least one individual video on the image display device in order to stimulate a number of individual emotions in the test person
  • an emotion observation unit which is set up to evaluate a (video) recording of the test person's face, obtained with the aid of an image recording device, at least to determine when it shows an emotional reaction.
  • the emotion analysis module is set up to determine at least the respective reaction time between the stimulation of the respective emotion and the occurrence of the emotional reaction, with at least these reaction times forming the characteristic values of the second biomarker in this embodiment variant.
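As a purely illustrative sketch (function names, frame rate and the emotion detector are assumptions, not taken from the patent), the reaction time can be derived from the timestamp of the stimulus onset and the first video frame in which an emotional reaction is detected:

```python
# Minimal sketch (assumption): deriving reaction times from stimulus timestamps
# and per-frame emotion detections of the face recording.
from dataclasses import dataclass

@dataclass
class Stimulus:
    emotion: str        # emotion the image/video is meant to stimulate, e.g. "joy"
    onset_s: float      # playback start time within the face recording

def reaction_times(stimuli, frame_emotions, fps=30.0):
    """frame_emotions[i] is the emotion detected in frame i of the face recording
    (None if no emotional reaction is visible). Returns, per stimulus, the time
    until the first detected reaction, or None if no reaction occurred."""
    times = []
    for stim in stimuli:
        start_frame = int(stim.onset_s * fps)
        reaction = None
        for i in range(start_frame, len(frame_emotions)):
            if frame_emotions[i] is not None:
                reaction = i / fps - stim.onset_s
                break
        times.append(reaction)
    return times

# Example: two stimuli and a toy sequence of per-frame detections.
stimuli = [Stimulus("joy", 0.0), Stimulus("fear", 6.0)]
frames = [None] * 45 + ["joy"] * 30 + [None] * 120 + ["anger"] * 30
print(reaction_times(stimuli, frames))   # [1.5, 0.5]
```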
  • the additional module can be a viewing direction analysis module for evaluating the viewing direction of the test person as a second biomarker and can include at least the following:
  • a line of sight guide which is set up to display at least one image or video on the image display device in order to guide the line of sight of the test person
  • the second biomarker can either be a property of the emotion processing or of the subject's gaze. It thus characterizes a property of their ability to interact socially, namely either the reaction time to an emotional stimulus or the direction of their gaze, and can thus be referred to as a "social biomarker".
  • the speech analysis module and the emotion analysis module can be present in one embodiment of the diagnostic tool, in another embodiment only the speech analysis module and the gaze analysis module, and in a third embodiment the speech analysis module, the emotion analysis module and the gaze analysis module.
  • the emotion analysis module then forms a first further module and the viewing direction analysis module forms a second further module, with at least the reaction times to the emotional stimuli forming characteristic values of the second biomarker and the viewing direction over time forming characteristic values of a third biomarker of the test person.
  • the overall result evaluation unit is then set up to determine whether the test person has the chronic neurological disorder based on the characteristic values of the first, second and third biomarker of the test person using the machine learning algorithm based on artificial intelligence by comparison with a multidimensional interface (hyperplane). The order in which the characteristic values of the second and third biomarkers are determined is not important.
  • the diagnostic tool is preferably set up to select and display the set of individual images and/or individual videos or the text for triggering the speech signal, and/or the set of individual images and/or individual videos or the at least one video for the emotion stimulation, and/or the at least one image or video for guiding the viewing direction, depending on person-specific data of the test person.
  • the voice signal trigger control is set up to select and display either the set of individual images and/or individual videos or the text depending on the age of the test person. Children, and adults who cannot read, can preferably be shown the set of individual images and/or individual videos on the image display device. Otherwise, it is preferable to use a text to be read aloud, because this way the speech element is longer, more extensive in terms of sound and tonality, and overall more homogeneous.
  • the diagnostic tool can have a filter to filter background noise out of the speech signal before the pitch evaluation, in particular the voice or voices of other people, such as an assistant who is or may be present in the vicinity of the test person and speaks during the audio recording.
  • the diagnostic tool can preferably have a bandpass filter that is set up to restrict the pitch spectrum under consideration to the range between 30 and 600 Hz.
  • the human voice covers a frequency range between 30 Hz and 2000 Hz, with spoken language usually being below 600 Hz. Limiting the pitch spectrum to the range between 30 and 600 Hz with the same number of frequency bands improves pitch analysis accuracy because the individual frequency bands are narrower.
  • the number of frequency bands is preferably between 6 and 18, ideally 12. This number represents a good balance between the accuracy of the pitch determination and the computing time and computing power required for it.
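For illustration, a hedged sketch of such a frequency distribution over 12 bands between 30 and 600 Hz (the linear band spacing, the use of NumPy and the normalization are assumptions of this sketch, not specified in the description):

```python
# Minimal sketch (assumption): histogram of estimated pitches over 12 frequency
# bands between 30 and 600 Hz, normalized so it can serve as the characteristic
# values of the vocal biomarker. Linear band spacing is an assumption.
import numpy as np

def vocal_biomarker(pitches_hz, f_min=30.0, f_max=600.0, n_bands=12):
    pitches = np.asarray(pitches_hz, dtype=float)
    pitches = pitches[np.isfinite(pitches)]                      # drop unvoiced frames
    pitches = pitches[(pitches >= f_min) & (pitches <= f_max)]   # band-limit the pitch track
    edges = np.linspace(f_min, f_max, n_bands + 1)
    counts, _ = np.histogram(pitches, bins=edges)
    return counts / max(counts.sum(), 1)                         # relative frequency per band

# Example: a short pitch track (Hz) with unvoiced frames marked as NaN.
track = [110, 115, 118, np.nan, 220, 230, np.nan, 95, 100, 105]
print(vocal_biomarker(track))   # 12 relative frequencies, one per band
```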
  • the speech signal analyzer preferably includes a deep convolutional neural network algorithm in order to estimate the pitches, a task also referred to as pitch detection in technical jargon.
  • PRAAT is another high-quality pitch estimation algorithm.
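The following minimal sketch obtains such a pitch track; since the patent's deep convolutional neural network estimator is not available here, librosa's pYIN algorithm is used purely as a stand-in, and the file name and sampling rate are illustrative assumptions:

```python
# Minimal sketch (assumption): estimating a pitch track from the audio recording.
# The patent describes a deep convolutional neural network estimator; librosa's
# pYIN algorithm is used here only as a readily available stand-in.
import librosa

def pitch_track(audio_path, f_min=30.0, f_max=1000.0):
    y, sr = librosa.load(audio_path, sr=16000, mono=True)
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=f_min, fmax=f_max, sr=sr)
    return f0   # one fundamental-frequency estimate per frame, NaN where unvoiced

# Example usage (hypothetical file), combined with the band-histogram sketch above:
# f0 = pitch_track("speech_recording.wav")
# features = vocal_biomarker(f0)
```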
  • the emotion observation unit and/or the viewing direction observation unit is set up to evaluate the facial image in real time.
  • the examination is carried out while the test person is looking at the image reproduction device or is being shown the set of individual images and/or individual videos or the at least one video or image on it.
  • the emotion observation unit and/or the line of sight observation unit can each have a video recording unit, or use a video recording unit that is part of the diagnostic tool, in order to save a corresponding video recording while the test person is being shown the set of individual images and/or individual videos or the at least one video or image.
  • This corresponding video recording can be made available to the emotion observation unit or the viewing direction observation unit for evaluation.
  • the emotion observation unit comprises face recognition software based on Compassionate Artificial Intelligence, which is trained on certain emotions, namely those emotions that are stimulated by the individual images or individual videos of the set or by the video, such as joy, sadness, anger or fear.
  • the emotion observation unit is preferably set up to determine the type of reaction to the respectively stimulated emotion in addition to the reaction time, this type of response being part of the characteristics of the second biomarker.
  • the type of reaction can be binary information that indicates whether the reaction is a positive or negative emotion. For example, joy and sadness can be interpreted as positive emotions, and anger and fear as negative emotions.
  • the response type can be the specific emotion with which the subject responds. The response type can then form part of the characteristics of the second biomarker, together with the corresponding response time for the particular emotional response to which the response type is linked.
  • the emotion analysis module is set up to determine whether the reaction shown by the test person corresponds to the stimulated emotion. In the simplest case, this can be done by comparing whether both the emotional stimulus and the type of reaction are positive or negative emotions. If this is the case, the test person reacted as expected or "normally". If this is not the case, i.e. the emotional reaction is positive although the emotional stimulus was negative or vice versa, the test person reacted unexpectedly or "abnormally". Ideally, a comparison can also be made as to whether the specifically determined emotion with which the test person reacts corresponds to the stimulated emotion or whether these emotions differ. The result of this respective comparison can be given in a congruence indicator, e.g.
  • a "1" indicates agreement of the emotional response with the stimulated emotion and a "0” indicates a lack of agreement, at least with regard to whether they are positive or negative emotions.
  • a "-1" may indicate a lack of correspondence between the emotional response and the stimulated emotion and a "0” the fact that the subject showed no response at all.
  • the congruence indicator can then also form part of the characteristics of the second biomarker, together with the corresponding reaction time for the emotional reaction to which the congruence indicator is linked.
  • the congruence indicator is particularly helpful and meaningful information, at least when the test person does not react to a specific stimulus with the emotion that would be expected, because this is indicative of a chronic neurological disorder.
  • the second biomarker thus comprises 3n characteristic values in this case, where n is the number of stimulated emotions.
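A hedged sketch of how these 3n characteristic values could be assembled (the numeric encoding of reaction type and congruence, and the handling of missing reactions, are assumptions for illustration; grouping joy/sadness as positive and anger/fear as negative follows the example given above):

```python
# Minimal sketch (assumption): assembling the second (social) biomarker as
# 3 values per stimulated emotion: reaction time, reaction type, congruence.
POSITIVE = {"joy", "sadness"}      # interpreted as positive emotions (see above)
NEGATIVE = {"anger", "fear"}       # interpreted as negative emotions

def social_biomarker(observations):
    """observations: list of (stimulated_emotion, reacted_emotion, reaction_time_s),
    with reacted_emotion=None / reaction_time_s=None if no reaction occurred."""
    features = []
    for stimulated, reacted, t in observations:
        if reacted is None:
            features += [0.0, 0.0, 0.0]                  # no reaction at all
            continue
        reaction_type = 1.0 if reacted in POSITIVE else -1.0
        same_valence = (stimulated in POSITIVE) == (reacted in POSITIVE)
        congruence = 1.0 if same_valence else -1.0
        features += [t, reaction_type, congruence]
    return features                                       # 3 * n values

print(social_biomarker([("joy", "joy", 1.5), ("fear", None, None)]))
# -> [1.5, 1.0, 1.0, 0.0, 0.0, 0.0]
```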
  • the line of sight guidance can be set up to display the at least one image or video in discrete positions of the image display device one after the other or to move it along a continuous path.
  • the image or video is thus reproduced smaller than the display area (screen) of the image display device and is moved across the display area, with the test person being expected to follow the chronological sequence of display positions or the display path with their eyes.
  • the line of sight observation unit preferably includes eye-tracking software.
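As an illustration only (the patent states that the viewing direction over time forms the characteristic values; comparing it against the displayed target path and the summary statistics below are assumptions of this sketch):

```python
# Minimal sketch (assumption): comparing the eye-tracked gaze path with the
# path of the displayed image and summarizing the deviation over time.
import numpy as np

def gaze_features(target_xy, gaze_xy):
    """target_xy, gaze_xy: arrays of shape (T, 2) with screen coordinates,
    sampled at the same time points. Returns simple summary values."""
    target = np.asarray(target_xy, dtype=float)
    gaze = np.asarray(gaze_xy, dtype=float)
    deviation = np.linalg.norm(target - gaze, axis=1)        # distance per time step
    return {
        "mean_deviation": float(deviation.mean()),
        "max_deviation": float(deviation.max()),
        "fraction_on_target": float(np.mean(deviation < 50.0)),  # 50 px tolerance (assumed)
    }

# Example: the target moves to the right, the gaze lags slightly behind.
target = [(100, 100), (200, 100), (300, 100), (400, 100)]
gaze   = [(100, 105), (160, 110), (250, 120), (390, 100)]
print(gaze_features(target, gaze))
```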
  • the diagnostic tool according to the invention can advantageously be used as a software application for a portable communication terminal, in particular a smartphone or tablet. This means that the diagnostic tool can be used by almost anyone at any time.
  • the diagnostic tool according to the invention can also be used as a software application on a server that can be controlled via a computer network by a browser on an external terminal in order to run the diagnostic tool.
  • This variant also ensures high accessibility of the diagnostic tool or access to it at any time from anywhere in the world, with the variant also taking into account the fact that the computing power in a portable communication terminal device may not be sufficient to execute the artificial intelligence algorithms mentioned.
  • a server with a processing unit with sufficient computing power is better suited for this.
  • a diagnostic system for use in the diagnosis of a chronic, neurological disorder in a human subject using artificial intelligence comprising
  • a processing unit, such as a processor, for executing the program code and processing the data of the diagnostic tool
  • a voice input device, such as a microphone, for recording at least one voice signal from the test person for the diagnostic tool
  • an image recording device, such as a CCD camera, for capturing an image of the test person's face for the diagnostic tool
  • an image display device, such as a monitor or a display, for displaying image data for the test person and
  • the diagnostic system is preferably a portable communication terminal device, in particular a smartphone or tablet, on which the diagnostic tool is run as a software application.
  • the non-volatile memory, the processing unit, the voice input device, the image recording device, the image display device and the input means represent integral components of the communication terminal.
  • the processing unit can be part of a server connected to a computer network such as the Internet and controllable via a browser, with the non-volatile memory being connected to the server and the peripheral devices being part of an external terminal device, in particular a portable communication terminal device.
  • the diagnostic tool can be called up via the network/Internet and executed on the server.
  • the external terminal device can also have a volatile memory, with the diagnostic tool being stored partly on the server-side memory and partly on the terminal-side memory.
  • the image or text data used by the modules, as well as at least the voice signal trigger control and the voice recording unit of the voice analysis module, the emotion trigger control of the emotion analysis module and/or the line of sight guidance of the line of sight analysis module, can be stored and executed on the end device, whereas the speech signal analyzer, the emotion observation unit and the reaction assessment unit, the line of sight observation unit and the overall result evaluation unit are stored and executed in the server-side memory. Consequently, all computationally intensive functional units of the diagnostic tool are arranged on the server side.
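A hedged sketch of this client/server partition (the unit names follow the description; the plain mapping and the helper function are illustrative assumptions, and the actual transport of recordings to the server is omitted):

```python
# Minimal sketch (assumption): partitioning the functional units between the
# end device (client) and the server, as described above.
CLIENT_SIDE = [
    "image_and_text_data",
    "voice_signal_trigger_control",
    "voice_recording_unit",
    "emotion_trigger_control",
    "line_of_sight_guidance",
]

SERVER_SIDE = [                       # the computationally intensive units
    "speech_signal_analyzer",
    "emotion_observation_unit",
    "reaction_assessment_unit",
    "line_of_sight_observation_unit",
    "overall_result_evaluation_unit",
]

def runs_on_server(unit: str) -> bool:
    return unit in SERVER_SIDE

print(runs_on_server("speech_signal_analyzer"))   # True
```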
  • a method for operating the software-based diagnostic tool for use in diagnosing a chronic neurological disorder in a human subject using artificial intelligence comprising
  • a speech analysis module for determining characteristic values of a first biomarker, namely a vocal biomarker, from a speech signal of the test person
  • the operating software triggers the speech analysis module and the at least one other module one after the other and feeds their determined characteristic values to the overall result evaluation unit
  • a speech signal trigger control of the speech analysis module presents a set of individual images and/or individual videos or a text on an image display device for the test person in order to elicit at least one speech signal from the test person, in the form of a naming of an object contained in the respective individual image or individual video or in the form of reading the text aloud,
  • a voice recording unit of the voice analysis module records the voice signal in an audio recording with the aid of a voice input device
  • a speech signal analyzer of the speech analysis module first evaluates the speech signal in the audio recording to determine which pitch occurs at which point in time, and then determines a frequency distribution of the pitches over a number of frequency bands of a frequency spectrum under consideration, with this frequency distribution forming the characteristic values of the first biomarker, and the overall result evaluation unit determines whether the test person has the chronic neurological disorder on the basis of the characteristic values of the biomarkers of the test person using a machine learning algorithm based on artificial intelligence, by comparison with a multidimensional interface.
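As a hedged illustration of this operating sequence (class and method names are invented for the sketch; only the order of operations, i.e. speech module first, then the further modules, then the overall result evaluation, follows the description):

```python
# Minimal sketch (assumption): the operating software triggers the analysis
# modules one after another and feeds their characteristic values to the
# overall result evaluation unit. Module internals are replaced by stubs.
class OperatingSoftware:
    def __init__(self, speech_module, further_modules, overall_evaluation):
        self.speech_module = speech_module          # determines the vocal biomarker
        self.further_modules = further_modules      # e.g. emotion and gaze modules
        self.overall_evaluation = overall_evaluation

    def run_diagnosis(self, person_profile):
        features = []
        features += self.speech_module.run(person_profile)    # first biomarker
        for module in self.further_modules:                    # second (and third) biomarker
            features += module.run(person_profile)
        return self.overall_evaluation.classify(features)      # comparison with the interface


class StubModule:                     # placeholder for a real analysis module
    def __init__(self, values): self.values = values
    def run(self, profile): return list(self.values)

class StubEvaluation:                 # placeholder for the trained classification model
    def classify(self, features): return {"n_characteristic_values": len(features)}

tool = OperatingSoftware(StubModule([0.1, 0.2]), [StubModule([1.5, 1.0, 1.0])], StubEvaluation())
print(tool.run_diagnosis({"age": 7}))   # {'n_characteristic_values': 5}
```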
  • the further module is an emotion analysis module for evaluating the reaction of the test person to an emotional stimulus as a second biomarker
  • an emotion triggering control of the emotion analysis module displays a set of individual images and/or individual videos or at least one individual video on the image display device in order to stimulate a number of individual emotions in the test person
  • an emotion observation unit of the emotion analysis module evaluates a recording of the subject's face, obtained with the aid of an image recording device (6), at least to determine when the subject shows an emotional reaction
  • the emotion analysis module determines the respective reaction time between the stimulation of the respective emotion and its occurrence, and at least these reaction times form the characteristic values of the second biomarker.
  • the emotion observation unit can also evaluate the recording of the test person's face as to which emotional reaction it shows, i.e. the type of reaction, for example in the sense of whether it is a positive or negative emotional reaction, or in the sense of determining the concrete emotion.
  • the respective reaction time and reaction type form the characteristic values of the second biomarker for each stimulated emotion.
  • the emotion analysis module can also determine a congruence indicator that indicates whether the emotional response corresponds to the stimulated emotion, for example whether both are positive or negative emotions respectively or even the emotion type matches.
  • the respective reaction time and the congruence indicator form the characteristic values of the second biomarker.
  • preferably, however, the emotion analysis module determines three pieces of information for each stimulated emotion, namely the reaction time, the type of reaction and the congruence indicator. In this case, for each stimulated emotion, the respective reaction time, reaction type and congruence indicator form the characteristic values of the second biomarker.
  • the further module is a viewing direction analysis module for evaluating the viewing direction of the test person as a second biomarker
  • a line of sight guide of the line of sight analysis module displays at least one image or video on the image display device in order to guide the line of sight of the test person
  • a viewing direction monitoring unit of the viewing direction analysis module determines the viewing direction over time from a recording of the subject's face obtained with the aid of an image recording device (6), this viewing direction profile forming the characteristic values of the second biomarker.
  • the emotion analysis module is a first further module and the viewing direction analysis module is a second further module, and these modules are triggered one after the other, with at least the reaction times to the emotional stimuli forming characteristic values of the second biomarker and the viewing direction over time forming characteristic values of a third biomarker of the test person, and with the overall result evaluation unit determining whether the test person has the chronic neurological disorder on the basis of the characteristic values of the first, second and third biomarker of the test person, using the machine learning algorithm based on artificial intelligence, by comparison with a multidimensional interface.
  • the operating method is set up to control the diagnostic tool in such a way that it executes the steps and functions for which it is set up accordingly, as described above.
  • the software-based diagnostic tool and its operating method are described in more detail below using a specific example and the accompanying figures.
  • FIG. 1 a schematic representation of the structure of a first diagnostic system according to the invention
  • FIG. 2 a schematic representation of the structure of a second diagnostic system according to the invention
  • Figure 3 a schematic representation of the functional units of the language analysis module of the diagnostic tool
  • Figure 4 a schematic representation of the functional units of the emotion analysis module of the diagnostic tool
  • Figure 5 a schematic representation of the functional units of the gaze analysis module of the diagnostic tool
  • Figure 6 a schematic representation of the structure of a third diagnostic system according to the invention
  • FIG. 7 a flow chart of an operating method according to the invention
  • FIG. 8 a schematic signal flow chart
  • Figure 9 a recorded speech signal comprising eight individual speech signals
  • Figure 10 the pitch signals of the eight individual speech signals from Figure 9 over time (pitch spectrum)
  • Figure 11 a pitch histogram for the eight pitch signals in Figure 10
  • Figure 12 an example pitch histogram of an autistic subject
  • Figure 13 an example pitch histogram of a non-autistic subject
  • Figure 14 further examples of pitch histograms of autistic subjects
  • Figure 15 further examples of pitch histograms of non-autistic subjects
  • Figure 16 a diagram illustrating emotional stimuli and their effect on the subject
  • FIG. 17 a chronological sequence of representations of an image on the image display device at different positions;
  • FIG. 18 a determined viewing direction path
  • FIG. 1 shows a software-based diagnostic tool as part of a diagnostic system 1 according to a first embodiment variant.
  • FIG. 7 illustrates an operating method for this diagnostic tool or for the diagnostic system.
  • the diagnostic system 1 comprises, on the one hand, a computer system 2, which has at least one processing unit 3 in the form of a processor 3 with one, two or more cores and at least one non-volatile memory 4, and, on the other hand, peripheral devices 5, 6, 7, 8, which are operatively connected to the computer system 2, more precisely, are connected to it by communication technology, so that the peripheral devices 5, 6, 7, 8 receive control data from the computer system 2, i.e. are controlled by it, and/or can transmit useful data, in particular image and sound data, to it.
  • the peripheral devices 5, 6, 7, 8 are a voice input device 5 in the form of a microphone 5, an image recording device 6 in the form of a camera 6, for example a CCD camera, an image display device 7 in the form of a display 7 or monitor, and an input means 8, e.g. in the form of control keys, a keyboard or a touch-sensitive surface of the image display device 7 in conjunction with a graphical user interface displayed thereon, which graphically highlights the partial area of the image display device 7 to be touched for a possible input.
  • the input means 8 can also be formed by a speech recognition module.
  • the peripheral devices 5, 6, 7, 8 are locally assigned to a test person 11, in particular accessible to him, so that he can interact with the peripheral devices 5, 6, 7, 8.
  • the peripheral devices 5, 6, 7, 8 can be connected to the computer system 2 via one or more cable connections, either via a common cable connection or via individual cable connections.
  • the peripheral devices 5, 6, 7, 8 can also be connected to the computer system 2 via a wireless connection, in particular a radio connection such as Bluetooth or WLAN. A mixture of these connection types is also possible, so that one or more of the peripheral devices 5, 6, 7, 8 are connected to the computer system 2 via a cable connection and one or more of the peripheral devices 5, 6, 7, 8 via a wireless connection, in particular a radio connection.
  • the peripheral devices 5, 6, 7, 8 can be connected directly to the computer system or indirectly via an external device 12, for example via an external computer such as a personal computer, which in turn can be connected, wirelessly or via cable, to the computer system 2 for communication via at least one local and/or global network 9 such as the Internet. This is illustrated in FIG.
  • the peripheral devices 5, 6, 7, 8 can each form individual devices. Alternatively, however, they can also be installed individually in combination with one another in one device.
  • the camera 6 and microphone 5 can be housed in a common housing, or the display 7 and the input device 8 can form an integrated functional unit.
  • the peripheral devices can all be an integral part of the external device 12, which can then be, for example, a mobile telecommunications terminal 12, in particular a laptop, a smartphone or a tablet.
  • An embodiment variant of the external device 12 in the form of a smartphone 12 is illustrated in FIG.
  • the peripheral devices 5, 6, 7, 8 communicate with the computer system 2 via the external device 12 and a local and/or global network 9 such as the Internet 9, to which the external device 12, on the one hand, and the computer system 2, on the other hand, are each connected wirelessly or via cable.
  • the computer system 2 acts as a central server and has a corresponding communication interface 10 for this purpose, in particular an IP-based interface, via which communication with the external device 12 takes place.
  • the communication with the computer system 2 as a server can take place via a special software application on the external device or via an Internet address or website that can be called up in a browser on the external device 12.
  • the diagnostic system 1 or the computer system 2 and the peripheral devices 5, 6, 7, 8 that are operatively connected to it can be located as a common functional unit locally at the workplace of a doctor or therapist, e.g. in his practice or clinic.
  • the test person 11 must be present in person in order to be able to use the diagnostic system 1.
  • alternatively, the external device 12 with the peripheral devices 5, 6, 7, 8 can be located at said workstation, with this device accessing the computer system 2 or the diagnostic tool via the network 9.
  • the test person 11 still has to be personally present at the doctor or therapist, but the investment costs for the doctor or therapist are lower.
  • the external device 12 is a mobile device, for example a laptop, smartphone or tablet, which also allows access to the computer system 2 or to the diagnostic tool from home. This eliminates time-consuming trips to the doctor or therapist.
  • a medical expert is basically not required to use the diagnostic system 1 according to the invention, since the diagnosis is carried out independently and, above all, objectively by the diagnostic tool on the basis of the information provided by the test person 11 via the microphone 5 and the camera 6.
  • the test person 11 interacts with the diagnostic system 1 on the basis of textual or spoken instructions which it outputs on the image display device 7 or a loudspeaker as a further peripheral device and which the test person 11 has to follow.
  • another person such as a parent or caregiver can support the operation of the diagnostic system 1, but this does not require a medical expert.
  • the diagnostic result should be discussed and evaluated with a medical expert, especially with regard to any therapy resulting from a positive autism diagnosis. Because of the emotional impact of a positive autism diagnosis, it is also recommended to use the diagnostic system 1 under the supervision of another adult.
  • the diagnostic tool according to the invention consists of a combination of software 15 and data 14 that are stored in a non-volatile memory 4.
  • Figures 1 and 2 represent the simple case that the software 15 and data 14 are stored together in a memory 4, which is part of the computer system 2, for example a hard disk memory.
  • this memory 4 can also be arranged outside of the computer system 2, for example in the form of a network drive or a cloud.
  • it is not mandatory that the software and the data are in the same memory 4.
  • the data 14 and the software 15 can also be distributed in different memories, stored inside or outside the computer system, e.g. in a network memory or a cloud, which the computer system 2 accesses when required.
  • it is also not necessary that the data 14 or the software 15 are each stored in a single memory; rather, parts of the data and/or parts of the software can also be stored in different memories.
  • the data 14 of the diagnostic tool include image data 16, 18, 19 in the form of individual images and/or individual videos and text data 17, which are intended to be displayed by the diagnostic tool on the image display device 7 in order to elicit a spoken utterance, an emotional reaction and a guided viewing direction.
  • the image and text data 16, 17, 18, 19 are preferably each combined into a specific group or a specific data set, which are selected by the diagnostic tool depending on the person-specific information provided by the test person.
  • the text data 17 are provided in order to display them to an adult who is able to read as a test person 11 on the image display device 7 for reading.
  • the text data 17 can comprise a first text 17a in a first language, e.g. English, and a second text in a second language, e.g. Swahili.
  • the text can be, for example, a well-known standard text, e.g. a fairy tale or a story, such as Little Red Riding Hood or "A Tale of Two Cities".
  • a first part 16 of the image data is provided in order to display individual images and/or individual videos one after the other on the image display device 7 to an adult or a child who is unable to read as a test person 11 so that the test person 11 names the object shown on the individual images and/or individual videos.
  • These individual images and/or individual videos 16 are designed in such a way that only a single object that is comparatively easy to name is shown on them, such as an elephant, a car, an airplane, a doll, a soccer ball, etc. In the case of a video, this object can be shown in motion.
  • the individual images and/or individual videos can reflect reality or be drawn.
  • the individual images and/or individual videos can be divided into individual sets 16a, 16b of individual images and/or individual videos, the content of which is coordinated with the age, gender and ethnic origin of the test person, or has a specific age-related, gender-related and/or cultural context, in order to ensure that the test person 11 actually recognizes and names the respective object.
  • the language in which the name is given is not important, since this is irrelevant for the diagnostic tool.
  • a first set 16a of individual images can be intended to be shown to a boy or a child of a first ethnic origin on the image display device 7, and a second set 16b can be intended to be shown to a girl or a child of a second ethnic origin on the image display device 7.
  • a second part 18 of the image data is provided in order to display individual images and/or individual videos one after the other to the test person 11 on the image display device 7 in order to trigger a specific emotional reaction in the test person 11, e.g. joy, sadness, anger or fear.
  • while still images are generally suitable for triggering an emotional reaction, such as a short comic with a joke in it, videos can show situations that evoke more intense emotions, which is why videos are generally better suited.
  • the second part 18 of the image data 16, 18 can be divided into individual sets 18a, 18b of individual images and/or individual videos, the content of which is tailored to age, gender and ethnic background, in order to ensure that the test person 11 reacts to a certain situation with a certain emotion.
  • the individual images and/or individual videos can reflect reality or be drawn. The latter is ideal for children.
  • a third part 19 of image data can be provided, comprising at least one single image or video, which is displayed to the test person 11 on the image display device 7, in particular at different positions in succession, in order to direct their line of sight to the image display device 7.
  • a single individual image 19a (cf. FIG. 17) is sufficient for this purpose; it is displayed one after the other at discrete positions or is moved continuously to different positions.
  • the individual image 19a can be any graphic object such as a symbol, an icon, a logo, a text or a figure. It can alternatively be a photo or a drawing.
  • the individual image can come from the set of individual images of the first part 16 or second part 18 of the image data, so that in this case no third part 19 of image data is required for guiding the viewing direction.
  • these individual images or the video can also come from the first part 16 or second part 18 of the image data, so that in this case no third part 19 of image data is required either.
  • the diagnostic tool consists of software 15 (program code) with instructions for execution on the processor 3.
  • this software includes operating software 20, several analysis modules 21, 22, 23 and an overall result evaluation unit 24. The operating software 20 takes over the higher-level control of the processes in the diagnostic tool; in particular, it triggers the individual analysis modules 21, 22, 23 one after the other and controls the overall result evaluation unit 24, compare Figure 7.
  • the first of the analysis modules is a speech analysis module 21 for determining characteristic values 27 of a first biomarker, referred to here as the vocal biomarker, from a speech signal 26 of the test person 11 that is contained in an audio recording 26.
  • the voice analysis module 21 comprises a voice signal trigger controller 21a and a voice recording unit 21b.
  • a speech signal analyzer 21c is also part of the speech analysis module 21 in order to obtain characteristic values of the vocal biomarker, as is shown schematically in FIG.
  • the speech analysis module 21 is triggered as the first analysis module by the operating software 20 after the test person 11, or another person assisting her such as an adult or the doctor, has activated the diagnostic tool, see Figure 7, and, if necessary, has provided person-specific data requested by the diagnostic tool, especially the age.
  • this person-specific data is part of a person profile that already exists before the start of the diagnostic tool and can be used by it.
  • the person-specific data can be specified by the test person 11 via the input means 8.
  • the diagnostic tool expects a corresponding input via the input means 8 in order to then select the data 14 on the basis of the input made. If, however, a simple variant of the diagnostic tool according to the invention only addresses a certain group of people, e.g. only adults or only children, the data can be specially tailored to this group of people and there is no need to enter the person-specific data.
  • the data or individual images and individual videos are then preferably stored in memory 4 in a gender-neutral and ethnic-culturally neutral manner.
  • the voice analysis module 21 is configured to first execute the voice signal trigger control 21a. This in turn is set up to load a set 16a, 16b of individual images or individual videos from the first image data 16 in the memory 4, or to load a text 17a, 17b from the text data 17 in the memory 4 and to display it on the image display device 7. In the case of single images or single videos, this is done one after the other.
  • the set 16a, 16b of individual images or individual videos or text 17a, 17b is preferably selected as a function of the personal data.
  • if the test person 11 is under a certain age limit, the set 16a, 16b of individual images or individual videos is loaded, otherwise the text 17a, 17b.
  • This condition can also be linked to the additional condition of whether the test person 11 has a reading disability, which can likewise be part of the person-specific data. If such a reading disability is present, the set 16a, 16b of individual images or individual videos is also used.
  • a first set 16a or a second set 16b of individual images or individual videos can be selected, which in this respective set is specifically tailored to the corresponding group of people.
  • a first text 17a or a second text 17b can be selected, which is respectively adapted to the corresponding group of people.
  • the evaluation of the person-specific data, more precisely the examination of whether the test person 11 is under the age limit, has a reading disability, what gender they belong to, what ethnic origin they have or what language the test person 11 speaks or understands, as well as the selection of the appropriate set 16a, 16b of individual images or individual videos or of the text 17a, 17b, are process steps which the speech signal trigger control 21a executes. It then loads the corresponding set 16a, 16b of individual images or individual videos or the corresponding text 17a, 17b from the memory 4 and controls the image display device 7 in such a way that the individual images or individual videos of the set 16a, 16b are displayed one after the other, or the text 17a, 17b is displayed, on the image display device 7.
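A hedged sketch of this selection logic (the concrete age limit, the profile keys and the data-set names are assumptions made purely for illustration):

```python
# Minimal sketch (assumption): selecting the stimulus material depending on the
# person-specific data, as performed by the voice signal trigger control 21a.
def select_stimulus(profile, age_limit=8):
    """profile: person-specific data, e.g. {"age": 6, "can_read": False,
    "gender": "f", "ethnicity": "...", "language": "en"}.
    Returns which data set to load from memory (names are illustrative)."""
    use_images = profile["age"] < age_limit or not profile.get("can_read", True)
    if use_images:
        # pick the image/video set matched to age, gender and ethnic origin
        return f"image_set_{profile.get('gender', 'neutral')}_{profile.get('ethnicity', 'neutral')}"
    # otherwise pick the text in the language the test person speaks
    return f"text_{profile.get('language', 'en')}"

print(select_stimulus({"age": 6, "can_read": False, "gender": "m", "ethnicity": "A"}))
print(select_stimulus({"age": 35, "can_read": True, "language": "sw"}))
```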
  • the individual images and individual videos of the set 16a, 16b and the text 17a, 17b are intended to elicit a spoken utterance from the test person 11, referred to below as the voice signal 26.
  • the spoken utterance is a single-word designation of the object that is shown on the respective individual image or in the respective individual video.
  • the spoken utterance is the reading of this text 17a, 17b.
  • the diagnostic tool, in particular the higher-level operating software 20 or the speech analysis module 21, outputs a corresponding textual or verbal instruction to the test person 11 before the playback of the individual images or individual videos of the set 16a, 16b or of the text 17a, 17b, for example via the image display device 7 and/or a loudspeaker.
  • the set 16a, 16b may include seven or more still images or still videos.
  • the individual frames or videos can be played back for a fixed period of time, e.g. for 5 or 6 seconds, so that after this period the next frame or video is played back until all the frames or videos have been played back.
  • the voice signal trigger control 21a activates the voice recording unit 21b to record the voice of the test person 11 as a voice signal 26.
  • the voice recording unit 21b switches on the speech input device 5 (microphone), records the time-continuous speech signal 26 or speech signals in an audio recording 26 and stores this in an audio data memory 13a for recorded speech signals.
  • the audio recording 26 itself is digital, in which case the voice signal 26, or rather its sampled version, can already be digitized in the voice input device 5 or in a downstream analog/digital converter, which can be part of the processing unit 3 or of a separate digital signal processor (DSP).
  • the audio data memory 13a can be part of the non-volatile memory 4. Alternatively, it can be a memory that is separate from this in the computer system 2 or a memory that is separate from the computer system 2, for example a memory in a network drive or in a cloud.
  • the voice recording unit 21b can be set up to end the recording after a specified period of time in order to obtain an audio recording 27 of a specific length of time, for example an average of 45 seconds for children and an average of 60 seconds for adults.
  • the voice input device 5 can then also be switched off. Alternatively, it can be switched off when the audio signal from the voice input device 5 is below a certain limit value for a certain time after a voice signal 26, i.e. the test person 11 is no longer speaking.
  • manual triggering and ending of the audio recording can be provided.
  • the diagnostic tool receives a corresponding start or stop input via the input means 8.
  • the audio signal can be uninterrupted for the duration of the playback of the individual images, individual videos or the text, so that the recording is started once, namely at the beginning of playback, and is ended once, namely at the end of playback.
  • the recording can be started before or at the beginning of the playback of each individual image or individual video and then ended, in particular after receipt of the voice signal 26 from the test person 11, either after a specified period of time has elapsed or once the audio signal of the voice input device 5 has remained below a certain threshold for a certain time after a voice signal 26 (a minimal sketch of this stop logic follows after this list).
  • An example of such individual audio recordings is shown in FIG. 9.
  • FIG. 9 shows the curves of the amplitude or the sound pressure level of eight individual speech signals 26, each recorded in an audio recording, over time.
  • the speech signals 26 are each based on a single more or less long spoken word.
  • the individual audio recordings can first be processed individually or combined to form an overall recording, which is then processed further.
  • all of the audio recordings are provided with the reference numeral 27, regardless of whether they are a number of individual audio recordings or a single overall recording.
  • the audio recording(s) 27 is/are then evaluated in the speech signal analyzer 21c, characteristic values 28 of a vocal biomarker of the recorded speech signal 26 being determined, see FIG. In this evaluation it is not important whether the naming of the object on the respective single image or video was correct.
  • the evaluation of the audio recording(s) 27 by the speech signal analyzer 21c is performed by first estimating, over time and with the aid of artificial intelligence, the fundamental voice frequencies or pitches contained in the speech signal 26 of the audio recording 27. The result is called the pitch spectrum.
  • the speech signal analyzer 21c thus examines the basic tonal structure of the speech signal 26 in the audio recording 27.
  • the audio recording 27 is processed in a “deep convolutional neural network” algorithm, which is part of the speech signal analyzer 21c.
  • the deep convolutional neural network algorithm estimates the pitch of the speech signal 26 at any point in time, in particular within a specific frequency spectrum from 30 Hz to 1000 Hz, which includes all possible tones of the human voice (see the pitch-estimation sketch after this list).
  • the progression of the pitch over time is called the pitch spectrum.
  • Figure 10 shows the pitch spectra for the eight individual audio recordings from Figure 9.
  • the frequency range above approx. 600 Hz can be neglected, since the relevant part of the human voice lies below it.
  • This can be done, for example, by bandpass filtering, in which the frequency range from 30 Hz to 600 Hz is extracted from the voice signal 26. This is preferably done after the pitch estimation or determination of the pitch spectrum, so that the further analysis is based only on the relevant part of the human voice.
  • a digital bandpass filter can be applied to the audio recording(s) 27, which is also part of the speech signal analyzer 21c. In an embodiment variant, this bandpass filter can have fixed limit frequencies, in particular at 30 Hz and 600 Hz.
  • the bandpass filter can have variable cut-off frequencies, with provision being made to determine the minimum and maximum frequencies in the pitch spectrum and then to configure the bandpass filter in such a way that the lower cut-off frequency corresponds to the determined minimum frequency and the upper cut-off frequency corresponds to the determined maximum frequency.
  • the speech signal 26 can also be filtered in such a way that background noise, such as the speech of persons other than the test person 11 in the speech signal 26, is eliminated.
  • a corresponding digital filter can be applied to the audio recording(s) 27, which is also a component of the speech signal analyzer 21c.
  • Digital filters of this type are known per se. Background noise is expediently filtered out before the pitch estimation or determination of the pitch spectrum, so that the result of this estimation is not falsified.
  • a histogram analysis is then applied to the pitch spectrum of the audio recording(s).
  • a histogram that is the result of this analysis is shown in FIG. 11.
  • the frequency range under consideration, here the range between 30 Hz and 600 Hz, is divided into a number of frequency sections or containers (bins).
  • Each individual pitch determined in the audio recording is then assigned to the corresponding section or container. This corresponds to a section-wise summation of the occurrences of the individual pitches. In other words, it is determined for each frequency section how often one of its pitches is contained in the audio recording. The number of pitches counted in each section is then divided by the total number of pitches determined (see the histogram sketch after this list).
  • the histogram thus indicates in % how often the pitches or frequencies of a specific frequency section occurred in the audio recording.
  • the totality of all audio recordings or the totality of all pitch spectra (FIG. 10) is considered.
  • the relevant frequency range has been divided into 12 sections, although there can be fewer or more.
  • FIGS. 12 and 13 each show another pitch histogram as the result of a histogram analysis.
  • the histogram in FIG. 12 belongs to a speech signal of a test person 11 who has been proven to be autistic
  • the histogram in FIG. 13 belongs to a speech signal of a reference person who has been shown to be non-autistic.
  • the histogram provides information about the pitch variability in the voice of the subject 11, which is an objective biomarker for distinguishing an autistic person 11 from a reference person without autism.
  • as the comparison of Figures 12 and 13 illustrates, the voice of a non-autistic person varies less in pitch and is more confined to certain frequencies.
  • the frequencies used here are in a comparatively narrow frequency band, namely between 250 Hz and 400 Hz, and have a clear peak there, namely at approx. 300 Hz, see Figure 13.
  • the variability of the pitch of the voice is greater in an autistic person, as Figure 11 shows.
  • the dominant frequencies extend over a much broader frequency band, namely between 50 Hz and 350 Hz, see Figure 12, and their distribution is more even, i.e. it does not have a clearly pronounced peak.
  • FIGS. 14 and 15 each show four histograms. It can be clearly seen that autistic people use a broader spectrum of sounds.
  • the pitch histogram can be understood as a vocal biomarker.
  • the characteristic values of this biomarker are formed by the frequencies of occurrence of the n frequency segments.
  • the histogram analysis according to FIG. 11 supplies twelve characteristic values, ie a frequency of occurrence for each frequency segment.
  • the histogram or the characteristic values of this biomarker can then be evaluated in a preliminary evaluation unit 24a to determine whether the test person 11 is not autistic, compare FIG. 8. This can be determined with a certainty of more than 95%.
  • the vocal biomarker alone is not meaningful enough to be able to make a clearly positive diagnosis of autism, so that further investigations are required, as explained below.
  • the result of the preliminary assessment unit 24a is thus the intermediate diagnosis 33 that the test person 11 is clearly not autistic or needs to be examined further.
  • the preliminary evaluation unit 24a can likewise be part of the speech analysis module 21, see FIG. It should be noted that the intermediate diagnosis 33 by the preliminary assessment unit 24a is not absolutely necessary. Rather, it can be provided that each test person 11 carries out all analyzes offered by the diagnostic tool.
  • the pre-assessment unit 24a is an algorithm that compares the characteristic values with a multidimensional plane, also called a hyperplane, which, figuratively speaking, forms a separating boundary between subjects with and subjects without autism in a multidimensional data space (see the screening sketch after this list).
  • the algorithm can be a machine learning algorithm or preferably a support vector machine (SVM).
  • Such algorithms are generally known, for example from Boser, Bernhard E.; Guyon, Isabelle M.; Vapnik, Vladimir N. (1992), "A training algorithm for optimal margin classifiers", Proceedings of the Fifth Annual Workshop on Computational Learning Theory - COLT '92, p. 144, or Fradkin, Dmitriy; Muchnik, Ilya (2006), "Support Vector Machines for Classification".
  • the analysis of the vocal biomarker is followed by the analysis of a further biomarker, either in the form of the reaction time of the test person 11 to an emotional stimulus or in the form of the viewing direction of the test person 11; preferably both of these further biomarkers are analyzed, and their specific order is not important.
  • the operating software 20 activates the emotion analysis module 22 after the speech analysis module 21, see FIG.
  • the emotion analysis module 22 includes an emotion trigger control 22a, an emotion observation unit 22b and a reaction evaluation unit 22c.
  • the emotion analysis module 22 measures the reaction time of the test person 11 to an emotional stimulus, which is triggered in the test person 11 by the display of selected image data 18 in the form of individual images or individual videos on the image display device 7. The measurement is performed using facial recognition software with a compassionate artificial intelligence capable of recognizing certain emotions in a face.
  • This artificial intelligence is preferably a so-called "deep learning model" that has been trained with representative data sets on the emotions to be stimulated.
  • the emotion analysis module 22 starts the emotion triggering control 22a in a first step.
  • This is set up to load a set 18a, 18b of image data 18 from the memory 4 and to display it on the image display device 7 or to have it displayed.
  • a set 18a, 18b can be selected from a plurality of sets depending on the aforementioned person-specific data, so that children, girls or persons of a first ethnic origin are shown a first set 18a of the image data 18, and adults, boys or persons of a second ethnic origin are shown a second set 18b of the image data 18.
  • This image data 18 is a number of individual images or individual videos that are displayed one after the other on the image display device 7 . Their content is chosen in such a way that it triggers an emotional reaction in the test person in the form of joy, cheerfulness, sadness, fear or anger.
  • the image data set 18 suitably comprises a total of 6 individual images and/or individual videos, each of which stimulates an equal number of positive emotions such as joy or cheerfulness and negative emotions such as sadness, fear or anger.
  • the emotion trigger control 22a activates the emotion observation unit 22b, which in turn activates the image recording device 6 in order to capture the face of the test person 11 or their facial expression and, if necessary, at least temporarily also record it.
  • the emotion observation unit 22b can be set up to record the detected face in a video recording and to analyze it “offline”, i.e. after the entire set 18a, 18b of individual images or videos has been shown.
  • the face captured by the image recording device 6 can be evaluated in real time, so that no video recording has to be saved.
  • a video recording 29 is shown in FIG. 8, which represents the output signal of the image recording device 6 and can be either a stored video recording or a real-time recording, which is fed as a signal to the emotion observation unit 22b.
  • the emotion trigger control 22a can set a start time marker t1, t2, t3, t4 with each playback of a new frame or video, which later serves as a reference.
  • FIG. 16 illustrates this using four individual videos 18a1, 18a2, 18a3, 18a4 of the first set 18a of the image data 18, which are shown one after the other.
  • the aforementioned facial recognition software with compassionate artificial intelligence is part of the emotion observation unit 22b, which evaluates the video recording 29 to determine when the facial features of the test person 11 change to an extent that clearly indicates an emotional reaction, in particular one associated with a specific expected emotion. In each of these recognized cases, the emotion observation unit 22b sets a reaction time marker E1, E2, E4.
  • the test person 11 shows no or insufficient emotion in the third individual video 18a3, so that no reaction time mark could be set here either.
  • the individual individual images or individual videos 18a1, 18a2, 18a3, 18a4 can be played back by the emotion-triggering control 22a for a specific, specified duration, with the individual durations being able to be the same or different.
  • the next frame or video is shown when the duration of the previous frame has expired.
  • the next single image or single video can be shown as soon as or shortly after the emotion observation unit 22b has recognized an emotion.
  • the emotion observation unit 22b gives feedback to the emotion trigger control 22a to show the next individual image or individual video.
  • the emotion trigger control 22a can start a timer instead of setting the start time markers, and the emotion observation unit 22b can stop the timer when an emotion is recognized instead of setting the reaction time markers.
  • the timer reading is then read out by the reaction evaluation unit 22c and stored, since it represents the respective reaction time to the respective stimulated emotion. If a certain emotion is not triggered in the test person 11, the timer can be reset when the next individual image or video is played back or when the playback time of the last individual image or video ends. This case of non-stimulation of an emotion is also noted by the reaction evaluation unit 22c.
  • the emotion observation unit 22b can be set up to determine whether a positive or negative emotion was stimulated in the subject 11 .
  • This determination, referred to below as the type of reaction, can be represented in the form of binary information +1 or −1 and linked to the corresponding reaction time R1, R2, R4. It serves as a plausibility check and makes it possible to determine the congruence of the emotional reaction with the stimulated emotion.
  • the type of reaction can determine whether the test person 11 shows a reaction to the stimulation that is to be expected.
  • This is illustrated in FIG. 16 using a congruence indicator K1, K2, K3, K4. This results from the result of a comparison as to whether the type of reaction determined corresponds to the emotion stimulated. This comparison can also be carried out by the reaction evaluation unit 22c. If the type of reaction and the emotion stimulated are both positive or negative, there is congruence or agreement. The congruence indicator with the value 1 can show this case. Referring to FIG. 16, the test person 11 reacts as expected to the emotions stimulated by the first two individual images or individual videos 18a1, 18a2, so that the first and second congruence indicators K1, K2 each have the value 1.
  • the congruence indicator can display this case with the value 0 or -1.
  • the value −1 has been chosen so that the congruence indicator value 0 can be used to indicate that there has been no reaction.
  • the subject 11 reacts unexpectedly to the emotion stimulated by the fourth individual image or individual video 18a4.
  • the type of emotional reaction does not match the stimulated emotion, so that the fourth congruence indicator K4 has the value -1.
  • the emotion analysis module 22 thus supplies, for a number of emotions stimulated in the subject 11, a reaction time Ri, a reaction type (positive or negative emotion, +1, −1) and a congruence indicator (values −1, 0, +1), which in their entirety form the characteristic values 30 of the second biomarker, referred to as the “emotional response biomarker” in FIG. A sketch assembling these values follows after this list.
  • a table of these characteristics is shown below:
  • the operating software 20 activates the gaze direction analysis module 23, which is responsible for determining characteristic values 32 of a third biomarker of the test person 11, see Figure 7. This can be done automatically or based on a corresponding input from the test person 11, which the diagnostic tool awaits.
  • the line of sight analysis module 23 includes a line of sight guide 23a and a line of sight monitoring unit 23b, see Figure 5.
  • the line of sight analysis module 23 measures and tracks the line of sight of the test person 11 while they are looking at the image display device 7.
  • the image recording device 6 is expediently arranged relative to the image reproduction device 7 in such a way that it captures the face of the test person 11 .
  • the viewing direction analysis module 23 starts the viewing direction guidance 23a in a first step. This is set up to load at least one individual image 19a or video of the image data 16, 18, 19 from the memory 4 and to display it on the image display device 7 or to have it displayed. As with the speech analysis module 21 and emotion analysis module 22, the at least one individual image 19a or video can be selected depending on the aforementioned personal data.
  • two or more different frames of the image data 16, 18, 19 can be loaded from the memory 4 and displayed alternately or randomly on different screen positions of the image display device 7.
  • the diagnostic tool can issue a corresponding request in advance on the image display device 7 or via a loudspeaker.
  • the line of sight guide 23a can show a video on the image display device 7, specifically full-screen, which is designed to direct the gaze of the test person 11 across the image display device 7 along a specific path.
  • the video can contain, for example, an object moving relative to a stationary background, such as a clown fish moving in an aquarium.
  • events that attract the test person's attention can occur one after the other at different spatial points in the video. In these cases, the line of sight guide 23a consequently only needs this one video.
  • the line of sight guide 23a activates the line of sight monitoring unit 23b, which in turn activates the image recording device 6 in order to capture the face of the test person 11 or their line of sight and, if necessary, at least temporarily also record it.
  • the line of sight analysis module 23 can be set up to record the detected face in a video recording and to analyze it “offline”, i.e. after the at least one individual image or video has been shown.
  • a real-time evaluation of the viewing direction of the face captured by the image recording device 6 is preferably carried out, so that no video recording has to be stored permanently.
  • a video recording 31 is shown in FIG. 8, which represents the output signal of the image recording device 6 and can be either a stored video recording or a real-time recording, which is fed as a signal to the viewing direction monitoring unit 23b.
  • the line of sight observation unit 23b is formed by eye-tracking software based on artificial intelligence.
  • Such software is well known, e.g. from Krafka K., Khosla A., Kellnhofer P., Kannan H. et al., "Eye Tracking for Everyone", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2176-2184. It determines the viewing direction of the test person 11 in the form of x, y coordinates of the focus of the eye at any point in time and stores them, so that a viewing direction path 35 results, as shown in FIG. A sketch of this path representation follows after this list.
  • the characteristic values 28, 30, 32 of the three biomarkers are supplied to an overall result evaluation unit 24, which is part of the diagnostic tool according to the invention and which evaluates the characteristic values 28, 30, 32 of the biomarkers in combination.
  • the overall result evaluation unit 24 is an artificial-intelligence-based algorithm in the form of a model that has been trained with data sets of the three biomarkers of a large number of reference persons with and without autism. Strictly speaking, the algorithm is a classification algorithm that classifies the subject's biomarkers as "autistic" or "non-autistic" with a certain degree of probability.
  • the algorithm can be a machine learning algorithm or preferably a support vector machine (SVM). It simultaneously compares the entirety of all characteristic values 28, 30, 32 of the three biomarkers with a hyperplane forming a separating boundary between test persons with and test persons without autism in a multidimensional data space, in order to assign the entirety of the data formed by the characteristic values either to a reference group of people with autism or to a reference group of people without autism (see the classification sketch after this list). Depending on this assignment result, the diagnosis 34 is that the test person 11 is autistic or non-autistic with a certain probability.
  • with the diagnostic tool according to the invention, it can be determined in the diagnosis of autism with an accuracy of more than 95% whether a test person 11 suffers from autism.
  • the evaluation of the biomarkers leads to a robust and, above all, objective result.
  • using the diagnostic tool can help reduce the diagnostic backlog and facilitate the decision as to which patients should be given priority for diagnosis by the medical expert.
  • a particular advantage of the diagnostic tool is that both adults and children can be examined with it and the diagnostic tool can be used from almost anywhere and at any time, especially from home.
  • the software-based diagnostic tool is part of a diagnostic system 1.
  • this can be a computer system 2 with peripheral devices connected to it, in particular a microphone 5, a camera 6, a display/monitor 7 and an input device 8.
  • the computer system 2 itself can be a personal computer with a non-volatile memory 4 in which the diagnostic tool consisting of the aforementioned software components or modules and data is stored.
  • the computer system 2 can act as a server that can be reached via the Internet 9 with an external, in particular mobile, device 12 .
  • the peripheral devices 5, 6, 7, 8 are part of the external device, which is a smartphone or a tablet, for example.
  • the diagnostic tool is still formed by the software components or modules and data stored in the memory 4 of the computer system 2 .
  • the diagnostic tool can be arranged in a distributed manner, to be more precise, it can be embodied partly in the computer system 2 and partly in the external device 12 .
  • This embodiment variant implements an offline analysis of the biomarkers.
  • a non-volatile memory 4′ and a processor (not shown here) can thus be present in the external device 12 .
  • the non-volatile memory 4' stores the image data 16, 18, 19 and the text data 17 on the one hand, as well as part 20' of the operating software and those components 21a, 21b, 22a, 23a of the analysis modules 21, 22, 23 that do not require high computing power and do not place any special demands on the processor, such as requiring a multi-core processor.
  • of the speech analysis module 21, the speech signal trigger control 21a and the voice recording unit 21b are stored in the memory 4'. They perform the same process as previously explained, the difference being that the audio recording 27 is stored in the audio data memory 13a and not analyzed on the external device 12.
  • the emotion trigger control 22a of the emotion analysis module 22 and the gaze direction control 23a of the viewing direction analysis module 23 are stored in the memory 4 ′.
  • These also each carry out the same method as explained above, with one difference being that during the respective playback of the images or the image, a video recording 29, 31 takes place, which is stored in the video data memory 13b and not analyzed on the external device 12 .
  • a video recording unit 25 is also present in the memory 4', analogous to the voice recording unit 21b.
  • the computer system 2, in its memory 4, contains, in addition to a second part 20' of the operating software, only those components of the analysis modules 21, 22, 23 that perform the actual analysis of the biomarkers, namely the speech signal analyzer 21c of the speech analysis module 21, the emotion observation unit 22b and the reaction evaluation unit 22c of the emotion analysis module 22, and the viewing direction observation unit 23b of the viewing direction analysis module 23.
  • the overall result evaluation unit is also present in the memory 4.
  • an audio data memory 13a and a video data memory 13b are also provided in the memory 4 of the computer system 2, into which the audio and video recordings 27, 29, 31 stored on the external device 12 are transferred. This can be done immediately after the corresponding recording has been saved or only after all recordings have been made. The evaluation of the individual biomarkers and the joint assessment of their characteristic values then continue to take place on the computer system.
  • the analyzing components 21c, 22b, 22c, 23b of the analysis modules 21, 22, 23 can also be arranged in the external device 12, so that the determination of the characteristic values 28, 30, 32 of the biomarkers likewise takes place on the external device 12.
  • these characteristic values 28, 30, 32 are then transmitted to the computer system 2, where they are evaluated together by the overall result evaluation unit 24 accordingly (see the transmission sketch after this list). This is advantageous for reasons of data protection because the characteristic values of the biomarkers do not allow the test person to be identified.
  • the diagnostic tool is arranged entirely in the external device 12, so that the diagnostic system 1 is formed only from this external device 12 with the peripheral devices 5, 6, 7, 8 already integrated therein and the diagnostic tool stored thereon.
  • the diagnostic tool can be implemented in an application, called app for short, and executed on a corresponding processor of the external device.
  • the external device is a smartphone or tablet.
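The silence-based stop criterion for the audio recording described above (the voice recording unit 21b ending the recording once the signal from the voice input device 5 stays below a limit value for a certain time after a voice signal 26) can be sketched as follows. This is a minimal, non-authoritative Python approximation; the chunked processing, the RMS measure and the thresholds (0.01 RMS, 2 s of silence) are illustrative assumptions not specified in the description.

```python
import numpy as np

def should_stop(chunks, sample_rate, rms_threshold=0.01, silence_seconds=2.0):
    """Return True once speech has been heard and the signal has then stayed
    below rms_threshold for at least silence_seconds."""
    if not chunks:
        return False
    chunk_duration = len(chunks[0]) / sample_rate   # duration of one chunk in seconds
    heard_speech = False
    silent_time = 0.0
    for chunk in chunks:
        rms = float(np.sqrt(np.mean(np.square(chunk, dtype=np.float64))))
        if rms >= rms_threshold:
            heard_speech = True      # a voice signal has been received
            silent_time = 0.0
        elif heard_speech:
            silent_time += chunk_duration
            if silent_time >= silence_seconds:
                return True          # silence long enough: stop the recording
    return False
```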
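The determination of the pitch spectrum is described above as an estimation of the fundamental voice frequencies over time by a deep convolutional neural network within 30 Hz to 1000 Hz, followed by a restriction to the 30 Hz to 600 Hz band. The sketch below only approximates this step: it substitutes the classical pYIN estimator from the librosa library for the deep-learning model, and the function name and parameters are assumptions for illustration.

```python
import numpy as np
import librosa

def pitch_spectrum(audio_path, fmin=30.0, fmax=1000.0):
    """Estimate the pitch (fundamental frequency) of the recorded voice over time
    and keep only the 30-600 Hz band considered relevant for the human voice."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)      # audio recording 27
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=fmin, fmax=fmax, sr=sr, frame_length=4096
    )
    f0 = f0[~np.isnan(f0)]                                    # drop unvoiced frames
    return f0[(f0 >= 30.0) & (f0 <= 600.0)]                   # band restriction
```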
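The histogram analysis of the pitch spectrum (division of the 30-600 Hz band into a number of frequency sections, e.g. twelve, and normalisation of the per-section counts to percentages, which then form the characteristic values 28 of the vocal biomarker) could look like the following sketch; function and variable names are assumptions.

```python
import numpy as np

def pitch_histogram(pitches_hz, f_lo=30.0, f_hi=600.0, n_bins=12):
    """Percentage of all estimated pitches falling into each of n_bins equal
    frequency sections; these percentages are the characteristic values 28."""
    edges = np.linspace(f_lo, f_hi, n_bins + 1)
    counts, _ = np.histogram(pitches_hz, bins=edges)
    total = max(counts.sum(), 1)        # guard against an empty pitch spectrum
    return 100.0 * counts / total

# Example use: characteristic_values = pitch_histogram(pitch_spectrum("recording.wav"))
```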
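The preliminary evaluation unit 24a is described as a support vector machine that compares the histogram values with a hyperplane separating autistic from non-autistic reference persons and can rule out autism with a certainty of more than 95%. A minimal sketch using scikit-learn follows; the training data layout, the probability threshold and the returned labels are assumptions, not the trained model of the description.

```python
import numpy as np
from sklearn.svm import SVC

def train_prescreen(histograms, labels):
    """histograms: one 12-value pitch histogram per reference person,
    labels: 1 = diagnosed autistic, 0 = non-autistic reference group."""
    clf = SVC(kernel="linear", probability=True)   # hyperplane between the two groups
    clf.fit(np.asarray(histograms), np.asarray(labels))
    return clf

def prescreen(clf, histogram, certainty=0.95):
    """Intermediate diagnosis 33: 'clearly not autistic' only if the model is
    sufficiently certain, otherwise the test person is examined further."""
    p_not_autistic = clf.predict_proba([histogram])[0][0]     # column 0 = class 0
    return "clearly not autistic" if p_not_autistic >= certainty else "examine further"
```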
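The characteristic values 30 of the emotional-response biomarker combine, per stimulated emotion, a reaction time Ri, a reaction type (+1/−1) and a congruence indicator (−1, 0, +1). The sketch below assembles these values from hypothetical per-stimulus results; the data structure and the encoding of a missing reaction time as -1 are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StimulusResult:
    stimulus_valence: int               # +1 = positive emotion stimulated, -1 = negative
    reaction_time_s: Optional[float]    # None if no emotional reaction was recognized
    reaction_valence: Optional[int]     # +1 / -1, None if no reaction

def congruence(result: StimulusResult) -> int:
    """0 = no reaction, +1 = reaction matches the stimulated emotion, -1 = mismatch."""
    if result.reaction_valence is None:
        return 0
    return 1 if result.reaction_valence == result.stimulus_valence else -1

def emotion_characteristics(results: List[StimulusResult]) -> List[float]:
    """Flatten reaction times, reaction types and congruence indicators into one
    vector of characteristic values for the emotional-response biomarker."""
    values: List[float] = []
    for r in results:
        values.append(r.reaction_time_s if r.reaction_time_s is not None else -1.0)
        values.append(float(r.reaction_valence) if r.reaction_valence is not None else 0.0)
        values.append(float(congruence(r)))
    return values
```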
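The viewing direction monitoring unit 23b stores the gaze focus as x, y coordinates over time, yielding the viewing direction path 35. The description does not spell out in this excerpt how the path is condensed into the characteristic values 32, so the sketch below only shows the path data structure and, as a purely illustrative assumption, one conceivable derived quantity (the total path length).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GazePoint:
    t: float   # time in seconds since the start of playback
    x: float   # horizontal coordinate of the eye focus on the display
    y: float   # vertical coordinate of the eye focus on the display

def gaze_path_length(path: List[GazePoint]) -> float:
    """Total distance travelled by the gaze along the viewing direction path;
    one conceivable quantity derived from the recorded path (an assumption)."""
    return sum(
        ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
        for a, b in zip(path, path[1:])
    )
```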
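The overall result evaluation unit 24 compares the combined characteristic values 28, 30, 32 of all three biomarkers with a hyperplane trained on reference persons with and without autism, for example by means of a support vector machine. A minimal scikit-learn sketch follows; the feature ordering, the 0.5 decision threshold and the label encoding are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_overall_classifier(feature_vectors, labels):
    """feature_vectors: concatenated characteristic values (vocal, emotional-response
    and gaze biomarkers) of reference persons; labels: 1 = autism, 0 = no autism."""
    clf = SVC(kernel="linear", probability=True)   # hyperplane in the joint data space
    clf.fit(np.asarray(feature_vectors), np.asarray(labels))
    return clf

def diagnose(clf, vocal_values, emotion_values, gaze_values):
    """Combine the three characteristic-value sets and classify them."""
    features = np.concatenate([vocal_values, emotion_values, gaze_values]).reshape(1, -1)
    p_autistic = clf.predict_proba(features)[0][1]   # column 1 = class label 1
    label = "autistic" if p_autistic >= 0.5 else "non-autistic"
    return label, p_autistic
```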
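In the distributed variant in which the analyzing components run on the external device 12 and only the characteristic values 28, 30, 32 are transmitted to the computer system 2, the device-side transfer could be sketched as follows. The endpoint URL, the JSON layout and the assumption that the server answers with the diagnosis are all hypothetical; the description does not define a transfer protocol.

```python
import json
import urllib.request

def submit_characteristic_values(server_url, vocal, emotion, gaze):
    """Device-side step of the data-protecting variant: only the anonymous
    characteristic values, not the audio/video recordings, leave the device.
    server_url and the payload layout are placeholders, not a defined protocol."""
    payload = json.dumps(
        {"vocal": list(vocal), "emotion": list(emotion), "gaze": list(gaze)}
    ).encode("utf-8")
    request = urllib.request.Request(
        server_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Assumed: the server runs the overall result evaluation and returns the diagnosis.
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))
```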

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Psychiatry (AREA)
  • Artificial Intelligence (AREA)
  • Developmental Disabilities (AREA)
  • Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Child & Adolescent Psychology (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Educational Technology (AREA)
  • Social Psychology (AREA)
  • Signal Processing (AREA)
  • Neurology (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Neurosurgery (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a software-based diagnostic tool, a method for operating it and a diagnostic system for use in the diagnosis of a chronic neurological disorder, such as autism, in children as well as in adults. The diagnostic tool comprises a speech analysis module (21) for determining characteristic values (28) of a vocal biomarker of a speech signal (26) of a test person (11), at least one further module (22, 23) for determining characteristic values (30, 32) of a second biomarker, and a downstream evaluation unit (25). The speech analysis module (21) comprises a speech signal trigger control unit (21a) which displays image data on an image display device (7) in order to trigger at least one speech signal (26) in the test person (11), a speech signal recording unit (21b) which records the speech signal (26), and a speech signal analyzer (21c) which then evaluates the speech signal (26) in order first to determine which pitch occurs at which time, and which subsequently determines a frequency distribution of the pitches over a number of frequency bands of a selected frequency spectrum, this frequency distribution determining the characteristic values (28) of the vocal biomarker. On the basis of the characteristic values (28, 30, 32) of the biomarkers, the evaluation unit (25) determines, by applying a machine learning algorithm and comparison with a multidimensional boundary surface, whether the test person (11) has the chronic neurological disorder.
EP22732938.0A 2021-05-31 2022-05-30 Outil de diagnostic objectif fondé sur un logiciel, à commande vocale à utiliser dans le diagnostic d'un trouble neurologique chronique Pending EP4346558A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021205548.6A DE102021205548A1 (de) 2021-05-31 2021-05-31 Softwarebasiertes, sprachbetriebenes und objektives Diagnosewerkzeug zur Verwendung in der Diagnose einer chronischen neurologischen Störung
PCT/EP2022/064578 WO2022253742A1 (fr) 2021-05-31 2022-05-30 Outil de diagnostic objectif fondé sur un logiciel, à commande vocale à utiliser dans le diagnostic d'un trouble neurologique chronique

Publications (1)

Publication Number Publication Date
EP4346558A1 true EP4346558A1 (fr) 2024-04-10

Family

ID=82163363

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22732938.0A Pending EP4346558A1 (fr) 2021-05-31 2022-05-30 Outil de diagnostic objectif fondé sur un logiciel, à commande vocale à utiliser dans le diagnostic d'un trouble neurologique chronique

Country Status (3)

Country Link
EP (1) EP4346558A1 (fr)
DE (1) DE102021205548A1 (fr)
WO (1) WO2022253742A1 (fr)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016040673A2 (fr) 2014-09-10 2016-03-17 Oregon Health & Science University Evaluation du trouble du spectre autistique à base d'animation
AU2018350984A1 (en) 2017-10-17 2020-05-07 Satish Rao Machine learning based system for identifying and monitoring neurological disorders
GB2567826B (en) 2017-10-24 2023-04-26 Cambridge Cognition Ltd System and method for assessing physiological state
US20190239791A1 (en) 2018-02-05 2019-08-08 Panasonic Intellectual Property Management Co., Ltd. System and method to evaluate and predict mental condition
WO2019246239A1 (fr) 2018-06-19 2019-12-26 Ellipsis Health, Inc. Systèmes et procédés d'évaluation de santé mentale
US11848079B2 (en) * 2019-02-06 2023-12-19 Aic Innovations Group, Inc. Biomarker identification
KR102643554B1 (ko) 2019-03-22 2024-03-04 코그노아, 인크. 개인 맞춤식 디지털 치료 방법 및 디바이스

Also Published As

Publication number Publication date
WO2022253742A1 (fr) 2022-12-08
DE102021205548A1 (de) 2022-12-01

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231231

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR