EP1470735B1 - Method for determining an acoustic environment situation, application of the method and hearing aid - Google Patents
Method for determining an acoustic environment situation, application of the method and hearing aid
- Publication number
- EP1470735B1 (application EP02706499.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- classification
- unit
- processing stage
- processing
- class information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/40—Arrangements for obtaining a desired directivity characteristic
- H04R25/407—Circuits for combining signals of a plurality of transducers
Definitions
- the present invention relates to a method for determining an acoustic environmental situation, an application of the method, a device for determining the acoustic environment situation, and a hearing aid device.
- Modern hearing aids can today be adapted to different acoustic environment situations with the help of different hearing programs.
- the hearing aid should offer the user an optimal benefit in every situation.
- the choice of hearing program can be made either via the remote control or via a switch on the hearing aid itself.
- switching between different programs is annoying or difficult, if not impossible, for many users.
- Which program offers the optimum comfort and the best speech intelligibility at which point in time is not always easy to determine even for experienced hearing aid wearers.
- An automatic recognition of the acoustic environment situation and an associated automatic switching of the hearing program in the hearing aid is therefore desirable.
- the noise class "noise" contains a wide variety of sounds such as background conversations, railway-station noise and hairdryers, and the noise class "music" includes pop music, classical music, solo instruments, vocals, etc.
- WO 01/76321 discloses a two-step hidden Markov model (HMM) structure that has one extraction phase and two identification phases related to the results of the extraction phase.
- HMM hidden Markov model
- the present invention is therefore based on the object of specifying a method for determining an acoustic environment situation that is more robust and more accurate than the known methods.
- by processing an acoustic input signal in a multi-stage process consisting of at least two classification stages, each stage including an extraction phase and an identification phase, a highly robust and accurate classification of the current acoustic environment situation is obtained,
- Fig. 1 shows a known single-stage device for determining the acoustic environment situation, wherein the device consists of a series connection of a feature extraction unit F, a classification unit C and a post-processing unit P.
- An acoustic input signal IN, recorded for example with a microphone, is applied to the feature extraction unit F, in which characteristic features are extracted.
- according to Fig. 1, the features M extracted in the feature extraction unit F are applied to the classification unit C, in which basically one of the known pattern identification methods is used for noise classification.
- so-called distance estimators, Bayes classifiers, fuzzy logic systems or neural networks are suitable as pattern recognizers. Further information on the first two methods can be found in the publication "Pattern Classification and Scene Analysis" by Richard O. Duda and Peter E. Hart (John Wiley & Sons, 1973). Regarding neural networks, reference is made to the standard work by Christopher M. Bishop entitled "Neural Networks for Pattern Recognition" (Oxford University Press, 1995). Further reference is made to the following publications: Ostendorf et al.
- class information KI is obtained through the processing steps carried out in the classification unit C and is optionally fed to a post-processing unit P for the purpose of correcting the class membership, yielding cleaned class information KI'.
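As a concrete illustration of this single-stage chain F → C → P, the following Python sketch wires a feature extraction unit, a classification unit and a post-processing unit in series. The specific features, the threshold classifier and the majority-vote post-processing are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def extract_features(signal: np.ndarray, fs: float = 16000.0) -> dict:
    """Feature extraction unit F: derive characteristic features from
    the acoustic input signal IN (the features here are illustrative)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    rms = float(np.sqrt(np.mean(signal ** 2)))
    return {"centroid": centroid, "rms": rms}

def classify(features: dict) -> str:
    """Classification unit C: a stand-in pattern recognizer (the patent
    names distance estimators, Bayes classifiers, fuzzy logic systems
    and neural networks as suitable alternatives)."""
    return "speech" if features["centroid"] < 2000.0 else "noise"

def postprocess(class_info: str, history: list) -> str:
    """Post-processing unit P: clean the class information KI into KI',
    here by majority voting over the recent decisions."""
    history.append(class_info)
    return max(set(history), key=history.count)

fs = 16000.0
signal = np.sin(2 * np.pi * 440 * np.arange(int(fs)) / fs)  # 1 s, 440 Hz tone
history: list = []
ki = classify(extract_features(signal, fs))
ki_cleaned = postprocess(ki, history)
print(ki, ki_cleaned)  # both "speech" for this low-centroid test tone
```

The series connection mirrors the figure: the output of each unit is the input of the next, and P merely stabilizes the raw class decision over time.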
- a first embodiment of a device according to the invention is shown. It is a device with two processing stages S1 and S2, each of which comprises a feature extraction unit F1 or F2 and a classification unit C1 or C2.
- the original input signal IN is fed to both processing stages S1 and S2, namely to both the feature extraction unit F1 and the feature extraction unit F2, each of which is in turn operatively connected to the corresponding classification unit C1 or C2.
- the class information KI1, which is obtained on the basis of calculations in the classification unit C1 of the first processing stage S1, influences the classification unit C2 of the second processing stage S2 such that, for example, one of several possible pattern identification methods is selected and applied for noise classification in the classification unit C2.
- the feature extraction unit F1 extracts the features tonality, spectral centroid (CGAV: spectral center of gravity), fluctuation of the spectral centroid (CGFS), spectral width and settling time; in the classification unit C1, an HMM (hidden Markov model) classifier is used, by means of which the input signal IN is assigned to one of the following classes: "speech", "speech in noise", "noise" or "music". This is referred to as class information KI1.
- the result of the first processing stage S1 is applied to the classification unit C2 of the processing stage S2, in which a second feature set is extracted by means of the feature extraction unit F2.
- in addition to the features tonality, spectral centroid and fluctuation of the spectral centroid (CGFS), the additional feature variance of the harmonic structure (pitch), also referred to below as pitchvar, is extracted.
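The spectral features named here can be illustrated with a short sketch. How exactly the patent computes the fluctuation CGFS is not specified, so the frame-wise standard deviation used below is an assumption.

```python
import numpy as np

def spectral_centroid(frame: np.ndarray, fs: float) -> float:
    """Spectral center of gravity (CGAV) of a single signal frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

def centroid_features(signal: np.ndarray, fs: float, frame_len: int = 512):
    """Mean spectral centroid (CGAV) and its fluctuation (CGFS), the
    latter taken here as the standard deviation over frames (assumption:
    the patent does not define the exact fluctuation measure)."""
    n_frames = len(signal) // frame_len
    centroids = [spectral_centroid(signal[i * frame_len:(i + 1) * frame_len], fs)
                 for i in range(n_frames)]
    return float(np.mean(centroids)), float(np.std(centroids))

fs = 16000.0
t = np.arange(int(fs)) / fs
steady_tone = np.sin(2 * np.pi * 1000 * t)  # stationary 1 kHz tone
cgav, cgfs = centroid_features(steady_tone, fs)
print(f"CGAV approx. {cgav:.0f} Hz, CGFS approx. {cgfs:.2f} Hz")
```

For a stationary tone the centroid sits at the tone frequency and barely fluctuates; for music or fluctuating speech the CGFS value grows, which is what the second-stage rules exploit.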
- the rule-based classifier involves only a few simple heuristic decisions, which are based on the four features and guided by the following considerations: the feature tonality is used in each class for correction if the feature values lie completely outside the allowable value range of the class information KI1 determined in the first classification unit C1, i.e. by the HMM classifier.
- as expected, tonality is high for "music", medium for "speech", slightly lower for "speech in noise" and low for "noise".
- if, for example, an input signal IN is assigned by the classification unit C1 to the class "speech", it is expected that the corresponding features determined in the feature extraction unit F1 have indicated to the classification unit C1 that the relevant signal components in the input signal IN fluctuate strongly. If, on the other hand, the tonality for this input signal IN is very low, the signal is most likely not "speech" but "speech in noise".
- Similar considerations can be made for the other three features, namely for the variance of the harmonic structure (pitchvar), the spectral centroid (CGAV) and the fluctuation of the spectral centroid (CGFS).
- the rules for the rule-based classifier applied in the classification unit C2 can be formulated as follows:
  - KI1 "speech": if tonality low → KI2 "speech in noise"; if CGFS high and CGAV high → KI2 "music"; otherwise → KI2 "noise"
  - KI1 "speech in noise": if tonality high → KI2 "speech"; if tonality low or CGAV high → KI2 "noise"
  - KI1 "noise": if tonality high → KI2 "music"
  - KI1 "music": if tonality low or pitch low or CGAV high → KI2 "noise"
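Transcribed literally, the rule set above amounts to a handful of conditionals. In the following sketch the feature values are assumed to be pre-discretized into "low" / "medium" / "high" levels; the patent gives no numeric thresholds, so this discretization (and the reading of the "otherwise" branch for the class "speech") is an interpretation.

```python
def correct_class(ki1: str, tonality: str, cgav: str, cgfs: str, pitchvar: str) -> str:
    """Classification unit C2: rule-based correction of the first-stage
    class information KI1. Feature values arrive discretized as
    "low" / "medium" / "high" (hypothetical discretization)."""
    if ki1 == "speech":
        if tonality == "low":
            return "speech in noise"
        if cgfs == "high" and cgav == "high":
            return "music"
        if tonality != "medium":   # tonality outside the range expected for
            return "noise"         # "speech": the "otherwise" branch
    elif ki1 == "speech in noise":
        if tonality == "high":
            return "speech"
        if tonality == "low" or cgav == "high":
            return "noise"
    elif ki1 == "noise":
        if tonality == "high":
            return "music"
    elif ki1 == "music":
        if tonality == "low" or pitchvar == "low" or cgav == "high":
            return "noise"
    return ki1  # feature values consistent with KI1: keep the class

print(correct_class("speech", "low", "medium", "medium", "medium"))  # speech in noise
print(correct_class("noise", "high", "medium", "medium", "medium"))  # music
```

Note how the corrections only fire when a feature value contradicts the expected range of KI1; otherwise the first-stage decision is kept unchanged.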
- the feature tonality is best suited for correcting the errors generated by the classification unit C1.
- tonality is of crucial importance when using the rule-based classifier.
- the hit rate could be improved by at least 3% compared with the single-stage method; in some cases an improvement of the hit rate by 91% was even observed.
- a further embodiment is shown in a general representation. It is an n-stage processing arrangement.
- Each of the processing stages S1 to Sn has, in continuation of the above considerations, a feature extraction unit F1 to Fn and one of these downstream classification units C1 to Cn for generating the respective class information KI1 to KIn.
- a post-processing unit P1 to Pn for generating cleaned class information KI1 'to KIn' is present in each or individual processing stages S1 to Sn.
- the embodiment shown in Fig. 3 is suitable in particular also for a so-called coarse-fine classification. In a coarse-fine classification, a result obtained in a processing stage i is refined in a subsequent processing stage i+1. Thus, a rough classification is made in a higher-level processing stage, and based on this rough classification a fine classification using more specific feature extractions and/or classification methods is carried out in a downstream processing stage.
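A coarse-fine cascade of this kind can be sketched as follows. The stand-in decision logic and the subclass names follow the examples mentioned in the description (refining "noise" into traffic noise or conversational background noise), while the modulation feature and all thresholds are invented for illustration.

```python
COARSE_CLASSES = {"noise"}  # classes refined by a downstream stage (example)

def coarse_stage(features: dict) -> str:
    """Higher-level stage i: rough classification (stand-in logic)."""
    return "noise" if features["tonality"] < 0.4 else "music"

def fine_stage(coarse_class: str, features: dict) -> str:
    """Downstream stage i+1: refine the rough class into subclasses
    using more specific features (here a hypothetical modulation
    feature with an invented threshold)."""
    if coarse_class == "noise":
        return ("traffic noise" if features["modulation"] < 0.2
                else "conversational background noise")
    return coarse_class

features = {"tonality": 0.1, "modulation": 0.1}
rough = coarse_stage(features)
final = fine_stage(rough, features) if rough in COARSE_CLASSES else rough
print(final)  # traffic noise
</imports>```

The coarse stage can equally be read as generating a hypothesis that the fine stage then confirms, refines or rejects.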
- This process can also be regarded as hypothesis generation in a higher-level processing stage, the hypothesis being checked, i.e. confirmed or rejected, at a later stage of the process.
- hypotheses need not themselves be created in a higher-level processing stage (rough classification); they can also be entered by other means, in particular by manual means such as remote controls or switches.
- in Fig. 3 this is indicated by a control variable ST, shown representatively for the first processing stage S1, via which, for example, the calculations in the classification unit C1 can be overridden.
- the control variable can also be supplied to a classification unit C2 to Cn or to a post-processing unit P1 to Pn of another processing stage S1 to Sn.
- each processing stage S1 to Sn may (though this is not mandatory) be assigned a task, such as: a rough classification, a fine classification, the localization of a noise source, a check whether a particular noise source (e.g. automobile noise in a vehicle) is present, or the extraction of certain signal components of an input signal, e.g. the elimination of echo taking room characteristics into account.
- the individual processing stages S1 to Sn are therefore individual in the sense that different features are extracted in them and different classification methods are used.
- the localization of the sound source carried out in the first processing stage can be followed by a directional filtering, for example by means of multi-microphone technology.
- a feature extraction unit F1 to Fn may be shared among a plurality of classification units C1 to Cn, i.e. the results of one feature extraction unit F1 to Fn may be used by several classification units C1 to Cn. Furthermore, it is conceivable that a classification unit C1 to Cn is used in a plurality of processing stages S1 to Sn. Finally, it can be provided that the class information KI1 to KIn obtained in the various processing stages S1 to Sn, or the cleaned class information KI1' to KIn', are weighted differently in order to obtain the final classification.
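The differently weighted combination of the class information KI1 to KIn can be sketched as weighted voting. The weights themselves are illustrative, since the patent only states that the stages may be weighted differently, not how the weights are chosen.

```python
from collections import defaultdict

def final_classification(stage_results: list, stage_weights: list) -> str:
    """Combine the class information KI1..KIn of the individual
    processing stages by weighted voting (illustrative weights)."""
    scores = defaultdict(float)
    for ki, weight in zip(stage_results, stage_weights):
        scores[ki] += weight
    return max(scores, key=scores.get)

# e.g. trust the later (finer) stages more than the coarse first stage
result = final_classification(["speech", "speech in noise", "speech in noise"],
                              [0.2, 0.3, 0.5])
print(result)  # speech in noise
```

Other combination schemes (e.g. letting one stage veto another) fit the same interface; weighted voting is just the simplest instance of differently weighted class information.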
- Fig. 4 shows a further embodiment of the invention, in turn, several processing stages S1 to Sn are used.
- the class information KI1 to KIn is used not only in the immediately following processing stage but optionally in all downstream processing stages.
- the results of previous processing stages S1 to Sn can also affect the subsequent feature extraction units F1 to Fn, i.e. the features to be extracted.
- Postprocessing units P1 to Pn are conceivable which process the intermediate results of the classification and make them available as adjusted class information KI1 'to KIn'.
- in Fig. 5, a further embodiment of a multi-stage device for determining the acoustic environment situation is again shown in general form. As in the embodiments according to Fig. 3 and 4, a plurality of processing stages S1 to Sn with feature extraction units F1 to Fn and classification units C1 to Cn is shown.
- the class information KI1 to KIn obtained in each processing stage S1 to Sn are supplied to a decision unit FD in which the final classification is made by generating the class information KI.
- the decision unit FD may provide feedback signals which act on the feature extraction units F1 to Fn and / or the classification units C1 to Cn, for example to adapt individual parameters in these processing units or to exchange entire classification units C1 to Cn.
- Fig. 6 shows an unclaimed example illustrating the above general explanations of the possible structures.
- the first processing stage S1 consists of the feature extraction unit F1 and the classification unit C1.
- in the second processing stage S2, the same features are used as in the first processing stage S1.
- a recalculation of the features in the processing stage S2 is therefore unnecessary, and the results of the feature extraction unit F1 of the first processing stage S1 can be used in the second processing stage S2.
- in the second processing stage S2, therefore, only the classification method is changed, specifically as a function of the class information KI1 of the first processing stage S1.
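Selecting the classification method of stage S2 as a function of the class information KI1, while reusing the features already extracted in stage S1, might look as follows. The two stand-in classifiers and the tonality thresholds are assumptions, not the patent's actual rules.

```python
def classify_stage2(features: dict, ki1: str) -> str:
    """Second-stage classification unit C2: the classification method
    is selected as a function of the first-stage class information KI1,
    reusing the features of F1 (stand-in classifiers, invented thresholds)."""
    if ki1 in ("speech", "speech in noise"):
        # e.g. a rule-based check separating clean from noisy speech
        return "speech" if features["tonality"] > 0.5 else "speech in noise"
    # e.g. a different recognizer for the tonal/atonal distinction
    return "music" if features["tonality"] > 0.7 else "noise"

print(classify_stage2({"tonality": 0.3}, "speech"))  # speech in noise
print(classify_stage2({"tonality": 0.9}, "noise"))   # music
```

Because only the classifier changes between the two stages, the feature extraction cost is paid once, which is the point of this simplified variant.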
- Fig. 7 shows the use of the device according to the invention in a hearing aid device, which essentially has a transmission unit 200.
- denoted by 100 is a multi-stage processing unit which is realized in any one of the embodiments according to Fig. 2 to 5.
- the input signal IN is applied to both the multi-stage processing unit 100 and the transmission unit 200, in which the acoustic input signal IN is processed with the aid of the class information KI1 to KIn or KI1 'to KIn' generated in the multi-stage processing unit 100.
- it is preferably provided to select a suitable hearing program on the basis of the determined acoustic environment situation, as has been described in the introduction and in the international patent application WO 01/20965.
- Denoted by 300 is a manual input unit with the aid of which, if necessary (for example via a radio link, as schematically apparent from Fig. 7), either the multi-stage processing unit 100 is acted upon in the manner already explained or the transmission unit 200 is acted upon. With regard to acting on the transmission unit 200, reference is made to the remarks in WO 01/20965.
- a preferred application of the inventive method for determining the acoustic environment situation is the choice of a hearing program in a hearing aid.
Landscapes
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Neurosurgery (AREA)
- Otolaryngology (AREA)
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Circuit For Audible Band Transducer (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Description
The present invention relates to a method for determining an acoustic environment situation, an application of the method, a device for determining the acoustic environment situation, and a hearing aid device.
Modern hearing aids can today be adapted to different acoustic environment situations with the help of different hearing programs. The hearing aid is thus intended to offer the user optimal benefit in every situation. The choice of hearing program can be made either via a remote control or via a switch on the hearing aid itself. However, switching between different hearing programs is annoying or difficult, if not impossible, for many users. Which program offers optimum comfort and the best speech intelligibility at which point in time is not always easy to determine even for experienced hearing aid wearers. An automatic recognition of the acoustic environment situation and an associated automatic switching of the hearing program in the hearing aid is therefore desirable.
There are currently various methods known for the automatic classification of acoustic environment situations. In all of these methods, various features are extracted from the input signal, which in a hearing aid may originate from one or more microphones. Based on these features, a pattern recognizer, using an algorithm, makes a decision about the affiliation of the analyzed input signal to a specific acoustic environment situation. The various known methods differ on the one hand in the different features used to describe the acoustic environment situation (signal analysis), and on the other hand in the pattern recognizer used to classify the features (signal identification).
At the same time, the presence of several very general noise classes, such as music or noise, presents some difficulties. It is in the nature of these noise classes that they are very general and broad, i.e. can occur in a variety of ways. The noise class noise, for example, contains the most diverse sounds such as background conversations, railway-station noise and hairdryers, and the noise class music includes pop music, classical music, solo instruments, vocals, etc.
However, precisely because of the very general nature of these noise classes, it is very difficult to obtain a good recognition rate using the known processing method of a feature extraction unit and a downstream classification unit. Although the robustness of the recognition system can be improved by a suitable choice of features, as is described in
The present invention is therefore based on the object of specifying a method for determining an acoustic environment situation that is more robust and more accurate than the known methods.
This object is achieved by the measures specified in claim 1. Advantageous embodiments of the invention, an application of the method, a device and a hearing aid device are specified in further claims.
By processing an acoustic input signal in a multi-stage process consisting of at least two classification stages, each stage including an extraction phase and an identification phase, a highly robust and accurate classification of the current acoustic environment situation is obtained. With the method according to the invention, for example, a false classification of pop music into the category "speech in noise" can be successfully avoided. The method according to the invention furthermore makes it possible to subdivide a general noise class, for example noise, into subclasses, such as traffic noise or conversational background noise. Special situations, such as those occurring in the interior of an automobile ("in-the-car noise"), can likewise be recognized. Quite generally, room properties can be identified and correspondingly taken into account in the further processing of important signal components. It has been shown that the method according to the invention also makes it possible to localize noise sources, which creates the possibility of detecting the presence of a specific noise source among several other noise sources. The invention is explained in more detail below by way of example with reference to drawings, in which:
- Fig. 1 shows a known single-stage device for determining an acoustic environment situation,
- Fig. 2 shows a first embodiment of a device according to the invention with two processing stages,
- Fig. 3 shows a second, general embodiment of a multi-stage device according to the invention,
- Fig. 4 shows a third, general embodiment of a multi-stage device according to the invention,
- Fig. 5 shows a fourth, general embodiment of a multi-stage device according to the invention,
- Fig. 6 shows an unclaimed example simplified compared with the two-stage embodiment according to Fig. 2, and
- Fig. 7 shows a hearing aid device with a multi-stage device according to the invention according to one of Figs. 2 to 5.
An acoustic input signal IN, recorded for example with a microphone, is applied to the feature extraction unit F, in which characteristic features are extracted.
The feature extraction unit F1 extracts the features tonality, spectral centroid (CGAV: spectral center of gravity), fluctuation of the spectral centroid (CGFS), spectral width and settling time. In the classification unit C1, an HMM (hidden Markov model) classifier is used, by means of which the input signal IN is assigned to one of the following classes: "speech", "speech in noise", "noise" or "music". This is referred to as class information KI1. The result of the first processing stage S1 is applied to the classification unit C2 of the processing stage S2, for which a second feature set is extracted by means of the feature extraction unit F2. In addition to the features tonality, spectral centroid and fluctuation of the spectral centroid (CGFS), the additional feature variance of the harmonic structure (pitch), also referred to below as pitchvar, is extracted. On the basis of these features, the result of the first processing stage S1 is checked by a rule-based classifier in the classification unit C2 and corrected if necessary. The rule-based classifier involves only a few simple heuristic decisions, which are based on the four features and guided by the following considerations:
The feature tonality is used in each class for correction if the feature values lie completely outside the allowable value range of the class information KI1 determined in the first classification unit C1, i.e. by the HMM classifier. As expected, tonality is high for "music", medium for "speech", slightly lower for "speech in noise" and low for "noise". If, for example, an input signal IN is assigned by the classification unit C1 to the class "speech", it is expected that the corresponding features determined in the feature extraction unit F1 have indicated to the classification unit C1 that the relevant signal components in the input signal IN fluctuate strongly. If, on the other hand, the tonality for this input signal IN is very low, the signal is most likely not "speech" but "speech in noise". Similar considerations can be made for the other three features, namely for the variance of the harmonic structure (pitchvar), the spectral centroid (CGAV) and the fluctuation of the spectral centroid (CGFS). Accordingly, the rules for the rule-based classifier applied in the classification unit C2 can be formulated as follows:
- KI1 "speech": if tonality low → KI2 "speech in noise"; if CGFS high and CGAV high → KI2 "music"; otherwise → KI2 "noise"
- KI1 "speech in noise": if tonality high → KI2 "speech"; if tonality low or CGAV high → KI2 "noise"
- KI1 "noise": if tonality high → KI2 "music"
- KI1 "music": if tonality low or pitch low or CGAV high → KI2 "noise"
In this embodiment of the invention, it has surprisingly been found that almost the same features are used in the second processing stage S2 as in the first processing stage S1.
It has furthermore been found that the feature tonality is best suited for correcting the errors generated by the classification unit C1. It can thus be stated that, when the rule-based classifier is used, tonality is of crucial importance.
When testing the embodiment described above, it was found that even with this simple two-stage method the hit rate could be improved by at least 3% compared with the single-stage method. In some cases an improvement of the hit rate by 91% was even observed.
In einem erfindungsgemässen Klassifizierungssystem mit mehreren Verarbeitungsstufen S1 bis Sn kann jeder Verarbeitungsstufe S1 bis Sn, was jedoch nicht zwingend ist, eine Aufgabe zugeordnet werden, wie zum Beispiel: eine Grobklassifizierung, eine Feinklassifizierung, eine Lokalisierung einer Geräuschquelle, eine Überprüfung, ob eine bestimmte Geräuschquelle, z. B. Automobilgeräusch in einem Fahrzeug, vorhanden ist, oder eine Extraktion von bestimmten Signalanteilen von einem Eingangssignal, z. B. Elimination von Echo unter Berücksichtigung von Raumcharakteristiken. Die einzelnen Verfahrensstufen S1 bis Sn sind daher individuell im Sinne, dass in ihnen verschiedene Merkmale extrahiert und verschiedene Klassifizierungsmethoden verwendet werden.In a classification system according to the invention having a plurality of processing stages S1 to Sn, each processing stage S1 to Sn may, but this is not mandatory is to be assigned a task, such as: a rough classification, a fine classification, a localization of a noise source, a check whether a particular noise source, eg. As automotive noise in a vehicle, is present, or an extraction of certain signal components of an input signal, for. B. Elimination of echo taking into account spatial characteristics. The individual process stages S1 to Sn are therefore individual in the sense that different features are extracted in them and different classification methods are used.
In a further application example, it is provided to localize an individual signal within a mixture of different signal components in a first processing stage S1, to carry out a coarse classification of the localized signal source in a second processing stage S2, and to carry out, in a third processing stage S3, a fine classification of the coarse classification obtained in the second processing stage S2.
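The three-stage chain described above can be sketched as follows. This is an illustrative sketch only: the feature computations, class labels and thresholds are hypothetical and not taken from the patent; it merely shows how the class information of one stage selects the processing of the next.

```python
# Illustrative sketch of a three-stage classification chain (S1 -> S2 -> S3).
# All feature names, class labels and thresholds are hypothetical.

def stage1_localize(signal):
    """S1: localize the dominant source (here: pick the most energetic channel)."""
    energies = [sum(x * x for x in ch) for ch in signal]
    return max(range(len(signal)), key=lambda i: energies[i])

def stage2_coarse(channel):
    """S2: coarse classification of the localized source."""
    level = sum(abs(x) for x in channel) / len(channel)
    return "speech" if level > 0.5 else "noise"

def stage3_fine(channel, coarse_class):
    """S3: fine classification; the coarse class (KI2) selects the method."""
    if coarse_class == "speech":
        return "speech_in_quiet"
    return "stationary_noise"

def classify(signal):
    src = stage1_localize(signal)            # KI1: index of the localized source
    coarse = stage2_coarse(signal[src])      # KI2: coarse class of that source
    return stage3_fine(signal[src], coarse)  # final class information

signal = [[0.1, -0.1, 0.05], [0.9, -0.8, 0.7]]  # two microphone channels
print(classify(signal))  # -> speech_in_quiet
```

Each stage here extracts its own features and applies its own decision rule, mirroring the statement that the stages are individual with respect to both feature extraction and classification method.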
The localization of the sound source carried out in the first processing stage can be followed by directional filtering, for example by means of multi-microphone technology.
Of course, a feature extraction unit F1 to Fn can be shared among several classification units C1 to Cn, i.e. the results of a feature extraction unit F1 to Fn can be used by several classification units C1 to Cn. Furthermore, it is conceivable that a classification unit C1 to Cn is used in several processing stages S1 to Sn. Finally, it can be provided that the class information KI1 to KIn obtained in the various processing stages S1 to Sn, or the adjusted class information KI1' to KIn', is weighted differently in order to obtain the final classification.
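Such a differently weighted combination of the per-stage class information KI1 to KIn into a final classification could be sketched as below; the weights, class names and score values are hypothetical and only illustrate that later stages may be trusted more than earlier ones.

```python
# Sketch: combining class information KI1..KIn with per-stage weights.
# Class names, scores and weights are hypothetical.

def combine(class_infos, weights):
    """Each KIi is a dict mapping class name -> score. A stage considered
    more reliable (e.g. a fine-classification stage) gets a larger weight."""
    totals = {}
    for ki, w in zip(class_infos, weights):
        for cls, score in ki.items():
            totals[cls] = totals.get(cls, 0.0) + w * score
    # The final classification is the class with the highest weighted score.
    return max(totals, key=totals.get)

ki1 = {"speech": 0.6, "music": 0.4}   # coarse stage, lower weight
ki2 = {"speech": 0.3, "music": 0.7}   # fine stage, higher weight
print(combine([ki1, ki2], weights=[0.3, 0.7]))  # -> music
```

With these hypothetical weights the fine stage dominates: "music" wins (0.61) over "speech" (0.39) even though the coarse stage preferred "speech".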
In
Also in the embodiment variant according to
In
It is expressly pointed out that the feedback effects and interconnections of the processing units of the embodiment variants according to
In addition, it is conceivable that, when the invention is used in hearing aids, the various processing stages are also distributed between two hearing aids, i.e. one hearing aid each for the left and the right ear. In this embodiment, the information exchange takes place via a wired or a wireless transmission link.
Designated by 300 is a manual input unit with the aid of which, for example via a radio link, as shown schematically in
As possible classification methods in all explained embodiment variants of the invention, one of the following methods in particular is used:
- Hidden Markov models;
- Fuzzy logic;
- Classification according to Bayes;
- Rule-based classification;
- Neural networks;
- Minimum distance.
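As an illustration of the last-named method, a minimum-distance classifier assigns a feature vector to the class whose stored prototype is nearest. The following sketch uses hypothetical two-dimensional feature vectors and prototype values; the patent does not specify concrete features or prototypes.

```python
import math

# Sketch of a minimum-distance classifier: a feature vector is assigned
# to the class whose prototype vector is closest (Euclidean distance).
# Class names and prototype values are hypothetical.

PROTOTYPES = {
    "speech": (0.8, 0.2),
    "music":  (0.4, 0.9),
    "noise":  (0.1, 0.1),
}

def min_distance_classify(features):
    # math.dist computes the Euclidean distance between two points.
    return min(PROTOTYPES, key=lambda cls: math.dist(features, PROTOTYPES[cls]))

print(min_distance_classify((0.7, 0.3)))  # -> speech
```

The other listed methods (hidden Markov models, fuzzy logic, Bayes classification, rule-based classification, neural networks) would replace only this decision rule; the staged architecture around it stays the same.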
Finally, it is expressly pointed out that in the feature extraction units F1 to Fn (
A preferred application of the method according to the invention for determining the acoustic environment situation is the selection of a hearing program in a hearing aid. However, the use of the device according to the invention or the application of the method according to the invention for speech recognition is also conceivable.
Claims (10)
- Method for determining an acoustic environment situation, wherein the method consists in processing an acoustic input signal (IN) in at least two processing stages (S1, ..., Sn), which acoustic input signal (IN) is preferably recorded with the aid of at least one microphone, in such a way- that in each of the at least two processing stages (S1, ..., Sn) an extraction phase is provided in which characteristic features are extracted from the input signal (IN),- that in each processing stage (S1, ..., Sn) an identification phase is provided in which extracted characteristic features are classified,- that class information (KI1, ..., KIn) is generated based on the classification of the features in at least one processing stage (S1, ..., Sn), which class information characterizes or identifies, respectively, the acoustic environment situation,- that the class information (KI1, ..., KIn) obtained in the identification phase of a processing stage i (S1, ..., Sn) determines the manner of processing in the subsequent or subordinate processing stage i+1 (S2, ..., Sn), and- that due to the class information (KI1, ..., KIn) obtained in the processing stage i (S1, ..., Sn), specific features in the extraction phase of the subsequent or subordinate processing stage i+1 (S2, ..., Sn) and/or specific classification methods in the identification phase are selected in the subsequent or subordinate processing stage i+1 (S2, ..., Sn).
- Method according to claim 1, characterized in that one of the following classification methods is used in the identification phase:- Hidden Markov models;- Fuzzy logic;- Classification according to Bayes;- Rule-based classification;- Neural networks;- Minimum distance.
- Method according to one of the preceding claims, characterized in that technical and/or auditory-based features are extracted in the extraction phases.
- Application of the method according to one of claims 1 to 3 for adapting at least one hearing device to a current acoustic environment situation.
- Application according to claim 4, characterized in that a hearing program or a transfer function between at least one microphone and a receiver in the at least one hearing aid is adjusted based on the determined current acoustic environment situation.
- Device for determining an acoustic environment situation with a feature extraction unit (F1, ..., Fn), which is operatively connected to a classification unit (C) for processing an input signal (IN), characterized in that at least two processing stages (S1, ..., Sn) are provided, wherein in each of the at least two processing stages (S1, ..., Sn) a feature extraction unit (F1, ..., Fn) is included and wherein in each processing stage (S1, ..., Sn) a classification unit (C1, ..., Cn) is included, that the input signal (IN) is applied to the feature extraction units (F1, ..., Fn), that class information (KI1, ..., KIn) is generated by the classification units (C1, ..., Cn), that the class information (KI1, ..., KIn) of a processing stage i (S1, ..., Sn) is applied to a subsequent or subordinate processing stage i+1 (S2, ..., Sn), and that the class information (KI1, ..., KIn) of a processing stage i (S1, ..., Sn) is applied to the feature extraction unit (M1, ..., Mn) of a subsequent or subordinate processing stage i+1 (S2, ..., Sn) and/or to the classification unit (C2, ..., Cn) of the subsequent or subordinate processing stage i+1 (S2, ..., Sn).
- Device according to claim 6, characterized in that the class information (KI1, ..., KIn) of all processing stages (S1, ..., Sn) is applied to a decision unit (ED).
- Device according to claim 7, characterized in that the decision unit (ED) is operatively connected to at least one feature extraction unit (F1, ..., Fn) and/or to at least one classification unit (C1, ..., Cn).
- Hearing aid with a transmission unit (200) that is operatively connected, on its input side, to at least one microphone and, on its output side, to a transducer unit, in particular to a receiver, and with a device according to one of claims 6 to 8 for generating class information (KI1, ..., KIn), the class information (KI1, ..., KIn) being applied to the transmission unit (200).
- Hearing aid according to claim 9, characterized in that an input unit (300) is provided which is operatively connected, preferably via a radio link, to the transmission unit (200) and/or to the device according to one of claims 6 to 8.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CH2002/000049 WO2002032208A2 (en) | 2002-01-28 | 2002-01-28 | Method for determining an acoustic environment situation, application of the method and hearing aid |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1470735A2 EP1470735A2 (en) | 2004-10-27 |
EP1470735B1 true EP1470735B1 (en) | 2019-08-21 |
Family
ID=4358282
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP02706499.7A Expired - Lifetime EP1470735B1 (en) | 2002-01-28 | 2002-01-28 | Method for determining an acoustic environment situation, application of the method and hearing aid |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1470735B1 (en) |
JP (1) | JP3987429B2 (en) |
AU (2) | AU2002224722B2 (en) |
CA (1) | CA2439427C (en) |
WO (1) | WO2002032208A2 (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AUPS247002A0 (en) * | 2002-05-21 | 2002-06-13 | Hearworks Pty Ltd | Programmable auditory prosthesis with trainable automatic adaptation to acoustic conditions |
US7889879B2 (en) | 2002-05-21 | 2011-02-15 | Cochlear Limited | Programmable auditory prosthesis with trainable automatic adaptation to acoustic conditions |
EP2254351A3 (en) * | 2003-03-03 | 2014-08-13 | Phonak AG | Method for manufacturing acoustical devices and for reducing wind disturbances |
DK1326478T3 (en) | 2003-03-07 | 2014-12-08 | Phonak Ag | Method for producing control signals and binaural hearing device system |
EP1320281B1 (en) | 2003-03-07 | 2013-08-07 | Phonak Ag | Binaural hearing device and method for controlling such a hearing device |
US20040175008A1 (en) | 2003-03-07 | 2004-09-09 | Hans-Ueli Roeck | Method for producing control signals, method of controlling signal and a hearing device |
US8027495B2 (en) | 2003-03-07 | 2011-09-27 | Phonak Ag | Binaural hearing device and method for controlling a hearing device system |
JP4939935B2 (en) * | 2003-06-24 | 2012-05-30 | ジーエヌ リザウンド エー/エス | Binaural hearing aid system with matched acoustic processing |
US6912289B2 (en) | 2003-10-09 | 2005-06-28 | Unitron Hearing Ltd. | Hearing aid and processes for adaptively processing signals therein |
DE10356093B3 (en) * | 2003-12-01 | 2005-06-02 | Siemens Audiologische Technik Gmbh | Hearing aid with adaptive signal processing of received sound waves dependent on identified signal source direction and signal classification |
US20060182295A1 (en) | 2005-02-11 | 2006-08-17 | Phonak Ag | Dynamic hearing assistance system and method therefore |
DK1819195T3 (en) | 2006-02-13 | 2009-11-30 | Phonak Comm Ag | Method and system for providing hearing aid to a user |
US8068627B2 (en) | 2006-03-14 | 2011-11-29 | Starkey Laboratories, Inc. | System for automatic reception enhancement of hearing assistance devices |
US7986790B2 (en) | 2006-03-14 | 2011-07-26 | Starkey Laboratories, Inc. | System for evaluating hearing assistance device settings using detected sound environment |
US8494193B2 (en) * | 2006-03-14 | 2013-07-23 | Starkey Laboratories, Inc. | Environment detection and adaptation in hearing assistance devices |
AU2007251717B2 (en) * | 2006-05-16 | 2011-07-07 | Phonak Ag | Hearing device and method for operating a hearing device |
US8249284B2 (en) | 2006-05-16 | 2012-08-21 | Phonak Ag | Hearing system and method for deriving information on an acoustic scene |
US7957548B2 (en) | 2006-05-16 | 2011-06-07 | Phonak Ag | Hearing device with transfer function adjusted according to predetermined acoustic environments |
EP1858292B2 (en) | 2006-05-16 | 2022-02-23 | Sonova AG | Hearing device and method of operating a hearing device |
US7738666B2 (en) | 2006-06-01 | 2010-06-15 | Phonak Ag | Method for adjusting a system for providing hearing assistance to a user |
US8605923B2 (en) | 2007-06-20 | 2013-12-10 | Cochlear Limited | Optimizing operational control of a hearing prosthesis |
EP2192794B1 (en) | 2008-11-26 | 2017-10-04 | Oticon A/S | Improvements in hearing aid algorithms |
DK2569955T3 (en) | 2010-05-12 | 2015-01-12 | Phonak Ag | Hearing system and method for operating the same |
DK2596647T3 (en) | 2010-07-23 | 2016-02-15 | Sonova Ag | Hearing system and method for operating a hearing system |
DK2617127T3 (en) | 2010-09-15 | 2017-03-13 | Sonova Ag | METHOD AND SYSTEM TO PROVIDE HEARING ASSISTANCE TO A USER / METHOD AND SYSTEM FOR PROVIDING HEARING ASSISTANCE TO A USER |
JP2012083746A (en) * | 2010-09-17 | 2012-04-26 | Kinki Univ | Sound processing device |
US20150139468A1 (en) * | 2012-05-15 | 2015-05-21 | Phonak Ag | Method for operating a hearing device as well as a hearing device |
US8958586B2 (en) | 2012-12-21 | 2015-02-17 | Starkey Laboratories, Inc. | Sound environment classification by coordinated sensing using hearing assistance devices |
CN112954569B (en) * | 2021-02-20 | 2022-10-25 | 深圳市智听科技有限公司 | Multi-core hearing aid chip, hearing aid method and hearing aid |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001020965A2 (en) * | 2001-01-05 | 2001-03-29 | Phonak Ag | Method for determining a current acoustic environment, use of said method and a hearing-aid |
WO2001076321A1 (en) * | 2000-04-04 | 2001-10-11 | Gn Resound A/S | A hearing prosthesis with automatic classification of the listening environment |
-
2002
- 2002-01-28 AU AU2002224722A patent/AU2002224722B2/en not_active Ceased
- 2002-01-28 CA CA2439427A patent/CA2439427C/en not_active Expired - Lifetime
- 2002-01-28 EP EP02706499.7A patent/EP1470735B1/en not_active Expired - Lifetime
- 2002-01-28 WO PCT/CH2002/000049 patent/WO2002032208A2/en active Application Filing
- 2002-01-28 AU AU2472202A patent/AU2472202A/en active Pending
- 2002-01-28 JP JP2002535462A patent/JP3987429B2/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001076321A1 (en) * | 2000-04-04 | 2001-10-11 | Gn Resound A/S | A hearing prosthesis with automatic classification of the listening environment |
WO2001020965A2 (en) * | 2001-01-05 | 2001-03-29 | Phonak Ag | Method for determining a current acoustic environment, use of said method and a hearing-aid |
Also Published As
Publication number | Publication date |
---|---|
CA2439427A1 (en) | 2002-04-25 |
EP1470735A2 (en) | 2004-10-27 |
WO2002032208A2 (en) | 2002-04-25 |
AU2002224722B2 (en) | 2008-04-03 |
CA2439427C (en) | 2011-03-29 |
WO2002032208A3 (en) | 2002-12-05 |
AU2472202A (en) | 2002-04-29 |
JP2005504325A (en) | 2005-02-10 |
JP3987429B2 (en) | 2007-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1470735B1 (en) | Method for determining an acoustic environment situation, application of the method and hearing aid | |
DE60120949T2 (en) | A HEARING PROSTHESIS WITH AUTOMATIC HEARING CLASSIFICATION | |
WO2001020965A2 (en) | Method for determining a current acoustic environment, use of said method and a hearing-aid | |
DE102008039276B4 (en) | Sound processing apparatus, apparatus and method for controlling the gain and computer program | |
EP1247425B1 (en) | Method for operating a hearing-aid and a hearing aid | |
EP1647972A2 (en) | Intelligibility enhancement of audio signals containing speech | |
EP3386215B1 (en) | Hearing aid and method for operating a hearing aid | |
EP1404152B1 (en) | Device and method for fitting a hearing-aid | |
EP3266222B1 (en) | Device and method for driving the dynamic compressors of a binaural hearing aid | |
DE112016007138T5 (en) | DEVICE AND METHOD FOR MONITORING A WEARING STATE OF AN EARPHONE | |
EP2026607A1 (en) | Individually adjustable hearing aid and method for its operation | |
EP2200341B1 (en) | Method for operating a hearing aid and hearing aid with a source separation device | |
DE19948907A1 (en) | Signal processing in hearing aid | |
DE10114101A1 (en) | Processing input signal in signal processing unit for hearing aid, involves analyzing input signal and adapting signal processing unit setting parameters depending on signal analysis results | |
DE102015221764A1 (en) | Method for adjusting microphone sensitivities | |
EP2792165B1 (en) | Adaptation of a classification of an audio signal in a hearing aid | |
EP1303166B1 (en) | Method of operating a hearing aid and assembly with a hearing aid | |
EP1445761B1 (en) | Apparatus and method for operating voice controlled systems in vehicles | |
EP2658289B1 (en) | Method for controlling an alignment characteristic and hearing aid | |
EP1649719B1 (en) | Device and method for operating voice-assisted systems in motor vehicles | |
DE10310580A1 (en) | Device and method for adapting hearing aid microphones | |
EP1348315B1 (en) | Method for use of a hearing-aid and corresponding hearing aid | |
EP3340656A1 (en) | Method for operating a hearing aid | |
EP3048813B1 (en) | Method and device for suppressing noise based on inter-subband correlation | |
EP0540535B1 (en) | Process for speaker adaptation in an automatic speech-recognition system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20030718 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
AX | Request for extension of the european patent |
Extension state: AL LT LV MK RO SI |
|
17Q | First examination report despatched |
Effective date: 20121204 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: SONOVA AG |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20190408 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): CH DE DK FR GB LI |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 50216344 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190821 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20200129 Year of fee payment: 19 Ref country code: GB Payment date: 20200127 Year of fee payment: 19 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 50216344 Country of ref document: DE |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20200127 Year of fee payment: 19 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20200603 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200131 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200131 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 50216344 Country of ref document: DE |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20210128 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210131 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210803 Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210128 |