EP2672732B1 - Method for focusing a beamformer of a hearing instrument - Google Patents

Method for focusing a beamformer of a hearing instrument

Info

Publication number
EP2672732B1
Authority
EP
European Patent Office
Prior art keywords
acoustic
solid angle
head
acoustic signals
focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP13167409.5A
Other languages
German (de)
English (en)
Other versions
EP2672732A2 (fr)
EP2672732A3 (fr)
Inventor
Vaclav Bouse
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sivantos Pte Ltd
Original Assignee
Sivantos Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (litigation data: https://patents.darts-ip.com/?family=49625951). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Sivantos Pte Ltd filed Critical Sivantos Pte Ltd
Publication of EP2672732A2 publication Critical patent/EP2672732A2/fr
Publication of EP2672732A3 publication Critical patent/EP2672732A3/fr
Application granted granted Critical
Publication of EP2672732B1 publication Critical patent/EP2672732B1/fr
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40: Arrangements for obtaining a desired directivity characteristic
    • H04R25/407: Circuits for combining signals of a plurality of transducers
    • H04R25/50: Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505: Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H04R25/507: Customised settings for obtaining desired overall acoustical characteristics using digital signal processing implemented by neural network or fuzzy logic
    • H04R25/55: Deaf-aid sets using an external connection, either wireless or wired
    • H04R25/552: Binaural
    • H04R25/554: Deaf-aid sets using an external connection, either wireless or wired, using a wireless connection, e.g. between microphone and amplifier or using Tcoils

Definitions

  • the invention relates to a method for focusing a beamformer of a hearing instrument.
  • Hearing instruments can be embodied, for example, as hearing aids to be worn on or in the ear.
  • a hearing aid is used to supply a hearing-impaired person with acoustic ambient signals that are processed and amplified for compensation or therapy of the respective hearing impairment. It consists in principle of one or more input transducers, of a signal processing device, of an amplification device, and of an output transducer.
  • the input transducer is typically a sound receiver, e.g. a microphone, and / or an electromagnetic receiver, e.g. an induction coil.
  • the output transducer is usually realized as an electroacoustic transducer, e.g. a miniature loudspeaker, as an electromechanical transducer, e.g. a bone-conduction receiver, or as a stimulation electrode for cochlear stimulation. It is also referred to as a receiver.
  • the output transducer generates output signals that are routed to the patient's ear and are intended to produce a hearing sensation in the patient.
  • the amplifier is usually integrated in the signal processing device.
  • the hearing aid is powered by a battery integrated into the hearing aid housing.
  • the essential components of a hearing aid are usually arranged on a printed circuit board as a circuit carrier or connected thereto.
  • the problem is to determine the direction in which the beamformer should be directed and to find an optimal width, i.e. an optimal opening angle of the beam.
  • the problem is to find the spatial direction in which the directional microphone arrangement is to have its highest sensitivity, and the angle or aperture angle over which the sensitivity should be increased.
  • better directionality and sensitivity can be achieved by aiming the beam as accurately as possible at the acoustic source of interest and focusing it as closely as possible.
  • acoustic sources of interest can be speakers or voice signals, but a number of other possibilities exist as well, for example music or warning signals.
  • a hearing aid is known that uses a method for acoustic source separation.
  • a binaural microphone arrangement is used to determine the spatial direction of an acoustic source.
  • using a binaural receiver arrangement, an acoustic output signal dependent on the determined direction is then generated.
  • also known is a hearing aid that determines the spatial direction of acoustic sources.
  • a beamformer is then aligned in the determined direction in order to focus on the particular acoustic source.
  • the spatial direction can be determined inter alia on the basis of the orientation of the head or viewing direction of the user.
  • also known is a hearing aid that uses a method for "blind source separation" of different acoustic sources. The user can select the various recognized sources in succession by pressing a switch.
  • known under the name SpeechFocus is a method in which the acoustic environment is automatically searched for speech components. If speech components are identified, their spatial direction is determined, and the gain of acoustic signals from this direction is then raised relative to signals from other directions.
  • the hearing instrument may direct the beam in a desired direction, irrespective of the orientation of the head, by means of an algorithm for processing the microphone signals, the beam direction being controllable, for example, by a remote control.
  • the user cannot hear, or can hardly hear, sources outside the beam and therefore cannot register them.
  • it is not pleasant for the user and not very intuitive to control the beam remotely.
  • the hearing instrument can automatically analyze the direction of any acoustic sources of interest and automatically align the beam in that direction, such as in the SpeechFocus method of the manufacturer Siemens.
  • this can be confusing for the user, as the hearing instrument can automatically and unexpectedly toggle between different sources without user intervention.
  • a constantly adapting beamformer also alters the binaural "cues", making it difficult or even impossible for the user to locate the source of interest.
  • the beam width is usually constant or can be manually adjusted between different preset opening angles by the user.
  • a method is known in which, prior to localization of an audio source, a classification of the audio signal is performed.
  • the classification can be made on the basis of features such as harmonic signal components or the expression of formants.
  • the subsequent localization benefits from the previous classification.
  • the object of the invention is to allow an automatic adaptation of the beam width and/or the beam direction that can be used comfortably and intuitively, avoids unexpected focusing of the beam without the intervention of the hearing instrument user, and brings acoustic sources outside the beam to the user's attention in a simple and easy-to-use way.
  • directivity is a property of the beamformer that can be represented as a measure which is higher the more sharply the beamformer is focused, i.e. the smaller the solid angle of the beam.
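
As an idealized illustration (my own addition, not taken from the patent text): if the beam is modeled as uniformly sensitive inside a solid angle $\Omega$ and insensitive outside it, the directivity factor $Q$ and the directivity index $\mathrm{DI}$ are

$$ Q = \frac{4\pi}{\Omega}, \qquad \mathrm{DI} = 10\,\log_{10} Q = 10\,\log_{10}\frac{4\pi}{\Omega}\ \text{dB}. $$

Halving the focus solid angle thus raises the directivity index by about 3 dB; narrowing it from a half-space ($\Omega = 2\pi\,\mathrm{sr}$) to $\pi/2\,\mathrm{sr}$ raises it from roughly 3 dB to roughly 9 dB.
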
  • the direction-dependent, directional detection of acoustic signals is started automatically as soon as the user looks in the direction of an acoustic source, such as a speaker, the head stops moving and the user then fixates the source, i.e. keeps looking at it steadily.
  • suitable tolerance or threshold values, for example at least 15° of rotation, must be specified in order to distinguish unintentional or irrelevant minimal head movements from relevant head movements.
  • manual triggering of the focusing, for example by pressing a button on the hearing instrument or using a remote control, is not required, which contributes significantly to the practicality and comfort of the method.
  • directional alignment of the focus solid angle aligns the focus better with the source the user is interested in. This then allows a sharper focus through a narrower focus solid angle and thus increases the directionality. The increase in directionality, in turn, results in a further emphasis of the source signal of interest.
  • while the method is focused on a source, so that for the user's perception only the signals from that source are highlighted, the remaining space around the user is searched for other, additional sources. If such a further source is found, it is made perceptible to the user by increasing its gain; the user is thereby, as it were, made aware of the presence of the other source. If the user responds by moving or rotating the head, the previous focus is automatically cancelled and focusing starts anew.
  • the re-focusing is started automatically and does not need to be triggered manually, which further contributes to practicability and convenience in the application of the method.
  • a further advantageous embodiment is that the method is only performed if a head movement has been detected before the absence of head movements is detected. This prevents, for example, automatic focusing from starting even though the user has not turned toward an acoustic source, for instance because the object of attention is not an acoustic source or because the user does not wish to devote attention to any source.
  • a further advantageous embodiment consists in that the method is only performed if, before focusing, an acoustic source in the focus solid angle has been detected. This prevents focusing despite the lack of acoustic sources, which obviously would not make sense.
  • in FIG. 1, a user 1 with a left hearing instrument 2 and a right hearing instrument 3 is shown schematically in plan view.
  • the microphones of the left and right hearing instruments 2, 3 are each connected to form a directional microphone arrangement, so that the respective beam can, as seen from the user 1, point essentially either forward or backward.
  • via an e2e wireless link between the two hearing instruments, essentially the directions to the right and to the left, as seen by the user 1, are additionally made possible as further beam directions of the arrangement.
  • the automatic focusing of the beam can be performed both for each monaural hearing instrument individually (front/back) and for the binaural arrangement (right/left).
  • in FIG. 2, the left and right hearing instruments 2, 3 are shown schematically together with the essential signal processing components.
  • the hearing instruments 2, 3 have the same structure and may differ in their outer shape, in order to take account of the respective use on the left or right ear.
  • the left-hand hearing instrument 2 comprises two microphones 4, 5, which are arranged spatially separated and together form a directional microphone arrangement.
  • the signals of the microphones 4, 5 are processed by a signal processing device 11, which outputs an output signal via the receiver 8.
  • a battery 10 is used to supply power to the hearing instrument 2.
  • a motion sensor 9 is provided, whose function is to be explained below in the automatic focusing.
  • the right-hand hearing instrument 3 comprises the microphones 6, 7, which are likewise joined together to form a directional microphone arrangement.
  • in FIG. 3, the main signal processing components of the automatically focusing beamformer are shown schematically.
  • the signals of the microphones 4, 5 of the left-hand hearing instrument 2 are processed by the beamformer in such a way that a beam directed straight ahead from the user is produced (0 °, "broadside"), which has a variable beam width.
  • the variable beam width is equivalent to a variable directionality (smaller beam width means higher directionality and vice versa, where higher directionality is synonymous with greater directional dependence).
  • the beamformer is constructed in a conventional manner, for example as an arrangement of fixed beamformers, as a mixture of a fixed beamformer with an omnidirectional signal, as a beamformer with variable beam width, etc.
  • the output signals of beamformer 13 are the desired beam signal, which contains all acoustic signals from the direction of the beam; the omnidirectional omni signal, which includes all acoustic sources in all directions with undistorted binaural cues; and the anti signal, which contains all acoustic signals from directions outside the beam.
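
The patent does not give a concrete implementation of beamformer 13; the following is a minimal sketch, under my own assumptions (a single front/rear microphone pair and a simple delay-and-sum fixed beam), of how the three output signals and a variable beam width could be obtained:

```python
import numpy as np

def beamformer_outputs(front, rear, focus):
    """Return (beam, omni, anti) signals for a front/rear microphone pair.

    focus in [0, 1] acts as the beam-width control:
      focus = 0.0 -> beam equals the omni signal (widest, no directionality)
      focus = 1.0 -> beam equals the fixed broadside beam (narrowest here)
    """
    omni = front                           # one microphone used as the omni reference
    fixed_beam = 0.5 * (front + rear)      # simple fixed delay-and-sum beam
    beam = (1.0 - focus) * omni + focus * fixed_beam
    anti = omni - beam                     # complement: what the beam suppressed
    return beam, omni, anti

# toy usage: random samples stand in for microphone signals
front = np.random.randn(16000)
rear = np.random.randn(16000)
beam, omni, anti = beamformer_outputs(front, rear, focus=0.8)
```

In a real hearing instrument the fixed beam, the anti signal and the width control would be frequency-dependent filter-and-sum structures; the sketch only illustrates the roles of the three signals.
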
  • the three signals are fed to the mixer 19, and in parallel to the source detectors 15, 16, 17.
  • the source detectors 15, 16, 17 continuously determine the likelihood (or a comparable measure) that an acoustic source of interest, such as a voice source, is present in each of the three signals.
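
How the source detectors 15, 16, 17 compute this likelihood is not specified in the text; a minimal sketch under my own assumptions (frame energy plus zero-crossing rate as a crude speech cue, exponentially smoothed over frames) could look like this:

```python
import numpy as np

def speech_likelihood(signal, fs=16000, frame_ms=20, smooth=0.9):
    """Crude per-frame likelihood that a speech-like source is present.
    High energy together with a moderate zero-crossing rate is scored as
    'speech-like'; the score is exponentially smoothed over frames."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    likelihood, score = [], 0.0
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        energy = np.mean(frame ** 2)                      # frame power
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0  # crossings per sample
        frame_score = float(energy > 1e-4 and 0.02 < zcr < 0.25)
        score = smooth * score + (1.0 - smooth) * frame_score
        likelihood.append(score)
    return np.array(likelihood)
```

Running such a detector on the beam, omni and anti signals separately would yield the three continuously updated likelihood values mentioned above.
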
  • the motion sensor 9 has the task of detecting head movements of the hearing instrument user, for example rotations, and also of determining a measure of the extent of the respective movement.
  • a dedicated hardware sensor of conventional type is the fastest and most reliable way to detect head movements. However, other ways to detect head movements are also available, for example based on a spatial analysis of the acoustic signals or using additional or alternative sensor systems.
  • a head movement detector 14 analyzes the signals of the motion sensor 9 and determines from them the direction and magnitude of head movements.
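
A minimal sketch of head movement detector 14, under my own assumptions that motion sensor 9 delivers a yaw-rate signal and that the roughly 15° tolerance mentioned earlier separates relevant from irrelevant movements:

```python
def detect_head_turn(yaw_rate_dps, dt=0.01, threshold_deg=15.0):
    """Integrate a yaw-rate signal (deg/s, sampled every dt seconds) and
    report a head turn once the accumulated rotation exceeds the tolerance.

    Returns (moved, direction, magnitude_deg):
      moved         True if |rotation| >= threshold_deg
      direction     'left', 'right' or None
      magnitude_deg accumulated rotation in degrees (signed)
    """
    angle = 0.0
    for rate in yaw_rate_dps:
        angle += rate * dt                 # simple rectangular integration
    moved = abs(angle) >= threshold_deg
    direction = None
    if moved:
        # sign convention assumed here: positive yaw rate = turn to the left
        direction = 'left' if angle > 0 else 'right'
    return moved, direction, angle

# toy usage: 1 s of turning at 40 deg/s counts as a relevant head movement
samples = [40.0] * 100
print(detect_head_turn(samples))           # (True, 'left', ~40.0)
```
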
  • All signals are fed to the focus control 18, which determines the beam width as a function of the signals.
  • the determined beam width is then supplied by the focus controller 18 to the beamformer 13 as an input signal.
  • in addition to the beam width, the focus control also controls the mixer 19, which mixes the three signals explained above (omni, anti, beam) and forwards the result to the hearing instrument signal processing 20.
  • in the hearing instrument signal processing 20, the acoustic signals are further processed in the way usual for hearing instruments and output, amplified, to the receiver 8.
  • the receiver 8 generates the acoustic output signal for the hearing instrument user.
  • Focus control 18 is preferably implemented as a finite-state machine (FSM), the finite states of which are explained below.
  • the three signals (omni, anti, beam) are mixed by mixer 19 in such a way that the user receives a natural-sounding spatial signal. This also means that no abrupt transitions take place, but rather gentle ones. In the hearing instrument signal processing 20, the further processing steps take place, which in particular serve to compensate for or treat a hearing impairment of the user.
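
One way (an assumption of mine, not the patent's concrete design) to realize these gentle transitions in mixer 19 is to let the actual mixing weights glide toward the target weights requested by focus control 18, block by block:

```python
import numpy as np

class SmoothMixer:
    """Mix the omni, anti and beam signals with slowly varying weights."""

    def __init__(self, smoothing=0.95):
        self.weights = np.array([1.0, 0.0, 0.0])   # start fully on the omni signal
        self.smoothing = smoothing

    def process(self, omni, anti, beam, target_weights):
        """Blend one block of samples; the weights glide toward target_weights."""
        target = np.asarray(target_weights, dtype=float)
        self.weights = self.smoothing * self.weights + (1.0 - self.smoothing) * target
        w_omni, w_anti, w_beam = self.weights
        return w_omni * omni + w_anti * anti + w_beam * beam

# toy usage: fade from omni toward the beam signal over successive blocks
mixer = SmoothMixer()
block = np.zeros(160)
for _ in range(20):
    out = mixer.process(block, block, block, target_weights=(0.0, 0.0, 1.0))
```
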
  • in FIG. 4, an exemplary situation is shown schematically: the hearing instrument user 1 with left and right hearing instruments 2, 3 in plan view. Frontally in front of the user 1 is an acoustic source 21, in whose direction the user 1 looks. The beam of the respective hearing instrument 2, 3 is focused on the acoustic source 21, the beam width having been reduced to the angle α1. The further acoustic source 22 thus lies outside the beam, but would lie within a beam of width α2. The further acoustic source 23 is still further outside the beam and is located almost next to the user 1.
  • in FIGS. 5 to 8, the operation of the automatic focusing of the beam is explained schematically.
  • the beam with the width α1 is focused on the acoustic source 21.
  • the user moves the head away from the source 21 and towards the source 23.
  • the head movement is detected by the automatic focus control (or by the motion sensor).
  • the automatic focus control then defocuses the beam by switching to the Omni signal.
  • defocusing can also be performed by setting the beam width to a predetermined opening angle that is significantly larger than in the focused state.
  • the user 1 has completely turned his head to the acoustic source 23.
  • the head movement ends and the user 1 looks towards the source 23.
  • the end of the head movement is detected, whereupon the automatic focusing of the beam on the source 23 begins.
  • the omnidirectional signal is changed over to the direction-dependent beam signal and / or the greatly increased beam width is gradually reduced.
  • the beam width is reduced until the source 23 is fully focused. A further reduction of the beam width would result in the source no longer being completely within the beam, so that the signal of the source 23, or its portion in the beam signal, decreases.
  • the focusing of the beam, i.e. the reduction of the opening angle of the beam, is stopped as soon as the source 23 is sharply focused, which is the case at the angle β drawn in FIG. 8. Any further reduction of the beam angle beyond this point is reversed.
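
The stopping rule just described can be summarized in a short control loop; the sketch below is my own reading of it, with the beamformer and the level measurement abstracted as callables:

```python
def focus_until_level_drops(measure_beam_level, set_beam_width,
                            start_width_deg=120.0, min_width_deg=30.0,
                            step_deg=10.0):
    """Narrow the beam stepwise; stop (and undo the last step) as soon as
    the measured level of the focused source starts to decrease, or when
    the minimum beam width is reached.

    measure_beam_level: callable returning the current source level in the beam
    set_beam_width:     callable applying a beam width (in degrees) to the beamformer
    """
    width = start_width_deg
    set_beam_width(width)
    best_level = measure_beam_level()
    while width - step_deg >= min_width_deg:
        set_beam_width(width - step_deg)
        level = measure_beam_level()
        if level < best_level:             # source no longer fully inside the beam
            set_beam_width(width)          # revert the last reduction
            break
        width -= step_deg
        best_level = level
    return width
```
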
  • the FSM starts in the "Omni" 40 state (no directionality, the mixer outputs the Omni signal), making the hearing instrument user normal and direction independent hear.
  • the FSM starts in the "Omni" 40 state (no directionality, the mixer outputs the Omni signal), making the hearing instrument user normal and direction independent hear.
  • This state he is able to locate acoustic sources normally. He can move his head in a normal and natural way and turn, for example, to search for an interesting acoustic source, such as a speaker.
  • the FSM then enters the Focusing state 42 and the directionality of the beamformer is gradually increased (the beam width is reduced and a correspondingly more directional signal is output to the user).
  • the proportion of the source signal in the beam signal increases, and the mixer passes on the correspondingly filtered signal by outputting exclusively or mainly the beam signal.
  • eventually, the proportion of the source signal of interest in the beam signal cannot be increased any further.
  • the directionality is then not changed further (the beam width is not reduced further), and the FSM exits loop 43 and changes to the "Focused" state 44.
  • the automatic beam control continuously monitors head movements of the user by means of the motion sensor (Loop 47). As long as no head movements are detected, the FSM remains in the "Focused" state 44.
  • if a possible further acoustic source is detected, the FSM changes to the Glimpsing state 45.
  • in the Glimpsing state 45, a small portion of the omni signal containing the possible other source is mixed by the mixer into the output signal for the user.
  • the user registers that there is another source. If the user does not turn to this new source, he does not move his head.
  • the automatic focus control detects this with the help of the motion sensor and, after a certain period of time, regulates the proportion of the omni signal back to zero (fade out), so that the user can again concentrate fully on the focused signal.
  • the described "glimpsing" is performed each time a new source appears in the acoustic environment or when the acoustic environment changes significantly.
  • if the user moves his head because he wants to focus on a new signal or simply wants to survey the acoustic environment, which corresponds to the situation shown earlier in FIG. 6, the head movement is detected and the focus control switches immediately to the omni signal, i.e. the beam width is greatly increased again and/or the mixer additionally or exclusively outputs the omni signal. This is shown in the figure by element 46.
  • the omnidirectional signal allows the user to survey the acoustic environment with all spatial cues undistorted, cues which are distorted or missing in the beam signal. This allows the user normal localization of acoustic sources. Once the user concentrates on another acoustic source, which corresponds to the situation explained above for FIG. 7, the FSM again enters the Focusing state 42, and the beam focusing starts anew.
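
Putting the states described above together, focus control 18 could be sketched as the following finite-state machine; the state names follow the text, while the block-based timing and the boolean inputs are my own simplifications:

```python
class FocusControlFSM:
    """Finite-state machine roughly following the states described above:
    Omni -> Focusing -> Focused <-> Glimpsing, with any head movement
    forcing a quick return to Omni (fast defocusing)."""

    OMNI, FOCUSING, FOCUSED, GLIMPSING = "Omni", "Focusing", "Focused", "Glimpsing"

    def __init__(self, glimpse_timeout_blocks=50):
        self.state = self.OMNI
        self.glimpse_timer = 0
        self.glimpse_timeout = glimpse_timeout_blocks

    def step(self, head_moving, head_stopped_on_source, focus_complete, new_source):
        """Advance one processing block; all inputs are booleans."""
        if head_moving:                        # fast defocus on any head movement
            self.state = self.OMNI
            return self.state
        if self.state == self.OMNI:
            if head_stopped_on_source:         # user fixates a source: start focusing
                self.state = self.FOCUSING
        elif self.state == self.FOCUSING:
            if focus_complete:                 # beam width cannot usefully shrink further
                self.state = self.FOCUSED
        elif self.state == self.FOCUSED:
            if new_source:                     # briefly blend in some omni signal
                self.state = self.GLIMPSING
                self.glimpse_timer = self.glimpse_timeout
        elif self.state == self.GLIMPSING:
            self.glimpse_timer -= 1
            if self.glimpse_timer <= 0:        # user did not react: fade the omni out again
                self.state = self.FOCUSED
        return self.state

# toy usage: one block in which the user has just fixated a source
fsm = FocusControlFSM()
print(fsm.step(head_moving=False, head_stopped_on_source=True,
               focus_complete=False, new_source=False))    # -> "Focusing"
```
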
  • the foregoing method, by combining the various beamformer signals with the head movement detector, allows a functionality closely related to the natural human way of focusing on different sources.
  • the head movement is used as natural feedback to control the beamformer: automatic focusing on a target and fast defocusing. Focusing occurs gradually while the user does not move his head.
  • the defocusing during head movement, i.e. the transition from the beam signal to the omnidirectional signal, takes place quickly, so that in the case of changes an undistorted signal with all spatial information is available without delay.
  • the glimpsing function gives the user the possibility to stay focused on one source while, on the other hand, keeping an overview of new sources and changes.
  • the direction-dependent, directional detection of acoustic signals is advantageously started automatically as soon as the user looks in the direction of an acoustic source, for example a speaker, and then gazes fixedly at the source.

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Claims (9)

  1. Method for focusing a beamformer (13) of a hearing instrument (2, 3), comprising the steps of:
    - detecting the orientation and/or spatial position of the head of the hearing instrument user (1),
    - detecting movements of the head of the hearing instrument user (1) by means of a motion sensor (9) or on the basis of a spatial analysis of acoustic signals,
    - direction-dependent detection of acoustic signals when the hearing instrument user (1) has moved his head in the direction of a source of an acoustic signal (21),
    - subsequently increasing the gain of acoustic signals originating from a focus solid angle (α1, α2, β) frontally in front of the head of the hearing instrument user (1), toward which the hearing instrument user (1) is turned, relative to acoustic signals from other solid angles,
    - subsequently performing a gradual focusing by reducing the focus solid angle (α2) until the level of acoustic signals originating from the focus solid angle (α2) decreases as a result of the reduction of the focus solid angle (α2) and until a minimum focus solid angle (α1, β) is reached.
  2. Method according to claim 1, comprising the additional step of:
    - identifying the acoustic source (21) in the focus solid angle (α2) on the basis of the acoustic signals originating from the focus solid angle (α2).
  3. Method according to claim 2, comprising the additional step of:
    - performing the focusing until the level of acoustic signals of the acoustic source (21) in the focus solid angle (α2) decreases as a result of the reduction of the focus solid angle (α2).
  4. Method according to claim 2 or 3, comprising the additional steps of:
    - determining the spatial direction in which the acoustic source (21) is located,
    - centering the focus solid angle (α2) on this direction.
  5. Method according to any one of the preceding claims, comprising the additional steps of:
    - subsequently detecting further acoustic signals originating from solid angles (γ) other than the focus solid angle (α2),
    - detecting further acoustic sources (23) on the basis of the further acoustic signals.
  6. Method according to claim 5, comprising the additional steps of:
    - upon detection of a further acoustic source (23), increasing the gain of the further acoustic signals,
    - detecting the orientation and/or spatial position of the head of the hearing instrument user (1) after the gain of the further acoustic signals has been increased,
    - upon detecting the absence of head movement during a predetermined time interval after the gain of the further acoustic signals has been increased, reducing the gain again,
    - when a head movement has been detected during the predetermined time interval, performing a defocusing by increasing the focus solid angle (α2) again and then carrying out the method according to any one of the preceding claims.
  7. Method according to claim 5, comprising the additional steps of:
    - if no further acoustic sources (23) are detected, detecting the orientation and/or spatial position of the head of the hearing instrument user (1),
    - upon detection of a head movement, performing a defocusing by increasing the focus solid angle (α2) again or by switching from direction-dependent to direction-independent detection of the acoustic signals.
  8. Method according to any one of the preceding claims, wherein the method is only carried out if a head movement has been detected before the detection of the absence of head movements.
  9. Method according to any one of the preceding claims, wherein the method is only carried out if an acoustic source (21) has been detected in the focus solid angle (α2) before the focusing.
EP13167409.5A 2012-06-06 2013-05-13 Procédé de focalisation d'un générateur de faisceau d'un instrument auditif Active EP2672732B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261656110P 2012-06-06 2012-06-06
DE102012214081A DE102012214081A1 (de) 2012-06-06 2012-08-08 Verfahren zum Fokussieren eines Hörinstruments-Beamformers

Publications (3)

Publication Number Publication Date
EP2672732A2 EP2672732A2 (fr) 2013-12-11
EP2672732A3 EP2672732A3 (fr) 2014-07-16
EP2672732B1 true EP2672732B1 (fr) 2016-07-27

Family

ID=49625951

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13167409.5A Active EP2672732B1 (fr) 2012-06-06 2013-05-13 Procédé de focalisation d'un générateur de faisceau d'un instrument auditif

Country Status (5)

Country Link
US (1) US8867763B2 (fr)
EP (1) EP2672732B1 (fr)
CN (1) CN103475974B (fr)
DE (1) DE102012214081A1 (fr)
DK (1) DK2672732T3 (fr)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102012214081A1 (de) 2012-06-06 2013-12-12 Siemens Medical Instruments Pte. Ltd. Verfahren zum Fokussieren eines Hörinstruments-Beamformers
US9124990B2 (en) * 2013-07-10 2015-09-01 Starkey Laboratories, Inc. Method and apparatus for hearing assistance in multiple-talker settings
EP2928210A1 (fr) 2014-04-03 2015-10-07 Oticon A/s Système d'assistance auditive biauriculaire comprenant une réduction de bruit biauriculaire
CN103901401B (zh) * 2014-04-10 2016-08-17 北京大学深圳研究生院 一种基于双耳匹配滤波器的双耳声音源定位方法
WO2015154282A1 (fr) * 2014-04-10 2015-10-15 华为终端有限公司 Dispositif d'appel, et procédé et dispositif de commutation appliqués à ce dernier
US9961456B2 (en) * 2014-06-23 2018-05-01 Gn Hearing A/S Omni-directional perception in a binaural hearing aid system
CN106686185B (zh) * 2014-06-30 2019-07-19 歌尔科技有限公司 提高免提通话设备通话质量的方法、装置和免提通话设备
US20180270571A1 (en) * 2015-01-21 2018-09-20 Harman International Industries, Incorporated Techniques for amplifying sound based on directions of interest
US10499164B2 (en) * 2015-03-18 2019-12-03 Lenovo (Singapore) Pte. Ltd. Presentation of audio based on source
CN106162427B (zh) * 2015-03-24 2019-09-17 青岛海信电器股份有限公司 一种声音获取元件的指向性调整方法和装置
DE102015211747B4 (de) * 2015-06-24 2017-05-18 Sivantos Pte. Ltd. Verfahren zur Signalverarbeitung in einem binauralen Hörgerät
DK3329692T3 (da) * 2015-07-27 2021-08-30 Sonova Ag Mikrofonaggregat med klemmefastgørelse
DE102015219572A1 (de) * 2015-10-09 2017-04-13 Sivantos Pte. Ltd. Verfahren zum Betrieb einer Hörvorrichtung und Hörvorrichtung
US11445305B2 (en) * 2016-02-04 2022-09-13 Magic Leap, Inc. Technique for directing audio in augmented reality system
EP3411873B1 (fr) * 2016-02-04 2022-07-13 Magic Leap, Inc. Technique d'orientation audio dans un système de réalité augmentée
EP3270608B1 (fr) 2016-07-15 2021-08-18 GN Hearing A/S Dispositif d'aide auditive doté d'un traitement adaptatif et procédé associé
EP3590097B1 (fr) 2017-02-28 2023-09-13 Magic Leap, Inc. Enregistrement d'objet réel et virtuel dans un dispositif de réalité mixte
WO2019084214A1 (fr) * 2017-10-24 2019-05-02 Whisper.Ai, Inc. Séparation et recombinaison audio pour l'intelligibilité et le confort
US10536785B2 (en) 2017-12-05 2020-01-14 Gn Hearing A/S Hearing device and method with intelligent steering
DE102018206979A1 (de) 2018-05-04 2019-11-07 Sivantos Pte. Ltd. Verfahren zum Betrieb eines Hörgeräts und Hörgerät
US11089402B2 (en) * 2018-10-19 2021-08-10 Bose Corporation Conversation assistance audio device control
US10795638B2 (en) 2018-10-19 2020-10-06 Bose Corporation Conversation assistance audio device personalization
EP3672280B1 (fr) * 2018-12-20 2023-04-12 GN Hearing A/S Dispositif auditif à formation de faisceau basée sur l'accélération
EP3687188B1 (fr) * 2019-01-25 2022-04-27 ams AG Système audio d'annulation de bruit et procédé de réglage d'une fonction de transfert cible d'un système audio d'annulation de bruit
US10798499B1 (en) 2019-03-29 2020-10-06 Sonova Ag Accelerometer-based selection of an audio source for a hearing device
TWI725668B (zh) * 2019-12-16 2021-04-21 陳筱涵 注意力集中輔助系統
DE102020207586A1 (de) * 2020-06-18 2021-12-23 Sivantos Pte. Ltd. Hörsystem mit mindestens einem am Kopf des Nutzers getragenen Hörinstrument sowie Verfahren zum Betrieb eines solchen Hörsystems
US11482238B2 (en) 2020-07-21 2022-10-25 Harman International Industries, Incorporated Audio-visual sound enhancement
WO2022076404A1 (fr) 2020-10-05 2022-04-14 The Trustees Of Columbia University In The City Of New York Systèmes et procédés pour la séparation de la parole basée sur le cerveau
JP2022062875A (ja) 2020-10-09 2022-04-21 ヤマハ株式会社 音信号処理方法および音信号処理装置
JP2022062876A (ja) * 2020-10-09 2022-04-21 ヤマハ株式会社 音信号処理方法および音信号処理装置
CN113938804A (zh) * 2021-09-28 2022-01-14 武汉左点科技有限公司 一种范围性助听方法及装置
DE102022201706B3 (de) 2022-02-18 2023-03-30 Sivantos Pte. Ltd. Verfahren zum Betrieb eines binauralen Hörvorrichtungssystems und binaurales Hörvorrichtungssystem
CN115620727B (zh) * 2022-11-14 2023-03-17 北京探境科技有限公司 音频处理方法、装置、存储介质及智能眼镜

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5964994A (ja) 1982-10-05 1984-04-13 Matsushita Electric Ind Co Ltd マイクロホン装置
US5473701A (en) 1993-11-05 1995-12-05 At&T Corp. Adaptive microphone array
US20100074460A1 (en) 2008-09-25 2010-03-25 Lucent Technologies Inc. Self-steering directional hearing aid and method of operation thereof
EP2672732A2 (fr) 2012-06-06 2013-12-11 Siemens Medical Instruments Pte. Ltd. Procédé de focalisation d'un générateur de faisceau d'un instrument auditif

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2001246395A1 (en) 2000-04-04 2001-10-15 Gn Resound A/S A hearing prosthesis with automatic classification of the listening environment
US20040175008A1 (en) 2003-03-07 2004-09-09 Hans-Ueli Roeck Method for producing control signals, method of controlling signal and a hearing device
DE10351509B4 (de) 2003-11-05 2015-01-08 Siemens Audiologische Technik Gmbh Hörgerät und Verfahren zur Adaption eines Hörgeräts unter Berücksichtigung der Kopfposition
JP2009514312A (ja) * 2005-11-01 2009-04-02 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ 音響追跡手段を備える補聴器
DE102007005861B3 (de) 2007-02-06 2008-08-21 Siemens Audiologische Technik Gmbh Hörvorrichtung mit automatischer Ausrichtung des Richtmikrofons und entsprechendes Verfahren
US8509454B2 (en) * 2007-11-01 2013-08-13 Nokia Corporation Focusing on a portion of an audio scene for an audio signal
EP2315458A3 (fr) 2008-04-09 2012-09-12 Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. Appareil et procédé pour générer des caractéristiques de filtres
EP2200341B1 (fr) 2008-12-16 2015-02-25 Siemens Audiologische Technik GmbH Procédé de fonctionnement d'un appareil d'aide auditive et appareil d'aide auditive doté d'un dispositif de séparation de sources
JP5409656B2 (ja) 2009-01-22 2014-02-05 パナソニック株式会社 補聴装置
EP2629551B1 (fr) * 2009-12-29 2014-11-19 GN Resound A/S Aide auditive binaurale
US9113247B2 (en) * 2010-02-19 2015-08-18 Sivantos Pte. Ltd. Device and method for direction dependent spatial noise reduction
DE102010026381A1 (de) 2010-07-07 2012-01-12 Siemens Medical Instruments Pte. Ltd. Verfahren zum Lokalisieren einer Audioquelle und mehrkanaliges Hörsystem
US8989413B2 (en) * 2011-09-14 2015-03-24 Cochlear Limited Sound capture focus adjustment for hearing prosthesis

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5964994A (ja) 1982-10-05 1984-04-13 Matsushita Electric Ind Co Ltd マイクロホン装置
US5473701A (en) 1993-11-05 1995-12-05 At&T Corp. Adaptive microphone array
US20100074460A1 (en) 2008-09-25 2010-03-25 Lucent Technologies Inc. Self-steering directional hearing aid and method of operation thereof
EP2672732A2 (fr) 2012-06-06 2013-12-11 Siemens Medical Instruments Pte. Ltd. Procédé de focalisation d'un générateur de faisceau d'un instrument auditif

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HOFFMAN ET AL.: "Constrained Optimum Filtering for Multi-Microphone digital hearing aids", CONFERENCE RECORD TWENTY-FOURTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS AND COMPUTERS, 1990, pages 28 - 32, XP032290890
PETERSON ET AL.: "Multimicrophone adaptive beamforming for interference reduction in hearing aids", JOURNAL OF REHABILITATION RESEARCH AND DEVELOPMENT, vol. 24, no. 4, 1987, pages 103 - 110, XP055376583

Also Published As

Publication number Publication date
DE102012214081A1 (de) 2013-12-12
US8867763B2 (en) 2014-10-21
CN103475974A (zh) 2013-12-25
CN103475974B (zh) 2016-07-27
EP2672732A2 (fr) 2013-12-11
DK2672732T3 (da) 2016-11-28
US20130329923A1 (en) 2013-12-12
EP2672732A3 (fr) 2014-07-16

Similar Documents

Publication Publication Date Title
EP2672732B1 (fr) Procédé de focalisation d'un générateur de faisceau d'un instrument auditif
EP1307072B1 (fr) Procédé pour actionner une prothèse auditive et prothèse auditive
EP1589784B1 (fr) Prothèse auditive avec dispositif de commande
DE102017214164B3 (de) Verfahren zum Betrieb eines Hörgeräts und Hörgerät
DE102007008738A1 (de) Verfahren zur Verbesserung der räumlichen Wahrnehmung und entsprechende Hörvorrichtung
EP2645743B1 (fr) Appareil auditif pour un traitement binaural et procédé destiné à préparer un traitement binaural
EP2373064B1 (fr) Méthode et appareils pour le contrôle vocal des appareils de correction auditive binaurale
EP2226795A1 (fr) Dispositif auditif et procédé de réduction d'un bruit parasite pour un dispositif auditif
EP2182741B1 (fr) Dispositif auditif doté d'une unité de reconnaissance de situation spéciale et procédé de fonctionnement d'un dispositif auditif
EP1561363B1 (fr) Interface d'entree vocale
EP2200341A1 (fr) Procédé de fonctionnement d'un appareil d'aide auditive et appareil d'aide auditive doté d'un dispositif de séparation de sources
EP2080410A1 (fr) Procédé d'utilisation d'une aide auditive et aide auditive
EP1881738B1 (fr) Procédé d'utilisation d'une prothèse auditive et assemblage avec une prothèse auditive
EP2658289B1 (fr) Procédé de commande d'une caractéristique de guidage et système auditif
EP1432282A2 (fr) Procédé d'adaptation d'une prothèse auditive à une situation environnante momentanée et système de prothèse auditive
EP2373062A2 (fr) Procédé de réglage double pour un système auditif
WO2019215200A1 (fr) Procédé de fonctionnement d'un système auditif ainsi que système auditif
EP2373063A1 (fr) Dispositif auditif et procédé de réglage de celui-ci pour un fonctionnement sans contre-réaction
EP3697107B1 (fr) Procédé de fonctionnement d'un système auditif et système auditif
DE102005019604B4 (de) Aktiv-Richtlautsprecher zum Beschallen einer Hörzone und Verfahren zum automatischen Anpassen der Wiedergabelautstärke eines einer Hörzone zugeordneten Richtlautsprechers
EP3926981A1 (fr) Système auditif doté d'au moins un instrument auditif porté sur la tête de l'utilisateur, ainsi que mode de fonctionnement d'un tel système auditif
WO2004100609A1 (fr) Systeme de reproduction de signaux audio sensible a la position
DE102022201706B3 (de) Verfahren zum Betrieb eines binauralen Hörvorrichtungssystems und binaurales Hörvorrichtungssystem
EP2592850B1 (fr) Activation et désactivation automatiques d'un système auditif binaural
EP2373065B2 (fr) Dispositif auditif et procédé de production d'une caractéristique de direction omnidirectionnelle

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIC1 Information provided on ipc code assigned before grant

Ipc: H04R 25/00 20060101AFI20140606BHEP

17P Request for examination filed

Effective date: 20141120

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

17Q First examination report despatched

Effective date: 20150611

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SIVANTOS PTE. LTD.

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20160309

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: E. BLUM AND CO. AG PATENT- UND MARKENANWAELTE, CH

Ref country code: AT

Ref legal event code: REF

Ref document number: 816653

Country of ref document: AT

Kind code of ref document: T

Effective date: 20160815

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502013003857

Country of ref document: DE

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20161122

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20160727

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161027

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161127

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161128

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161028

REG Reference to a national code

Ref country code: DE

Ref legal event code: R026

Ref document number: 502013003857

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

PLBI Opposition filed

Free format text: ORIGINAL CODE: 0009260

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

26 Opposition filed

Opponent name: GN HEARING A/S / WIDEX A/S / OTICON A/S

Effective date: 20170427

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161027

PLAX Notice of opposition and request to file observation + time limit sent

Free format text: ORIGINAL CODE: EPIDOSNOBS2

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170531

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

PLBB Reply of patent proprietor to notice(s) of opposition received

Free format text: ORIGINAL CODE: EPIDOSNOBS3

PLAB Opposition data, opponent's data or that of the opponent's representative modified

Free format text: ORIGINAL CODE: 0009299OPPO

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

R26 Opposition filed (corrected)

Opponent name: GN HEARING A/S / WIDEX A/S / OTICON A/S

Effective date: 20170427

RAP2 Party data changed (patent owner data changed or rights of a patent transferred)

Owner name: SIVANTOS PTE. LTD.

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170513

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20170531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170513

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

APBM Appeal reference recorded

Free format text: ORIGINAL CODE: EPIDOSNREFNO

APBP Date of receipt of notice of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA2O

APAH Appeal reference modified

Free format text: ORIGINAL CODE: EPIDOSCREFNO

PLAB Opposition data, opponent's data or that of the opponent's representative modified

Free format text: ORIGINAL CODE: 0009299OPPO

APBM Appeal reference recorded

Free format text: ORIGINAL CODE: EPIDOSNREFNO

APBP Date of receipt of notice of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA2O

R26 Opposition filed (corrected)

Opponent name: GN HEARING A/S / OTICON A/S

Effective date: 20170427

APBQ Date of receipt of statement of grounds of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA3O

APBQ Date of receipt of statement of grounds of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA3O

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20130513

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 816653

Country of ref document: AT

Kind code of ref document: T

Effective date: 20180513

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160727

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180513

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160727

PLAB Opposition data, opponent's data or that of the opponent's representative modified

Free format text: ORIGINAL CODE: 0009299OPPO

R26 Opposition filed (corrected)

Opponent name: GN HEARING A/S / OTICON A/S

Effective date: 20170427

PLAB Opposition data, opponent's data or that of the opponent's representative modified

Free format text: ORIGINAL CODE: 0009299OPPO

R26 Opposition filed (corrected)

Opponent name: GN HEARING A/S / OTICON A/S

Effective date: 20170427

APBU Appeal procedure closed

Free format text: ORIGINAL CODE: EPIDOSNNOA9O

PLAB Opposition data, opponent's data or that of the opponent's representative modified

Free format text: ORIGINAL CODE: 0009299OPPO

R26 Opposition filed (corrected)

Opponent name: OTICON A/S

Effective date: 20170427

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230517

Year of fee payment: 11

Ref country code: CH

Payment date: 20230602

Year of fee payment: 11

APAH Appeal reference modified

Free format text: ORIGINAL CODE: EPIDOSCREFNO

APBM Appeal reference recorded

Free format text: ORIGINAL CODE: EPIDOSNREFNO

APBP Date of receipt of notice of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA2O

APBQ Date of receipt of statement of grounds of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA3O

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240522

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240517

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DK

Payment date: 20240522

Year of fee payment: 12