EP3863306A1 - Hearing system with at least one hearing instrument worn in or on the user's ear, and method of operating such a hearing system - Google Patents
- Publication number
- EP3863306A1 (application number EP21151124.1A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- signal
- user
- hearing
- component
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/40—Arrangements for obtaining a desired directivity characteristic
- H04R25/407—Circuits for combining signals of a plurality of transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/35—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using translation techniques
- H04R25/356—Amplitude, e.g. amplitude shift or compression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/30—Monitoring or testing of hearing aids, e.g. functioning, settings, battery power
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/43—Electronic input selection or mixing based on input signal analysis, e.g. mixing or selection between microphone and telecoil or between microphones with different directivity characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/60—Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles
- H04R25/604—Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers
- H04R25/606—Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers acting directly on the eardrum, the ossicles or the skull, e.g. mastoid, tooth, maxillary or mandibular bone, or mechanically stimulating the cochlea, e.g. at the oval window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/03—Synergistic effects of band splitting and sub-band processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/70—Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
Definitions
- the invention relates to a method for operating a hearing system to support the hearing ability of a user, with at least one hearing instrument worn in or on the user's ear.
- the invention also relates to such a hearing system.
- a hearing instrument is generally an electronic device that supports the hearing ability of a person wearing the hearing instrument (hereinafter referred to as “wearer” or “user”).
- the invention relates to hearing instruments which are set up to compensate for a hearing loss of a hearing-impaired user in whole or in part.
- a hearing instrument is also referred to as a “hearing aid”.
- the invention also relates to hearing instruments that protect or improve the hearing ability of users with normal hearing, for example to enable improved speech understanding in complex listening situations.
- Hearing instruments in general, and hearing aids in particular, are usually designed to be worn in or on the user's ear, in particular as behind-the-ear devices (also referred to as BTE devices, after the English term "behind the ear") or in-the-ear devices (also referred to as ITE devices, after the English term "in the ear").
- hearing instruments usually have at least one (acousto-electrical) input transducer, a signal processing unit (signal processor) and an output transducer.
- the input transducer picks up airborne sound from the surroundings of the hearing instrument and converts this airborne sound into an input audio signal (i.e. an electrical signal that carries information about the ambient sound).
- This input audio signal is also referred to below as the “recorded sound signal”.
- the input audio signal is processed (ie modified with regard to its sound information) in order to support the hearing ability of the user, in particular to compensate for a hearing loss of the user.
- the signal processing unit outputs a correspondingly processed audio signal (also referred to as “output audio signal” or “modified sound signal”) to the output transducer.
- the output transducer is designed as an electro-acoustic transducer, which converts the (electrical) output audio signal back into airborne sound, this airborne sound - modified compared to the ambient sound - being emitted into the user's ear canal.
- In the case of a hearing instrument worn behind the ear, the output transducer, also referred to as a "receiver", is usually integrated outside the ear in a housing of the hearing instrument. In this case, the sound emitted by the output transducer is conducted into the ear canal of the user by means of a sound tube. As an alternative to this, the output transducer can also be arranged in the auditory canal, and thus outside the housing worn behind the ear. Such hearing instruments are also referred to as RIC devices (after the English term "receiver in canal"). Hearing instruments worn in the ear that are so small that they do not protrude beyond the auditory canal are also referred to as CIC devices (after the English term "completely in canal").
- the output transducer can also be designed as an electro-mechanical transducer which converts the output audio signal into structure-borne sound (vibrations), this structure-borne sound being emitted, for example, into the skull bone of the user.
- "hearing system" denotes a single device or a group of devices, and possibly non-physical functional units, that together provide the functions required of a hearing instrument in operation.
- the hearing system can consist of a single hearing instrument.
- the hearing system can comprise two interacting hearing instruments for supplying the two ears of the user. In this case we speak of a "binaural hearing system”.
- the hearing system can comprise at least one further electronic device, for example a remote control, a charger or a programming device for the or each hearing device.
- a control program, in particular in the form of a so-called app, is often provided, this control program being designed to run on an external computer, in particular a smartphone or tablet.
- the external computer itself is usually not part of the hearing system and, in particular, is usually not provided by the manufacturer of the hearing system.
- a frequent problem in the operation of a hearing system is that the hearing instrument or the hearing instruments of the hearing system alienate the user's own voice, in particular reproducing it too loudly and with a sound that is perceived as unnatural.
- This problem is at least partially solved in modern hearing systems by recognizing time segments (self-voicing intervals) of the recorded sound signal in which this sound signal contains the user's own voice. These self-voice intervals are processed differently in the hearing instrument, in particular less amplified, than other intervals of the recorded sound signal that do not contain the voice of the user.
- the invention is based on the object of enabling signal processing in a hearing system which is improved under this aspect.
- this object is achieved according to the invention by the features of claim 1.
- the object is achieved according to the invention by the features of claim 10.
- the invention is generally based on a hearing system for supporting the hearing ability of a user, the hearing system having at least one hearing instrument worn in or on an ear of the user.
- the hearing system in simple embodiments of the invention can consist exclusively of a single hearing instrument.
- the hearing system preferably comprises at least one further component, e.g. a further (in particular similar) hearing instrument for supplying the other ear of the user, a control program (in particular in the form of an app) for execution on an external computer (in particular a smartphone) of the user, and/or at least one other electronic device, e.g. a remote control or a charger.
- the hearing instrument and the at least one further component are in data exchange with one another, the functions of data storage and / or data processing of the hearing system being divided between the hearing instrument and the at least one further component.
- the hearing instrument has at least one input transducer for receiving a sound signal (in particular in the form of airborne sound) from the surroundings of the hearing instrument, a signal processing unit for processing (modifying) the recorded sound signal in order to support the hearing of the user, and an output transducer for outputting the modified sound signal. If the hearing system has a further hearing instrument to supply the other ear of the user, this further hearing instrument preferably also has at least one input transducer, a signal processing unit and an output transducer.
- each hearing instrument of the hearing system is, in particular, of one of the designs described above (BTE device with internal or external output transducer, ITE device, e.g. CIC device, hearing implant, in particular cochlear implant, etc.).
- both hearing instruments are preferably designed in the same way.
- the or each input transducer is in particular an acousto-electrical transducer which converts airborne sound from the environment into an electrical input audio signal.
- the hearing system preferably comprises at least two input transducers, which can be arranged in the same hearing instrument or - if available - divided between the two hearing instruments of the hearing system.
- the output transducer is preferably designed as an electro-acoustic transducer (earpiece), which in turn converts the audio signal modified by the signal processing unit into airborne sound.
- the output transducer is designed to emit structure-borne sound or to directly stimulate the user's auditory nerve.
- the signal processing unit preferably comprises a plurality of signal processing functions, e.g. any selection from the functions of frequency-selective amplification, dynamic compression, spectral compression, direction-dependent attenuation (beamforming), noise suppression, in particular active noise cancellation (ANC for short), active feedback cancellation (AFC for short), and wind noise suppression, which are applied to the recorded sound signal, i.e. the input audio signal, in order to process it to support the user's hearing.
- Each of these functions or at least a large part of these functions can be parameterized by one or more signal processing parameters.
- a variable is used as the signal processing parameter which can be assigned different values in order to influence the operation of the associated signal processing function.
- a signal processing parameter can be a binary variable with which the respective function is switched on and off.
- other signal processing parameters are formed by scalar floating-point numbers, binary or continuously variable vectors, multidimensional arrays, etc.
- An example of such signal processing parameters is a set of gain factors for a number of frequency bands of the signal processing unit, which define the frequency-dependent gain of the hearing instrument.
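To make the notion of parameterized, frequency-dependent gain concrete, the following sketch (not from the patent; the band edges, gain values and the FFT-based implementation are illustrative assumptions) applies a separate gain factor to each frequency band of an input audio signal:

```python
import numpy as np

# Hypothetical sketch of per-band gain parameters: each frequency band
# of the input audio signal is amplified by its own gain factor.
# Band edges and gain values are illustrative, not from the patent.

def apply_band_gains(signal, fs, band_edges_hz, gains_db):
    """Apply a separate gain (in dB) to each frequency band."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    for (lo, hi), g_db in zip(band_edges_hz, gains_db):
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10.0 ** (g_db / 20.0)  # dB -> linear factor
    return np.fft.irfft(spectrum, n=len(signal))

# Toy input: a low-frequency and a high-frequency tone.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 250 * t) + np.sin(2 * np.pi * 3000 * t)
# Boost the upper band by 12 dB, leave the lower band unchanged.
y = apply_band_gains(x, fs, [(0, 1000), (1000, 8000)], [0.0, 12.0])
```

Setting all gains to 0 dB leaves the signal unchanged; raising the gain of the upper band selectively amplifies its content, which is exactly the behavior such a set of per-band gain parameters controls.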
- the at least one input transducer of the hearing instrument records a sound signal from the surroundings of the hearing instrument, this sound signal at least temporarily containing the user's own voice and an ambient noise.
- Ambient noise is used here and in the following to refer to the portion of the recorded sound signal that originates from the environment (and is therefore different from the user's own voice).
- the recorded sound signal (input audio signal) is modified in a signal processing step to support the hearing ability of a user.
- the modified sound signal is output by means of the output transducer of the hearing instrument.
- a first signal component and a second signal component are derived from the recorded sound signal (immediately or after preprocessing).
- the first signal component (also referred to below as “voice component”) is derived in such a way that the user's own voice is emphasized here in relation to the ambient noise;
- the user's own voice is either selectively amplified (that is, amplified to a greater extent than the ambient noise) or the ambient noise is selectively attenuated (that is, attenuated to a greater extent than the user's own voice).
- the second signal component (hereinafter also referred to as “ambient noise component”), on the other hand, is derived in such a way that the ambient noise is emphasized here compared to the user's own voice; here either the ambient noise is selectively amplified (i.e. amplified to a greater extent than one's own voice) or one's own voice is selectively attenuated (i.e. attenuated to a greater extent than the ambient noise).
- the user's own voice is preferably removed from the second signal component completely, or at least as far as this is possible in signal processing terms.
- the first signal component (own-voice component) and the second signal component (ambient-noise component) are processed in different ways in the signal processing step.
- the first signal component is amplified to a lesser extent than the second signal component and / or processed with modified dynamic compression (in particular with reduced dynamic compression, that is to say with a more linear gain characteristic).
- the first signal component is preferably processed in a way that is optimized for processing the user's own voice (in particular individually, i.e. user-specific).
- the second signal component is preferably processed in a way that is optimized for processing the ambient noise. This processing of the second signal component is optionally in turn varied depending on the type of ambient noise (voice noise, music, driving noise, construction noise, etc.) determined, e.g. as part of a classification of the hearing situation.
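As an illustration of this differently parameterized processing (a toy static model; the thresholds, ratios and gain values are invented, not taken from the patent), the own-voice path can be given less gain and a more linear input/output characteristic than the ambient-noise path:

```python
import numpy as np

# Toy static compression model (thresholds, ratios and gains are
# invented for illustration): output level as a function of input
# level, with linear gain below a threshold and a compressed slope
# of 1/ratio above it.

def compress(level_db, gain_db, threshold_db=50.0, ratio=2.0):
    level_db = np.asarray(level_db, dtype=float)
    linear = level_db + gain_db
    compressed = threshold_db + gain_db + (level_db - threshold_db) / ratio
    return np.where(level_db > threshold_db, compressed, linear)

levels = np.array([40.0, 60.0, 80.0])  # input levels in dB SPL

# Own-voice path: less gain, near-linear characteristic.
own_voice = compress(levels, gain_db=5.0, ratio=1.2)
# Ambient-noise path: more gain, stronger compression.
ambient = compress(levels, gain_db=15.0, ratio=3.0)
```

With these numbers the own-voice path grows almost linearly with input level, while the ambient path applies more gain overall but squeezes loud inputs, matching the qualitative behavior described above.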
- the first signal component and the second signal component are combined (superimposed) to generate the modified sound signal.
- the overall signal resulting from the combination of the two signals can, however, optionally, within the scope of the invention, go through further processing steps before being output by the output transducer, in particular be amplified again.
- the two signal components, that is to say the own-voice component and the ambient-noise component, are derived from the recorded sound signal in such a way that they (completely or at least partially) overlap in time.
- the two signal components therefore coexist in time and are processed in parallel to one another (i.e. on parallel signal processing paths). These signal components are therefore not consecutive intervals of the recorded sound signal.
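The parallel-path structure described above can be sketched as follows (the processing functions are placeholders invented for illustration): both components cover the same time span, are processed on separate paths, and are then superimposed into one output:

```python
import numpy as np

# Minimal sketch of the parallel-path structure (the processing steps
# are placeholders): the voice component and the ambient component
# coexist in time, are processed on separate paths, and the results
# are superimposed into one output signal.

def process_voice(component):
    return 1.2 * component   # placeholder: mild amplification

def process_ambient(component):
    return 2.0 * component   # placeholder: stronger amplification

def process_frame(voice_part, ambient_part):
    # Both components cover the SAME time span; they are not
    # consecutive intervals of the recorded signal.
    assert voice_part.shape == ambient_part.shape
    return process_voice(voice_part) + process_ambient(ambient_part)

frame = np.ones(8)                               # toy audio frame
out = process_frame(0.5 * frame, 0.5 * frame)    # toy 50/50 split
```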
- the derivation of the first signal component is preferably carried out using direction-dependent damping (beamforming), so that a spatial signal component corresponding to the ambient noise is selectively attenuated (i.e. more attenuated than another spatial signal component in which the ambient noise is not present or is only weakly pronounced).
- in a simple embodiment, a static (time-invariant) direction-dependent damping algorithm (also known as a beamforming algorithm, or beamformer for short) is used for this purpose.
- an adaptive, direction-dependent beamformer is used, the damping characteristic of which has at least one local or global damping maximum, that is to say at least one direction of maximum damping (notch).
- This notch (or, if applicable, one of several notches) is preferably aligned with a dominant noise source in a volume of space that is rearward with respect to the head of the user.
- the derivation of the second signal component is preferably also carried out by means of direction-dependent damping, with a static or adaptive beamformer also being optionally used.
- the direction-dependent attenuation is used here in such a way that a spatial signal component corresponding to the user's own voice is selectively attenuated (i.e. attenuated more than a spatial signal component in which the user's own voice is not present or is only weakly pronounced).
- a notch of the corresponding beamformer is expediently aligned exactly or approximately at the front with respect to the head of the user.
- a beamformer with a damping characteristic corresponding to an anti-cardioid is used.
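An idealized anti-cardioid, as could be realized with a first-order differential microphone pair, has its notch at the front (0 degrees) and maximum sensitivity toward the rear. The following sketch evaluates such a pattern (this closed-form expression is a textbook idealization, not the patent's actual damping characteristic):

```python
import numpy as np

# Idealized anti-cardioid directivity pattern (a textbook first-order
# pattern, not the patent's actual characteristic): gain 0 at the
# front notch (0 degrees) and maximum gain toward the rear (180 deg).

def anticardioid_gain(theta_deg):
    theta = np.radians(theta_deg)
    return 0.5 * (1.0 - np.cos(theta))

angles = np.array([0.0, 90.0, 180.0])
gains = anticardioid_gain(angles)  # notch at 0 deg, maximum at 180 deg
```

Applied to the second signal component, such a pattern suppresses sound arriving from the front (where the user's mouth lies relative to the microphones) while passing sound from the rear half-space.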
- At least the beamformer used to derive the second signal component preferably has a frequency-dependent varying attenuation characteristic.
- This frequency dependency of the damping characteristic is expressed in particular in a notch width or notch depth that varies with frequency, and/or in a notch direction that varies slightly with frequency.
- the dependence of the attenuation characteristic on the frequency is set (e.g. empirically or using a numerical optimization method) in such a way that the attenuation of one's own voice in the second signal component is optimized (i.e. a local or global maximum is reached) and thus the own voice is eliminated as best as possible from the second signal component.
- This optimization is carried out, for example, when a static beamformer is used to derive the second signal component, when the hearing system is individually adapted to the user (fitting).
- an adaptive beamformer is used to derive the second signal component, which continuously optimizes the damping characteristics during operation of the hearing system with a view to the best possible damping of the user's own voice.
- This measure is based on the knowledge that the user's own voice is attenuated differently by a beamformer than the sound from a sound source arranged at a distance from the front of the user. In particular, the user's own voice is not always perceived as coming exactly from the front.
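The band-wise optimization of the attenuation characteristic could, in the simplest case, be sketched as a per-band search over candidate notch directions (the setup and all numbers below are entirely invented; a real fitting would use measured own-voice transfer functions):

```python
import numpy as np

# Toy sketch of the optimization idea: for each frequency band, pick
# the notch direction that minimizes the residual own-voice energy in
# the ambient component. The energy table is made up for illustration.

def best_notch_per_band(own_voice_energy):
    """own_voice_energy[band, direction]: residual own-voice energy in
    the ambient component if the notch points in that direction.
    Returns the energy-minimizing direction index per band."""
    return np.argmin(own_voice_energy, axis=1)

# 3 frequency bands x 4 candidate notch directions (invented numbers):
energy = np.array([[0.9, 0.2, 0.5, 0.7],
                   [0.4, 0.8, 0.1, 0.6],
                   [0.3, 0.5, 0.9, 0.2]])
notch_idx = best_notch_per_band(energy)  # direction varies with band
```

The result is a notch direction that varies from band to band, reflecting the observation that the user's own voice is not perceived as coming from exactly the same direction at all frequencies.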
- the attenuation characteristic of the beamformer used to derive the first signal component also has a dependency on the frequency, this dependency being determined in such a way that the attenuation of the ambient signal is optimized in the first signal component (i.e. a local or global maximum is reached) and thus the ambient signal is eliminated as best as possible from the first signal component.
- spectral filtering of the recorded sound signal is preferably used in order to derive the first signal component (own-voice component) and the second signal component (ambient-noise component).
- to derive the first signal component, at least one frequency component of the recorded sound signal in which components of the user's own voice are not present or only weakly pronounced is selectively attenuated (i.e. attenuated more than frequency components of the recorded sound signal in which the user's own voice has dominant shares).
- to derive the second signal component, at least one frequency component of the recorded sound signal in which components of the ambient noise are not present or only weakly pronounced is selectively attenuated (i.e. attenuated more than frequency components of the recorded sound signal in which the ambient noise has dominant components).
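The spectral-filtering idea can be sketched as a band-wise split (the dominance mask and attenuation factor are hypothetical): bands dominated by the user's own voice are kept in the voice component and attenuated in the ambient component, and vice versa:

```python
import numpy as np

# Illustrative sketch of spectral filtering (the mask and attenuation
# factor are hypothetical): bands where the own voice dominates are
# kept in the voice component and attenuated in the ambient component,
# and conversely for the remaining bands.

def split_by_bands(band_levels, voice_dominant_mask, attenuation=0.1):
    """Return (voice_component, ambient_component) per frequency band."""
    band_levels = np.asarray(band_levels, dtype=float)
    mask = np.asarray(voice_dominant_mask, dtype=bool)
    voice = np.where(mask, band_levels, attenuation * band_levels)
    ambient = np.where(mask, attenuation * band_levels, band_levels)
    return voice, ambient

levels = np.array([1.0, 2.0, 3.0, 4.0])       # per-band signal levels
mask = np.array([True, True, False, False])   # voice dominates bands 0-1
v, a = split_by_bands(levels, mask)
```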
- the method described above, namely the separation of the recorded sound signal into the own-voice component and the ambient-noise component and the parallel, different processing of both signal components, can be carried out continuously (and according to the same unchanged method) within the scope of the invention while the hearing system is in operation, regardless of when and how often the recorded sound signal contains the user's own voice.
- the signal processing path containing the voice component runs virtually empty in this case and processes a signal that does not contain the user's own voice.
- the separation of the recorded sound signal into the own-voice component and the ambient-noise component and the parallel, different processing of the two signal components are only carried out in own-voice intervals, that is, when the recorded sound signal actually contains the user's own voice.
- own-voice intervals of the recorded sound signal are recognized in a signal analysis step, e.g. using methods known per se from US 2013/0148829 A1 or WO 2016/078786 A1.
- the recorded sound signal is separated into the first signal component and the second signal component only in recognized self-voicing intervals (not in intervals that do not contain the user's own voice).
- the separation of the recorded sound signal into the own-voice component and the ambient-noise component and the parallel, different processing of the two signal components are basically carried out both in recognized own-voice intervals and in the absence of the user's own voice; in this case, however, the derivation of the second signal component (i.e. the ambient-noise component) takes place differently depending on the presence or absence of the user's own voice:
- an algorithm optimized for the attenuation of the user's own voice is preferably used in self-voice intervals to derive the ambient noise component, in particular - as described above - a static beamformer with an optimized frequency dependency of the damping characteristics or a self-optimizing dynamic beamformer.
- in the absence of the user's own voice, an algorithm different therefrom is preferably used to derive the ambient-noise component, this algorithm being aimed at attenuating a noise source arranged in front of, but remote from, the user (e.g. a speaker towards whom the user turns).
- This different algorithm is designed, for example, as a static beamformer with a direction-dependent damping characteristic corresponding to an anti-cardioid, this beamformer differing in terms of the shape and/or frequency dependence of the anti-cardioid from the beamformer used in own-voice intervals to derive the ambient-noise component.
- for example, an anti-cardioid without frequency dependence (i.e. an anti-cardioid that is constant over frequency) can be used in this case.
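The state-dependent derivation of the ambient-noise component can be summarized as a simple control flow (the function names are invented placeholders; the actual beamformer implementations are not specified here):

```python
# Hypothetical control-flow sketch of the state-dependent derivation:
# in own-voice intervals the ambient component is derived with a
# beamformer optimized to attenuate the user's own voice; otherwise a
# different beamformer (e.g. a frequency-constant anti-cardioid aimed
# at a frontal external source) is used. All names are invented.

def derive_ambient_component(frame, own_voice_present):
    if own_voice_present:
        # own-voice interval: notch tuned (per frequency band) to the
        # user's own voice
        return beamform_own_voice_optimized(frame)
    # no own voice: attenuate a distant frontal source instead
    return beamform_frontal_anticardioid(frame)

# Placeholder beamformers so the sketch is runnable; each returns a
# tag identifying which algorithm was chosen, plus the frame.
def beamform_own_voice_optimized(frame):
    return ("own_voice_notch", frame)

def beamform_frontal_anticardioid(frame):
    return ("frontal_anticardioid", frame)

tag1, _ = derive_ambient_component([0.0] * 4, own_voice_present=True)
tag2, _ = derive_ambient_component([0.0] * 4, own_voice_present=False)
```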
- the processing of the first signal component is preferably also carried out here in different ways, depending on the presence or absence of the user's own voice:
- in own-voice intervals, the first signal component is preferably processed, as described above, in a way that is optimized for processing the user's own voice, whereas in the absence of the user's own voice it is processed in a different way.
- the hearing system according to the invention is generally set up for the automatic implementation of the method according to the invention described above.
- the hearing system is thus set up to record a sound signal from the surroundings of the hearing instrument by means of the at least one input transducer of the at least one hearing instrument, the sound signal at least temporarily containing the user's own voice and an ambient noise, to modify the recorded sound signal in the signal processing step to support the hearing of the user, and to output the modified sound signal by means of the output transducer of the hearing instrument.
- the hearing system is also set up to derive the temporally overlapping first signal component (own-voice component) and second signal component (ambient-noise component) from the recorded sound signal in the manner described above, to process these two signal components in different ways in the signal processing step, and to combine them after this processing to generate the modified sound signal.
- the set-up of the hearing system for the automatic implementation of the method according to the invention is of a programming and / or circuitry nature.
- the hearing system according to the invention thus comprises program-technical means (software) and / or circuit-technical means (hardware, for example in the form of an ASIC) which automatically carry out the method according to the invention when the hearing system is in operation.
- the program-technical or circuit-technical means for carrying out the method can in this case be arranged exclusively in the hearing instrument (or the hearing instruments) of the hearing system.
- programming means for performing the method are distributed to the at least one hearing instrument of the hearing system and to a control program installed on an external electronic device (in particular a smartphone).
- Fig. 1 shows a hearing system 2 with a single hearing aid 4, ie a hearing instrument set up to support the hearing ability of a hearing-impaired user.
- the hearing aid 4 is a BTE hearing aid that can be worn behind an ear of a user.
- the hearing system 2 comprises a second hearing aid, not expressly shown, for supplying the second ear of the user, and / or a control app that can be installed on a smartphone of the user.
- the functional components of the hearing system 2 described below are preferably distributed between the two hearing aids or the at least one hearing aid and the control app.
- the hearing aid 4 includes within a housing 5 at least one microphone 6 (in the example shown, two microphones 6) as an input transducer and an earpiece 8 (receiver) as an output transducer. In the state worn behind the user's ear, the two microphones 6 are aligned such that one of the microphones 6 points forward (i.e. in the direction of view of the user), while the other microphone 6 is oriented towards the rear (opposite to the direction of view of the user).
- the hearing aid 4 further comprises a battery 10 and a signal processing unit in the form of a digital signal processor 12.
- the signal processor 12 preferably comprises both a programmable subunit (for example a microprocessor) and a non-programmable subunit (for example an ASIC).
- the signal processor 12 comprises a self-voice recognition unit 14 and a signal separation unit 16. In addition, the signal processor 12 has two parallel signal processing paths 18 and 20.
- the units 14 and 16 are preferably designed as software components that are implemented in the signal processor 12 so that they can run.
- the signal processing paths 18 and 20 are preferably formed by electronic hardware circuits (e.g. on the mentioned ASIC).
- the signal processor 12 is supplied with an electrical supply voltage U from the battery 10.
- the microphones 6 record airborne sound from the surroundings of the hearing aid 4.
- the microphones 6 convert the sound into an (input) audio signal I which contains information about the recorded sound.
- the input audio signal I is fed to the signal processor 12 within the hearing aid 4.
- the earpiece 8 converts the output sound signal O into a modified airborne sound.
- This modified airborne sound is transmitted into the user's ear canal via a sound channel 22, which connects the earpiece 8 to a tip 24 of the housing 5, and via a flexible sound tube (not explicitly shown), which connects the tip 24 to an earpiece inserted into the user's ear canal.
- The functional interconnection of the components of the signal processor 12 described above is illustrated in Fig. 2.
- the input audio signal I (and thus the recorded sound signal) is fed to the voice recognition unit 14 and the signal separation unit 16.
- the self-voice recognition unit 14 recognizes, for example using one or more of the methods described in US 2013/0148829 A1 or WO 2016/078786 A1, whether the input audio signal I contains the user's own voice.
- the self-voice recognition unit 14 feeds a status signal V dependent on the result of this test (and thus indicating whether or not the input audio signal I contains the user's own voice) to the signal separation unit 16.
- the signal separation unit 16 treats the supplied input audio signal I in different ways. In own-voice intervals, i.e. time segments in which the self-voice recognition unit 14 has recognized the user's own voice in the input audio signal I, the signal separation unit 16 derives a first signal component (own-voice component) S1 and a second signal component (ambient noise component) S2 from the input audio signal I, and feeds these temporally overlapping signal components S1 and S2 to the parallel signal processing paths 18 and 20, respectively. In intervals in which the input audio signal I does not contain the user's own voice, on the other hand, the signal separation unit 16 feeds the entire input audio signal I to the signal path 20.
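This routing can be sketched as follows; the function and parameter names (`route`, `own_voice_active`, the beamformer callables) are illustrative and not taken from the patent:

```python
def route(input_frame, own_voice_active, beamformer_s1, beamformer_s2):
    """Route one frame of the input audio signal I.

    During own-voice intervals the frame is split into an own-voice
    component S1 (for signal path 18) and an ambient-noise component S2
    (for signal path 20); otherwise the entire frame is fed to path 20.
    """
    if own_voice_active:
        return beamformer_s1(input_frame), beamformer_s2(input_frame)
    return None, input_frame  # whole signal goes to the ambient path 20
```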
- the signal separation unit 16 derives the first signal component S1 and the second signal component S2 from the input audio signal I by applying different beamformers 26 and 28, respectively (that is, different algorithms for directional attenuation).
- an attenuation characteristic G1 of the beamformer 26 used to derive the first signal component (own-voice component) S1 is shown by way of example.
- the beamformer 26 is an adaptive algorithm (that is, one that can be changed at any time during operation of the hearing system 2) with two symmetrically variable notches 30 (that is, directions of maximum attenuation).
- the attenuation characteristic G1 is set in such a way that one of the notches 30 is aligned with a dominant noise source 32 located in a region of space behind the user's head 34.
- the dominant noise source 32 is, for example, a speaker standing behind the user.
- an attenuation characteristic G2 of the beamformer 28 used to derive the second signal component (ambient noise component) S2 is shown by way of example.
- This attenuation characteristic G2 is in particular static (that is to say, unchanged over time once the hearing aid 4 has been individually fitted to the user) and corresponds, for example, to an anti-cardioid.
- a notch 36 of the attenuation characteristic G2 is aligned toward the front with respect to the user's head 34, so that the user's own voice is at least largely masked out of the second signal component S2.
- the attenuation characteristic G2 of the beamformer 28 varies as a function of frequency, so that the user's own voice is optimally attenuated.
- the attenuation characteristic G2 corresponding to an anti-cardioid arises from superimposing (i.e. summing, weighted or unweighted) the signal from the forward-pointing microphone 6 and the signal from the backward-pointing microphone 6, delayed by a time offset.
- the time offset is specified as a frequency-dependent function so that the attenuation of the user's own voice in the second signal component S2 is optimized.
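A simplified sketch of such a first-order differential beamformer, assuming a single broadband integer-sample delay in place of the frequency-dependent offset described above (the function name and the choice of which microphone signal is delayed are illustrative assumptions):

```python
import numpy as np

def anti_cardioid(front, back, delay_samples):
    """First-order differential beamformer with its null toward the front.

    A plane wave from the front reaches the front microphone
    `delay_samples` samples before the back microphone. Delaying the
    front signal by exactly that offset and subtracting it therefore
    cancels frontal sound (the user's own voice), while sound from
    other directions passes through.
    """
    front_delayed = np.concatenate([np.zeros(delay_samples),
                                    front[:len(front) - delay_samples]])
    return back - front_delayed
```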
- An optimized frequency dependency of the time offset is determined by an audiologist during a training session in the course of hearing aid adjustment (fitting).
- the beamformer 28 is adaptive, the attenuation characteristic G2 being adapted by the signal processor 12 during ongoing operation of the hearing system 2 (e.g. by minimizing the output energy of the beamformer 28 during own-voice intervals).
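Such an energy-minimizing adaptation is commonly realized with a normalized-LMS update; the single-weight sketch below is an illustrative assumption, not an algorithm prescribed by the patent:

```python
def adapt_weight(front_delayed, back, mu=0.5, w=0.0, eps=1e-9):
    """Normalized-LMS adaptation of a single beamformer weight.

    Minimizing the energy of the output y = back - w * front_delayed
    steers the null toward the dominant source (here: the user's own
    voice, during own-voice intervals).
    """
    for f, b in zip(front_delayed, back):
        y = b - w * f                    # current beamformer output sample
        w += mu * y * f / (f * f + eps)  # step that reduces y**2
    return w
```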
- the first signal component S1 and the second signal component S2 are processed differently.
- the same signal processing algorithms are preferably used with different parameterization on the first signal component S1 and the second signal component S2.
- a parameter set of the signal processing parameters is used that is optimized for processing the user's own voice (in particular, individually tailored to the specific user).
- the first signal component S1 containing the user's own voice is amplified to a lesser extent than the second signal component S2 (or even not amplified at all) and is subjected to lower dynamic compression, that is to say a more linear gain characteristic.
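The difference in parameterization can be illustrated with a static compression curve; the threshold and ratio values below are illustrative only, not taken from the patent:

```python
def compress_db(level_db, threshold_db, ratio):
    """Static compression characteristic in dB terms.

    Below the threshold the characteristic is linear; above it, input
    level changes are reduced by the compression ratio (ratio = 1 would
    be fully linear).
    """
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# Near-linear processing for the own-voice component S1, stronger
# compression for the ambient-noise component S2 (illustrative values).
own_voice_out = compress_db(80.0, 50.0, 1.2)  # gentle: close to linear
ambient_out = compress_db(80.0, 50.0, 3.0)    # stronger compression
```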
- the signal processing paths 18 and 20 emit the processed and thus modified signal components S1' and S2' to a recombination unit 38, which combines the modified signal components S1' and S2' (in particular, sums them, weighted or unweighted).
- the output audio signal O resulting therefrom is output by the recombination unit 38 to the receiver 8 (directly or indirectly via further processing steps).
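The recombination step can be sketched as a weighted sum; the function name and default weights are illustrative assumptions:

```python
def recombine(s1_mod, s2_mod, w1=1.0, w2=1.0):
    """Weighted summation of the modified signal components S1' and S2'
    into the output audio signal O (recombination unit 38)."""
    if s1_mod is None:  # no own-voice interval: only path 20 carries signal
        return list(s2_mod)
    return [w1 * a + w2 * b for a, b in zip(s1_mod, s2_mod)]
```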
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102020201615.1A DE102020201615B3 (de) | 2020-02-10 | 2020-02-10 | Hörsystem mit mindestens einem im oder am Ohr des Nutzers getragenen Hörinstrument sowie Verfahren zum Betrieb eines solchen Hörsystems |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3863306A1 true EP3863306A1 (fr) | 2021-08-11 |
Family
ID=74175644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21151124.1A Pending EP3863306A1 (fr) | 2020-02-10 | 2021-01-12 | Système auditif pourvu d'au moins un instrument auditif porté dans ou sur l'oreille de l'utilisateur ainsi que procédé de fonctionnement d'un tel système auditif |
Country Status (4)
Country | Link |
---|---|
US (1) | US11463818B2 (fr) |
EP (1) | EP3863306A1 (fr) |
CN (1) | CN113259822B (fr) |
DE (1) | DE102020201615B3 (fr) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6661901B1 (en) * | 2000-09-01 | 2003-12-09 | Nacre As | Ear terminal with microphone for natural voice rendition |
EP2352312A1 (fr) * | 2009-12-03 | 2011-08-03 | Oticon A/S | Procédé de suppression dynamique de bruit acoustique environnant lors de l'écoute sur des entrées électriques |
US20130148829A1 (en) | 2011-12-08 | 2013-06-13 | Siemens Medical Instruments Pte. Ltd. | Hearing apparatus with speaker activity detection and method for operating a hearing apparatus |
WO2016078786A1 (fr) | 2014-11-19 | 2016-05-26 | Sivantos Pte. Ltd. | Procédé et dispositif de détection rapide de la voix naturelle |
EP3101919A1 (fr) * | 2015-06-02 | 2016-12-07 | Oticon A/s | Système auditif pair à pair |
EP3188507A1 (fr) * | 2015-12-30 | 2017-07-05 | GN Resound A/S | Dispositif auditif portable sur la tête |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6240192B1 (en) * | 1997-04-16 | 2001-05-29 | Dspfactory Ltd. | Apparatus for and method of filtering in an digital hearing aid, including an application specific integrated circuit and a programmable digital signal processor |
US6738485B1 (en) * | 1999-05-10 | 2004-05-18 | Peter V. Boesen | Apparatus, method and system for ultra short range communication |
US8249284B2 (en) * | 2006-05-16 | 2012-08-21 | Phonak Ag | Hearing system and method for deriving information on an acoustic scene |
EP2071874B1 (fr) * | 2007-12-14 | 2016-05-04 | Oticon A/S | Dispositif auditif, systeme de dispositif auditif et procedure pour controler le systeme de dispositif auditif |
DK2091266T3 (da) * | 2008-02-13 | 2012-09-24 | Oticon As | Høreindretning og anvendelse af en høreapparatindretning |
DK2991379T3 (da) | 2014-08-28 | 2017-08-28 | Sivantos Pte Ltd | Fremgangsmåde og apparat til forbedret opfattelse af egen stemme |
EP3057340B1 (fr) * | 2015-02-13 | 2019-05-22 | Oticon A/s | Unité de microphone partenaire et système auditif comprenant une unité de microphone partenaire |
DE102015204639B3 (de) | 2015-03-13 | 2016-07-07 | Sivantos Pte. Ltd. | Verfahren zum Betrieb eines Hörgeräts sowie Hörgerät |
US9967682B2 (en) * | 2016-01-05 | 2018-05-08 | Bose Corporation | Binaural hearing assistance operation |
DK3396978T3 (da) * | 2017-04-26 | 2020-06-08 | Sivantos Pte Ltd | Fremgangsmåde til drift af en høreindretning og en høreindretning |
EP3429230A1 (fr) * | 2017-07-13 | 2019-01-16 | GN Hearing A/S | Dispositif auditif et procédé avec prédiction non intrusive de l'intelligibilité de la parole |
DE102018216667B3 (de) | 2018-09-27 | 2020-01-16 | Sivantos Pte. Ltd. | Verfahren zur Verarbeitung von Mikrofonsignalen in einem Hörsystem sowie Hörsystem |
-
2020
- 2020-02-10 DE DE102020201615.1A patent/DE102020201615B3/de active Active
-
2021
- 2021-01-12 EP EP21151124.1A patent/EP3863306A1/fr active Pending
- 2021-02-07 CN CN202110167932.5A patent/CN113259822B/zh active Active
- 2021-02-10 US US17/172,289 patent/US11463818B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113259822A (zh) | 2021-08-13 |
DE102020201615B3 (de) | 2021-08-12 |
US20210250705A1 (en) | 2021-08-12 |
US11463818B2 (en) | 2022-10-04 |
CN113259822B (zh) | 2022-12-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20220210 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20230901 |