EP2819436A1 - A hearing aid operating in dependence of position - Google Patents

A hearing aid operating in dependence of position Download PDF

Info

Publication number
EP2819436A1
EP2819436A1 (Application EP13173995.5A)
Authority
EP
European Patent Office
Prior art keywords
hearing aid
sound environment
aid system
user
detector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP13173995.5A
Other languages
German (de)
French (fr)
Other versions
EP2819436B1 (en)
Inventor
Nikolai Bisgaard
Andrew Burke Dittberner
Charlotte Thunberg Jespersen
Fang Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GN Hearing AS
Original Assignee
GN Resound AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GN Resound AS filed Critical GN Resound AS
Priority to DK13173995.5T (DK2819436T3)
Priority to EP13173995.5A (EP2819436B1)
Priority to US13/932,815 (US9094769B2)
Publication of EP2819436A1
Application granted
Publication of EP2819436B1
Legal status: Active

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70: Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting

Definitions

  • a new hearing aid system comprising a location detector, e.g. including at least one of a GPS receiver, a calendar system, a WIFI network interface, a mobile phone network interface, etc, for determination of the geographical position of the user of the hearing aid system, and an environment detector configured for determination of the type of sound environment surrounding the user of the hearing aid system based on sound as received by the hearing aid system and the geographical position of the hearing aid system as determined by the location detector.
  • Today's conventional hearing aids typically comprise a Digital Signal Processor (DSP) for processing of sound received by the hearing aid for compensation of the user's hearing loss.
  • the processing of the DSP is controlled by a signal processing algorithm having various parameters for adjustment of the actual signal processing performed.
  • the gains in each of the frequency channels of a multichannel hearing aid are examples of such parameters.
  • the flexibility of the DSP is often utilized to provide a plurality of different algorithms and/or a plurality of sets of parameters of a specific algorithm.
  • various algorithms may be provided for noise suppression, i.e. attenuation of undesired signals and amplification of desired signals.
  • Desired signals are usually speech or music, and undesired signals can be background speech, restaurant clatter, music (when speech is the desired signal), traffic noise, etc.
  • Audio signals obtained from different sound environments may possess very different characteristics, e.g. average and maximum sound pressure levels (SPLs) and/or frequency content.
  • each type of sound environment may be associated with a particular program wherein a particular setting of algorithm parameters of a signal processing algorithm provides processed sound of optimum signal quality in the type of sound environment in question.
  • a set of such parameters may typically include parameters related to broadband gain, corner frequencies or slopes of frequency-selective filter algorithms and parameters controlling e.g. knee-points and compression ratios of Automatic Gain Control (AGC) algorithms.
  • today's DSP based hearing instruments are usually provided with a number of different programs, each program tailored to a particular sound environment category and/or particular user preferences.
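The relationship between environment-specific programs and their parameter sets can be sketched as follows; the class names, field names and numeric values below are illustrative assumptions for explanation only, not taken from the present disclosure:

```python
from dataclasses import dataclass

# Hypothetical per-program parameter set covering the AGC parameters
# mentioned above: a knee-point and a compression ratio.
@dataclass
class AgcParams:
    knee_point_db: float      # input level (dB) where compression starts
    compression_ratio: float  # dB-in per dB-out above the knee-point

def agc_gain(input_db: float, p: AgcParams) -> float:
    """Output level in dB after simple static compression."""
    if input_db <= p.knee_point_db:
        return input_db  # linear below the knee-point
    # Above the knee-point, each extra dB in yields 1/ratio dB out.
    return p.knee_point_db + (input_db - p.knee_point_db) / p.compression_ratio

# One parameter set per sound-environment program (illustrative values).
PROGRAMS = {
    "speech":  AgcParams(knee_point_db=50.0, compression_ratio=2.0),
    "music":   AgcParams(knee_point_db=65.0, compression_ratio=1.5),
    "traffic": AgcParams(knee_point_db=45.0, compression_ratio=3.0),
}
```

Switching program then amounts to handing the DSP a different parameter set, e.g. `PROGRAMS["music"]` instead of `PROGRAMS["speech"]`.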
  • Signal processing characteristics of each of these programs is typically determined during an initial fitting session in a dispenser's office and programmed into the instrument by activating corresponding algorithms and algorithm parameters in a non-volatile memory area of the hearing aid and/or transmitting corresponding algorithms and algorithm parameters to the nonvolatile memory area.
  • Some known hearing aids are capable of automatically classifying the user's sound environment into one of a number of relevant or typical everyday sound environment categories, such as speech, babble speech, restaurant clatter, music, traffic noise, etc.
  • Obtained classification results may be utilised in the hearing aid to automatically select signal processing characteristics of the hearing aid, e.g. to automatically switch to the most suitable algorithm for the environment in question.
  • Such a hearing aid will be able to maintain optimum sound quality and/or speech intelligibility for the individual hearing aid user in various sound environments.
  • a new hearing aid system is provided with a hearing aid that includes the geographical position of a user of the new hearing aid system in its determination of the sound environment.
  • the sound environment within a certain geographical area typically remains in the same category over time.
  • incorporation of the geographical position in the determination of the current sound environment will improve the determination, i.e. the determination may be made faster, and/or the determination may be made with increased certainty.
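One way of incorporating the geographical position, sketched here under the assumption of a Bayes-style fusion of audio-based class probabilities with a location-derived prior (all class names and probability values are illustrative, not specified by the disclosure):

```python
# Combine audio-based class probabilities with a prior derived from the
# geographical position (e.g. "restaurant clatter" is weighted up at a
# position previously recorded as a restaurant).
def combine(audio_probs: dict, location_prior: dict) -> dict:
    """Multiply audio probabilities by the location prior and renormalise."""
    fused = {c: audio_probs[c] * location_prior.get(c, 1.0)
             for c in audio_probs}
    total = sum(fused.values())
    return {c: p / total for c, p in fused.items()}

# Illustrative values: the audio alone is ambiguous...
audio_probs = {"speech": 0.4, "restaurant clatter": 0.35, "traffic": 0.25}
# ...but the position matches a recorded restaurant location.
location_prior = {"restaurant clatter": 3.0}
fused = combine(audio_probs, location_prior)
```

With the prior applied, the ambiguous audio evidence resolves to "restaurant clatter" with increased certainty, which is exactly the faster/more certain determination described above.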
  • a new hearing aid system comprising a first hearing aid with a first microphone for provision of a first audio input signal in response to sound signals received at the first microphone in a sound environment, a first processor that is configured to process the first audio input signal in accordance with a first signal processing algorithm to generate a first hearing loss compensated audio signal, a first output transducer for conversion of the first hearing loss compensated audio signal to a first acoustic output signal, a first sound environment detector configured for determination of the type of sound environment surrounding a user of the hearing aid system, and for provision of a first output for selection of the first signal processing algorithm of the first processor based on the determined type of sound environment, and a location detector, e.g. including at least one of a GPS receiver, a calendar system, a WIFI network interface, a mobile phone network interface, etc, configured for determining the geographical position of the hearing aid system.
  • the first sound environment detector is configured for determination of the type of sound environment surrounding the user of the hearing aid system based on the first audio input signal and the geographical position of the hearing aid system.
  • the hearing aid may be of any type configured to be head worn at, and shifting position and orientation together with, the head, such as a BTE, a RIE, an ITE, an ITC, a CIC, etc, hearing aid.
  • GPS receiver is used to designate a receiver of satellite signals of any satellite navigation system that provides location and time information anywhere on or near the Earth, such as the satellite navigation system maintained by the United States government and freely accessible to anyone with a GPS receiver and typically designated "the GPS-system", the Russian GLObal NAvigation Satellite System (GLONASS), the European Union Galileo navigation system, the Chinese Compass navigation system, the Indian Regional Navigational Satellite System, etc, and also including augmented GPS, such as StarFire, Omnistar, the Indian GPS Aided Geo Augmented Navigation (GAGAN), the European Geostationary Navigation Overlay Service (EGNOS), the Japanese Multifunctional Satellite Augmentation System (MSAS), etc.
  • in augmented GPS, a network of ground-based reference stations measures small variations in the GPS satellites' signals; correction messages are sent to the GPS system satellites, which broadcast the correction messages back to Earth, where augmented-GPS-enabled receivers use the corrections while computing their positions to improve accuracy.
  • the International Civil Aviation Organization (ICAO) calls this type of system a satellite-based augmentation system (SBAS).
  • a calendar system is a system that provides users with an electronic version of a calendar with data that can be accessed through a network, such as the Internet.
  • Well-known calendar systems include, e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, Microsoft Outlook with Exchange Server, etc.
  • the hearing aid may further comprise one or more orientation sensors, such as gyroscopes, e.g. MEMS gyros, tilt sensors, roll ball switches, etc, configured for outputting signals for determination of orientation of the head of a user wearing the hearing aid, e.g. one or more of head yaw, head pitch, head roll, or combinations hereof, e.g. inclination or tilt.
  • tilt denotes the angular deviation from the head's normal vertical position, when the user is standing up or sitting down.
  • thus, in a resting position of the head of a person standing up or sitting down, the tilt is 0°, and in a resting position of the head of a person lying down, the tilt is 90°.
  • the first sound environment detector may be configured for provision of the first output for selection of the first signal processing algorithm of the first processor based on user head orientation as determined based on the output signals of the one or more orientation sensors. For example, if the user changes position from sitting up to lying down in order to take a nap, the environment detector may cause the first signal processor to switch program accordingly, e.g. the first hearing aid may be automatically muted.
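The napping example above can be sketched with a hypothetical accelerometer-based tilt estimate; the sensor axis convention, the function names and the 60° threshold are illustrative assumptions, not taken from the disclosure:

```python
import math

def tilt_deg(accel_x: float, accel_y: float, accel_z: float) -> float:
    """Angular deviation of the head from vertical: 0 deg upright,
    90 deg lying down, estimated from a (hypothetical) accelerometer
    whose z axis points along the head's vertical axis."""
    # Tilt is the angle between the gravity vector and the z axis.
    g = math.sqrt(accel_x**2 + accel_y**2 + accel_z**2)
    return math.degrees(math.acos(accel_z / g))

def select_mute(tilt: float, threshold_deg: float = 60.0) -> bool:
    """Mute the hearing aid when the user lies down to take a nap."""
    return tilt >= threshold_deg

# Upright head: gravity along z, tilt 0 deg, aid stays active.
# Lying down: gravity along x, tilt 90 deg, aid is muted.
```

In a real device the orientation sensors listed above (gyroscopes, tilt sensors, roll ball switches) would feed equivalent signals to the environment detector.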
  • the output signals of the one or more orientation sensors may be input to another part of the hearing aid system, e.g. the first processor, configured for selection of the signal processing algorithm of the first processor based on the output signals of the one or more orientation sensors and the output of the first sound environment detector.
  • the signal processing algorithm may comprise a plurality of sub-algorithms or sub-routines that each performs a particular subtask in the signal processing algorithm.
  • the signal processing algorithm may comprise different signal processing sub-routines such as frequency selective filtering, single or multi-channel compression, adaptive feedback cancellation, speech detection and noise reduction, etc.
  • the signal processing algorithm will have one or several related algorithm parameters. These algorithm parameters can usually be divided into a number of smaller parameters sets, where each such algorithm parameter set is related to a particular part of the signal processing algorithm or to particular sub-routine as explained above. These parameter sets control certain characteristics of their respective subroutines such as corner-frequencies and slopes of filters, compression thresholds and ratios of compressor algorithms, adaptation rates and probe signal characteristics of adaptive feedback cancellation algorithms, etc.
  • Values of the algorithm parameters are preferably intermediately stored in a volatile data memory area of the processing means such as a data RAM area during execution of the signal processing algorithm.
  • Initial values of the algorithm parameters are stored in a non-volatile memory area such as an EEPROM/Flash memory area or battery backed-up RAM memory area to allow these algorithm parameters to be retained during power supply interruptions, usually caused by the user's removal or replacement of the hearing aid's battery or manipulation of an ON/OFF switch.
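The volatile/non-volatile split just described can be illustrated with a small sketch, where two dictionaries stand in for the EEPROM/Flash area and the data RAM area; class and parameter names are illustrative assumptions:

```python
# Initial parameter values persist in "EEPROM" (non-volatile); a working
# copy lives in "RAM" (volatile) during execution of the algorithm.
class ParameterStore:
    def __init__(self, initial_params: dict):
        self._eeprom = dict(initial_params)  # survives power interruptions
        self._ram = {}                       # lost on power interruption

    def power_on(self):
        self._ram = dict(self._eeprom)       # restore working copy

    def set_param(self, name, value):
        self._ram[name] = value              # runtime change, volatile only

    def get_param(self, name):
        return self._ram[name]

    def power_loss(self):
        self._ram = {}                       # e.g. battery removed

store = ParameterStore({"broadband_gain_db": 12.0})
store.power_on()
store.set_param("broadband_gain_db", 18.0)   # temporary runtime adjustment
store.power_loss()                           # user replaces the battery
store.power_on()                             # initial value is restored
```

After the power interruption the runtime adjustment is gone and the initial EEPROM value is back in force, which is the retention behaviour described above.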
  • the location detector e.g. including a GPS receiver, may be included in the first hearing aid for determining the geographical position of the user, when the user wears the hearing aid in its intended operational position on the head, based on satellite signals in the well-known way.
  • the user's current position and possibly orientation can be provided, e.g. to the first environment detector, based on data from the first hearing aid.
  • the first environment detector may be included in the first hearing aid, whereby signal transmission between the environment detector and other circuitry of the hearing aid is facilitated.
  • the location detector e.g. including the GPS receiver, may be included in a hand-held device that is interconnected with the hearing aid.
  • the hand-held device may be a GPS receiver, or a smart phone, e.g. an iPhone, an Android phone, a Windows phone, etc, e.g. with a GPS receiver and a calendar system, interconnected with the hearing aid.
  • the first environment detector may be included in the hand-held device.
  • the first environment detector may benefit from the larger computing resources and power supply typically available in a hand-held device as compared with the limited computing resources and power available in a hearing aid.
  • the hand-held device may accommodate a user interface configured for user control of the hearing aid system including the first hearing aid.
  • the hand-held device may have an interface for connection with a Wide-Area-Network, such as the Internet.
  • the hand-held device may access the Wide-Area-Network through a mobile telephone network, such as GSM, IS-95, UMTS, CDMA-2000, etc.
  • the hand-held device may have access to electronic time management and communication tools used by the user for communication and for storage of time management and communication information relating to the user.
  • the tools and the stored information typically reside on a remote server accessed through the Wide-Area-Network.
  • a processor of the hand-held device may be configured for storing hearing aid parameters together with GPS-data in the Cloud, i.e. on a remote server accessed through the Internet, possibly together with a hearing profile of the user, e.g. for backup of hearing aid settings at various GPS-locations, and/or for sharing of hearing aid settings at various GPS-locations with other hearing aid users.
  • the processor of the hand-held device may be configured for retrieving a hearing aid setting of another user made at the current GPS-location.
  • the hearing aid settings may be grouped according to hearing profile similarities and/or age and/or race and/or ear size, etc, and the hearing aid setting of another user may be selected in accordance with the user's belonging to such groups.
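The storing and sharing of settings at GPS-locations can be sketched as follows; here a plain dictionary stands in for the remote server ("the Cloud"), positions are rounded to group nearby recordings, and the grouping key and all values are illustrative assumptions:

```python
# "Cloud" stand-in: settings keyed by rounded GPS coordinates and by a
# hearing-profile group (e.g. similar audiograms, age, ear size).
CLOUD = {}

def store_setting(lat, lon, profile_group, setting):
    """Back up a hearing aid setting made at this GPS-location."""
    key = (round(lat, 3), round(lon, 3))     # ~100 m grouping, illustrative
    CLOUD.setdefault(key, {})[profile_group] = setting

def retrieve_setting(lat, lon, profile_group):
    """Fetch a setting another user of the same group made at (about)
    the current GPS-location, if any."""
    key = (round(lat, 3), round(lon, 3))
    return CLOUD.get(key, {}).get(profile_group)

# One user backs up a setting at a restaurant...
store_setting(55.676, 12.568, "moderate-loss", {"program": "restaurant"})
# ...and another user of the same group retrieves it at the same spot.
match = retrieve_setting(55.676, 12.568, "moderate-loss")
```

A real system would of course reach the server through the hand-held device's Wide-Area-Network interface rather than a local dictionary.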
  • the hearing aid may comprise a data interface for transmission of control signals from the hand-held device to other parts of the hearing aid system, including the first hearing aid.
  • the hearing aid may comprise a data interface for transmission of the output of the one or more orientation sensors to the hand-held device.
  • the data interface may be a wired interface, e.g. a USB interface, or a wireless interface, such as a Bluetooth interface, e.g. a Bluetooth Low Energy interface.
  • the hearing aid may comprise an audio interface for reception of an audio signal from the hand-held device and possibly other audio signal sources.
  • the audio interface may be a wired interface or a wireless interface.
  • the data interface and the audio interface may be combined into a single interface, e.g. a USB interface, a Bluetooth interface, etc.
  • the hearing aid may for example have a Bluetooth Low Energy data interface for exchange of sensor and control signals between the hearing aid and the hand-held device, and a wired audio interface for exchange of audio signals between the hearing aid and the hand-held device.
  • the first sound environment detector may comprise a first feature extractor for determination of characteristic parameters of the first audio input signal.
  • the feature extractor may determine characteristic parameters of the audio input signal, such as average and maximum sound pressure levels (SPLs), signal power, spectral data and other well-known features.
  • Spectral data may include Discrete Fourier Transform coefficients, Linear Predictive Coding parameters, cepstrum parameters or corresponding differential cepstrum parameters.
  • the feature extractor may output the characteristic parameters to a first environment classifier configured for categorizing the sound environment based on the determined characteristic parameters and the geographical position.
  • the first environment classifier is configured for categorization of sound environments into a number of sound environment classes or categories, such as speech, babble speech, restaurant clatter, music, traffic noise, etc.
  • the classification process may utilise a simple nearest neighbour search, a neural network, a Hidden Markov Model system or another system capable of pattern recognition.
  • the output of the environmental classification can be a "hard" classification containing one single environmental class or a set of probabilities indicating the probabilities of the sound belonging to the respective classes. Other outputs may also be applicable.
  • the first environment classifier may output a determined sound environment category to a first parameter map configured for provision of the output for selection of the corresponding first signal processing algorithm of the first processor.
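The chain feature extractor → environment classifier → parameter map can be sketched end to end; the prototype feature values, the two-feature representation and the program names are illustrative assumptions, and the classifier shown is the simple nearest-neighbour variant mentioned above:

```python
import math

# Prototype features per class: (average SPL in dB, spectral flatness).
# Values are made up for illustration.
PROTOTYPES = {
    "speech":             (65.0, 0.8),
    "restaurant clatter": (75.0, 0.5),
    "traffic noise":      (80.0, 0.3),
}

# Parameter map: each class selects a signal processing program.
PARAMETER_MAP = {
    "speech": "directional",
    "restaurant clatter": "directional",
    "traffic noise": "omni-directional",
}

def classify(features):
    """Nearest-neighbour search over the prototype features
    (a 'hard' classification returning one single class)."""
    return min(PROTOTYPES,
               key=lambda c: math.dist(features, PROTOTYPES[c]))

def select_program(features):
    """Feature extractor output in, selected program out."""
    return PARAMETER_MAP[classify(features)]
```

A 'soft' variant would instead return a set of class probabilities, e.g. distances converted to weights, as noted above.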
  • classification results may be utilised in the hearing aid to automatically select signal processing characteristics of the hearing aid, e.g. to automatically switch to the most suitable algorithm for the sound environment in question.
  • the hearing aid may, for example, switch between an omni-directional and a directional microphone preset program in dependence on not just the level of the background noise, but also on further signal characteristics of this background noise.
  • Omni-directional operation could be selected in the event that the noise is traffic noise, to allow the user to clearly hear approaching traffic independent of its direction of arrival. If, on the other hand, the background noise was classified as being babble-noise, the directional listening program could be selected to allow the user to hear a target speech signal with improved signal-to-noise ratio (SNR) during a conversation.
  • Hidden Markov Models used for analysis and classification may, for example, obtain a detailed characterisation of the microphone signal.
  • Hidden Markov Models are capable of modelling stochastic and non-stationary signals in terms of both short and long time temporal variations.
  • the environment detector may be configured for recording the geographical position determined by the location detector together with the determined type of sound environment at the geographical position. Recording may be performed at regular time intervals, and/or with a certain geographical distance between recordings, and/or triggered by certain events, e.g. a shift in type of sound environment, a change in signal processing, such as a change in signal processing programme, a change in signal processing parameters, etc., etc.
  • the environment detector may be configured for increasing the probability that the current sound environment is of the same type as one already recorded at, or within a threshold distance of, the current geographical position, or for determining that the current sound environment is of the already recorded type of sound environment.
  • the threshold distance may be predetermined, e.g. reflecting the uncertainty of the determination of geographical position of the location detector, e.g. less than or equal to the uncertainty of the location detector, or less than or equal to an average distance between recordings of geographical position and type of sound environment, or less than a characteristic size of significant features at the current geographical position such as a sports arena, a central station, a city hall, a theatre, etc.
  • the threshold distance may also be adapted to the current environment, e.g. resulting in relatively small threshold distances in areas, e.g. urban areas, with short distances between recordings of different types of sound environments, and resulting in relatively large threshold distances in areas, e.g. open ranges, with large distances between recordings of different types of sound environments.
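The recording/lookup behaviour can be sketched as below; the equirectangular distance approximation and the 50 m default threshold are illustrative assumptions (the disclosure leaves the threshold predetermined or adapted to the environment):

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def distance_m(a, b):
    """Approximate ground distance in metres between two (lat, lon)
    positions in degrees, via an equirectangular projection."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return EARTH_RADIUS_M * math.hypot(x, y)

def recorded_environment(position, recordings, threshold_m=50.0):
    """Return the sound environment type recorded nearest the current
    position, if that recording lies within the threshold distance;
    otherwise None (no recorded type applies)."""
    near = [(distance_m(position, pos), env) for pos, env in recordings]
    if not near:
        return None
    d, env = min(near)
    return env if d <= threshold_m else None

# One earlier recording at a restaurant location (illustrative).
recordings = [((55.6761, 12.5683), "restaurant clatter")]
```

In an urban area the threshold could be shrunk, and on open ranges enlarged, simply by passing a different `threshold_m`.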
  • a user interface of the hearing aid system may be configured to allocate certain types of sound environment to certain geographical areas.
  • the location detector may determine the geographical position of the hearing aid system based on the postal address of a WIFI network the hearing aid system may be connected to, or by triangulation based on signals possibly received from various GSM-transmitters as is well-known in the art of mobile phones. Further, the location detector may be configured for accessing a calendar system of the user to obtain information on the expected whereabouts of the user, e.g. meeting room, office, canteen, restaurant, home, etc, and to include this information in the determination of the geographical position. Thus, information from the calendar system of the user may substitute or supplement information on the geographical position determined otherwise, e.g. by a GPS receiver.
  • the environment detector may automatically switch the hearing aid(s) of the hearing aid system to flight mode, i.e. radio(s) of the hearing aid(s) are turned off, when the user is in an airplane according to the location detector.
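The flight-mode rule can be sketched with a hypothetical airborne predicate driving both aids of the system; class and function names are illustrative assumptions:

```python
# Each hearing aid exposes a radio that flight mode turns off.
class HearingAid:
    def __init__(self):
        self.radio_on = True

    def set_flight_mode(self, enabled: bool):
        self.radio_on = not enabled

def update_flight_mode(aids, in_airplane: bool):
    """Apply the location detector's airborne determination to every
    hearing aid of the system."""
    for aid in aids:
        aid.set_flight_mode(in_airplane)

# Binaural system: the location detector reports the user is airborne,
# so both radios are switched off automatically.
aids = [HearingAid(), HearingAid()]
update_flight_mode(aids, in_airplane=True)
```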
  • GPS signals may be absent or so weak that the geographical position cannot be determined by a GPS receiver.
  • Information from the calendar system on the whereabouts of the user may then be used to provide information on the geographical position, or information from the calendar system may supplement information on the geographical position, e.g. indication of a specific meeting room may provide information on which floor in a high rise building, the hearing aid system is located. Information on height is typically not available from a GPS receiver.
  • the location detector may automatically use information from the calendar system, when the geographical position cannot be determined otherwise, e.g. when the GPS-receiver is unable to provide the geographical position.
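The fallback order just described (GPS first, then e.g. a WIFI-derived position, then the calendar system) can be sketched with stand-in callables that each return a position or None when unavailable; all names and the example calendar entry are illustrative assumptions:

```python
# Each source is a callable returning position information, or None
# when that source cannot currently provide a position.
def determine_position(gps_fix, wifi_fix, calendar_fix):
    """Try the sources in order of preference; return the first fix."""
    for source in (gps_fix, wifi_fix, calendar_fix):
        position = source()
        if position is not None:
            return position
    return None

# Indoors in a high rise building: no GPS fix, no known WIFI network,
# but the calendar entry names a meeting room on a specific floor,
# information a GPS receiver typically cannot provide.
no_gps = lambda: None
no_wifi = lambda: None
calendar = lambda: {"place": "meeting room", "floor": 14}
```

Outdoors, the GPS callable would return a fix and the calendar would never be consulted, matching the "when the geographical position cannot be determined otherwise" condition above.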
  • the environment detector may determine the type of sound environment in a conventional way based on the received sound signal; or, the hearing aid may be set to operate in a mode selected by the user, e.g. previously during a fitting session, or when the situation occurs.
  • the new hearing aid system may be a binaural hearing aid system with two hearing aids, one for the right ear and one for the left ear of the user.
  • the new hearing aid system may comprise a second hearing aid with a second microphone for provision of a second audio input signal in response to sound signals received at the second microphone in a sound environment, a second processor that is configured to process the second audio input signal in accordance with a second signal processing algorithm to generate a second hearing loss compensated audio signal, a second output transducer for conversion of the second hearing loss compensated audio signal to a second acoustic output signal.
  • the circuitry of the second hearing aid is preferably identical to the circuitry of the first hearing aid apart from the fact that the second hearing aid, typically, is adjusted to compensate a hearing loss that is different from the hearing loss compensated by the first hearing aid, since, typically, binaural hearing loss differs for the two ears.
  • the first sound environment detector may be configured for determination of the type of sound environment surrounding the user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system.
  • the first sound environment detector may be configured for provision of a second output for selection of a second signal processing algorithm of the second processor.
  • the second hearing aid may comprise a second sound environment detector similar to the first sound environment detector and configured for determination of the type of sound environment surrounding a user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system, and for provision of a second output for selection of the second signal processing algorithm of the second processor.
  • the signal processing algorithms of the first and second signal processors are selected in a coordinated way. Since sound environment characteristics may differ significantly at the two ears of a user, it will often occur that independent sound environment determination at the two ears of a user differs, and this may lead to undesired different signal processing of sounds in the hearing aids. Thus, preferably the signal processing algorithms of the first and second processors are selected based on the same signals, such as sound signals received at the hand-held device, or both sound signals received at the left ear and sound signals received at the right ear, or a combination of sound signals received at the hand-held device and sound signals received at the left ear and sound signals received at the right ear, etc.
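The coordinated selection can be sketched by pooling evidence from both ears before deciding, so the two processors always receive the same program selection; averaging the class probabilities is one simple pooling choice assumed here, and all values are illustrative:

```python
# Pool the left and right classification probabilities, then make one
# shared decision applied to both signal processors.
def coordinated_selection(left_probs: dict, right_probs: dict) -> str:
    pooled = {c: (left_probs[c] + right_probs[c]) / 2 for c in left_probs}
    return max(pooled, key=pooled.get)

# Independently, the ears would disagree (speech vs traffic noise)...
left = {"speech": 0.6, "traffic noise": 0.4}
right = {"speech": 0.3, "traffic noise": 0.7}
# ...but pooling yields one coordinated decision for both hearing aids.
```

The same pattern covers the other combinations mentioned above, e.g. also folding in sound signals received at the hand-held device before deciding.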
  • the second sound environment detector may comprise a second feature extractor for determination of characteristic parameters of the second audio input signal.
  • the second feature extractor may output the characteristic parameters to a second environment classifier for categorizing the sound environment based on the determined characteristic parameters and the geographical position.
  • the second environment classifier may output a sound environment category to a second parameter map configured for provision of the output for selection of the second signal processing algorithm of the second processor.
  • a hearing aid system includes: a first hearing aid with a first microphone for provision of a first audio input signal in response to sound signals received at the first microphone in a sound environment, a first processor that is configured to process the first audio input signal in accordance with a first signal processing algorithm to generate a first hearing loss compensated audio signal, and a first output transducer for conversion of the first hearing loss compensated audio signal to a first acoustic output signal; a first sound environment detector configured for determining a type of sound environment surrounding a user of the hearing aid system, and for provision of a first output for selection of the first signal processing algorithm based on the determined type of sound environment; and a location detector configured for determining a geographical position of the hearing aid system; wherein the first sound environment detector is configured for determining the type of sound environment surrounding the user of the hearing aid system based on the first audio input signal and the geographical position of the hearing aid system.
  • the location detector includes a GPS receiver.
  • the first sound environment detector is configured for recording the geographical position determined by the location detector together with the type of sound environment at the geographical position.
  • the first sound environment detector is configured for determining the type of sound environment by considering a probability of occurrence for a previously recorded type of sound environment that is within a distance threshold from the determined geographical position.
  • the hearing aid system further includes a user interface configured to allocate certain sound environment categories to certain respective geographical areas.
  • the location detector is configured for accessing a calendar system of the user to obtain information regarding a location of the user, and to determine the geographical position of the hearing aid system based on the information regarding the location of the user.
  • the location detector is configured for automatically accessing the calendar system of the user to obtain the information regarding the location of the user, and to determine the geographical position of the hearing aid system based on the information regarding the location of the user, when the location detector is otherwise unable to determine the geographical position of the hearing aid system.
  • the location detector is configured for obtaining a height of the geographical position from the calendar system.
  • the first sound environment detector is configured for automatically switching the first hearing aid of the hearing aid system to a flight mode, when the user is in an airplane according to the location detector.
  • the first hearing aid comprises at least one orientation sensor configured for providing information regarding an orientation of a head of the user when the user wears the first hearing aid in its intended operating position.
  • the first hearing aid is configured for selection of the first signal processing algorithm based on the information regarding the orientation of the head of the user.
  • the hearing aid system further includes a hand-held device communicatively coupled with the first hearing aid, the hand-held device accommodating the location detector.
  • the hand-held device also accommodates the first sound environment detector.
  • the hand-held device comprises a user interface configured for controlling the first hearing aid.
  • the first hearing aid accommodates the first sound environment detector.
  • the first sound environment detector comprises: a first feature extractor for determining characteristic parameters of the first audio input signal, a first environment classifier for categorizing the sound environment based on the determined characteristic parameters and the geographical position, and a first parameter map for provision of the first output for selection of the first signal processing algorithm.
  • the hearing aid system further includes a second hearing aid with a second microphone for provision of a second audio input signal in response to sound signals received at the second microphone, a second processor that is configured to process the second audio input signal in accordance with a second signal processing algorithm to generate a second hearing loss compensated audio signal, a second output transducer for conversion of the second hearing loss compensated audio signal to a second acoustic output signal, wherein the first sound environment detector is configured for determining the type of sound environment surrounding the user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system.
  • the first sound environment detector is configured for provision of a second output for selection of the second signal processing algorithm.
  • the second hearing aid comprises: a second sound environment detector configured for determining a type of sound environment surrounding the user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system, and provision of a second output for selection of the second signal processing algorithm based on the type of sound environment determined by the second sound environment detector.
  • the new hearing aid system will now be described more fully hereinafter with reference to the accompanying drawings, in which various types of the new hearing aid system are shown.
  • the new hearing aid system may be embodied in different forms not shown in the accompanying drawings and should not be construed as limited to the embodiments and examples set forth herein.
  • Fig. 1 schematically illustrates a new hearing aid system 10 with a first hearing aid 12 with a sound environment detector 14.
  • the first hearing aid 12 may be of any type configured to be worn at the head, such as a BTE, a RIE, an ITE, an ITC, a CIC, etc, hearing aid.
  • the first hearing aid 12 comprises a first front microphone 16 and first rear microphone 18 connected to respective A/D converters (not shown) for provision of respective digital input signals 20, 22 in response to sound signals received at the microphones 16, 18 in a sound environment surrounding the user of the hearing aid system 10.
  • the digital input signals 20, 22 are input to a hearing loss processor 24 that is configured to process the digital input signals 20, 22 in accordance with a signal processing algorithm to generate a hearing loss compensated output signal 26.
  • the hearing loss compensated output signal 26 is routed to a D/A converter (not shown) and an output transducer 28 for conversion of the hearing loss compensated output signal 26 to an acoustic output signal.
  • the new hearing aid system 10 further comprises a hand-held device 30, e.g. a smart phone, accommodating the sound environment detector 14 for determination of the sound environment surrounding the user of the hearing aid system 10. The determination is based on a sound signal picked up by a microphone 32 in the hand-held device. Based on the determination, the sound environment detector 14 provides an output 34 to the hearing aid processor 24 for selection of the signal processing algorithm appropriate for the determined sound environment.
  • the hearing aid processor 24 is automatically switched to the most suitable algorithm for the determined environment whereby optimum sound quality and/or speech intelligibility is maintained in various sound environments.
  • the signal processing algorithms of the processor 24 may perform various forms of noise reduction and dynamic range compression as well as a range of other signal processing tasks.
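By way of illustration only (Python, not the patent's specified implementation), the static gain curve of a single-band downward compressor, one of the signal processing tasks mentioned, can be sketched with a knee point and a compression ratio:

```python
def compress_gain_db(level_db, knee_db=-40.0, ratio=3.0):
    """Static downward compression: below the knee point the gain is
    0 dB; above it the output level grows at 1/ratio of the input level.
    Knee and ratio values are illustrative assumptions."""
    if level_db <= knee_db:
        return 0.0
    # Gain such that output level = knee + (input level - knee) / ratio.
    return (knee_db + (level_db - knee_db) / ratio) - level_db
```

For example, an input 12 dB above the knee is attenuated by 8 dB, so the output ends up only 4 dB above the knee.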
  • the first environment detector 14 benefits from the larger computing resources and power supply typically available in the hand-held device 30.
  • the sound environment detector 14 comprises a feature extractor 36 for determination of characteristic parameters of the received sound signals.
  • the feature extractor 36 maps the signal from the microphone 32 onto sound features, i.e. the characteristic parameters. These features can be signal power, spectral data and other well-known features.
  • the sound environment detector 14 further comprises an environment classifier 38 for categorizing the sound environment based on the determined characteristic parameters output by the feature extractor 36.
  • the environment classifier 38 categorizes the sounds into a number of environmental classes, such as speech, babble speech, restaurant clatter, music, traffic noise, etc.
  • the classification process may utilise a simple nearest neighbour search, a neural network, a Hidden Markov Model system or another system capable of pattern recognition.
  • the output of the environmental classification can be a "hard" classification containing one single environmental class or a set of probabilities indicating the probabilities of the sound belonging to the respective classes. Other outputs may also be applicable.
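A minimal sketch of one such classifier, a nearest-neighbour search producing the "soft" probability output described above (class labels and feature vectors are illustrative assumptions):

```python
import math

def classify(features, examples):
    """examples: list of (feature_vector, class_label) pairs.
    Returns a dict of class probabilities; the 'hard' classification
    is simply the most probable class."""
    weights = {}
    for vec, label in examples:
        w = 1.0 / (math.dist(features, vec) + 1e-9)  # closer examples weigh more
        weights[label] = weights.get(label, 0.0) + w
    total = sum(weights.values())
    return {label: w / total for label, w in weights.items()}
```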
  • the sound environment detector 14 further comprises a parameter map 40 for the provision of the output 34 for selection of the signal processing algorithms.
  • the parameter map 40 maps the output of the environment classifier 38 to a set of parameters for the hearing aid sound processor 24. Examples of such parameters are: amount of noise reduction, amount of gain and amount of HF gain. Other parameters may be included.
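Such a parameter map can be sketched as a class-to-parameter table blended by the classifier's probability output (class names and parameter values below are illustrative assumptions, not taken from the patent):

```python
# Hypothetical class-to-parameter table; values are illustrative only.
PARAMETER_MAP = {
    "speech":  {"noise_reduction_db": 0,  "gain_db": 6, "hf_gain_db": 4},
    "traffic": {"noise_reduction_db": 12, "gain_db": 2, "hf_gain_db": 0},
    "music":   {"noise_reduction_db": 0,  "gain_db": 4, "hf_gain_db": 2},
}

def map_parameters(class_probs):
    """Blend the per-class parameter sets, weighted by the classifier's
    probability output, into one setting for the sound processor."""
    params = {key: 0.0 for key in next(iter(PARAMETER_MAP.values()))}
    for label, p in class_probs.items():
        for key, value in PARAMETER_MAP[label].items():
            params[key] += p * value
    return params
```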
  • the hand-held device 30 includes a location detector with a GPS receiver 42 configured for determining the geographical position of the hearing aid system 10.
  • the illustrated hand-held device 30 is a smart phone also having a mobile interface 48 comprising a GSM-interface for interconnection with a mobile phone network and a WIFI interface, as is well-known in the art of mobile phones.
  • the position of the illustrated hearing aid system 10 may be determined as the address of the WIFI network or by triangulation based on signals received from various GSM-transmitters as is well-known in the art of mobile phones.
  • the illustrated environment detector 14 is configured for recording the determined geographical positions together with the determined types of sound environment at the respective geographical positions. Recording may be performed at regular time intervals, and/or with a certain geographical distance between recordings, and/or triggered by certain events, e.g. a shift in type of sound environment, a change in signal processing, such as a change in signal processing programme, a change in signal processing parameters, etc., etc.
  • when the hearing aid system is within a distance threshold from a position at which a type of sound environment was previously recorded, the environment detector is configured for increasing the probability that the current sound environment is of that same type of sound environment, or for determining that the current sound environment is of that same type of sound environment.
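The position-dependent adjustment can be sketched as a prior applied to the classifier's probabilities (the record format, local metric coordinates, distance threshold and boost factor are all assumptions of this sketch; a real system would use geodetic distance):

```python
import math

def geo_prior(class_probs, position, history, threshold_m=50.0, boost=2.0):
    """Increase the probability of sound environment classes previously
    recorded within threshold_m of the current position, then renormalise.
    `history` is a list of ((x_m, y_m), class_label) records in a local
    metric coordinate frame."""
    adjusted = dict(class_probs)
    for pos, label in history:
        if math.dist(position, pos) <= threshold_m and label in adjusted:
            adjusted[label] *= boost
    total = sum(adjusted.values())
    return {label: p / total for label, p in adjusted.items()}
```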
  • a user interface (not shown) of the hearing aid system 10 may be configured to allocate certain types of sound environment to certain geographical areas.
  • the illustrated sound environment detector 14 is also configured for accessing a calendar system of the user, e.g. through the mobile interface 48, to obtain information on the whereabouts of the user, e.g. meeting room, office, canteen, restaurant, home, etc, and to include this information in the determination of the type of sound environment.
  • Information from the calendar system of the user may substitute or supplement information on the geographical position determined by the GPS receiver.
  • the environment detector 14 may automatically switch the hearing aid(s) of the hearing aid system 10 to flight mode, i.e. radio(s) of the hearing aid(s) are turned off, when the user is in an airplane as indicated in the calendar system of the user.
  • GPS signals may be absent or so weak that the geographical position cannot be determined by the GPS receiver.
  • Information from the calendar system on the whereabouts of the user may then be used to provide information on the geographical position, or information from the calendar system may supplement information on the geographical position, e.g. indication of a specific meeting room may provide information on the floor in a high rise building.
  • Information on height is typically not available from a GPS receiver.
  • the environment detector 14 may automatically use information from the calendar system, when the GPS-receiver is unable to provide the geographical position. In the event that no information on geographical position is available from the GPS receiver and calendar system, the environment detector may determine the type of sound environment in a conventional way based on the received sound signal; or, the hearing aid may be set to operate in a mode selected by the user, e.g. previously during a fitting session, or when the situation occurs.
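The fallback order described above (GPS fix first, then the calendar system, then sound alone or a user-selected mode) can be sketched as follows (function and argument names are hypothetical):

```python
def determine_position(gps_fix, calendar_location, lookup_address):
    """Fallback cascade: use the GPS fix when available; otherwise try the
    calendar entry's location, resolved to coordinates by a hypothetical
    lookup_address function; otherwise return None, so that classification
    falls back to the received sound signal alone (or a user-selected mode)."""
    if gps_fix is not None:
        return gps_fix
    if calendar_location is not None:
        return lookup_address(calendar_location)
    return None
```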
  • the hearing aid 12 comprises one or more orientation sensors 44, such as gyroscopes, e.g. MEMS gyros, tilt sensors, roll ball switches, etc, configured for outputting signals for determination of the orientation of the head of a user wearing the hearing aid, e.g. one or more of head yaw, head pitch and head roll, or combinations hereof, e.g. tilt, i.e. the angular deviation from the head's normal vertical position when the user is standing up or sitting down. For example, in a resting position, the tilt of the head of a person standing up or sitting down is 0°, and the tilt of the head of a person lying down is 90°.
  • the first processor 24 is configured for selection of the first signal processing algorithm of the processor 24 based on user head orientation as determined based on the output signals 46 of the one or more orientation sensors 44 and the output control signal 34 of the first sound environment detector 14. For example, if the user changes position from sitting up to lying down in order to take a nap, the environment detector 14 may cause the signal processor 24 to switch program accordingly, e.g. the first hearing aid 12 may be automatically muted.
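The orientation-dependent selection can be sketched as follows (the 60° threshold and the program names are illustrative assumptions):

```python
def select_program(tilt_deg, environment_program):
    """If the head tilt (angular deviation from the normal vertical
    position) indicates the user is lying down, mute the hearing aid;
    otherwise keep the program chosen by the environment detector."""
    if abs(tilt_deg) > 60.0:
        return "mute"
    return environment_program
```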
  • the new hearing system 10 shown in Fig. 2 is similar to the new hearing aid system of Fig. 1 and operates in the same way, except for the fact that the sound environment detector 14 has been moved from the hand-held device 30 in Fig. 1 to the first hearing aid 12 of Fig. 2 .
  • the microphone output signals 20, 22 can be connected directly to the sound environment detector 14 so that the type of sound environment can be determined based on signals received by the microphones in the hearing aid without increasing data transmission requirements.
  • the new hearing aid system 10 shown in Fig. 3 is a binaural hearing aid system with two hearing aids, a first hearing aid 12A for the right ear and a second hearing aid 12B for the left ear of the user, and a hand-held device 30 comprising the GPS receiver 42 and the mobile interface 48.
  • Each of the illustrated first hearing aid 12A and second hearing aid 12B is similar to the hearing aid shown in Fig. 2 and operates in a similar way, except for the fact that the respective sound environment detectors 14A, 14B co-operate to provide co-ordinated selection of signal processing algorithms in the two hearing aids 12A, 12B as further explained below.
  • Each of the first and second hearing aids 12A, 12B of the binaural hearing aid system 10 comprises a binaural sound environment detector 14A, 14B for determination of the sound environment surrounding a user of the binaural hearing aid system 10. The determination is based on the output signals 20A, 22A, 20B, 22B of the microphones. Based on the determination, the binaural sound environment detectors 14A, 14B provide outputs 34A, 34B to the respective hearing aid processors 24A, 24B for selection of the signal processing algorithm appropriate for the determined sound environment.
  • the binaural sound environment detectors 14A, 14B determine the sound environment based on signals from both hearing aids, i.e. binaurally, whereby the hearing aid processors 24A, 24B are automatically switched in co-ordination to the most suitable algorithm for the determined sound environment, so that optimum sound quality and/or speech intelligibility are maintained in various sound environments by the binaural hearing aid system 10.
  • the binaural sound environment detectors 14A, 14B illustrated in Fig. 3 are both similar to the sound environment detector 14 shown in Fig. 2 apart from the fact that the first environment detector 14 only receives inputs from one hearing aid 12 while each of the binaural sound environment detectors 14A, 14B receives inputs from both hearing aids 12A, 12B.
  • signals are transmitted between the hearing aids 12A, 12B so that the algorithms executed by the signal processors 24A, 24B are selected in coordination.
  • the output of the environment classifier of the first sound environment detector 14A of the first hearing aid 12A is transmitted to the second hearing aid 12B, and the output of the environment classifier of the second sound environment detector 14B of the second hearing aid 12B is transmitted to the first hearing aid 12A.
  • the parameter maps 40A, 40B of the first and second hearing aids 12A, 12B then operate based on the same two inputs to produce the control signals 34A, 34B for selection of the processor algorithms, and since the parameter maps 40A, 40B receive identical inputs, algorithm selections in the two hearing aids 12A, 12B are co-ordinated.
  • the transmission data rate is low, since only a set of probabilities or logic values for the environment classes has to be transmitted between the hearing aids 12A, 12B. Rather high latency can be accepted.
  • By applying time constants to the variables that will change according to the output of the parameter mapping, it is possible to smooth out differences that may be caused by latency.
  • hereby, signal processing in the two hearing instruments is coordinated. However, if transition periods of a few seconds are allowed, the system can operate with only 3-4 transmissions per second, whereby power consumption is kept low.
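The smoothing by time constants mentioned above can be sketched as first-order (exponential) smoothing of each processing parameter toward its target value (the 2-second time constant is an illustrative assumption):

```python
import math

def smooth(current, target, dt_s, tau_s=2.0):
    """First-order smoothing of a processing parameter toward its target.
    With a time constant of a couple of seconds, parameter updates arriving
    only 3-4 times per second still produce a gradual, coordinated change."""
    alpha = 1.0 - math.exp(-dt_s / tau_s)
    return current + alpha * (target - current)
```

Called once per received update (e.g. every 0.3 s), the parameter converges smoothly to the transmitted target instead of jumping.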
  • the sound environment detectors 14A, 14B incorporate determined positions provided by the hand-held unit 30 of the new hearing aid system 10 in the same way as disclosed above with reference to Figs. 1 and 2 .
  • co-ordinated signal processing in the two hearing aids 12A, 12B is obtained by provision of a single sound environment detector 14 similar to the sound environment detector shown in Fig. 1 and operating in a similar way apart from the fact that the sound environment detector 14 provides two control outputs 34A, 34B, one of which 34A is connected to the first hearing aid 12A, and the other of which 34B is connected to the second hearing aid 12B.
  • the illustrated sound environment detector 14 is accommodated in the hand-held device 30.
  • Each of the hearing aids 12A, 12B is similar to the hearing aid 12 shown in Fig. 1 and operates in the same way.

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Telephone Function (AREA)

Abstract

A new hearing aid system is provided, comprising a location detector, e.g. a GPS receiver, for determination of the geographical position of the user of the hearing aid system, and an environment detector configured for determination of the type of sound environment surrounding the user of the hearing aid system based on sound as received by the hearing aid system and the geographical position of the hearing aid system as determined by the location detector.

Description

    FIELD OF TECHNOLOGY
  • A new hearing aid system is provided, comprising a location detector, e.g. including at least one of a GPS receiver, a calendar system, a WIFI network interface, a mobile phone network interface, etc, for determination of the geographical position of the user of the hearing aid system, and an environment detector configured for determination of the type of sound environment surrounding the user of the hearing aid system based on sound as received by the hearing aid system and the geographical position of the hearing aid system as determined by the location detector.
  • BACKGROUND
  • Today's conventional hearing aids typically comprise a Digital Signal Processor (DSP) for processing of sound received by the hearing aid for compensation of the user's hearing loss. As is well known in the art, the processing of the DSP is controlled by a signal processing algorithm having various parameters for adjustment of the actual signal processing performed. The gains in each of the frequency channels of a multichannel hearing aid are examples of such parameters.
  • The flexibility of the DSP is often utilized to provide a plurality of different algorithms and/or a plurality of sets of parameters of a specific algorithm. For example, various algorithms may be provided for noise suppression, i.e. attenuation of undesired signals and amplification of desired signals. Desired signals are usually speech or music, and undesired signals can be background speech, restaurant clatter, music (when speech is the desired signal), traffic noise, etc.
  • The different algorithms or parameter sets are typically included to provide comfortable and intelligible reproduced sound quality in different sound environments, such as speech, babble speech, restaurant clatter, music, traffic noise, etc. Audio signals obtained from different sound environments may possess very different characteristics, e.g. average and maximum sound pressure levels (SPLs) and/or frequency content.
  • Therefore, in a hearing aid with a DSP, each type of sound environment may be associated with a particular program wherein a particular setting of algorithm parameters of a signal processing algorithm provides processed sound of optimum signal quality in the type of sound environment in question. A set of such parameters may typically include parameters related to broadband gain, corner frequencies or slopes of frequency-selective filter algorithms and parameters controlling e.g. knee-points and compression ratios of Automatic Gain Control (AGC) algorithms.
  • Consequently, today's DSP based hearing instruments are usually provided with a number of different programs, each program tailored to a particular sound environment category and/or particular user preferences. Signal processing characteristics of each of these programs are typically determined during an initial fitting session in a dispenser's office and programmed into the instrument by activating corresponding algorithms and algorithm parameters in a non-volatile memory area of the hearing aid and/or transmitting corresponding algorithms and algorithm parameters to the non-volatile memory area.
  • Some known hearing aids are capable of automatically classifying the user's sound environment into one of a number of relevant or typical everyday sound environment categories, such as speech, babble speech, restaurant clatter, music, traffic noise, etc.
  • Obtained classification results may be utilised in the hearing aid to automatically select signal processing characteristics of the hearing aid, e.g. to automatically switch to the most suitable algorithm for the environment in question. Such a hearing aid will be able to maintain optimum sound quality and/or speech intelligibility for the individual hearing aid user in various sound environments.
  • US 2007/0140512 A1 and WO 01/76321 disclose examples of classifier approaches.
  • SUMMARY
  • A new hearing aid system is provided with a hearing aid that includes the geographical position of a user of the new hearing aid system in its determination of the sound environment.
  • The sound environment within a certain geographical area typically remains in the same category over time. Thus, incorporation of the geographical position in the determination of the current sound environment will improve the determination, i.e. the determination may be made faster, and/or the determination may be made with increased certainty.
  • Thus, a new hearing aid system is provided, comprising a first hearing aid with a first microphone for provision of a first audio input signal in response to sound signals received at the first microphone in a sound environment,
    a first processor that is configured to process the first audio input signal in accordance with a first signal processing algorithm to generate a first hearing loss compensated audio signal,
    a first output transducer for conversion of the first hearing loss compensated audio signal to a first acoustic output signal,
    a first sound environment detector configured for
    determination of the type of sound environment surrounding a user of the hearing aid system, and for
    provision of a first output for selection of the first signal processing algorithm of the first processor based on the determined type of sound environment, and
    a location detector, e.g. including at least one of a GPS receiver, a calendar system, a WIFI network interface, a mobile phone network interface, etc, configured for determining the geographical position of the hearing aid system.
  • The first sound environment detector is configured for determination of the type of sound environment surrounding the user of the hearing aid system based on the first audio input signal and the geographical position of the hearing aid system.
  • The hearing aid may be of any type configured to be worn at, and to shift position and orientation together with, the head, such as a BTE, a RIE, an ITE, an ITC, a CIC, etc, hearing aid.
  • Throughout the present disclosure, the term GPS receiver is used to designate a receiver of satellite signals of any satellite navigation system that provides location and time information anywhere on or near the Earth, such as the satellite navigation system maintained by the United States government and freely accessible to anyone with a GPS receiver and typically designated "the GPS-system", the Russian GLObal NAvigation Satellite System (GLONASS), the European Union Galileo navigation system, the Chinese Compass navigation system, the Indian Regional Navigational Satellite System, etc, and also including augmented GPS, such as StarFire, Omnistar, the Indian GPS Aided Geo Augmented Navigation (GAGAN), the European Geostationary Navigation Overlay Service (EGNOS), the Japanese Multifunctional Satellite Augmentation System (MSAS), etc. In augmented GPS, a network of ground-based reference stations measures small variations in the GPS satellites' signals, correction messages are sent to the GPS system satellites, which broadcast the correction messages back to Earth, where augmented GPS-enabled receivers use the corrections while computing their positions to improve accuracy. The International Civil Aviation Organization (ICAO) calls this type of system a satellite-based augmentation system (SBAS).
  • Throughout the present disclosure, a calendar system is a system that provides users with an electronic version of a calendar with data that can be accessed through a network, such as the Internet. Well-known calendar systems include, e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, Microsoft Outlook with Exchange Server, etc.
  • The hearing aid may further comprise one or more orientation sensors, such as gyroscopes, e.g. MEMS gyros, tilt sensors, roll ball switches, etc, configured for outputting signals for determination of the orientation of the head of a user wearing the hearing aid, e.g. one or more of head yaw, head pitch and head roll, or combinations hereof, e.g. inclination or tilt.
  • Throughout the present disclosure, the word "tilt" denotes the angular deviation from the head's normal vertical position, when the user is standing up or sitting down. Thus, in a resting position of the head of a person standing up or sitting down, the tilt is 0°, and in a resting position of the head of a person lying down, the tilt is 90°.
  • The first sound environment detector may be configured for provision of the first output for selection of the first signal processing algorithm of the first processor based on user head orientation as determined based on the output signals of the one or more orientation sensors. For example, if the user changes position from sitting up to lying down in order to take a nap, the environment detector may cause the first signal processor to switch program accordingly, e.g. the first hearing aid may be automatically muted.
  • Alternatively, the output signals of the one or more orientation sensors may be input to another part of the hearing aid system, e.g. the first processor, configured for selection of the signal processing algorithm of the first processor based on the output signals of the one or more orientation sensors and the output of the first sound environment detector.
  • The signal processing algorithm may comprise a plurality of sub-algorithms or sub-routines that each performs a particular subtask in the signal processing algorithm. As an example, the signal processing algorithm may comprise different signal processing sub-routines such as frequency selective filtering, single or multi-channel compression, adaptive feedback cancellation, speech detection and noise reduction, etc.
  • Furthermore, several distinct selections of the above-mentioned signal processing sub-routines may be grouped together to form two, three or more different pre-set listening programs which the user may be able to select between in accordance with his/her preferences.
  • The signal processing algorithm will have one or several related algorithm parameters. These algorithm parameters can usually be divided into a number of smaller parameter sets, where each such algorithm parameter set is related to a particular part of the signal processing algorithm or to a particular sub-routine as explained above. These parameter sets control certain characteristics of their respective sub-routines, such as corner frequencies and slopes of filters, compression thresholds and ratios of compressor algorithms, adaptation rates and probe signal characteristics of adaptive feedback cancellation algorithms, etc.
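The grouping of algorithm parameters into per-sub-routine sets described above can be sketched with dataclasses (field names and default values are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class CompressorParams:            # parameter set of a compressor sub-routine
    threshold_db: float = -40.0
    ratio: float = 3.0

@dataclass
class FilterParams:                # parameter set of a filtering sub-routine
    corner_hz: float = 1000.0
    slope_db_per_octave: float = 12.0

@dataclass
class AlgorithmParams:
    """One parameter set per sub-routine of the signal processing algorithm."""
    compressor: CompressorParams = field(default_factory=CompressorParams)
    filtering: FilterParams = field(default_factory=FilterParams)
```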
  • Values of the algorithm parameters are preferably intermediately stored in a volatile data memory area of the processing means such as a data RAM area during execution of the signal processing algorithm. Initial values of the algorithm parameters are stored in a non-volatile memory area such as an EEPROM/Flash memory area or battery backed-up RAM memory area to allow these algorithm parameters to be retained during power supply interruptions, usually caused by the user's removal or replacement of the hearing aid's battery or manipulation of an ON/OFF switch.
  • The location detector, e.g. including a GPS receiver, may be included in the first hearing aid for determining the geographical position of the user, when the user wears the hearing aid in its intended operational position on the head, based on satellite signals in the well-known way. Hereby, the user's current position and possibly orientation can be provided, e.g. to the first environment detector, based on data from the first hearing aid.
  • The first environment detector may be included in the first hearing aid, whereby signal transmission between the environment detector and other circuitry of the hearing aid is facilitated.
  • Alternatively, the location detector, e.g. including the GPS receiver, may be included in a hand-held device that is interconnected with the hearing aid.
  • The hand-held device may be a GPS receiver, or a smart phone, e.g. an iPhone, an Android phone, a Windows phone, etc, e.g. with a GPS receiver and a calendar system, interconnected with the hearing aid.
  • The first environment detector may be included in the hand-held device. The first environment detector may benefit from the larger computing resources and power supply typically available in a hand-held device as compared with the limited computing resources and power available in a hearing aid.
  • The hand-held device may accommodate a user interface configured for user control of the hearing aid system including the first hearing aid.
  • The hand-held device may have an interface for connection with a Wide-Area-Network, such as the Internet.
  • The hand-held device may access the Wide-Area-Network through a mobile telephone network, such as GSM, IS-95, UMTS, CDMA-2000, etc.
  • Through the Wide-Area-Network, e.g. the Internet, the hand-held device may have access to electronic time management and communication tools used by the user for communication and for storage of time management and communication information relating to the user. The tools and the stored information typically reside on a remote server accessed through the Wide-Area-Network.
  • A processor of the hand-held device may be configured for storing hearing aid parameters together with GPS-data in the Cloud, i.e. on a remote server accessed through the Internet, possibly together with a hearing profile of the user, e.g. for backup of hearing aid settings at various GPS-locations, and/or for sharing of hearing aid settings at various GPS-locations with other hearing aid users.
  • Thus, the processor of the hand-held device may be configured for retrieving a hearing aid setting of another user made at the current GPS-location. The hearing aid settings may be grouped according to hearing profile similarities and/or age and/or race and/or ear size, etc, and the hearing aid setting of another user may be selected in accordance with the user's belonging to such groups.
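Retrieval of another user's setting made near the current location, restricted to a matching group, might be sketched as follows (the record layout and the local metric coordinate frame are assumptions of this sketch; a real system would use geodetic distance and a remote server query):

```python
import math

def retrieve_shared_setting(store, position, profile_group, radius_m=100.0):
    """Return the setting recorded closest to `position` by a user in the
    same hearing profile group, or None if no such record is within range.
    `store` is a hypothetical list of records:
    {'pos': (x_m, y_m), 'group': str, 'setting': dict}."""
    candidates = [r for r in store
                  if r["group"] == profile_group
                  and math.dist(position, r["pos"]) <= radius_m]
    if not candidates:
        return None
    # Prefer the record made closest to the current position.
    return min(candidates, key=lambda r: math.dist(position, r["pos"]))["setting"]
```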
  • The hearing aid may comprise a data interface for transmission of control signals from the hand-held device to other parts of the hearing aid system, including the first hearing aid.
  • The hearing aid may comprise a data interface for transmission of the output of the one or more orientation sensors to the hand-held device.
  • The data interface may be a wired interface, e.g. a USB interface, or a wireless interface, such as a Bluetooth interface, e.g. a Bluetooth Low Energy interface.
  • The hearing aid may comprise an audio interface for reception of an audio signal from the hand-held device and possibly other audio signal sources.
  • The audio interface may be a wired interface or a wireless interface. The data interface and the audio interface may be combined into a single interface, e.g. a USB interface, a Bluetooth interface, etc.
  • The hearing aid may for example have a Bluetooth Low Energy data interface for exchange of sensor and control signals between the hearing aid and the hand-held device, and a wired audio interface for exchange of audio signals between the hearing aid and the hand-held device.
  • The first sound environment detector may comprise a first feature extractor for determination of characteristic parameters of the first audio input signal.
  • The feature extractor may determine characteristic parameters of the audio input signal, such as average and maximum sound pressure levels (SPLs), signal power, spectral data and other well-known features. Spectral data may include Discrete Fourier Transform coefficients, Linear Predictive Coding parameters, cepstrum parameters or corresponding differential cepstrum parameters.
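As a hedged illustration (not taken from the patent), the feature extraction described above might be sketched in Python as follows; the function name, frame length and DFT size are assumptions chosen for the example:

```python
import numpy as np

def extract_features(frame, n_dft=64):
    """Map an audio frame onto characteristic parameters: signal power,
    average and maximum levels, and DFT magnitude coefficients."""
    frame = np.asarray(frame, dtype=float)
    eps = 1e-12                                        # avoid log of zero
    power = float(np.mean(frame ** 2))                 # signal power
    spl_avg = 10.0 * np.log10(power + eps)             # average level, dB re full scale
    spl_max = 20.0 * np.log10(np.max(np.abs(frame)) + eps)  # maximum level
    spectrum = np.abs(np.fft.rfft(frame, n=n_dft))     # DFT magnitude coefficients
    return {"power": power, "spl_avg": spl_avg,
            "spl_max": spl_max, "spectrum": spectrum}

# Example: one frame of a 1 kHz tone sampled at 16 kHz
t = np.arange(256) / 16000.0
features = extract_features(np.sin(2 * np.pi * 1000.0 * t))
```

Cepstrum or Linear Predictive Coding parameters could be added to the returned dictionary in the same way.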
  • The feature extractor may output the characteristic parameters to a first environment classifier configured for categorizing the sound environment based on the determined characteristic parameters and the geographical position.
  • The first environment classifier is configured for categorization of sound environments into a number of sound environment classes or categories, such as speech, babble speech, restaurant clatter, music, traffic noise, etc. The classification process may utilise a simple nearest neighbour search, a neural network, a Hidden Markov Model system or another system capable of pattern recognition. The output of the environmental classification can be a "hard" classification containing one single environmental class or a set of probabilities indicating the probabilities of the sound belonging to the respective classes. Other outputs may also be applicable.
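The nearest-neighbour variant of such a classifier, with either a "hard" single-class output or a set of class probabilities, could look as follows. This is an illustrative sketch; the class prototypes, the two-dimensional feature space and the softmax weighting are assumptions, not part of the patent:

```python
import numpy as np

# Hypothetical class prototypes in a 2-D feature space (e.g. power, spectral tilt)
PROTOTYPES = {
    "speech":        np.array([0.6, -0.3]),
    "babble speech": np.array([0.7,  0.1]),
    "traffic noise": np.array([0.9,  0.8]),
    "music":         np.array([0.5, -0.6]),
}

def classify(features, hard=False):
    """Nearest-neighbour categorisation of the sound environment.
    Returns one 'hard' class, or probabilities over all classes
    (softmax of negative distances to the prototypes)."""
    names = list(PROTOTYPES)
    dists = np.array([np.linalg.norm(features - PROTOTYPES[n]) for n in names])
    if hard:
        return names[int(np.argmin(dists))]
    weights = np.exp(-dists)
    return dict(zip(names, weights / weights.sum()))

label = classify(np.array([0.88, 0.75]), hard=True)   # nearest prototype wins
```

A neural network or a Hidden Markov Model system would replace the distance computation while keeping the same hard/soft output interface.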
  • The first environment classifier may output a determined sound environment category to a first parameter map configured for provision of the output for selection of the corresponding first signal processing algorithm of the first processor.
  • In this way, obtained classification results may be utilised in the hearing aid to automatically select signal processing characteristics of the hearing aid, e.g. to automatically switch to the most suitable algorithm for the sound environment in question. Such a hearing aid will be able to maintain optimum sound quality and/or speech intelligibility for the individual hearing aid user in various sound environments.
  • As an example, it may be desirable to switch between an omni-directional and a directional microphone preset program in dependence not just on the level of background noise, but also on further signal characteristics of this background noise. In situations where the user of the hearing aid communicates with another individual in the presence of the background noise, it would be beneficial to be able to identify and classify the type of background noise. Omni-directional operation could be selected in the event that the noise is traffic noise, to allow the user to clearly hear approaching traffic independent of its direction of arrival. If, on the other hand, the background noise was classified as babble-noise, the directional listening program could be selected to allow the user to hear a target speech signal with improved signal-to-noise ratio (SNR) during a conversation.
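The selection policy in this example reduces to a small mapping from noise class to microphone mode; a sketch under the assumption that the classifier emits the class labels used above (the default choice is an assumption):

```python
def select_microphone_mode(noise_class):
    """Choose the microphone preset program from the classified noise type:
    omni-directional for traffic noise (hear traffic from any direction),
    directional for babble-noise (improve SNR toward a target talker)."""
    if noise_class == "traffic noise":
        return "omni-directional"
    if noise_class == "babble speech":
        return "directional"
    return "omni-directional"   # assumed default for unlisted classes
```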
  • Applying Hidden Markov Models for analysis and classification may, for example, yield a detailed characterisation of the microphone signal. Hidden Markov Models are capable of modelling stochastic and non-stationary signals in terms of both short and long time temporal variations.
  • The environment detector may be configured for recording the geographical position determined by the location detector together with the determined type of sound environment at the geographical position. Recording may be performed at regular time intervals, and/or with a certain geographical distance between recordings, and/or triggered by certain events, e.g. a shift in type of sound environment, a change in signal processing, such as a change in signal processing programme, a change in signal processing parameters, etc.
  • When the hearing aid system is located within a threshold distance from a geographical position of a previous recording of a determined type of sound environment and/or within an area of previously recorded geographical positions with identical recordings of the type of sound environment, the environment detector may be configured for increasing the probability that the current sound environment is of the same type as already recorded at or proximate the current geographical position, or for determining that the current sound environment is of the already recorded type of sound environment.
  • The threshold distance may be predetermined, e.g. reflecting the uncertainty of the determination of geographical position of the location detector, e.g. less than or equal to the uncertainty of the location detector, or less than or equal to an average distance between recordings of geographical position and type of sound environment, or less than a characteristic size of significant features at the current geographical position such as a sports arena, a central station, a city hall, a theatre, etc. The threshold distance may also be adapted to the current environment, e.g. resulting in relatively small threshold distances in areas, e.g. urban areas, with short distances between recordings of different types of sound environments, and resulting in relatively large threshold distances in areas, e.g. open ranges, with large distances between recordings of different types of sound environments.
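A minimal sketch of the threshold test against previous recordings, assuming GPS fixes as latitude/longitude pairs and a fixed (non-adaptive) threshold; the 50 m value and the data layout are example assumptions:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def prior_environment(position, recordings, threshold_m=50.0):
    """Return the recorded sound-environment type nearest to `position`
    if it lies within the threshold distance, else None.
    `recordings` is a list of (lat, lon, environment_type) tuples."""
    best, best_d = None, threshold_m
    for lat, lon, env_type in recordings:
        d = haversine_m(position[0], position[1], lat, lon)
        if d <= best_d:
            best, best_d = env_type, d
    return best
```

An adaptive variant would scale `threshold_m` with the local density of recordings, as described above.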
  • A user interface of the hearing aid system may be configured to allocate certain types of sound environment to certain geographical areas.
  • In absence of useful GPS signals, the location detector may determine the geographical position of the hearing aid system based on the postal address of a WIFI network the hearing aid system may be connected to, or by triangulation based on signals possibly received from various GSM-transmitters as is well-known in the art of mobile phones. Further, the location detector may be configured for accessing a calendar system of the user to obtain information on the expected whereabouts of the user, e.g. meeting room, office, canteen, restaurant, home, etc., and to include this information in the determination of the geographical position. Thus, information from the calendar system of the user may substitute or supplement information on the geographical position determined otherwise, e.g. by a GPS receiver.
  • For example, the environment detector may automatically switch the hearing aid(s) of the hearing aid system to flight mode, i.e. radio(s) of the hearing aid(s) are turned off, when the user is in an airplane according to the location detector.
  • Also, when the user is inside a building, e.g. a high rise building, GPS signals may be absent or so weak that the geographical position cannot be determined by a GPS receiver. Information from the calendar system on the whereabouts of the user may then be used to provide information on the geographical position, or information from the calendar system may supplement information on the geographical position, e.g. an indication of a specific meeting room may reveal on which floor of a high rise building the hearing aid system is located. Information on height is typically not available from a GPS receiver.
  • The location detector may automatically use information from the calendar system, when the geographical position cannot be determined otherwise, e.g. when the GPS-receiver is unable to provide the geographical position. In the event that no information on geographical position is available to the location detector, e.g. from the GPS receiver and the calendar system, the environment detector may determine the type of sound environment in a conventional way based on the received sound signal; or, the hearing aid may be set to operate in a mode selected by the user, e.g. previously during a fitting session, or when the situation occurs.
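The fall-back order across position sources described above can be sketched as follows; an illustration only, with the source names and return convention as assumptions:

```python
def determine_position(gps_fix, network_fix, calendar_entry):
    """Fall-back strategy: prefer a GPS fix, then a network-based position
    (WIFI address or GSM triangulation), then the calendar system.
    Returns (source_name, fix), or None when no source is available, in
    which case the sound environment is classified from the received
    sound signal alone (or a user-selected mode is applied)."""
    for source, fix in (("gps", gps_fix),
                        ("network", network_fix),
                        ("calendar", calendar_entry)):
        if fix is not None:
            return source, fix
    return None
```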
  • The new hearing aid system may be a binaural hearing aid system with two hearing aids, one for the right ear and one for the left ear of the user.
  • Thus, the new hearing aid system may comprise a second hearing aid with
    a second microphone for provision of a second audio input signal in response to sound signals received at the second microphone in a sound environment,
    a second processor that is configured to process the second audio input signal in accordance with a second signal processing algorithm to generate a second hearing loss compensated audio signal,
    a second output transducer for conversion of the second hearing loss compensated audio signal to a second acoustic output signal.
  • The circuitry of the second hearing aid is preferably identical to the circuitry of the first hearing aid, apart from the fact that the second hearing aid, typically, is adjusted to compensate a hearing loss that is different from the hearing loss compensated by the first hearing aid, since, typically, binaural hearing loss differs for the two ears.
  • The first sound environment detector may be configured for determination of the type of sound environment surrounding the user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system.
  • The first sound environment detector may be configured for provision of a second output for selection of a second signal processing algorithm of the second processor.
  • Alternatively, the second hearing aid may comprise a second sound environment detector similar to the first sound environment detector and configured for determination of the type of sound environment surrounding a user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system, and for provision of a second output for selection of the second signal processing algorithm of the second processor.
  • In binaural hearing aid systems, it is important that the signal processing algorithms of the first and second signal processors are selected in a coordinated way. Since sound environment characteristics may differ significantly at the two ears of a user, it will often occur that independent sound environment determination at the two ears of a user differs, and this may lead to undesired different signal processing of sounds in the hearing aids. Thus, preferably the signal processing algorithms of the first and second processors are selected based on the same signals, such as sound signals received at the hand-held device, or both sound signals received at the left ear and sound signals received at the right ear, or a combination of sound signals received at the hand-held device and sound signals received at the left ear and sound signals received at the right ear, etc.
  • Like the first sound environment detector, the second sound environment detector may comprise a second feature extractor for determination of characteristic parameters of the second audio input signal.
  • The second feature extractor may output the characteristic parameters to a second environment classifier for categorizing the sound environment based on the determined characteristic parameters and the geographical position.
  • The second environment classifier may output a sound environment category to a second parameter map configured for provision of the output for selection of the second signal processing algorithm of the second processor.
  • A hearing aid system includes: a first hearing aid with a first microphone for provision of a first audio input signal in response to sound signals received at the first microphone in a sound environment, a first processor that is configured to process the first audio input signal in accordance with a first signal processing algorithm to generate a first hearing loss compensated audio signal, and a first output transducer for conversion of the first hearing loss compensated audio signal to a first acoustic output signal; a first sound environment detector configured for determining a type of sound environment surrounding a user of the hearing aid system, and for provision of a first output for selection of the first signal processing algorithm based on the determined type of sound environment; and a location detector configured for determining a geographical position of the hearing aid system; wherein the first sound environment detector is configured for determining the type of sound environment surrounding the user of the hearing aid system based on the first audio input signal and the geographical position of the hearing aid system.
  • Optionally, the location detector includes a GPS receiver.
  • Optionally, the first sound environment detector is configured for recording the geographical position determined by the location detector together with the type of sound environment at the geographical position.
  • Optionally, the first sound environment detector is configured for determining the type of sound environment by considering a probability of occurrence for a previously recorded type of sound environment that is within a distance threshold from the determined geographical position.
  • Optionally, the hearing aid system further includes a user interface configured to allocate certain sound environment categories to certain respective geographical areas.
  • Optionally, the location detector is configured for accessing a calendar system of the user to obtain information regarding a location of the user, and to determine the geographical position of the hearing aid system based on the information regarding the location of the user.
  • Optionally, the location detector is configured for automatically accessing the calendar system of the user to obtain the information regarding the location of the user, and to determine the geographical position of the hearing aid system based on the information regarding the location of the user, when the location detector is otherwise unable to determine the geographical position of the hearing aid system.
  • Optionally, the location detector is configured for obtaining a height of the geographical position from the calendar system.
  • Optionally, the first sound environment detector is configured for automatically switching the first hearing aid of the hearing aid system to a flight mode, when the user is in an airplane according to the location detector.
  • Optionally, the first hearing aid comprises at least one orientation sensor configured for providing information regarding an orientation of a head of the user when the user wears the first hearing aid in its intended operating position.
  • Optionally, the first hearing aid is configured for selection of the first signal processing algorithm based on the information regarding the orientation of the head of the user.
  • Optionally, the hearing aid system further includes a hand-held device communicatively coupled with the first hearing aid, the hand-held device accommodating the location detector.
  • Optionally, the hand-held device also accommodates the first sound environment detector.
  • Optionally, the hand-held device comprises a user interface configured for controlling the first hearing aid.
  • Optionally, the first hearing aid accommodates the first sound environment detector.
  • Optionally, the first sound environment detector comprises: a first feature extractor for determining characteristic parameters of the first audio input signal, a first environment classifier for categorizing the sound environment based on the determined characteristic parameters and the geographical position, and a first parameter map for provision of the first output for selection of the first signal processing algorithm.
  • Optionally, the hearing aid system further includes a second hearing aid with a second microphone for provision of a second audio input signal in response to sound signals received at the second microphone, a second processor that is configured to process the second audio input signal in accordance with a second signal processing algorithm to generate a second hearing loss compensated audio signal, a second output transducer for conversion of the second hearing loss compensated audio signal to a second acoustic output signal, wherein the first sound environment detector is configured for determining the type of sound environment surrounding the user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system.
  • Optionally, the first sound environment detector is configured for provision of a second output for selection of the second signal processing algorithm.
  • Optionally, the second hearing aid comprises: a second sound environment detector configured for determining a type of sound environment surrounding the user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system, and provision of a second output for selection of the second signal processing algorithm based on the type of sound environment determined by the second sound environment detector.
  • Other and further aspects and features will be evident from reading the following detailed description of the embodiments.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The drawings illustrate the design and utility of embodiments, in which similar elements are referred to by common reference numerals. These drawings are not necessarily drawn to scale. In order to better appreciate how the above-recited and other advantages and objects are obtained, a more particular description of the embodiments will be rendered, which are illustrated in the accompanying drawings. These drawings depict only typical embodiments and are not therefore to be considered limiting of their scope.
  • Fig. 1
    shows a new hearing aid system with a single hearing aid with an orientation sensor and a hand-held device with a GPS receiver and a sound environment detector,
    Fig. 2
    shows a new hearing aid system with a single hearing aid with an orientation sensor and a sound environment detector and a hand-held device with a GPS receiver,
    Fig. 3
    shows a new hearing aid system with two hearing aids with orientation sensors and sound environment detectors and a hand-held device with a GPS receiver, and
    Fig. 4
    shows a new hearing aid system with two hearing aids with orientation sensors and a hand-held device with a sound environment detector and a GPS receiver.
    DETAILED DESCRIPTION
  • Various exemplary embodiments are described hereinafter with reference to the figures. It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated, or not so explicitly described.
  • The new hearing aid system will now be described more fully hereinafter with reference to the accompanying drawings, in which various types of the new hearing aid system are shown. The new hearing aid system may be embodied in different forms not shown in the accompanying drawings and should not be construed as limited to the embodiments and examples set forth herein.
  • Similar reference numerals refer to similar elements in the drawings.
  • Fig. 1 schematically illustrates a new hearing aid system 10 with a first hearing aid 12 with a sound environment detector 14.
  • The first hearing aid 12 may be of any type configured to be worn at the head, such as a BTE, a RIE, an ITE, an ITC, a CIC, etc., hearing aid.
  • The first hearing aid 12 comprises a first front microphone 16 and first rear microphone 18 connected to respective A/D converters (not shown) for provision of respective digital input signals 20, 22 in response to sound signals received at the microphones 16, 18 in a sound environment surrounding the user of the hearing aid system 10. The digital input signals 20, 22 are input to a hearing loss processor 24 that is configured to process the digital input signals 20, 22 in accordance with a signal processing algorithm to generate a hearing loss compensated output signal 26. The hearing loss compensated output signal 26 is routed to a D/A converter (not shown) and an output transducer 28 for conversion of the hearing loss compensated output signal 26 to an acoustic output signal.
  • The new hearing aid system 10 further comprises a hand-held device 30, e.g. a smart phone, accommodating the sound environment detector 14 for determination of the sound environment surrounding the user of the hearing aid system 10. The determination is based on a sound signal picked up by a microphone 32 in the hand-held device. Based on the determination, the sound environment detector 14 provides an output 34 to the hearing aid processor 24 for selection of the signal processing algorithm appropriate for the determined sound environment.
  • Thus, the hearing aid processor 24 is automatically switched to the most suitable algorithm for the determined environment whereby optimum sound quality and/or speech intelligibility is maintained in various sound environments. The signal processing algorithms of the processor 24 may perform various forms of noise reduction and dynamic range compression as well as a range of other signal processing tasks.
  • The first environment detector 14 benefits from the larger computing resources and power supply typically available in the hand-held device 30.
  • The sound environment detector 14 comprises a feature extractor 36 for determination of characteristic parameters of the received sound signals. The feature extractor 36 maps the signal from the microphone 32 onto sound features, i.e. the characteristic parameters. These features can be signal power, spectral data and other well-known features.
  • The sound environment detector 14 further comprises an environment classifier 38 for categorizing the sound environment based on the determined characteristic parameters output by the feature extractor 36. The environment classifier 38 categorizes the sounds into a number of environmental classes, such as speech, babble speech, restaurant clatter, music, traffic noise, etc. The classification process may utilise a simple nearest neighbour search, a neural network, a Hidden Markov Model system or another system capable of pattern recognition. The output of the environmental classification can be a "hard" classification containing one single environmental class or a set of probabilities indicating the probabilities of the sound belonging to the respective classes. Other outputs may also be applicable.
  • The sound environment detector 14 further comprises a parameter map 40 for the provision of the output 34 for selection of the signal processing algorithms. The parameter map 40 maps the output of the environment classifier 38 to a set of parameters for the hearing aid processor 24. Examples of such parameters are: amount of noise reduction, amount of gain and amount of HF gain. Other parameters may be included.
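Such a parameter map could, as one hedged sketch, weight per-class parameter presets by the classifier's probability output; the preset values, class names and parameter names below are assumptions for illustration:

```python
# Hypothetical per-class processor presets (values are illustrative)
PRESETS = {
    "speech":        {"noise_reduction_db": 3.0, "gain_db": 6.0, "hf_gain_db": 4.0},
    "traffic noise": {"noise_reduction_db": 9.0, "gain_db": 2.0, "hf_gain_db": 0.0},
}

def parameter_map(class_probs):
    """Map class probabilities onto processor parameters as a
    probability-weighted average of the per-class presets."""
    params = {name: 0.0 for name in next(iter(PRESETS.values()))}
    for cls, p in class_probs.items():
        for name, value in PRESETS[cls].items():
            params[name] += p * value
    return params

settings = parameter_map({"speech": 0.75, "traffic noise": 0.25})
```

A "hard" classification is the special case where one class has probability 1, which simply selects that class's preset.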
  • The hand-held device 30 includes a location detector with a GPS receiver 42 configured for determining the geographical position of the hearing aid system 10. The illustrated hand-held device 30 is a smart phone also having a mobile interface 48 comprising a GSM-interface for interconnection with a mobile phone network and a WIFI interface, as is well-known in the art of mobile phones. In absence of useful GPS signals, the position of the illustrated hearing aid system 10 may be determined as the address of the WIFI network or by triangulation based on signals received from various GSM-transmitters as is well-known in the art of mobile phones.
  • The illustrated environment detector 14 is configured for recording the determined geographical positions together with the determined types of sound environment at the respective geographical positions. Recording may be performed at regular time intervals, and/or with a certain geographical distance between recordings, and/or triggered by certain events, e.g. a shift in type of sound environment, a change in signal processing, such as a change in signal processing programme, a change in signal processing parameters, etc.
  • When the hearing aid system 10 is located within an area of geographical positions with recordings of the same type of sound environment, the environment detector is configured for increasing the probability that the current sound environment is of the same type of sound environment, or for determining that the current sound environment is of the same type of sound environment.
  • A user interface (not shown) of the hearing aid system 10 may be configured to allocate certain types of sound environment to certain geographical areas.
  • The illustrated sound environment detector 14 is also configured for accessing a calendar system of the user, e.g. through the mobile interface 48, to obtain information on the whereabouts of the user, e.g. meeting room, office, canteen, restaurant, home, etc, and to include this information in the determination of the type of sound environment. Information from the calendar system of the user may substitute or supplement information on the geographical position determined by the GPS receiver.
  • For example, the environment detector 14 may automatically switch the hearing aid(s) of the hearing aid system 10 to flight mode, i.e. radio(s) of the hearing aid(s) are turned off, when the user is in an airplane as indicated in the calendar system of the user.
  • Also, when the user is inside a building, e.g. a high rise building, GPS signals may be absent or so weak that the geographical position cannot be determined by the GPS receiver. Information from the calendar system on the whereabouts of the user may then be used to provide information on the geographical position, or information from the calendar system may supplement information on the geographical position, e.g. indication of a specific meeting room may provide information on the floor in a high rise building. Information on height is typically not available from a GPS receiver.
  • The environment detector 14 may automatically use information from the calendar system, when the GPS-receiver is unable to provide the geographical position. In the event that no information on geographical position is available from the GPS receiver and calendar system, the environment detector may determine the type of sound environment in a conventional way based on the received sound signal; or, the hearing aid may be set to operate in a mode selected by the user, e.g. previously during a fitting session, or when the situation occurs.
  • The hearing aid 12 comprises one or more orientation sensors 44, such as gyroscopes, e.g. MEMS gyros, tilt sensors, roll ball switches, etc, configured for outputting signals for determination of orientation of the head of a user wearing the hearing aid, e.g. one or more of head yaw, head pitch, head roll, or combinations hereof, e.g. tilt, i.e. the angular deviation from the head's normal vertical position, when the user is standing up or sitting down. E.g. in a resting position, the tilt of the head of a person standing up or sitting down is 0°, and in a resting position, the tilt of the head of a person lying down is 90°.
  • The first processor 24 is configured for selection of the first signal processing algorithm of the processor 24 based on user head orientation as determined based on the output signals 46 of the one or more orientation sensors 44 and the output control signal 34 of the first sound environment detector 14. For example, if the user changes position from sitting up to lying down in order to take a nap, the environment detector 14 may cause the signal processor 24 to switch program accordingly, e.g. the first hearing aid 12 may be automatically muted.
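A hedged sketch of this orientation-dependent program selection, assuming a 3-axis accelerometer-style sensor output with z pointing "up" when the user sits or stands (the axis convention, the 60° threshold and the program names are assumptions):

```python
import math

def tilt_degrees(ax, ay, az):
    """Angular deviation of the head from its normal vertical position,
    derived from a gravity reading (0 deg upright, 90 deg lying down)."""
    horizontal = math.hypot(ax, ay)
    return math.degrees(math.atan2(horizontal, az))

def select_program(current_program, ax, ay, az, lying_threshold_deg=60.0):
    """Switch to a muted program when the orientation sensors indicate
    the user is lying down, e.g. to take a nap; otherwise keep the
    program chosen from the sound environment."""
    if tilt_degrees(ax, ay, az) >= lying_threshold_deg:
        return "muted"
    return current_program
```

In practice the tilt would be low-pass filtered or required to persist for some seconds before switching, to avoid muting on brief head movements.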
  • The new hearing system 10 shown in Fig. 2 is similar to the new hearing aid system of Fig. 1 and operates in the same way, except for the fact that the sound environment detector 14 has been moved from the hand-held device 30 in Fig. 1 to the first hearing aid 12 of Fig. 2. In this way, the microphone output signals 20, 22 can be connected directly to the sound environment detector 14 so that the type of sound environment can be determined based on signals received by the microphones in the hearing aid without increasing data transmission requirements.
  • The new hearing aid system 10 shown in Fig. 3 is a binaural hearing aid system with two hearing aids, a first hearing aid 12A for the right ear and a second hearing aid 12B for the left ear of the user, and a hand-held device 30 comprising the GPS receiver 42 and the mobile interface 48.
  • Each of the illustrated first hearing aid 12A and second hearing aid 12B is similar to the hearing aid shown in Fig. 2 and operates in a similar way, except for the fact that the respective sound environment detectors 14A, 14B co-operate to provide co-ordinated selection of signal processing algorithms in the two hearing aids 12A, 12B as further explained below.
  • Each of the first and second hearing aids 12A, 12B of the binaural hearing aid system 10 comprises a binaural sound environment detector 14A, 14B for determination of the sound environment surrounding a user of the binaural hearing aid system 10. The determination is based on the output signals of the microphones 20A, 22A, 20B, 22B. Based on the determination, the binaural sound environment detector 14A, 14B provides outputs 34A, 34B to the respective hearing aid processors 24A, 24B for selection of the signal processing algorithm appropriate for the determined sound environment. Thus, the binaural sound environment detectors 14A, 14B determine the sound environment based on signals from both hearing aids, i.e. binaurally, whereby hearing aid processors 24A, 24B are automatically switched in co-ordination to the most suitable algorithm for the determined sound environment whereby optimum sound quality and/or speech intelligibility are maintained in various sound environments by the binaural hearing aid system 10.
  • The binaural sound environment detectors 14A, 14B illustrated in Fig. 3 are both similar to the sound environment detector 14 shown in Fig. 2 apart from the fact that the first environment detector 14 only receives inputs from one hearing aid 12 while each of the binaural sound environment detectors 14A, 14B receives inputs from both hearing aids 12A, 12B. Thus, in Fig. 3, signals are transmitted between the hearing aids 12A, 12B so that the algorithms executed by the signal processors 24A, 24B are selected in coordination.
  • In Fig. 3, the output of the environment classifier 14A of the first hearing aid 12A is transmitted to the second hearing aid 12B, and the output of the environment classifier 14B of the second hearing aid 12B is transmitted to the first hearing aid 12A. The parameter maps 40A, 40B of the first and second hearing aids 12A, 12B then operate based on the same two inputs to produce the control signals 34A, 34B for selection of the processor algorithms, and since the parameter maps 40A, 40B receive identical inputs, algorithm selections in the two hearing aids 12A, 12B are co-ordinated.
  • The transmission data rate is low, since only a set of probabilities or logic values for the environment classes has to be transmitted between the hearing aids 12A, 12B. Rather high latency can be accepted. By applying time constants to the variables that will change according to the output of the parameter mapping, it is possible to smooth out differences that may be caused by latency. As already mentioned, it is important that signal processing in the two hearing instruments is coordinated. However, if transition periods of a few seconds are allowed, the system can operate with only 3-4 transmissions per second. Hereby, power consumption is kept low.
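The smoothing with time constants mentioned above can be sketched as first-order filtering of each mapped parameter; the 2 s time constant and the 4 Hz update rate are example values matching the 3-4 transmissions per second mentioned, not figures from the patent:

```python
import math

class SmoothedParameter:
    """First-order smoothing of a mapped processor parameter, so that
    infrequent classification updates and inter-instrument latency do
    not cause audible jumps."""

    def __init__(self, initial, time_constant_s=2.0, update_rate_hz=4.0):
        self.value = initial
        # per-update smoothing coefficient for the given time constant
        self.alpha = 1.0 - math.exp(-1.0 / (time_constant_s * update_rate_hz))

    def update(self, target):
        """Move the parameter a fraction of the way toward its new target."""
        self.value += self.alpha * (target - self.value)
        return self.value
```

With these values a parameter traverses most of a step change over a transition period of a few seconds, as described.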
  • The sound environment detectors 14A, 14B incorporate determined positions provided by the hand-held unit 30 of the new hearing aid system 10 in the same way as disclosed above with reference to Figs. 1 and 2.
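One way position can be incorporated into the environment determination, in the spirit of the distance-threshold idea appearing later in the claims, is to weight the acoustic class probabilities with a prior built from environment types previously recorded near the current position. The sketch below is an assumption-laden illustration: the haversine distance, the 100 m threshold, the Laplace-smoothed counting, and the multiplicative combination are all choices made for this example, not methods stated in the patent.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 coordinates."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def location_prior(records, lat, lon, classes, threshold_m=100.0):
    """Relative frequency of each class among records within the threshold.

    records: iterable of (lat, lon, class_label) previously stored
             together with the geographical position (illustrative format).
    """
    counts = {c: 1.0 for c in classes}  # Laplace smoothing avoids zeros
    for rec_lat, rec_lon, label in records:
        if haversine_m(lat, lon, rec_lat, rec_lon) <= threshold_m:
            counts[label] += 1.0
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def combine_with_prior(acoustic_probs, prior):
    """Posterior proportional to acoustic likelihood times location prior."""
    unnorm = {c: acoustic_probs[c] * prior[c] for c in acoustic_probs}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}
```

With this weighting, an acoustically ambiguous signal is resolved toward the environment type that has most often been recorded at the user's present position.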
  • In the new binaural hearing aid system 10 shown in Fig. 4, co-ordinated signal processing in the two hearing aids 12A, 12B is obtained by provision of a single sound environment detector 14 similar to the sound environment detector shown in Fig. 1 and operating in a similar way apart from the fact that the sound environment detector 14 provides two control outputs 34A, 34B, one of which 34A is connected to the first hearing aid 12A, and the other of which 34B is connected to the second hearing aid 12B. The illustrated sound environment detector 14 is accommodated in the hand-held device 30.
  • Each of the hearing aids 12A, 12B is similar to the hearing aid 12 shown in Fig. 1 and operates in the same way.
  • Although particular embodiments have been shown and described, it will be understood that they are not intended to limit the claimed inventions, and it will be obvious to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the claimed inventions. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. The claimed inventions are intended to cover alternatives, modifications, and equivalents.

Claims (15)

  1. A hearing aid system comprising:
    a first hearing aid with
    a first microphone for provision of a first audio input signal in response to sound signals received at the first microphone in a sound environment,
    a first processor that is configured to process the first audio input signal in accordance with a first signal processing algorithm to generate a first hearing loss compensated audio signal, and
    a first output transducer for conversion of the first hearing loss compensated audio signal to a first acoustic output signal;
    a first sound environment detector configured for
    determining a type of sound environment surrounding a user of the hearing aid system, and for
    provision of a first output for selection of the first signal processing algorithm based on the determined type of sound environment; and
    a location detector configured for determining a geographical position of the hearing aid system; and wherein
    the first sound environment detector is configured for
    determining the type of sound environment surrounding the user of the hearing aid system based on the first audio input signal and the geographical position of the hearing aid system.
  2. The hearing aid system according to claim 1, wherein the location detector includes a GPS receiver.
  3. The hearing aid system according to claim 1 or 2, wherein the first sound environment detector is configured for recording the geographical position determined by the location detector together with the type of sound environment at the geographical position.
  4. The hearing aid system according to claim 3, wherein the first sound environment detector is configured for determining the type of sound environment by considering a probability of occurrence for a previously recorded type of sound environment that is within a distance threshold from the determined geographical position.
  5. The hearing aid system according to any of the previous claims, further comprising a user interface configured to allocate certain sound environment categories to certain respective geographical areas.
  6. The hearing aid system according to any of the previous claims, wherein the location detector is configured for automatically accessing a calendar system of the user to obtain information regarding a location of the user, and to determine the geographical position of the hearing aid system based on the information regarding the location of the user, when the location detector is otherwise unable to determine the geographical position of the hearing aid system.
  7. The hearing aid system according to any of the previous claims, wherein the first sound environment detector is configured for automatically switching the first hearing aid of the hearing aid system to a flight mode, when the user is in an airplane according to the location detector.
  8. The hearing aid system according to any of the previous claims, wherein the first hearing aid comprises at least one orientation sensor configured for providing information regarding an orientation of a head of the user when the user wears the first hearing aid in its intended operating position, and wherein the first hearing aid is configured for selection of the first signal processing algorithm based on the information regarding the orientation of the head of the user.
  9. The hearing aid system according to any of the previous claims, further comprising a hand-held device communicatively coupled with the first hearing aid, the hand-held device accommodating the location detector.
  10. The hearing aid system according to claim 9, wherein the hand-held device also accommodates the first sound environment detector.
  11. The hearing aid system according to claim 9 or 10, wherein the hand-held device comprises a user interface configured for controlling the first hearing aid.
  12. The hearing aid system according to any of the previous claims, wherein the first hearing aid accommodates the first sound environment detector.
  13. The hearing aid system according to any of the previous claims, further comprising
    a second hearing aid with
    a second microphone for provision of a second audio input signal in response to sound signals received at the second microphone,
    a second processor that is configured to process the second audio input signal in accordance with a second signal processing algorithm to generate a second hearing loss compensated audio signal,
    a second output transducer for conversion of the second hearing loss compensated audio signal to a second acoustic output signal,
    wherein the first sound environment detector is configured for determining the type of sound environment surrounding the user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system.
  14. The hearing aid system according to claim 13, wherein the first sound environment detector is configured for provision of a second output for selection of the second signal processing algorithm.
  15. The hearing aid system according to claim 13 or 14, wherein the second hearing aid comprises:
    a second sound environment detector configured for
    determining a type of sound environment surrounding the user of the hearing aid system based on the first and second audio input signals and the geographical position of the hearing aid system, and
    provision of a second output for selection of the second signal processing algorithm based on the type of sound environment determined by the second sound environment detector.
EP13173995.5A 2013-06-27 2013-06-27 A hearing aid operating in dependence of position Active EP2819436B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
DK13173995.5T DK2819436T3 (en) 2013-06-27 2013-06-27 A hearing aid that works depending on the position
EP13173995.5A EP2819436B1 (en) 2013-06-27 2013-06-27 A hearing aid operating in dependence of position
US13/932,815 US9094769B2 (en) 2013-06-27 2013-07-01 Hearing aid operating in dependence of position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP13173995.5A EP2819436B1 (en) 2013-06-27 2013-06-27 A hearing aid operating in dependence of position

Publications (2)

Publication Number Publication Date
EP2819436A1 true EP2819436A1 (en) 2014-12-31
EP2819436B1 EP2819436B1 (en) 2017-08-23

Family

ID=48672507

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13173995.5A Active EP2819436B1 (en) 2013-06-27 2013-06-27 A hearing aid operating in dependence of position

Country Status (2)

Country Link
EP (1) EP2819436B1 (en)
DK (1) DK2819436T3 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3065422A1 (en) * 2015-03-04 2016-09-07 William S. Woods Techniques for increasing processing capability in hear aids
EP3280157A1 (en) * 2016-08-04 2018-02-07 GN Hearing A/S Hearing device for receiving location information from wireless network
WO2019029810A1 (en) * 2017-08-10 2019-02-14 Sonova Ag Activating a mode of a hearing device
CN116634344A (en) * 2023-07-24 2023-08-22 云天智能信息(深圳)有限公司 Intelligent remote monitoring method, system and storage medium based on hearing aid equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001076321A1 (en) 2000-04-04 2001-10-11 Gn Resound A/S A hearing prosthesis with automatic classification of the listening environment
US20070140512A1 (en) 2005-12-20 2007-06-21 Siemens Audiologische Technik Gmbh Signal processing for hearing devices having a number of compression algorithms
US20110293123A1 (en) * 2010-05-25 2011-12-01 Audiotoniq, Inc. Data Storage System, Hearing Aid, and Method of Selectively Applying Sound Filters
WO2012066149A1 (en) * 2010-11-19 2012-05-24 Jacoti Bvba Personal communication device with hearing support and method for providing the same


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3065422A1 (en) * 2015-03-04 2016-09-07 William S. Woods Techniques for increasing processing capability in hear aids
US10129661B2 (en) 2015-03-04 2018-11-13 Starkey Laboratories, Inc. Techniques for increasing processing capability in hear aids
EP3280157A1 (en) * 2016-08-04 2018-02-07 GN Hearing A/S Hearing device for receiving location information from wireless network
US10390151B2 (en) 2016-08-04 2019-08-20 Gn Hearing A/S Hearing device for receiving location information from wireless network
US11070926B2 (en) 2016-08-04 2021-07-20 Gn Hearing A/S Hearing device for receiving location information from wireless network
WO2019029810A1 (en) * 2017-08-10 2019-02-14 Sonova Ag Activating a mode of a hearing device
US11206497B2 (en) 2017-08-10 2021-12-21 Sonova Ag Activating a mode of a hearing device
US11622205B2 (en) 2017-08-10 2023-04-04 Sonova Ag Activating a mode of a hearing device
CN116634344A (en) * 2023-07-24 2023-08-22 云天智能信息(深圳)有限公司 Intelligent remote monitoring method, system and storage medium based on hearing aid equipment
CN116634344B (en) * 2023-07-24 2023-10-27 云天智能信息(深圳)有限公司 Intelligent remote monitoring method, system and storage medium based on hearing aid equipment

Also Published As

Publication number Publication date
EP2819436B1 (en) 2017-08-23
DK2819436T3 (en) 2017-11-27

Similar Documents

Publication Publication Date Title
US9094769B2 (en) Hearing aid operating in dependence of position
US9648430B2 (en) Learning hearing aid
EP2884766B1 (en) A location learning hearing aid
JP6190351B2 (en) Learning type hearing aid
US10154357B2 (en) Performance based in situ optimization of hearing aids
US11277696B2 (en) Automated scanning for hearing aid parameters
US11501772B2 (en) Context aware hearing optimization engine
EP3107314A1 (en) Performance based in situ optimization of hearing aids
JP5252738B2 (en) Environmentally adaptive hearing aid
EP2819436B1 (en) A hearing aid operating in dependence of position
US20230292066A1 (en) Methods for controlling a hearing device based on environment parameter, related accessory devices and related hearing systems
US10257621B2 (en) Method of operating a hearing system, and hearing system
CN108322878B (en) Method for operating a hearing aid and hearing aid
DK201370356A1 (en) A hearing aid operating in dependence of position
CN115002635A (en) Sound self-adaptive adjusting method and system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20130627

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

R17P Request for examination filed (corrected)

Effective date: 20150625

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

17Q First examination report despatched

Effective date: 20161108

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170324

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 922507

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170915

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602013025352

Country of ref document: DE

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20171121

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20170823

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 922507

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170823

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171123

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

RAP2 Party data changed (patent owner data changed or rights of a patent transferred)

Owner name: GN HEARING A/S

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171124

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171223

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171123

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602013025352

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20180524

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180627

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180627

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180627

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20130627

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180702

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

Ref country code: MK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170823

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170823

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20180702

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20220617

Year of fee payment: 10

Ref country code: DK

Payment date: 20220617

Year of fee payment: 10

Ref country code: DE

Payment date: 20220621

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20220614

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: CH

Payment date: 20220622

Year of fee payment: 10

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602013025352

Country of ref document: DE

REG Reference to a national code

Ref country code: DK

Ref legal event code: EBP

Effective date: 20230630

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20230627

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20240103

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230627

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230630