EP4068805A1 - Method, computer program, and computer-readable medium for configuring a hearing device, controller for operating a hearing device, and hearing system
- Publication number
- EP4068805A1 (application number EP21166351.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sound
- user
- hearing device
- program
- hearing
- Prior art date
- Legal status
- Withdrawn
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/558—Remote control, e.g. of amplification, frequency
Definitions
- the at least one classification value is determined by characterizing the user's speaking activity and/or the user's acoustic environment.
- the at least one classification value is determined by identifying a predetermined state characterizing the user's speaking activity and/or the user's acoustic environment by evaluating the audio signal, and by determining the at least one classification value depending on the identified state.
- the one or more predetermined states are one or more of the following: Speech In Quiet; Speech In Noise; Being In Car; Reverberant Speech; Noise; Music; Quiet; Speech In Loud Noise.
- two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment may be determined by evaluating the audio signal and/or the sensor signal; the second sound program is then adapted to the corresponding determined classification values.
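The predetermined states and their mapping to classification values might be sketched as follows; the enumeration names come from the states listed above, while the parameter names and numeric values are purely illustrative assumptions, not taken from the patent:

```python
from enum import Enum, auto

class AcousticState(Enum):
    """The predetermined states characterizing speaking activity and acoustic environment."""
    SPEECH_IN_QUIET = auto()
    SPEECH_IN_NOISE = auto()
    BEING_IN_CAR = auto()
    REVERBERANT_SPEECH = auto()
    NOISE = auto()
    MUSIC = auto()
    QUIET = auto()
    SPEECH_IN_LOUD_NOISE = auto()

def classification_values(state: AcousticState) -> dict:
    """Map an identified state to classification values (all figures illustrative)."""
    table = {
        AcousticState.SPEECH_IN_QUIET: {"noise_canceller": 0.0, "beamformer_strength": 0.2},
        AcousticState.SPEECH_IN_NOISE: {"noise_canceller": 0.6, "beamformer_strength": 0.7},
        AcousticState.SPEECH_IN_LOUD_NOISE: {"noise_canceller": 0.9, "beamformer_strength": 1.0},
        AcousticState.MUSIC: {"noise_canceller": 0.1, "beamformer_strength": 0.0},
    }
    # Fall back to a neutral parameter set for states not listed above.
    return table.get(state, {"noise_canceller": 0.3, "beamformer_strength": 0.3})
```

A real classifier would of course derive the state from the audio and sensor signals; this sketch only shows how a state could index into parameter sets.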
- the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component and/or the sensor signal from the at least one further sensor received over a predetermined time interval.
- the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component and/or the sensor signal from the at least one further sensor received over two identical predetermined time intervals separated by a predetermined pause interval.
- the computer program may be executed in a processor of a hearing device, which may, for example, be worn by the user behind the ear.
- the computer-readable medium may be a memory of this hearing device.
- the computer program also may be executed by a processor of a connected user device, such as a smartphone or any other type of mobile device, which may be a part of the hearing system, and the computer-readable medium may be a memory of the connected user device. It also may be that some steps of the method are performed by the hearing device and other steps of the method are performed by the connected user device.
- the computer-readable medium may be a floppy disk, a hard disk, a USB (Universal Serial Bus) storage device, a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), or a flash memory.
- the computer-readable medium may also be a data communication network, e.g. the Internet, which allows downloading a program code.
- the computer-readable medium may be a non-transitory or transitory medium.
- a further aspect of the invention relates to a controller for operating the hearing device, the controller comprising a processor, which is adapted to carry out the steps of the above method.
- a further aspect of the invention relates to a hearing system comprising the hearing device worn by the hearing device user and a connected user device, wherein the hearing system comprises: a sound input component; a processor for processing a signal from the sound input component; a sound output component for outputting the processed signal to an ear of the user of the hearing device; a transceiver for exchanging data with the connected user device; at least one classifier configured to identify one or more predetermined classification values based on a signal from the at least one sound input component and/or from at least one further sensor; and wherein the hearing system is adapted for performing the above method.
- the hearing system may further include, by way of example, a second hearing device worn by the same user and/or a connected user device, such as a smartphone or other mobile device or personal computer, used by the same user.
- the hearing system further comprises a mobile device, which includes the classifier.
- Fig. 1 schematically shows a hearing device 12 according to an embodiment of the invention.
- the hearing device 12 is formed as a behind-the-ear device carried by a hearing device user (not shown). It has to be noted that the hearing device 12 is a specific embodiment and that the method described herein also may be performed with other types of hearing devices, such as an in-the-ear device.
- the hearing device 12 comprises a part 15 behind the ear and a part 16 to be put in the ear channel of the user.
- the part 15 and the part 16 are connected by a tube 18.
- at least one sound input component 20, e.g. a microphone, a sound processor 22 and a sound output component 24, such as a loudspeaker, are provided in the part 15.
- the sound input component 20 may acquire environmental sound of the user and may generate a sound signal.
- the sound processor 22 may amplify the sound signal.
- the sound output component 24 may generate sound from the amplified sound signal and the sound may be guided through the tube 18 and the in-the-ear part 16 into the ear channel of the user.
- the hearing device 12 may comprise a processor 26 which is adapted for adjusting parameters of the sound processor 22, e.g. such that an output volume of the sound signal is adjusted based on an input volume.
- These parameters may be determined by a computer program which is referred to as a sound program run in the processor 26.
- a user may select a modifier (such as bass, treble, noise suppression, dynamic volume, etc.) together with a level and/or value of this modifier; from this selection, an adjustment command may be created and processed as described above and below.
- processing parameters may be determined based on the adjustment command and based on this, for example, the frequency dependent gain and the dynamic volume of the sound processor 22 may be changed. All these functions may be implemented as different sound programs stored in a memory 30 of the hearing device 12, which sound programs may be executed by the processor 22.
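The idea of sound programs stored in the device memory as selectable parameter sets might be sketched as follows; the class name, program names, and parameter values are hypothetical stand-ins, not taken from the patent:

```python
class SoundProcessor:
    """Minimal stand-in for the sound processor 22: applies a parameter set."""
    def __init__(self):
        self.params = {}

    def configure(self, program: dict) -> None:
        # Configuring the processor replaces the active parameter set.
        self.params = dict(program)

# Sound programs as named parameter sets kept in the memory 30 (values illustrative).
MEMORY_PROGRAMS = {
    "calm_situation": {"frequency_gain_db": 10, "dynamic_volume": 0.5, "noise_canceller": 0.1},
    "speech_in_noise": {"frequency_gain_db": 15, "dynamic_volume": 0.7, "noise_canceller": 0.8},
}

processor = SoundProcessor()
processor.configure(MEMORY_PROGRAMS["speech_in_noise"])
```

Switching programs then amounts to calling `configure` with another stored parameter set, which matches the adjustment-command flow described above.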
- the hearing device 12 further comprises a transceiver 32 which may be adapted for wireless data communication with a transceiver 34 of a connected user device 70 (see figure 2 ).
- the hearing device 12 further comprises at least one classifier 48 configured to identify one or more predetermined classification values based on a signal from the sound input component 20 and/or from at least one further sensor 50 (see figure 2), e.g. an accelerometer and/or an optical and/or temperature sensor.
- the classification value may be used to determine a sound program, which may be automatically used by the hearing device 12, in particular depending on a sound input received via the sound input component 20 and/or the sensor 50.
- the sound input may correspond to a speaking activity and/or acoustic environment of the user.
- the hearing device 12 is configured for performing a method for configuring the hearing device 12 according to the present invention.
- Fig. 2 schematically shows a hearing system 60 according to an embodiment of the invention.
- the hearing system 60 includes a hearing device, e.g. the above hearing device 12 and a connected user device 70, such as a smartphone or a tablet computer.
- the connected user device 70 may comprise the transceiver 34, a processor 36, a memory 38, a graphical user interface 40 and a display 42.
- the connected user device 70 may comprise the classifier 48 or a further classifier 48.
- in the hearing system 60, it is possible that the above-mentioned modifiers and their levels and/or values are adjusted with the connected user device 70 and/or that the adjustment command is generated with the connected user device 70.
- This may be performed with a computer program run in the processor 36 of the connected user device 70 and stored in the memory 38 of the connected user device 70.
- the computer program may provide the graphical user interface 40 on the display 42 of the connected user device 70.
- the graphical user interface 40 may comprise a control element 44, such as a slider.
- an adjustment command may be generated, which will change the sound processing of the hearing device 12 as described above and below.
- the user may adjust the modifier with the hearing device 12 itself, for example via the input means 28.
- Fig. 3 shows an example for a flow diagram of a method for configuring a hearing device, according to an embodiment of the invention.
- the method may be a computer-implemented method performed automatically in the hearing device 12 and/or the hearing system 60 of Fig. 1 .
- in step S2 of the method, in case the hearing device 12 currently provides a sound output to the user, the sound output may be modified in accordance with the first sound program.
- the first and/or second sound program may be referred to as a sound processing feature, which may for example be a Noise Canceller or a Beamformer Strength.
- in step S4 of the method, an audio signal from the at least one sound input component 20 and/or a sensor signal from the at least one further sensor is received, e.g. by the sound processor 22 and the processor 26 of the hearing device 12.
- in step S6 of the method, the signal(s) received in step S4 are evaluated by the one or more classifiers 48 implemented in the hearing device 12 and/or the connected user device 70 so as to identify a state corresponding to the user's speaking activity and/or the user's acoustic environment, and at least one classification value is determined depending on the identified state.
- the one or more classification values characterize the identified state.
- the identified classification value(s) may be, for example, output by one of the classifiers 48 to one or both of the processors 26, 36. It also may be that at least one of the classifiers 48 is implemented in the corresponding processor 26, 36 itself or is stored as a program module in the memory 30, 38 so as to be performed by the corresponding processor 26, 36. As already mentioned herein above, all or some of the steps of the method are performed by the processor 26 of the hearing device 12 and/or by the processor 36 of the connected user device 70.
- the identified state may be one or more of the group of Speech In Quiet, Speech In Noise, Being In Car, Reverberant Speech, Noise, Music, Quiet, and Speech In Loud Noise.
- two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment may be determined by evaluating the audio signal and/or the sensor signal.
- the second sound program may be adapted to the two or more determined classification values.
- the one or more predetermined classification values may be identified based on the audio signal from the at least one sound input component 20 and/or the sensor signal from the at least one further sensor 50 received over one or more predetermined time intervals, e.g. over two identical predetermined time intervals separated by a predetermined pause interval.
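The evaluation over two identical time intervals separated by a pause interval can be sketched as a simple window selection over a stream of samples; the function name and the sample, interval, and pause values are illustrative assumptions:

```python
def windowed_samples(samples, interval, pause):
    """Select two identical analysis windows of length `interval`,
    separated by `pause` samples, from the incoming signal."""
    first = samples[:interval]
    second = samples[interval + pause : 2 * interval + pause]
    return first, second

# Hypothetical signal of 20 samples, 5-sample windows separated by a 3-sample pause.
sig = list(range(20))
w1, w2 = windowed_samples(sig, interval=5, pause=3)
# first window: samples 0-4, second window: samples 8-12
```

A classifier could then compare or pool the classification results of the two windows before committing to a program change.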
- a second sound program is determined.
- the second sound program is different from the first sound program and is adapted in accordance with the determined classification value in order to provide the optimal listening experience to the user based on the identified speaking activity and/or acoustic environment of the user. For example, in the second sound program the setting of the Noise Canceller and/or the Beamformer Strength are/is different than in the first sound program.
- in step S10 of the method, the sound processor 22 is configured in accordance with the second sound program.
- a predetermined user input is received.
- the predetermined user input indicates that the user listening to the sound output does not agree with the configuration in accordance with the second sound program.
- the predetermined user input may be input via the input means 28 of the hearing device 12, an application of the connected user device 70, and/or a gesture detection, which may be carried out by the hearing device 12 and/or the connected user device 70.
- in step S14 of the method, the sound processor 22 is reconfigured in accordance with the first sound program.
- if the predetermined user input is given although the sound program has not been changed, the hearing device 12 may provide a predetermined output to the user, which informs the user that the sound program has not been changed.
- a determination algorithm for determining whether the first sound program is adapted to the determined classification value may be adapted depending on the feedback of the user represented by the predetermined user input, such that the hearing system 60 is able to learn the preferences of the user and to consider them in a future determination process.
- an artificial intelligence may be integrated in the hearing system 60, which learns the preferences of the user in order to provide the optimal listening experience to the user.
Abstract
A method for configuring a hearing device (12) is provided. The hearing device (12) comprises at least one sound input component (20), at least one sound output component (24), and a sound processor (22), which is coupled to the sound output component (24) and which is configured in accordance with a first sound program for modifying a sound output of the hearing device (12). The method comprises: receiving an audio signal from the at least one sound input component (20) and/or a sensor signal from the at least one further sensor (50); determining at least one classification value characterizing the sound input by evaluating the audio signal and/or the sensor signal; determining a second sound program, which is different from the first sound program and which is adapted in accordance with the determined classification value; configuring the sound processor (22) in accordance with the second sound program such that the sound output is modified according to the second sound program; receiving a predetermined user input indicating that a user listening to the sound output does not agree with the configuration in accordance with the second sound program; and reconfiguring the sound processor (22) in accordance with the first sound program.
Description
- The invention relates to a method, a computer program, and a computer-readable medium, in which the computer program is stored, for configuring a hearing device. Furthermore, the invention relates to a controller for operating the hearing device, and to a hearing system comprising at least the one hearing device and optionally a connected user device, such as a smartphone.
- Hearing devices are generally small and complex devices. Hearing devices can include a processor, a microphone as a sound input component, an integrated loudspeaker as a sound output component, a memory, a housing, and other electronic and mechanical components. Some example hearing devices are Behind-The-Ear (BTE), Receiver-In-Canal (RIC), In-The-Ear (ITE), Completely-In-Canal (CIC), and Invisible-In-The-Canal (IIC) devices. A user may prefer one of these device types over another based on hearing loss, aesthetic preferences, lifestyle needs, and budget.
- In modern hearing devices, numerous features are implemented to facilitate speech intelligibility or to improve hearing comfort for the user. However, the benefit of these features varies strongly depending on the acoustic environment of the user. Therefore, conventional hearing aids continuously classify the acoustic situation, e.g. the acoustic environment, of the wearer in order to automatically adapt the feature parameters, such as the Noise Canceller or the Beamformer Strength, if the acoustic situation changes. Depending on the classified acoustic situation, a set of feature parameters is selected as a determined sound program. Because of the adaptation to the acoustic situation, the user might perceive a switch into a new sound program as sudden and/or unexpected. Further, the accuracy of the classification system may be limited, which may lead to a misclassification of the situation. Additionally, the hearing intention of the user may not be considered by the classifier, e.g. the user wants to communicate at a concert and the hearing aid adapts to the music instead of the conversation partner.
- A common way to consider the user's intention is to provide him or her with manual programs with predefined sets of feature parameters that the user can switch between by pressing a button on the hearing instrument. A modern approach is to allow the user to adjust individual parameters directly via a mobile application. However, these solutions require a certain degree of understanding of how the features affect the listening impression. Also, the benefit of many features comes with compromises (e.g. stronger noise reduction leads to reduced sound quality). Understanding these compromises is a complex matter for users without a technical affinity. It may also take too long until the user has the paired smartphone available or finds the right manual program on the hearing aid. As a result, the user experience may not be convenient for the user wearing the hearing device. Further, since the manual programs have to be set up in advance, an adequate manual program for a specific situation might simply not be available quickly.
- It is an objective of the invention to provide a method, a computer program, and a computer-readable medium, in which the computer program is stored, for configuring a hearing device, as well as a controller for operating the hearing device and a system comprising the hearing device. It is a further objective of the invention to provide a convenient user experience to the user wearing the hearing device.
- These objectives are achieved by the subject-matter of the independent claims. Further exemplary embodiments are evident from the dependent claims and the following description.
- A first aspect of the invention relates to a method for configuring a hearing device. The hearing device comprises at least one sound input component, at least one sound output component, and a sound processor, which is coupled to the sound output component and which is configured in accordance with a first sound program for modifying a sound output of the hearing device. The method comprises: receiving an audio signal from the at least one sound input component and/or a sensor signal from the at least one further sensor; determining at least one classification value characterizing the sound input by evaluating the audio signal and/or the sensor signal; determining a second sound program, which is different from the first sound program and which is adapted in accordance with the determined classification value; configuring the sound processor in accordance with the second sound program such that the sound output is modified according to the second sound program; receiving a predetermined user input indicating that a user listening to the sound output does not agree with the configuration in accordance with the second sound program; and reconfiguring the sound processor in accordance with the first sound program.
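The claimed sequence of steps (classify, switch to a second sound program, revert on the predetermined user input) might be sketched as follows; all class, function, and parameter names are hypothetical stand-ins for the claimed components:

```python
class Processor:
    """Hypothetical stand-in for the sound processor; holds the active program."""
    def __init__(self, program):
        self.program = program

    def configure(self, program):
        self.program = program

def run_configuration_cycle(processor, classify, determine_program, first_program, user_disagrees):
    """One pass of the claimed method: classify the input, switch to the second
    sound program, then revert to the first program on the predetermined user input."""
    value = classify()                       # classification value from audio/sensor signal
    second_program = determine_program(value)
    processor.configure(second_program)      # sound output now follows the second program
    if user_disagrees():                     # predetermined user input received
        processor.configure(first_program)   # revert function
        return "reverted"
    return "kept"

# Hypothetical usage: the user rejects the automatic switch, so the device reverts.
first = {"noise_canceller": 0.2}
p = Processor(first)
outcome = run_configuration_cycle(
    p,
    classify=lambda: "speech_in_noise",
    determine_program=lambda value: {"noise_canceller": 0.8},
    first_program=first,
    user_disagrees=lambda: True,
)
```

The callables would in practice be the classifier, the program-determination algorithm, and the user-input handler of the hearing system.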
- The method may be a computer-implemented method, which may be performed automatically by a hearing system of which the user's hearing device is a part. The hearing system may, for instance, comprise one or two hearing devices used by the same user. One or both of the hearing devices may be worn on and/or in an ear of the user. A hearing device may be a hearing aid, which may be adapted for compensating a hearing loss of the user. A cochlear implant may also be a hearing device. The hearing system may optionally further comprise at least one connected user device, such as a smartphone, smartwatch or other device carried by the user, and/or a personal computer etc. The further sensor(s) may be any type(s) of physical sensor(s) - e.g. an accelerometer and/or optical and/or temperature sensor - integrated in the hearing device or possibly also in a connected user device such as a smartphone or a smartwatch. The first and/or second sound program may be referred to as a sound processing feature. The sound processing feature may for example be a Noise Canceller or a Beamformer Strength. The sound input may correspond to the user's speaking activity and/or the user's acoustic environment.
- The reconfiguration of the sound processor in accordance with the first sound program provides a revert function that allows the user to immediately return to the previous automatic setting, i.e. the first sound program. The revert function empowers the user to revert automatic changes that are not in agreement with his hearing intention. When the user notices an undesired change to the acoustics of his surroundings, he provides the user input indicating that he does not agree with the configuration in accordance with the second sound program to return to the preferred previous setting.
- The major advantage of the above revert function over common interfaces is that the user can make changes to the hearing system, in particular the hearing device, without needing knowledge about the technical details. The user only expresses his disagreement with the classification of his environment. This is considered a great facilitation compared to common methods of interacting with the hearing instrument.
- According to an embodiment of the invention, a determination algorithm for determining whether the first sound program is adapted to the determined classification value is adapted depending on the feedback of the user represented by the predetermined user input, such that the hearing device is able to learn the preferences of the user and to consider them in a future determination process. Thus, the revert function also may deliver real-life feedback data on how satisfied the user is with the current classifier system comprising the determination algorithm and/or on the situations for which the determination algorithm and the corresponding automatic sound program steering procedures may be adapted. In one or more embodiments, the adaptation of the determination algorithm may only be carried out if the predetermined user input has been given a predetermined number of times under a similar speaking activity and/or acoustic environment.
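- The gating of the adaptation on a predetermined number of reverts in a similar situation can be sketched as a per-situation counter. The threshold value, the keying by classification value, and the "suppress automatic switching" policy are illustrative assumptions; the patent leaves the concrete adaptation open:

```python
from collections import Counter

class DeterminationAdapter:
    """Sketch: only adapt the determination algorithm after the user has
    reverted a predetermined number of times in a similar situation."""

    def __init__(self, threshold=3):
        self.threshold = threshold            # predetermined number of times
        self.revert_counts = Counter()        # reverts per classification value
        self.suppressed_values = set()        # situations where switching is disabled

    def register_revert(self, classification_value):
        self.revert_counts[classification_value] += 1
        if self.revert_counts[classification_value] >= self.threshold:
            # Learned preference: stop switching automatically in this situation.
            self.suppressed_values.add(classification_value)

    def should_switch(self, classification_value):
        return classification_value not in self.suppressed_values
```

One or two isolated reverts leave the automatic behaviour unchanged; only a repeated pattern in the same situation adapts the steering.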
- According to an embodiment of the invention, the predetermined user input is input via input means of the hearing device, an application of a mobile device, and/or a gesture detection. The gesture detection may be carried out by the hearing device, e.g. by a tap control with an accelerometer or pressure sensor of the hearing device. Alternatively, the gesture detection may be carried out by the connected user device.
- According to an embodiment of the invention, if the predetermined user input is input by the user although the sound program has not been changed, the hearing device provides a predetermined output to the user, which informs the user that the sound program has not been changed. For example, the acoustic environment of the user changes and the user perceives a change of his listening experience. The user may then believe that this change was induced by an automatic change of the sound program and may provide the predetermined user input indicating that he does not agree with this alleged change of the sound program. In this case, the predetermined output provides the user with the information that the sound program has not been changed automatically. The user thus knows that the change of the listening experience has an external cause and is not induced by an internal change of the hearing device. The predetermined output may therefore enable a differentiation between internal and external changes.
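- The distinction between an internal and an external change reduces to a single branch on whether an automatic program change actually took place. The function name and return labels below are illustrative stand-ins for the device's actual actions:

```python
def handle_user_disagreement(program_changed_automatically):
    """Sketch of the feedback rule: revert if the device changed the
    program, otherwise inform the user that no internal change occurred."""
    if program_changed_automatically:
        # Internal change: restore the first sound program.
        return "revert_to_first_program"
    # External cause: emit the predetermined output, e.g. an acoustic notice.
    return "output_no_change_notice"
```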
- According to an embodiment of the invention, the at least one classification value is determined by characterizing the user's speaking activity and/or the user's acoustic environment.
- According to an embodiment of the invention, the at least one classification value is determined by identifying a predetermined state characterizing the user's speaking activity and/or the user's acoustic environment by evaluating the audio signal, and by determining the at least one classification value depending on the identified state.
- According to an embodiment of the invention, the one or more predetermined states are one or more of the following: Speech In Quiet; Speech In Noise; Being In Car; Reverberant Speech; Noise; Music; Quiet; Speech In Loud Noise.
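- The eight predetermined states listed above can be represented as an enumeration. The use of a Python Enum and the choice of the enum value as the classification value are illustrative assumptions; the patent does not prescribe a representation:

```python
from enum import Enum, auto

class AcousticState(Enum):
    """The predetermined states named in the description (illustrative enum)."""
    SPEECH_IN_QUIET = auto()
    SPEECH_IN_NOISE = auto()
    BEING_IN_CAR = auto()
    REVERBERANT_SPEECH = auto()
    NOISE = auto()
    MUSIC = auto()
    QUIET = auto()
    SPEECH_IN_LOUD_NOISE = auto()

def classification_value(state):
    # One simple choice: use the enum value itself as the classification value.
    return state.value
```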
- According to an embodiment of the invention, two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment are determined by evaluating the audio signal and/or the sensor signal; and the second sound program is adapted to the corresponding determined classification values.
- According to an embodiment of the invention, the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component and/or the sensor signal from the at least one further sensor received over a predetermined time interval.
- According to an embodiment of the invention, the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component and/or the sensor signal from the at least one further sensor received over two identical predetermined time intervals separated by a predetermined pause interval.
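- The two-window scheme of this embodiment can be sketched as simple index arithmetic on a sampled signal; the helper name and the sample-based units are illustrative assumptions:

```python
def split_observation_windows(samples, interval, pause):
    """Sketch: take two identical analysis windows of length `interval`,
    separated by a gap of length `pause`, from a stream of samples.
    All lengths are in samples; the helper name is illustrative."""
    first = samples[:interval]
    second = samples[interval + pause : interval + pause + interval]
    return first, second
```

The classifier would then evaluate both windows and identify a classification value only if, for example, the same state is present in each.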
- Further aspects of the invention relate to a computer program for configuring a hearing device for a user, with the hearing device comprising at least one sound input component, at least one sound output component, and a sound processor, which is coupled to the sound output component and which is configured in accordance with a first sound program of the hearing device, wherein the program, when being executed by a processor, is adapted to carry out the steps of the method described above and below.
- For example, the computer program may be executed in a processor of a hearing device, which hearing device may, for example, be worn by the user behind the ear. The computer-readable medium may be a memory of this hearing device. The computer program also may be executed by a processor of a connected user device, such as a smartphone or any other type of mobile device, which may be a part of the hearing system, and the computer-readable medium may be a memory of the connected user device. It also may be that some steps of the method are performed by the hearing device and other steps of the method are performed by the connected user device.
- Further aspects of the invention relate to a computer-readable medium, in which the computer program is stored. In general, the computer-readable medium may be a floppy disk, a hard disk, a USB (Universal Serial Bus) storage device, a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory) or a FLASH memory. The computer-readable medium may also be a data communication network, e.g. the Internet, which allows program code to be downloaded. The computer-readable medium may be a non-transitory or transitory medium.
- A further aspect of the invention relates to a controller for operating the hearing device, the controller comprising a processor, which is adapted to carry out the steps of the above method.
- A further aspect of the invention relates to a hearing system comprising the hearing device worn by the hearing device user and a connected user device, wherein the hearing system comprises: a sound input component; a processor for processing a signal from the sound input component; a sound output component for outputting the processed signal to an ear of the user of the hearing device; a transceiver for exchanging data with the connected user device; at least one classifier configured to identify one or more predetermined classification values based on a signal from the at least one sound input component and/or from at least one further sensor; and wherein the hearing system is adapted for performing the above method.
- The hearing system may further include, by way of example, a second hearing device worn by the same user and/or a connected user device, such as a smartphone or other mobile device or personal computer, used by the same user.
- According to an embodiment, the hearing system further comprises a mobile device, which includes the classifier.
- It has to be understood that features of the method as described above and in the following may be features of the computer program, the computer-readable medium, the controller and/or the hearing system as described above and in the following, and vice versa.
- These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
- Below, embodiments of the present invention are described in more detail with reference to the attached drawings.
-
Fig. 1 schematically shows a hearing device according to an embodiment of the invention. -
Fig. 2 schematically shows a hearing system according to an embodiment of the invention. -
Fig. 3 shows a flow diagram of a method for configuring a hearing device, according to an embodiment of the invention. - The reference symbols used in the drawings, and their meanings, are listed in summary form in the list of reference symbols. In principle, identical parts are provided with the same reference symbols in the figures.
-
Fig. 1 schematically shows a hearing device 12 according to an embodiment of the invention. The hearing device 12 is formed as a behind-the-ear device carried by a hearing device user (not shown). It has to be noted that the hearing device 12 is a specific embodiment and that the method described herein also may be performed with other types of hearing devices, such as an in-the-ear device. - The
hearing device 12 comprises a part 15 behind the ear and a part 16 to be put in the ear channel of the user. The part 15 and the part 16 are connected by a tube 18. In the part 15, at least one sound input component 20, e.g. a microphone, a sound processor 22 and a sound output component 24, such as a loudspeaker, are provided. The sound input component 20 may acquire environmental sound of the user and may generate a sound signal. The sound processor 22 may amplify the sound signal. The sound output component 24 may generate sound from the amplified sound signal and the sound may be guided through the tube 18 and the in-the-ear part 16 into the ear channel of the user. - The
hearing device 12 may comprise a processor 26 which is adapted for adjusting parameters of the sound processor 22, e.g. such that an output volume of the sound signal is adjusted based on an input volume. These parameters may be determined by a computer program, referred to as a sound program, run in the processor 26. For example, with an input means 28, e.g. a knob, of the hearing device 12, a user may select a modifier (such as bass, treble, noise suppression, dynamic volume, etc.) and levels and/or values of these modifiers may be selected. From this modifier, an adjustment command may be created and processed as described above and below. In particular, processing parameters may be determined based on the adjustment command and, based on this, for example, the frequency dependent gain and the dynamic volume of the sound processor 22 may be changed. All these functions may be implemented as different sound programs stored in a memory 30 of the hearing device 12, which sound programs may be executed by the processor 26. - The
hearing device 12 further comprises a transceiver 32 which may be adapted for wireless data communication with a transceiver 34 of a connected user device 70 (see figure 2). - The
hearing device 12 further comprises at least one classifier 48 configured to identify one or more predetermined classification values based on a signal from the sound input component 20 and/or from at least one further sensor 50 (see figure 2), e.g. an accelerometer and/or an optical and/or temperature sensor. The classification value may be used to determine a sound program, which may be automatically used by the hearing device 12, in particular depending on a sound input received via the sound input component 20 and/or the sensor 50. The sound input may correspond to a speaking activity and/or acoustic environment of the user. - The
hearing device 12 is configured for performing a method for configuring the hearing device 12 according to the present invention. -
Fig. 2 schematically shows a hearing system 60 according to an embodiment of the invention. The hearing system 60 includes a hearing device, e.g. the above hearing device 12, and a connected user device 70, such as a smartphone or a tablet computer. The connected user device 70 may comprise the transceiver 34, a processor 36, a memory 38, a graphical user interface 40 and a display 42. Alternatively or additionally to the classifier 48 of the hearing device 12, the connected user device 70 may comprise the classifier 48 or a further classifier 48. - With the
hearing system 60 it is possible that the above-mentioned modifiers and their levels and/or values are adjusted with the connected user device 70 and/or that the adjustment command is generated with the connected user device 70. This may be performed with a computer program run in the processor 36 of the connected user device 70 and stored in the memory 38 of the connected user device 70. The computer program may provide the graphical user interface 40 on the display 42 of the connected user device 70. - For example, for adjusting the modifier, such as volume, the
graphical user interface 40 may comprise a control element 44, such as a slider. When the user adjusts the slider, an adjustment command may be generated, which will change the sound processing of the hearing device 12 as described above and below. Alternatively or additionally, the user may adjust the modifier with the hearing device 12 itself, for example via the input means 28. -
Fig. 3 shows an example of a flow diagram of a method for configuring a hearing device, according to an embodiment of the invention. The method may be a computer-implemented method performed automatically in the hearing device 12 of Fig. 1 and/or the hearing system 60 of Fig. 2. - In optional step S2 of the method, in case the
hearing device 12 currently provides a sound output to the user, the sound output may be modified in accordance with a first sound program. In general, the first and/or second sound program may be referred to as a sound processing feature. The sound processing feature may for example be a Noise Canceller or a Beamformer Strength. - In step S4 of the method, an audio signal from the at least one
sound input component 20 and/or a sensor signal from the at least one further sensor is received, e.g. by the sound processor 22 and the processor 26 of the hearing device 12. - In step S6 of the method, the signal(s) received in step S4 are evaluated by the one or
more classifiers 48 implemented in the hearing device 12 and/or the connected user device 70 so as to identify a state corresponding to the user's speaking activity and/or the user's acoustic environment, and at least one classification value is determined depending on the identified state. The one or more classification values characterize the identified state. The identified classification value(s) may be, for example, output by one of the classifiers 48 to one or both of the processors 26, 36, e.g. if one of the classifiers 48 is implemented in the corresponding processor 26, 36 and/or stored in the corresponding memory 30, 38. The evaluation may be carried out by the processor 26 of the hearing device 12 and/or by the processor 36 of the connected user device 70. - The identified state may be one or more of the group of Speech In Quiet, Speech In Noise, Being In Car, Reverberant Speech, Noise, Music, Quiet, and Speech In Loud Noise. Optionally, two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment may be determined by evaluating the audio signal and/or the sensor signal. In that case, the second sound program may be adapted to the corresponding determined two or more classification values. The one or more predetermined classification values may be identified based on the audio signal from the at least one
sound input component 20 and/or the sensor signal from the at least one further sensor 50 received over one or more predetermined time intervals, e.g. over two identical predetermined time intervals separated by a predetermined pause interval. - In step S8 of the method, a second sound program is determined. The second sound program is different from the first sound program and is adapted in accordance with the determined classification value in order to provide the optimal listening experience to the user based on the identified speaking activity and/or acoustic environment of the user. For example, in the second sound program the setting of the Noise Canceller and/or the Beamformer Strength is different from that in the first sound program.
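- The determination in step S8 can be sketched as a lookup from the classification value to a set of program settings. The state names follow the list above, while the concrete parameter names and values are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of step S8: pick a second sound program from the
# determined classification value. The parameter values are assumptions.
PROGRAMS = {
    "Speech In Noise": {"noise_canceller": "strong", "beamformer_strength": 0.8},
    "Music":           {"noise_canceller": "off",    "beamformer_strength": 0.0},
    "Quiet":           {"noise_canceller": "mild",   "beamformer_strength": 0.2},
}

def determine_second_program(classification_value, first_program):
    # Look up a candidate program for the identified situation; fall back
    # to the current program for unknown classification values.
    candidate = PROGRAMS.get(classification_value, first_program)
    # Step S8 requires the second program to differ from the first one;
    # otherwise the current configuration is kept.
    return candidate if candidate != first_program else first_program
```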
- In step S10 of the method, the
sound processor 22 is configured in accordance with the second sound program. - In step S12 of the method, a predetermined user input is received. The predetermined user input indicates that the user listening to the sound output does not agree with the configuration in accordance with the second sound program. The predetermined user input may be input via the input means 28 of the
hearing device 12, an application of the connected user device 70, and/or a gesture detection, which may be carried out by the hearing device 12 and/or the connected user device 70. - In step S14 of the method, the
sound processor 22 is reconfigured in accordance with the first sound program. However, if the predetermined user input is input by the user although the first sound program has not been changed, the hearing device 12 may provide a predetermined output to the user, which informs the user that the sound program has not been changed. - In an optional step S16 of the method, a determination algorithm for determining whether the first sound program is adapted to the determined classification value is adapted depending on the feedback of the user represented by the predetermined user input, such that the
hearing system 60 is able to learn the preferences of the user and to consider them in a future determination process. For example, an artificial intelligence may be integrated in the hearing system 60, which learns the preferences of the user in order to provide the optimal listening experience to the user. - While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or controller or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.
- 12: hearing device
- 15: part behind the ear
- 16: part in the ear
- 18: tube
- 20: sound input component
- 22: sound processor
- 24: sound output component
- 26: processor
- 28: input means
- 30: memory
- 32: transceiver of hearing device
- 34: transceiver of connected user device
- 36: processor
- 38: memory
- 40: graphical user interface
- 42: display
- 44: control element, slider
- 48: classifier
- 50: further sensor
- 60: hearing system
- 70: connected user device
Claims (15)
- A method for configuring a hearing device (12), the hearing device (12) comprising at least one sound input component (20), at least one sound output component (24), and a sound processor (22), which is coupled to the sound output component (24) and which is configured in accordance with a first sound program for modifying a sound output of the hearing device (12), the method comprising: receiving an audio signal from the at least one sound input component (20) and/or a sensor signal from the at least one further sensor (50) as a sound input; determining at least one classification value characterizing the sound input by evaluating the audio signal and/or the sensor signal; determining a second sound program, which is different from the first sound program and which is adapted in accordance with the determined classification value; configuring the sound processor (22) in accordance with the second sound program such that the sound output is modified according to the second sound program; receiving a predetermined user input indicating that a user listening to the sound output does not agree with the configuration in accordance with the second sound program; and reconfiguring the sound processor (22) in accordance with the first sound program.
- The method of claim 1, wherein
a determination algorithm for determining whether the first sound program is adapted to the determined classification value is adapted depending on the feedback of the user represented by the predetermined user input, such that the hearing device (12) is able to learn the preferences of the user and to consider them in a future determination process. - The method of one of the previous claims, wherein
the predetermined user input is input via input means (28) of the hearing device (12), an application of a connected user device (70), and/or a gesture detection. - The method of one of the previous claims, wherein,
if the predetermined user input is input by the user, although the sound program has not been changed, the hearing device (12) provides a predetermined output to the user, which informs the user that the sound program has not been changed. - The method of one of the previous claims, wherein
the at least one classification value is determined by characterizing the user's speaking activity and/or the user's acoustic environment. - The method of claim 5, wherein
the at least one classification value is determined by identifying a predetermined state characterizing the user's speaking activity and/or the user's acoustic environment by evaluating the audio signal, and by determining the at least one classification value depending on the identified state. - The method of one of the previous claims, wherein
the one or more predetermined states are one or more of the following: Speech In Quiet; Speech In Noise; Being In Car; Reverberant Speech; Noise; Music; Quiet; Speech In Loud Noise. - The method of one of the previous claims, wherein
two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment are determined by evaluating the audio signal and/or the sensor signal; and
the second sound program is adapted to the corresponding determined classification values. - The method of one of the previous claims, wherein
the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component (20) and/or the sensor signal from the at least one further sensor (50) received over a predetermined time interval. - The method of claim 9, wherein
the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component (20) and/or the sensor signal from the at least one further sensor (50) received over two identical predetermined time intervals separated by a predetermined pause interval. - A computer program for configuring a hearing device (12) for a user, with the hearing device (12) comprising at least one sound input component (20), at least one sound output component (24), and a sound processor (22), which is coupled to the sound output component (24) and which is configured in accordance with a first sound program of the hearing device (12), wherein the program, when being executed by a processor (26, 36), is adapted to carry out the steps of the method of one of the previous claims.
- A computer-readable medium, in which a computer program according to claim 11 is stored.
- A controller for operating a hearing device (12), the controller comprising a processor (26, 36), which is adapted to carry out the steps of the method of one of claims 1 to 10.
- A hearing system (60) comprising a hearing device (12) worn by a hearing device user and a connected user device (70), wherein the hearing system (60) comprises: a sound input component (20); a processor (26) for processing a signal from the sound input component (20); a sound output component (24) for outputting the processed signal to an ear of the user of the hearing device (12); a transceiver (32) for exchanging data with the connected user device (70); at least one classifier (48) configured to identify one or more predetermined classification values based on a signal from the at least one sound input component (20) and/or from at least one further sensor (50); and wherein the hearing system (60) is adapted for performing the method of one of claims 1 to 10.
- Hearing system (60) in accordance with claim 14, further comprising a mobile device, which includes the classifier (48).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21166351.3A EP4068805A1 (en) | 2021-03-31 | 2021-03-31 | Method, computer program, and computer-readable medium for configuring a hearing device, controller for operating a hearing device, and hearing system |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4068805A1 true EP4068805A1 (en) | 2022-10-05 |
Family
ID=75339578
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21166351.3A Withdrawn EP4068805A1 (en) | 2021-03-31 | 2021-03-31 | Method, computer program, and computer-readable medium for configuring a hearing device, controller for operating a hearing device, and hearing system |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP4068805A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2255548A2 (en) * | 2008-03-27 | 2010-12-01 | Phonak AG | Method for operating a hearing device |
EP3120578A1 (en) * | 2014-03-19 | 2017-01-25 | Bose Corporation | Crowd sourced recommendations for hearing assistance devices |
US20200314525A1 (en) * | 2019-03-28 | 2020-10-01 | Sonova Ag | Tap detection |
US20200380979A1 (en) * | 2016-09-30 | 2020-12-03 | Dolby Laboratories Licensing Corporation | Context aware hearing optimization engine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20230406 |