WO2022121743A1 - Method for optimizing the function of a hearable device, and hearable device - Google Patents

Method for optimizing the function of a hearable device, and hearable device

Info

Publication number
WO2022121743A1
Authority
WO
WIPO (PCT)
Prior art keywords
transfer function
wearer
path
hearable
audio signal
Prior art date
Application number
PCT/CN2021/134629
Other languages
English (en)
Chinese (zh)
Inventor
熊伟
仇存收
田立生
缪海波
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022121743A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 Details of transducers, loudspeakers or microphones
    • H04R 1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R 1/1016 Earpieces of the intra-aural type
    • H04R 1/1083 Reduction of ambient noise
    • H04R 1/1091 Details not provided for in groups H04R 1/1008 - H04R 1/1083
    • H04R 2201/10 Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R 1/10 but not provided for in any of its subgroups
    • H04R 2460/01 Hearing devices using active noise cancellation

Definitions

  • the embodiments of the present application relate to the field of acoustic technologies, and in particular, to a method for optimizing a function of a hearable device and a hearable device.
  • Hearables are wearable electronic devices that are worn near the human ear.
  • the hearables include earphones, hearing aids, and cochlear implants. These hearable devices can provide the wearer with services such as playback of audio, voice services, and more.
  • for example, the wearer wears the earphone; when the earphone plays music, the wearer can hear the music played by the earphone.
  • the sound heard by the human ear is actually produced by the vibration of the eardrum of the human ear caused by the sound wave signal.
  • the sound wave signal propagates to the eardrum of the human ear. Because the sound wave signal changes the pressure at the eardrum, sound pressure is formed, and the sound pressure causes the eardrum to vibrate so that the human ear hears the sound. Therefore, if the earphone can obtain the sound pressure signal at the eardrum, the earphone can adjust the sound wave signal it plays according to the relationship between the sound pressure signal at the entrance of the ear canal and the sound pressure signal at the eardrum, so that the active noise reduction or transparent transmission (hear-through) function of the earphone provides the wearer with a sound playback service with a good sound effect.
  • the present application provides a method for optimizing the function of a hearable device and a hearable device.
  • when the hearable device is in a working state, the effect of the noise reduction function or the transparent transmission function of the hearable device is improved, thereby improving the performance of the hearable device and providing the wearer of the device with a better user experience.
  • the present application provides a method for optimizing the function of a hearable device. The method may include: the hearable device plays an audio signal and collects the response information of the audio signal in the ear canal of the wearer (that is, the sound pressure signal at the ERP), wherein the hearable device is worn by the wearer, and the audio signal generates the response information as it propagates through the wearer's ear canal.
  • the hearable device sends the response information and the audio signal to the first device; the first device can then generate a secondary path (Secondary Path, SP), also called the SP path, according to the response information and the audio signal, and the SP path is used to represent the relationship between the audio signal and the sound pressure signal at the external reference point ERP of the ear canal.
  • the first device generates an ED transfer function corresponding to the ERP to the eardrum reference point DRP according to the SP path and the acquired personalized data of the wearer, and the ED transfer function represents the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP.
  • the first device sends the ED transfer function and the SP path to the hearable device, and the hearable device receives the ED transfer function and the SP path, and can adjust the audio signal according to the ED transfer function.
  • the SP path represents the relationship between the audio signal played by the hearable device and the sound pressure of the ERP
  • the ED transfer function represents the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP.
  • the hearable device adjusts the audio signal according to the SP path, which can change the sound pressure signal of the ERP, and the sound pressure signal of the DRP can then be determined from the sound pressure signal of the ERP and the ED transfer function. That is, when the hearable device adjusts the audio signal, it can change the sound pressure signal of the DRP. Therefore, the hearable device can adjust the audio signal according to the SP path and the ED transfer function for the purpose of optimizing the function of the hearable device.
  • the above steps can be repeatedly performed, so that the hearable device can adjust the audio signal in real time, so that the purpose of optimizing the function of the hearable device in real time can be achieved.
  • the function of the hearable device may be an active noise reduction function or a transparent transmission function.
  • the hearable device can adjust the audio signal according to the SP path and the ED transfer function to achieve real-time optimization of the active noise reduction and/or pass-through function.
  • adjusting the audio signal by the hearable device may be adjusting the volume of the sound signal played by the hearable device, or adjusting the playback frequency of the sound signal played by the hearable device, or the like.
  • the physical quantity of the audio signal that is adjusted is not specifically limited here.
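  • As a hedged illustration of the relationship just described, the sketch below estimates an SP path as the frequency response between the played audio and the ERP microphone response. It is a generic system-identification sketch, not the claimed implementation; the sampling rate, frame length, Welch/cross-spectral (H1) estimator and the synthetic test signal are all assumptions.

```python
import numpy as np
from scipy import signal

def estimate_sp_path(played_audio, erp_response, fs=48_000, nperseg=1024):
    """Estimate the SP path as the frequency response H_sp(f) = P_erp(f) / X(f).

    played_audio : samples fed to the loudspeaker (reference signal)
    erp_response : samples captured by the microphone at the ERP
    Returns (freqs, H_sp), where H_sp is a complex frequency response.
    """
    # Cross-spectral density between the played signal and the ERP response.
    freqs, Pxy = signal.csd(played_audio, erp_response, fs=fs, nperseg=nperseg)
    # Power spectral density of the played signal.
    _, Pxx = signal.welch(played_audio, fs=fs, nperseg=nperseg)
    return freqs, Pxy / Pxx  # H1 estimator of the transfer function

# Toy self-test with a synthetic excitation and a stand-in "ear canal" filter.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(48_000)            # 1 s of broadband test noise
    toy_canal = signal.firwin(64, 0.3)         # fabricated acoustic path
    y = signal.lfilter(toy_canal, 1.0, x)      # simulated ERP response
    freqs, H = estimate_sp_path(x, y)
    print(np.abs(H[:5]))
```

  • In this toy example the excitation is broadband noise; any preset test signal with enough energy in the band of interest could be substituted.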
  • the first device may further include multiple preset SP paths, multiple preset ED transfer functions, and a preset mapping relationship between the preset SP paths and the preset ED transfer functions.
  • the preset SP path is generated according to the wearer's response information
  • the preset ED transfer function is generated according to the wearer's response information and the sound pressure signal of the DRP.
  • the preset SP path is generated according to the response information of the current wearer of the hearable device.
  • the response information and audio signals obtained from multiple tests of the hearable device are used to generate multiple sets of preset SP paths, and the sound pressure information at the DRP obtained from the multiple tests is used to generate multiple sets of preset ED transfer functions.
  • the above-mentioned first device generates an ED transfer function corresponding to the ERP to the eardrum reference point DRP according to the SP path and the acquired personalized data of the wearer, and the ED transfer function represents the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP.
  • the first device obtains personalized data
  • the personalized data is used to create the ED transfer function
  • the personalized data at least includes one of: the type of the hearable device, the wearing tightness of the hearable device, and the type of the wearer's ear canal.
  • the first device obtains the first mapping relationship according to the wearer's personalized data and the preset mapping relationship, and the first mapping relationship is used to represent the corresponding relationship between the SP path and the ED transfer function.
  • the first device generates an ED transfer function by using the first mapping relationship and the SP path.
  • the first device includes multiple sets of preset SP paths and preset ED transfer functions for the wearer, and a mapping relationship between the preset SP paths and the preset ED transfer functions.
  • the mapping relationship between the SP path and the ED transfer function can be modified according to the wearer's personalized data, so that the hearable device can obtain an accurate ED transfer function through the modified mapping relationship.
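  • The mapping step can be pictured with the following illustrative sketch: the measured SP path is compared against the preset SP paths and the ED transfer function mapped to the closest one is returned, optionally re-weighting the comparison with the wearer's personalized data. The nearest-neighbour rule, the spectral distance and the weighting scheme are assumptions for illustration only, not the application's actual mapping relationship H.

```python
import numpy as np

def select_ed_transfer_function(measured_sp, preset_sps, preset_eds,
                                personal_weight=None):
    """Return the preset ED transfer function mapped to the preset SP path
    that is closest to the measured SP path.

    measured_sp     : complex array, SP path estimated for the current wearer
    preset_sps      : list of complex arrays, preset SP paths
    preset_eds      : list of complex arrays, preset_eds[i] maps to preset_sps[i]
    personal_weight : optional per-frequency weight derived from personalized
                      data (device type, wearing tightness, ear-canal type)
    """
    if personal_weight is None:
        personal_weight = np.ones(len(measured_sp))
    # Weighted spectral distance between the measured SP path and each preset.
    distances = [np.sum(personal_weight * np.abs(measured_sp - sp) ** 2)
                 for sp in preset_sps]
    return preset_eds[int(np.argmin(distances))]
```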
  • the first device may further include multiple basic SP paths, multiple basic ED transfer functions, and a basic mapping relationship between the basic SP paths and the basic ED transfer functions.
  • the basic SP path is generated according to the response information
  • the basic ED transfer function is generated according to the response information and the sound pressure signal of the DRP.
  • the basic SP path and the basic ED transfer function are generated by collecting data from multiple wearers through multiple tests.
  • the collected data includes response information, audio signals, and sound pressure signals at the DRP, and the mapping relationship between the basic SP paths and the basic ED transfer functions is generated from these data.
  • the above-mentioned first device generates an ED transfer function corresponding to the ERP to the eardrum reference point DRP according to the SP path and the acquired personalized data of the wearer, and the ED transfer function represents the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP.
  • it may include: the first device obtains the wearer's personalized data, the personalized data is used to create the ED transfer function, and the personalized data at least includes one of: the type of the hearable device, the wearing tightness of the hearable device, and the type of the wearer's ear canal.
  • the first device obtains the first mapping relationship according to the personalized data and the basic mapping relationship, and the first mapping relationship is used to represent the corresponding relationship between the SP path and the ED transfer function.
  • the first device obtains the ED transfer function through the first mapping relationship and the SP path.
  • the first device presets the base SP path and the base ED transfer function.
  • the mapping relationship between the SP path and the ED transfer function can be modified according to the wearer's personalized data, so that the hearable device can obtain an accurate ED transfer function according to the modified mapping relationship.
  • the functionality of the hearable can be optimized as the hearable adjusts the audio signal according to the SP path and ED transfer function.
  • before the hearable device plays the audio signal and collects the response information of the audio signal in the ear canal of the wearer, the method may further include: enabling the active noise reduction and/or transparent transmission function of the hearable device.
  • the above-mentioned hearable device adjusts the audio signal according to the ED transfer function, including: the hearable device adjusts the audio signal according to the ED transfer function, so as to achieve the purpose of adjusting the noise reduction depth of active noise reduction and/or adjusting the sound pressure signal of the transparent transmission function.
  • the smaller the noise reduction depth, the better the active noise reduction (also referred to simply as noise reduction) effect of the hearable device.
  • the transparent transmission function can be optimized.
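  • For reference, one simple way to quantify the noise reduction depth mentioned above is the per-frequency dB ratio of the residual DRP noise with ANC on to the DRP noise with ANC off; this concrete definition is an assumption used only for the sketch below (more negative values mean deeper cancellation).

```python
import numpy as np

def noise_reduction_depth_db(p_drp_anc_on, p_drp_anc_off, eps=1e-12):
    """Per-frequency noise reduction depth in dB (negative means attenuation)."""
    ratio = (np.abs(p_drp_anc_on) ** 2 + eps) / (np.abs(p_drp_anc_off) ** 2 + eps)
    return 10.0 * np.log10(ratio)
```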
  • the present application provides a method for optimizing the function of a hearable device.
  • the method is applied to the hearable device.
  • the method may include: the hearable device plays an audio signal and collects the response information of the audio signal in the ear canal of the wearer, wherein the hearable device is worn by the wearer and the audio signal generates the response information when propagating in the wearer's ear canal.
  • the hearable device generates an SP path according to the response information and the audio signal, and the SP path is used to represent the relationship between the audio signal and the sound pressure signal of the external reference point ERP of the ear canal.
  • the hearable device generates the ED transfer function corresponding to the ERP to the eardrum reference point DRP according to the SP path and the acquired personalized data of the wearer.
  • the ED transfer function represents the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP.
  • the hearable device adjusts the audio signal according to the ED transfer function.
  • the hearable device may further include multiple preset SP paths, multiple preset ED transfer functions, and a mapping relationship between the preset SP paths and the preset ED transfer functions; wherein the preset SP path is generated according to the wearer's response information, and the preset ED transfer function is generated according to the wearer's response information and the sound pressure signal of the DRP.
  • the above-mentioned hearable device generates the ED transfer function corresponding to the ERP to the eardrum reference point DRP according to the SP path and the obtained personalized data of the wearer.
  • the ED transfer function represents the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP.
  • the hearable device obtains the wearer's personalized data, and the personalized data is used to create the ED transfer function
  • the personalized data at least includes one of: the type of the hearable device, the wearing tightness of the hearable device, and the type of the wearer's ear canal.
  • the hearable device obtains the first mapping relationship according to the personalized data and the preset mapping relationship, and the first mapping relationship is used to represent the corresponding relationship between the SP path and the ED transfer function.
  • the hearable device generates an ED transfer function through the first mapping relationship and the SP path.
  • the hearable device may further include multiple basic SP paths, multiple basic ED transfer functions, and a basic mapping relationship between the basic SP paths and the basic ED transfer functions.
  • the basic SP path is generated according to the response information
  • the basic ED transfer function is generated according to the response information and the sound pressure signal of the DRP.
  • the above-mentioned hearable device obtains the wearer's personalized data, the personalized data is used to create the ED transfer function, and the personalized data at least includes one of: the type of the hearable device, the wearing tightness of the hearable device, and the type of the wearer's ear canal.
  • the hearable device obtains the first mapping relationship according to the personalized data and the basic mapping relationship, and the first mapping relationship is used to represent the corresponding relationship between the SP path and the ED transfer function.
  • the hearable device obtains the ED transfer function through the first mapping relationship and the SP path.
  • before the hearable device plays the audio signal and collects the response information of the audio signal in the ear canal of the wearer, the method may further include: enabling the active noise reduction and/or transparent transmission function of the hearable device.
  • the above-mentioned hearable device adjusts the audio signal according to the ED transfer function, including: the hearable device adjusts the audio signal according to the ED transfer function, so as to achieve the purpose of adjusting the noise reduction depth of active noise reduction and/or adjusting the sound pressure signal of the transparent transmission function.
  • the present application provides a hearable device comprising: one or more processors; a memory; and one or more computer programs. Wherein, one or more computer programs are stored in the memory, the one or more computer programs comprising instructions.
  • when the instructions are executed by the hearable device, the hearable device is caused to perform the following steps: playing the audio signal, and collecting the response information of the audio signal in the ear canal of the wearer.
  • the hearable device is worn by the wearer, and the audio signal generates response information when the audio signal propagates through the wearer's ear canal.
  • sending the response information and the audio signal to the first device, so that the first device generates the ED transfer function corresponding to the external reference point ERP of the ear canal to the eardrum reference point DRP according to the response information, the ED transfer function representing the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP.
  • when the instructions are executed by the hearable device, the hearable device is further caused to perform the following step: enabling the active noise reduction and/or transparent transmission function of the hearable device;
  • when the hearable device adjusts the audio signal according to the ED transfer function, the hearable device specifically performs the following step: adjusting the audio signal according to the ED transfer function, so as to achieve the purpose of adjusting the noise reduction depth of active noise reduction and/or adjusting the sound pressure signal of the transparent transmission function.
  • the present application provides an electronic device, comprising: one or more processors; a memory; and one or more computer programs. Wherein, one or more computer programs are stored in the memory, the one or more computer programs comprising instructions.
  • when the instructions are executed by the electronic device, the electronic device is caused to perform the following step: receiving the response information and the audio signal from the hearable device.
  • the hearable device is worn by the wearer, and when the hearable device plays an audio signal, the audio signal generates the response information as it propagates through the wearer's ear canal.
  • an SP path is generated, and the SP path is used to represent the relationship between the audio signal and the sound pressure signal of the external reference point ERP of the ear canal.
  • the ED transfer function corresponding to the ERP to the eardrum reference point DRP is generated.
  • the ED transfer function represents the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP.
  • the ED transfer function is sent to the hearable device so that the hearable device adjusts the audio signal according to the ED transfer function.
  • the electronic device may further include: multiple preset SP paths, multiple preset ED transfer functions, and a preset mapping relationship between the preset SP paths and the preset ED transfer functions.
  • the preset SP path is generated according to the wearer's response information
  • the preset ED transfer function is generated according to the wearer's response information and the sound pressure signal of the DRP.
  • when the instructions are executed by the electronic device, and the electronic device generates the ED transfer function corresponding to the ERP to the eardrum reference point DRP according to the SP path and the acquired personalized data of the wearer (the ED transfer function representing the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP),
  • the electronic device specifically performs the following steps: acquiring personalized data, where the personalized data is used to create the ED transfer function and at least includes one of: the type of the hearable device, the wearing tightness of the hearable device, and the type of the wearer's ear canal.
  • the first mapping relationship is obtained according to the wearer's personalized data and the preset mapping relationship, and the first mapping relationship is used to represent the corresponding relationship between the SP path and the ED transfer function. Through the first mapping relationship and the SP path, an ED transfer function is generated.
  • the electronic device may further include: multiple basic SP paths, multiple basic ED transfer functions, and a basic mapping relationship between the basic SP paths and the basic ED transfer functions.
  • the basic SP path is generated according to the response information
  • the basic ED transfer function is generated according to the response information and the sound pressure signal of the DRP.
  • when the instructions are executed by the electronic device, and the electronic device generates the ED transfer function corresponding to the ERP to the eardrum reference point DRP according to the SP path and the acquired personalized data of the wearer (the ED transfer function representing the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP),
  • the electronic device specifically performs the following steps: acquiring personalized data of the wearer, and the personalized data is used to create the ED transfer function.
  • the first mapping relationship is obtained according to the personalized data and the basic mapping relationship, and the first mapping relationship is used to represent the corresponding relationship between the SP path and the ED transfer function. Through the first mapping relationship and the SP path, the ED transfer function is obtained.
  • the present application further provides a hearable device, comprising: one or more processors; a memory; and one or more computer programs.
  • one or more computer programs are stored in the memory, and the one or more computer programs include instructions that, when executed by the hearable device, cause the hearable device to perform the method for optimizing the function of a hearable device in the second aspect and any possible design thereof.
  • an embodiment of the present application provides a computer-readable storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to perform the method for optimizing the function of a hearable device in the first aspect, the second aspect, and any possible design thereof.
  • an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform the method for optimizing the function of a hearable device in the first aspect, the second aspect, and any possible design thereof.
  • an embodiment of the present application provides a chip system, where the chip system is applied to an electronic device.
  • the chip system includes one or more interface circuits and one or more processors; the interface circuits and the processors are interconnected by lines; the interface circuit is used to receive signals from the memory of the electronic device and send the signals to the processor, the signals including computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device is caused to perform the method for optimizing the function of a hearable device in the first aspect, the second aspect, and any possible designs thereof.
  • FIG. 1A is a schematic diagram of different types of earphones worn by human ears according to an embodiment of the present application
  • FIG. 1B is a schematic structural diagram of a human ear canal provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an equivalent circuit of a human ear canal provided by an embodiment of the present application
  • FIG. 3 is a schematic structural diagram of a hearable device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an application scenario of a hearable device provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a method for optimizing a function of a hearable device provided by an embodiment of the present application
  • FIG. 6 is a flowchart of another method for optimizing the function of a hearable device provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a system structure for optimizing a hearable device provided by an embodiment of the present application.
  • FIG. 9A is a schematic diagram of an application scenario of a hearable device provided by an embodiment of the present application.
  • FIG. 9B is a schematic block diagram of an algorithm provided by an embodiment of the present application.
  • FIG. 10A is a schematic diagram of an application scenario of a hearable device provided by an embodiment of the present application.
  • FIG. 10B is a schematic block diagram of an algorithm provided by an embodiment of the present application.
  • FIG. 11 is a noise reduction depth curve diagram corresponding to an ANC function provided by an embodiment of the application.
  • FIG. 12A is a schematic diagram of an application scenario of a hearable device provided by an embodiment of the present application.
  • FIG. 12B is a schematic block diagram of an algorithm provided by an embodiment of the present application.
  • FIG. 13A is a schematic diagram of an application scenario of a hearable device provided by an embodiment of the present application.
  • FIG. 13B is a schematic block diagram of an algorithm provided by an embodiment of the present application.
  • FIG. 15 is a flowchart of another method for optimizing the function of a hearable device provided by an embodiment of the present application.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • plural means two or more.
  • the present application is described here by taking a hearable device represented by an earphone as an example.
  • the noise in the environment interferes with the sound played by the headphones.
  • when the earphone is worn by the wearer and is playing music, the wearer can hear the noise in the environment while listening to the sound played by the earphone.
  • in order to provide good sound services, headphones generally have an Active Noise Cancellation (ANC) function.
  • the principle of active noise reduction is that the microphone in the earphone collects the noise signal in the environment where the earphone is located, and the earphone transmits the collected noise signal to the control circuit.
  • the control circuit can generate a sound wave signal with an opposite phase and similar amplitude to the noise signal.
  • the control circuit transmits the generated sound wave signal to the speaker in the earphone, and the sound wave signal is played through the speaker. Since the phase of the sound wave signal is opposite to that of the noise signal and their amplitudes are similar, the sound wave signal played by the speaker can weaken the noise signal, thereby weakening the noise signal transmitted to the human ear through the earphone, so as to realize the active noise reduction function of the earphone.
  • the range of hybrid active noise reduction is roughly 50Hz-3kHz
  • the range of feedback active noise reduction is roughly 50Hz-1kHz.
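  • The anti-phase principle can be shown with a deliberately idealized toy example (a single 200 Hz tone and a perfect inverse signal, both made up for illustration); a real controller would additionally have to compensate for the secondary (SP) path between the speaker and the ear.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
noise = 0.5 * np.sin(2 * np.pi * 200 * t)   # fabricated 200 Hz ambient tone
anti_noise = -noise                          # opposite phase, similar amplitude
residual = noise + anti_noise                # what would reach the eardrum
print(np.max(np.abs(residual)))              # ~0 only in this idealized case
```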
  • wearing headphones with ANC function can reduce the interference of noise to the wearer.
  • people still need to maintain a certain sensitivity to the sound in the environment to monitor real-time changes in their surroundings. For example, during a voice call, people use headphones to answer the call.
  • the headphones use active noise reduction to reduce the interference of noise in the environment on the sound heard by the wearer of the headphones.
  • the wearer of the headset needs to be aware of alarm sounds and other sounds in the surrounding environment, so that he or she can respond according to the sound in the environment. Therefore, the earphone needs to have a hear-through (Hear Through, HT) function so that the earphone wearer can hear part of the sound in the environment.
  • the principle of transparent transmission of the earphone is that the microphone in the earphone collects the sound signal in the environment and transmits the sound signal to the signal processing circuit.
  • the signal processing circuit can filter and process the sound signal in the environment to obtain the analog sound signal, transmit the analog sound signal to the speaker, and play the analog sound signal through the speaker.
  • the earphone wearer can hear part of the sound in the environment.
  • the earphone collects the sound signal in the environment, and detects that the sound signal includes an alarm sound, and the signal processing circuit can remove the noise in the sound signal through the filter circuit and retain the alarm sound.
  • the alarm sound signal is processed and amplified in sequence, and the amplified alarm sound signal is transmitted to the speaker. In this way, the speaker can play the alarm sound signal, so that the earphone wearer can hear the alarm sound in the environment while the interference of the environmental noise with the earphone wearer's hearing is reduced.
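  • A minimal sketch of the "filter, then amplify, then play" step described above is given below; the 800-1600 Hz alarm band, the Butterworth filter order and the gain are arbitrary assumptions, not values taken from the application.

```python
import numpy as np
from scipy import signal

def hear_through_alarm(ambient, fs=48_000, band=(800.0, 1600.0), gain=4.0):
    """Keep an assumed alarm band from the ambient signal and amplify it."""
    sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    alarm = signal.sosfilt(sos, ambient)   # suppress out-of-band noise
    return gain * alarm                    # amplified signal sent to the speaker
```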
  • when the earphone is worn by the wearer, the sound emitted by the earphone propagates to the external reference point (ERP) at the entrance of the ear canal, and then the sound propagates through the ear canal to the eardrum reference point (Drum Reference Point, DRP).
  • the sound signal causes the sound pressure at the eardrum of the human ear to change, and the eardrum of the human ear vibrates under the action of the sound pressure, so that the earphone wearer hears the sound played by the earphone.
  • FIG. 1A is a schematic diagram of the speaker in the earphone and the ERP after different types of earphones are worn by the wearer.
  • Figure 1A (a) is a schematic diagram of the positional relationship between the speaker in the ear-mounted earphone and the ERP after the ear-mounted earphone is worn by the wearer.
  • Figure 1A (b) is a schematic diagram of the positional relationship between the speaker in the headset and the ERP after the headset is worn by the wearer.
  • Figure 1A (c) is a schematic diagram of the positional relationship between the speaker in semi-in-ear headphones and the ERP after the semi-in-ear headphones are worn by the wearer, and Figure 1A (d) is a schematic diagram of the positional relationship between the speaker in in-ear headphones and the ERP after the in-ear headphones are worn by the wearer.
  • the propagation direction of the sound wave is the same as the vibration direction of the air particles, that is, the sound wave is a longitudinal wave. Therefore, when the sound wave propagates in the air, the density of the air particles changes with the propagation of the sound wave, and the pressure there also changes. This change in pressure due to the propagation of sound waves is called sound pressure. Sound waves (which can also be understood as sound) propagate from the ERP to the DRP, so that the sound pressure at the DRP changes, the sound pressure causes the eardrum of the human ear to vibrate, and the human can hear the sound.
  • the transfer function between the sound pressure signal at the ERP and the sound pressure signal at the DRP can be obtained.
  • the transfer function is a mathematical representation of the relationship between the sound pressure signal of the ERP and the sound pressure signal of the DRP. Therefore, the sound pressure signal at the ERP can be collected while the headset plays audio, and the sound pressure signal at the DRP can be determined according to the transfer function.
  • the earphone can adjust the audio signal played by the speaker according to the transfer function, and the sound pressure signal at the ERP will also be adjusted. In this way, the sound pressure signal at the DRP can be adjusted through the transfer function.
  • the audio signal played by the speaker can be adjusted according to the transfer function, so that the earphone can provide a good active noise reduction or transparent transmission function and improve the sound effect of the sound played by the earphone.
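  • Concretely, the relationship can be read in the frequency domain as P_DRP(f) ≈ ED(f) · P_ERP(f). The sketch below (an assumed FFT-based formulation, not the application's implementation) estimates the DRP sound-pressure spectrum from the ERP microphone signal and a given ED transfer function.

```python
import numpy as np

def estimate_drp_spectrum(p_erp, ed_transfer, n_fft=1024):
    """Estimate the DRP sound-pressure spectrum from the ERP signal.

    p_erp       : time-domain sound pressure captured at the ERP
    ed_transfer : complex ED transfer function sampled on the rFFT grid
    """
    P_erp = np.fft.rfft(p_erp, n=n_fft)
    if len(ed_transfer) != len(P_erp):
        raise ValueError("ED transfer function must match the rFFT grid")
    return ed_transfer * P_erp   # P_DRP(f) ~ ED(f) * P_ERP(f)
```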
  • the sound pressure signal at the DRP can be obtained by direct measurement, or the sound pressure signal at the DRP can be deduced by modeling the human ear.
  • a Doppler laser vibrometer can be used to measure the vibration of the human eardrum, and the vibration of the human eardrum can be converted into a sound pressure signal at the DRP through signal conversion.
  • the process of modeling the ear canal is to measure the geometric shape of the ear canal and segment the ear canal between the external auditory canal orifice and the eardrum reference point.
  • the ear canal from the external auditory canal orifice to the eardrum is divided into i segments.
  • the path between the DRP and the ERP at the entrance of the external auditory canal includes the D1 segment, the D2 segment, ..., and the Di segment, and each segment of the ear canal can be equivalent to a circuit model. Based on the equivalent circuit model of each segment in the ear canal, the ear canal can be modeled.
  • FIG. 2 is a schematic diagram of an equivalent circuit of a human ear structure.
  • the D1 segment, D2 segment... and Di segment shown in Fig. 1B can all be equivalent to circuit models formed by acoustic impedance, acoustic capacitive reactance and acoustic inductive reactance.
  • P1 represents the sound pressure signal at the ERP
  • the equivalent circuit model of the ear canal of the D1 segment includes acoustic impedance R1, acoustic capacitive reactance C1 and acoustic inductive reactance L1.
  • the acoustic capacitive reactance C1 and the acoustic inductive reactance L1 are connected in parallel, and are connected in series with the acoustic impedance R1.
  • the circuit model of the ear canal of the Di segment includes the acoustic impedance Ri, the acoustic capacitive reactance Ci and the acoustic inductive reactance Li, and D as the load represents the sound pressure signal at the DRP.
  • the circuit model of each segment of the ear canal is the same, so the model of each segment of the ear canal will not be described in detail.
  • the multi-segment circuit model is connected in a cascaded manner, and the eardrum is placed in the modeled circuit in the form of an acoustic load.
  • the propagation of sound waves in air is actually a perturbation in which the sound waves make the medium (i.e., air particles) deviate from its equilibrium state, thereby realizing sound propagation.
  • Acoustic impedance is the resistance that the sound wave needs to overcome to cause the displacement of the medium, that is, the resistance that the sound needs to overcome to propagate in the ear canal.
  • the acoustic impedance is equivalent to the resistance in the circuit, and the acoustic impedance can absorb part of the sound energy.
  • Acoustic capacitive reactance and acoustic inductive reactance are equivalent to capacitance and inductance in a circuit. Acoustic capacitive reactance and acoustic inductive reactance do not absorb the energy of sound and can change the direction or form of sound propagation.
  • based on the analysis of the circuit model shown in Figure 2, the derivation of the circuit relationships, and simulation analysis, it can be found that under different models (i.e., different human ears) the cross-sectional area at the entrance of the ear canal differs and the equivalent length of the ear canal also differs. According to the circuit model shown in Figure 2, a mathematical relationship involving the cross-sectional area S of the external auditory canal and the ear canal length L can be derived, and the transfer function between the ERP and the DRP can be corrected by using individualized ear canal information such as the external auditory canal cross-sectional area S and the ear canal length L.
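  • Under one possible lumped-element reading of the cascade described above (each segment i contributing a series impedance R_i plus a parallel L_i-C_i branch, with the eardrum as the terminating acoustic load), the ERP-to-DRP transfer function can be evaluated as an impedance divider. The segment values and eardrum load in the sketch are fabricated purely to show the shape of the calculation.

```python
import numpy as np

def ed_transfer_lumped(freqs, segments, z_drum):
    """Toy ERP-to-DRP transfer function for a chain of lumped acoustic segments.

    segments : list of (R, L, C) tuples, one per ear-canal segment, where R is
               the acoustic resistance and L/C the acoustic inertance/compliance
    z_drum   : terminating eardrum load impedance
    Each segment is taken as a series impedance R + (jwL in parallel with 1/jwC),
    so P_DRP / P_ERP = Z_drum / (sum of segment impedances + Z_drum).
    """
    w = 2 * np.pi * np.asarray(freqs, dtype=float)
    z_total = np.zeros_like(w, dtype=complex)
    for R, L, C in segments:
        z_l = 1j * w * L               # acoustic inductive reactance
        z_c = 1.0 / (1j * w * C)       # acoustic capacitive reactance
        z_total += R + (z_l * z_c) / (z_l + z_c)   # series R with parallel L-C
    return z_drum / (z_total + z_drum)

# Fabricated example values, only to show the shape of the calculation.
freqs = np.linspace(100, 8000, 200)
segments = [(1.0, 1e-4, 1e-9)] * 4     # four identical toy segments
H_ed = ed_transfer_lumped(freqs, segments, z_drum=50.0)
```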
  • the method for measuring the shape and structure of the ear canal may be to inject foam into the human ear and take out the ear canal mold after the foam sets rapidly, from which three-dimensional (3D) model data of the human ear canal is obtained.
  • the human ear model created by this method is a smooth channel, which is different from the actual ear canal and cannot completely replace the real human ear. Therefore, after modeling the human ear, it also needs to be tested in the actual sound tube in order to correct the transfer function.
  • the physical modeling method can be used to model a specific ear, and a human ear model with high modeling accuracy can be obtained.
  • this modeling method includes steps such as acquiring, modeling, measuring, and revising the model results of the ear canal model.
  • the operation process is cumbersome, the requirements for the implementation environment are relatively high, and the implementation complexity is high.
  • the analysis of the modeling results is for a specific ear canal shape. If the method is applied directly to the earphone, it is difficult for the earphone to obtain parameters such as the equivalent cross-sectional area of the ear canal and the equivalent ear canal length, so it is difficult to model the ear canal for the current earphone wearer.
  • the acoustic pressure signal at the DRP is determined based on the acoustic signal at the ERP and the estimated ERP to DRP transfer function ED.
  • a certain excitation sound signal is played through the speaker, and the response of the ear canal is collected by the microphone at the ERP.
  • the headset infers the characteristics of the wearer's ear canal structure, so that the headset can obtain an estimate of the signal at the DRP by solving for the ED transfer function that is closest to the current response in the historical database.
  • the earphone can adjust the sound signal it plays according to the signal estimate at the DRP, so that the earphone can achieve a better active noise reduction or transparent transmission function and can also provide the wearer with a better sound effect.
  • the embodiment of the present application provides a method for optimizing the function of a hearable device, and the method can be applied to a hearable device.
  • the hearable device is preset with a secondary path (Secondary Path, SP) (also referred to as the SP path) obtained based on big data, an ERP-to-DRP (ED) transfer function, and a mapping relationship H between the SP path and the ED function domain.
  • the hearable device is worn by the wearer, and a preset test audio signal can be played to model the ED_inv transfer function for the wearer's ear canal.
  • the hearable device plays the test sound and can collect the sound pressure signal at the ERP and the sound information fed back by the ear canal.
  • the hearable device can calculate the SP_inv path according to the sound pressure signal at the ERP and the sound information fed back by the ear canal. Further, the hearable device can determine the ED_inv transfer function according to the SP_inv path. That is to say, the ED_inv transfer function obtained in the embodiment of the present application is related to the wearer of the hearable device. In addition, the hearable device can also obtain personalized data related to the wearer; for example, the personalized data can be the size of the earmuffs used by the hearable device, the wearing tightness of the hearable device, the type of ear canal, and so on. The hearable device can correct the mapping relationship H_inv between the SP_inv path and the ED_inv function domain according to the personalized data input by the wearer.
  • the hearable device can establish the mapping relationship H_inv between the SP_inv path and the ED_inv transfer function distribution domain for the wearer of the hearable device, so that the hearable device can adjust the noise reduction function and the transparent transmission function in real time during use, so as to provide the wearer of the hearable device with a good active noise reduction function, a good transparent transmission function, and a better sound effect.
  • the secondary path is the path along which noise in the environment, as a sound source, propagates through the earphone to the eardrum (DRP) of the human ear.
  • the earphone is worn by the wearer, the earphone can play a preset prompt tone, and the microphone in the earphone collects the sound pressure signal at the ERP, and acquires the signal coupled with the information of the wearer's ear canal.
  • the headset can establish the SP_inv path and the ED_inv transfer function for the wearer.
  • headphones can be used in conjunction with electronic devices.
  • the earphone is worn by the wearer, and the earphone establishes a communication connection with the electronic device.
  • the headset may provide the wearer with the function of interacting with the electronic device voice, or the headset may only provide the wearer with the function of playing the voice.
  • the headset establishes a communication connection with the electronic device and is worn; when the electronic device plays an audio file, the electronic device decodes the audio file to generate voice information and transmits the voice information to the headset, the headset plays the voice information, and the headset wearer hears the audio file played by the mobile phone.
  • the display screen of the electronic device displays the video image
  • the headset provides the wearer with audio information in the video.
  • the wearer uses an electronic device to make a call, and the electronic device communicates with another electronic device.
  • the headset can be used to collect the voice signal sent by the wearer and transmit it to the electronic device.
  • the electronic device can transmit the collected voice signal to the other electronic device.
  • the electronic device receives a voice signal transmitted by another electronic device, and the electronic device can play the voice signal through the earphone.
  • the method provided by the embodiments of the present application can create a personalized SP path and ED transfer function for the wearer of the hearable device, and correct the mapping relationship H_inv between the SP path and the ED transfer function according to the collected personalized parameters.
  • in this way, the hearable device can always provide the wearer with real-time active noise reduction and pass-through functions while it is being used, provide the wearer with a good listening experience, and improve the sound effect of the audio played by the hearable device.
  • the hearable device 300 may include a processor 310, an internal memory 320, a charging interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a radio frequency module 350, a communication module 360, audio module 370, speaker 370A, call microphone 370B, feed-forward (Feed-Forward, FF) microphone 370C, feedback (Feed-Back, FB) microphone 370D, voice processing unit (Voice Process Unit, VPU) sensor 380, Button 390, etc.
  • the hearable device 300 shown in FIG. 3 is only an example of the hearable device.
  • the structure illustrated in FIG. 3 does not constitute a limitation on the hearable device 300 . More or fewer components than shown may be included, or some components may be combined, or some components may be split, or a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware. For example, if the hearable device 300 is a hearing aid, the hearable device 300 does not include the radio frequency module 350, the communication module 360, the call microphone 370B, and the like.
  • the processor 310 may include one or more processing units, for example, the processor 310 may include an application processor (application processor, AP), a modem processor, a controller, a memory, a digital signal processor (digital signal processor, DSP) ), baseband processor, and/or neural-network processing unit (NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the decision maker that directs the various components of the hearable device 300 to work in harmony as instructed. It is the nerve center and command center of the hearable device 300 .
  • the controller generates an operation control signal according to the instruction operation code and timing signal, and completes the control of fetching and executing instructions.
  • a memory may also be provided in the processor 310 for storing instructions and data.
  • the memory in the processor is a cache memory. Instructions or data that have just been used or recycled by the processor can be saved. If the processor needs to use the instruction or data again, it can be called directly from memory. Repeated access is avoided, and the waiting time of the processor is reduced, thereby improving the efficiency of the system.
  • the processor 310 may store the SP_db path and ED_db transfer function obtained by summarizing big data, and the mapping relationship H_db between the SP_db path and the ED_db function distribution domain.
  • the hearable device 300 may directly call the data stored in the processor 310 to create the corresponding SP_inv path and ED_inv transfer function for the wearer.
  • the processor 310 may include an interface.
  • the interface may include an inter-integrated circuit (Inter-Integrated Circuit, I2C) interface, an inter-integrated circuit sound (Inter-Integrated Circuit Sound, I2S) interface, a pulse code modulation (Pulse Code Modulation, PCM) interface, a universal asynchronous receiver/transmitter (Universal Asynchronous Receiver/Transmitter, UART) interface, and/or a universal serial bus (Universal Serial Bus, USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (Serial Data Line, SDA) and a serial clock line (Serial Clock Line, SCL).
  • the processor may contain multiple sets of I2C buses. The processor can separately couple touch sensors, chargers, etc. through different I2C bus interfaces.
  • the I2S interface can be used for audio communication.
  • the processor may contain multiple sets of I2S buses.
  • the processor can be coupled with the audio module through the I2S bus to realize the communication between the processor and the audio module.
  • the audio module can transmit audio signals to the communication module through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the audio module and the communication module may be coupled through a PCM bus interface.
  • the audio module can also transmit audio signals to the communication module through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication, and the sampling rates of the two interfaces are different.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the hearable device 300 .
  • the hearable device 300 may use different interface connection manners in the embodiments of the present application, or a combination of multiple interface connection manners.
  • the charging management module 340 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the power management module 341 is used to connect the battery 342 , the charging management module 340 and the processor 310 .
  • the power management module receives input from the battery and/or charging management module, and supplies power to the processor, internal memory, and communication module.
  • the power management module can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the wireless communication function of the hearable device 300 may be implemented by the antenna 1, the antenna 2, the radio frequency module 350, the communication module 360, the modem, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in hearable device 300 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the radio frequency module 350 may provide a communication processing module applied on the hearable device 300 including 2G/3G/4G/5G wireless communication solutions.
  • the radio frequency module receives electromagnetic waves from the antenna 1, filters and amplifies the received electromagnetic waves, and transmits them to the modem for demodulation.
  • the radio frequency module can also amplify the signal modulated by the modem, and then turn it into electromagnetic waves and radiate it out through the antenna 1 .
  • a modem may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs audio signals through audio devices (not limited to speakers, receivers, etc.).
  • the hearable device provided by the embodiments of the present application can interact with a remote server (or cloud device); the hearable device can transmit the acquired personalized parameters to the remote server, and the remote server can correct, according to the personalized data, the mapping relationship H_db between the SP_inv path and the ED_inv transfer function for this type of device, so as to improve the sound effect of the headphones.
  • the communication module 360 can provide a communication processing module for wireless communication solutions applied to the hearable device 300, such as wireless local area network (Wireless Local Area Networks, WLAN) (for example, a Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), frequency modulation (FM), near field communication (NFC), and infrared (IR).
  • the communication module 360 may be one or more devices integrating at least one communication processing module.
  • the communication module receives electromagnetic waves via the antenna 2, modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor.
  • the communication module 360 can also receive the signal to be sent from the processor, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the Bluetooth headset can establish a communication connection with the electronic device through the antenna 2, so as to achieve the purpose of playing the sound of the electronic device through the Bluetooth headset.
  • Internal memory 321 may be used to store computer executable program code, which includes instructions.
  • the processor 310 executes various functional applications and data processing of the hearable device 300 by executing the instructions stored in the internal memory 321 .
  • the memory 321 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, a noise reduction function, a transparent transmission function, etc.), and the like.
  • the storage data area can store data created during the use of the hearable device 300, such as audio data, the SP db path and ED db transfer function obtained based on big data, and the mapping relationship H db between the SP db path and the ED db function domain.
  • the above-mentioned internal memory 321 includes the data partition described in the embodiments of the present application.
  • the data partition stores files or data that need to be read and written when the operating system starts, as well as wearer data created during the use of the hearable device (for example, the wearer's personalized parameters obtained during use).
  • the data partition may be a predetermined storage area in the above-mentioned internal memory 321 .
  • the data partition may be contained in RAM in the internal memory 321 .
  • the virtual data partition in this embodiment of the present application may be a storage area of the RAM in the internal memory 321 .
  • the virtual data partition may be a storage area of the ROM in the internal memory 321 .
  • the hearable device 300 can implement audio functions, such as music playback, voice calls, and recording, through the audio module 370, the speaker 370A, the call microphone 370B, the FF microphone 370C, the FB microphone 370D, the VPU sensor 380, and the application processor.
  • the audio module is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be provided in the processor 310 , or some functional modules of the audio module may be provided in the processor 310 .
  • Speaker 370A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the hearable device may play audio signals through speaker 370A.
  • the call microphone 370B, also called a "mic", is used to convert sound signals into electrical signals.
  • the wearer can speak with the mouth close to the call microphone 370B to input a sound signal into the call microphone 370B.
  • the FF microphone 370C can be disposed outside the hearable device 300 to collect noise in the environment where the hearable device is located.
  • the FB microphone 370D is disposed on the side of the hearable device close to the human ear, and is used to collect audio signals coupled with channel information of the human ear, so as to realize the function of active noise reduction of the hearable device 300 .
  • the hearable device 300 may be provided with at least one microphone.
  • a microphone may also be provided in the earpiece part of the earphone to collect sound in the environment, so that the earphone can realize functions such as noise reduction and transparent transmission.
  • the hearable device 300 may further be provided with three, four or more microphones to collect audio signals, reduce noise, identify sound sources, and implement directional recording functions.
  • VPU sensor 380 is a bone conduction sensor. It is a single-axis accelerometer using piezoelectric materials, which can be used to sense and measure the movement of the vocal cords. The VPU sensor 380 has low power consumption and can extract speech information when the hearable device 300 is in a high noise environment.
  • the keys 390 include a power-on key, a volume key, and the like.
  • the keys may be mechanical keys or touch keys.
  • the hearable device 300 receives key inputs and generates key signal inputs related to wearer settings and functional control of the hearable device 300 .
  • the hearable device provided by the embodiment of the present application may be an electronic device worn on the ear, such as a hearing aid, a cochlear implant, an in-ear earphone, a semi-in-ear earphone, an on-ear earphone, or a headphone.
  • the embodiment of the present application does not limit the specific form of the hearable device.
  • the embodiment of the present application provides a method for optimizing the function of a hearable device, and the method can be applied to a hearable device. It can be understood that the method can be applied to a variety of hearable devices.
  • the following takes the case where the hearable device is an earphone as an example to describe the method provided by the embodiment of the present application.
  • a general SP db path, that is, the above-mentioned basic SP path;
  • an ED db transfer function, that is, the above-mentioned basic ED transfer function obtained based on big data;
  • H db , that is, the mapping relationship between the SP db path and the ED db function domain.
  • the headset can interact with a first device (such as an electronic device, a remote server, or a cloud device; also called a master device), and, through the first device, create for the wearer an SP inv path (that is, the above-mentioned preset SP path) and an ED inv transfer function (that is, the above-mentioned preset ED transfer function).
  • the first device may preset a general SP db path and ED db transfer function obtained based on big data, and a mapping relationship H db between the SP db path and the ED db function domain.
  • when the earphone is worn by the wearer and is in use, the earphone can generate, for the current wearer and based on the big data, the SP inv path and the ED inv transfer function, as well as the corresponding relationship H inv between the SP inv path and the ED inv function domain. That is to say, the related information of the big data is preset in the hearable device: the SP db path, the ED db transfer function, and the mapping relationship between the SP db and ED db function domains.
  • the structure of the test equipment used in the experiment may be different from the structure of the earphones sold in the market.
  • the test equipment used in the measurement includes a probe microphone.
  • the probe microphone can obtain the accurate sound pressure signal at the DRP.
  • the principle of the specific experiment is as follows: for different people (that is, different ear canal shapes), the relevant data used to create the SP db path and the ED db transfer function are collected with the test equipment worn under various earmuff sizes (for in-ear headphones), with different degrees of wearing tightness, and at different distances from the loudspeaker of the test equipment to the ERP, so that testers can process the collected data to determine the SP db and ED db transfer functions under big data.
  • the specific measurement process may be as follows: determine the test scene and record the test scene data, such as the test subject, the shape of the test subject's ear canal, the type of test equipment worn (for example, whether there are earmuffs), the wearing tightness of the test equipment, and the distance between the speaker of the test equipment and the ERP.
  • the test equipment plays the preset test music, and the test equipment collects and generates the related data of the SP db path and the ED db transfer function.
  • the relevant data for generating the SP db path and the ED db transfer function may include: the sound pressure signal at the ERP, the sound pressure signal at the DRP, the response signal collected by the speaker, the primary path (PP) response signal, the acoustic feedback path (feedback path, FP) response signal, and so on.
  • multiple sets of experimental data are obtained through repeated experiments and multiple measurements. Multiple sets of experimental data can be input into the computer, and the SP db and ED db transfer functions under big data can be obtained through computer processing and simulation calculation, and the mapping relationship H db of the SP db and ED db function domains can be obtained.
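The bullet above only summarizes the offline processing at a high level. As a hedged illustration (not taken from this application), the following Python sketch shows one way the measured signal sets could be reduced to average SP db and ED db frequency responses; the Welch/cross-spectrum estimator, the plain magnitude averaging, the sampling rate, and all names are assumptions made for the example.

```python
import numpy as np
from scipy.signal import welch, csd

FS = 48_000      # assumed sampling rate of the test recordings
NFFT = 1024      # assumed analysis block size

def transfer_function(x, y, fs=FS, nfft=NFFT):
    """Estimate H(f) = Sxy(f) / Sxx(f) between an excitation x and a response y."""
    f, sxx = welch(x, fs=fs, nperseg=nfft)
    _, sxy = csd(x, y, fs=fs, nperseg=nfft)
    return f, sxy / sxx

def build_big_data_models(measurements):
    """measurements: list of dicts with 'spk', 'erp' and 'drp' time signals,
    one dict per test scenario (ear canal, earmuff size, wearing posture)."""
    sp_list, ed_list = [], []
    f = None
    for m in measurements:
        f, sp = transfer_function(m['spk'], m['erp'])   # speaker -> ERP (SP path)
        _, ed = transfer_function(m['erp'], m['drp'])   # ERP -> DRP (ED transfer function)
        sp_list.append(sp)
        ed_list.append(ed)
    # "big data" baseline: average magnitude response over all test scenarios
    sp_db = np.mean(np.abs(sp_list), axis=0)
    ed_db = np.mean(np.abs(ed_list), axis=0)
    return f, sp_db, ed_db
```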
  • the above-mentioned related data for generating the SP db path and the ED db transfer function are collected in an offline state, that is, the test data can be obtained after measurement by the test equipment.
  • FIG. 4 is a schematic diagram of a scenario in which a hearable device interacts with a main control device to acquire test data according to an embodiment of the present application.
  • the wearer 100 can interact with the electronic device 200 so that the electronic device 200 is connected to the hearable device 300 through communication.
  • the electronic device 200 may be a mobile phone.
  • the electronic device 200 receives an operation instruction from the wearer 100 to establish a communication connection (that is, the electronic device 200 interacts with the wearer 100), and, in response to the operation instruction of the wearer 100, is connected with the hearable device 300 via Bluetooth.
  • FIG. 5 is a schematic diagram of a system provided by an embodiment of the present application.
  • 501 represents a schematic diagram of the system during offline training, and the system structure of offline training is used to obtain relevant data for creating SP db paths and ED db transfer functions.
  • 502 represents a database, and the database is used to store the SP db path and the ED db transfer function obtained by offline training, and the mapping relationship H db between the SP db path and the ED db function domain.
  • 503 represents the personalization database of the current user, which is used to store the SP inv path and the ED inv transfer function obtained according to the personalized data of the earphone wearer, and the mapping relationship H inv between the SP inv path and the ED inv transfer function.
  • 504 represents a schematic diagram of a system corresponding to a product of a hearable device (eg, earphones), and the hearable device is used to obtain the wearer's personalized data, such as for creating the SP inv path and the related parameters of the ED inv transfer function.
  • the system of 501 can collect the relevant data for creating the SP db path and the ED db transfer function; 501 can then transmit the collected data to 502, so that 502 can generate the SP db path and the ED db transfer function.
  • 502 may generate SP db paths and ED db transfer functions of multiple sets of test data, so as to obtain the mapping relationship H db between the SP db paths and the ED db function domains.
  • 501 can also transmit the collected data to 503, and 503 can also obtain the personalized data of the wearer of the hearable device, so that 503 can obtain the wearer's SP inv path and ED inv transfer function according to the data transmitted by 501 and the personalized data. The wearer's personalized data in the 503 database can be used to modify the SP inv path and the ED inv transfer function, so as to obtain the corrected mapping relationship H db between the SP db path and the ED db function domain.
  • the 501 offline training system includes a probe microphone, so that the offline training system can collect the sound pressure signal at the DRP of the human ear.
  • 504, the system of the hearable device, does not include a probe microphone, so 504 cannot collect the sound pressure signal at the wearer's DRP.
  • 504 may obtain, from the 503 database, the corrected mapping relationship H db between the SP db path and the ED db function domain, so as to perform ED modeling for the wearer of the hearable device according to H db .
  • the test equipment includes a probe microphone; the test equipment plays a preset test audio signal (eg, test music), and collects the sound pressure signal at the ERP, the sound pressure signal at the DRP, the response signal collected by the speaker, the primary path (primary path, PP) response signal, the acoustic feedback path (feedback path, FP) response signal, etc.
  • PP primary path
  • FP acoustic feedback path
  • ERP(z) represents the response at the ERP collected by the feedback (Feed-back, FB) microphone;
  • Ref(z) represents the response collected by the feed-forward (Feed-Forward, FF) microphone in the hearable device;
  • Spk(z) represents the response of the speaker;
  • SP(z) represents the transfer function response from the speaker to the FB microphone;
  • PP(z) represents the transfer function response from the FF microphone to the FB microphone;
  • FP(z) represents the transfer function response from the speaker to the FF microphone.
  • the modeling can use the following formula 2 to generate the ED model:
  • DRP(z) represents the response at the DRP collected by the probe microphone
  • ERP(z) represents the response at the ERP collected by the microphone
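The bodies of formula 1 and formula 2 are not reproduced in this text. Based only on the signal definitions listed above, one plausible reading (an assumption, not a quotation of the application) is that both models are frequency-domain ratios:

```latex
% Hedged reconstruction of the missing formulas, inferred from the surrounding
% signal definitions; the actual formulas 1 and 2 in the application may differ.
SP(z) = \frac{ERP(z)}{Spk(z)} \qquad \text{(speaker-to-FB-microphone path)}
\\
ED(z) = \frac{DRP(z)}{ERP(z)} \qquad \text{(ERP-to-DRP transfer function, cf. formula 2)}
```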
  • a database is established according to multiple sets of data.
  • the database includes the mapping relationship of multiple sets of data.
  • the database can be represented by the following formula 3:
  • EC represents different ear canals (EC) of different testers
  • ES represents the earmuff size (ES) used by the earphone wearer
  • WP represents the wearing posture (wear posture, WP) of the earphone wearer.
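Formula 3 is likewise not reproduced here. Given the EC/ES/WP definitions above, a plausible interpretation is that the database keys each measured (SP db , ED db ) pair by its test conditions, and that H db is realised as a lookup over those entries. The sketch below is an illustrative assumption; the class and method names are invented for the example.

```python
from dataclasses import dataclass
import numpy as np

@dataclass(frozen=True)
class Scenario:
    ec: str   # ear-canal category of the tester (EC)
    es: str   # earmuff size used (ES)
    wp: str   # wearing posture (WP)

class BigDataDatabase:
    """Stores one (SP_db, ED_db) pair per test scenario; taken together, the
    entries play the role of the mapping H_db between the SP and ED function
    domains."""

    def __init__(self):
        self._entries = {}

    def add(self, scenario, sp_db, ed_db):
        self._entries[scenario] = (np.asarray(sp_db), np.asarray(ed_db))

    def closest_ed(self, sp_measured):
        """Return the ED response of the scenario whose stored SP response is
        closest to a measured SP response (a simple nearest-neighbour
        stand-in for H_db)."""
        sp_mag = np.abs(np.asarray(sp_measured))
        _, ed = min(self._entries.values(),
                    key=lambda pair: np.linalg.norm(np.abs(pair[0]) - sp_mag))
        return ed
```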
  • the relevant data obtained by the experiment is preset in the hearable device, and the relevant data includes: the general SP db path and the ED db transfer function, and the mapping relationship H db between the SP db path and the ED db function domain.
  • the hearable device can be used independently, that is, it does not need to cooperate with the first device to realize its function, such as a cochlear implant, a hearing aid and other types of hearable device products.
  • the wearer of the hearable device can train the ED inv transfer function of the hearable device according to their own hearing condition and the shape of their ear canal, so that the hearable device is more suitable for the hearing condition of the wearer.
  • the relevant data for SP inv modeling and ED inv modeling are collected to generate the SP inv path and ED inv transfer function of the wearer.
  • the hearable device collects the relevant data for SP inv modeling and ED inv modeling to generate a personalized database for that wearer.
  • in the process of collecting the wearer's personalized data, the hearable device cannot obtain the wearer's personalized data in all possible usage scenarios. Therefore, the mapping relationship H db between the SP db path and the ED db function domain in the large database can be used to correct the wearer's personalized data, so that the hearable device can better provide the wearer with a good auditory experience in various scenarios.
  • the wearer's personalized data includes information such as whether the wearer uses earmuffs, the tightness of the wearer wearing the hearable device, and the movement state of the wearer.
  • the hearable device may collect relevant personalized data through voice interaction with the wearer. For example, after the hearable device starts the test, it asks a question for each item of personalized data through voice interaction, and collects the wearer's voice information to determine the wearer's personalized data. For example, the hearable device may ask "Please confirm whether earmuffs are used"; if the collected voice answer is "no earmuffs" or "no", the hearable device can determine that the earphone currently worn does not have eartips, and in this case it will no longer ask the wearer for personalization data such as the size of the earmuffs.
  • the hearable device can be connected to the main control device (or called the first device), and works under the control of the main control device, then the wearer's personalized data can be collected through the main control device.
  • the main control device is an electronic device such as a mobile phone and a computer, and the main control device includes a display screen.
  • the listening device establishes a communication connection with the main control device, and the display screen of the main control device displays an input interface for personalized data, and the input interface can obtain the information input by the wearer.
  • the wearer interacts with the main control device and inputs personalized data through the input interface, so that the hearable device collects the wearer's personalized data and the wearer's personalized database can be established.
  • the listening device is an earphone
  • the first device is a mobile phone
  • the earphone and the mobile phone cooperate to implement the training of the transfer function in the earphone.
  • the process is as follows: the wearer wears the earphone, the mobile phone receives the operation information of the wearer, and the mobile phone is connected with the earphone through Bluetooth.
  • when the wearer wears the earphones, the earmuffs to be used (for in-ear earphones) are determined, the wearing posture is adjusted, and the tightness with which the earphones are worn is adjusted.
  • the mobile phone receives the operation of collecting the wearer's personalized data, and the mobile phone can receive the wearer's input information to collect the relevant personalized data.
  • the mobile phone receives the size of the earphone and earmuff input by the wearer, the wearing posture of the earphone, and the tightness of the earphone wearing.
  • the mobile phone sends a preset test audio signal to the headset, and the headset collects relevant data, and the headset can transmit the collected data to the mobile phone.
  • the mobile phone can use the data transmitted by the headset to create the SP inv path and ED inv transfer function for the wearer.
  • the mobile phone can use its computing power, combined with the wearer's personalized data collected by the mobile phone, to obtain a personalized nonlinear mapping relationship H inv .
  • the mobile phone can also obtain H db from the big data; in this way, the mobile phone can obtain the personalized nonlinear mapping relationship H inv according to the personalized data and H db . The mobile phone can transmit the mapping relationship H inv to the headset, where it is stored as initial (original) data. In this way, when the headset is used by the wearer again, the wearer can be provided with a good active noise reduction or transparent transmission function according to the wearer's personalized data.
  • the listening device is an earphone
  • the general SP db path and ED db transfer function obtained based on big data are preset in the earphone, as well as the mapping relationship H db between the SP db path and the ED db function domain.
  • FIG. 6 is a flowchart of a method for optimizing a function of a hearable device provided by an embodiment of the present application. As shown in FIG. 6 , the method may include steps 601 to 606 .
  • Step 601 the earphone is worn by the wearer, and the active noise reduction and/or transparent transmission of the earphone is turned on.
  • the earphone is worn by the wearer, and the ANC and/or HT function of the earphone is turned on, so that the earphone can provide a good sound playback effect during the working process.
  • the earphone may include buttons, and the buttons on the earphone may be used to trigger the ANC and/or HT function of the earphone.
  • the earphone can acquire noise information in the current environment, and activate the ANC and/or HT function according to the noise information, so that the earphone can provide the wearer with a good listening experience.
  • Step 602 Play a preset audio signal, and collect response information from the wearer's ear canal, where the response information is used to create an SP inv path.
  • the earphone can collect the response information coupled with the ear canal information during the process of transmitting the sound wave of the audio signal in the ear canal.
  • the response information collected by the hearable device may include: the response at the ERP; the response of the speaker; the response on the primary path; the response on the feedback path, and the like.
  • the response information collected by the headset is related to the hardware structure of the headset.
  • the headset includes an FF microphone, an FB microphone, a speaker, and the like. After the earphone plays the preset audio signal, the speaker response can be collected, the FF microphone can collect the response on the feedback path, the FB microphone can collect the response on the primary path, and so on.
  • Step 603 Create an SP inv path of the ear canal according to the response information.
  • the earphone does not include a probe microphone, so the earphone cannot directly obtain the sound pressure signal at the DRP.
  • the earphone can model the SP inv path based on the collected response signal to obtain the real-time SP cur (z).
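Since the earphone has no probe microphone, SP cur (z) has to be identified from the played signal and the FB-microphone response alone. One standard way to do this (an illustrative assumption, not necessarily the approach used in this application) is normalized-LMS system identification:

```python
import numpy as np

def estimate_sp_path(spk, fb_mic, taps=128, mu=0.05, eps=1e-6):
    """NLMS identification of the SP path: adapt an FIR model so that the
    filtered playback signal `spk` matches the FB-microphone signal `fb_mic`.
    Returns the estimated impulse response, i.e. a time-domain SP_cur."""
    w = np.zeros(taps)        # current FIR estimate of the SP path
    x_buf = np.zeros(taps)    # most recent playback samples
    for n in range(len(spk)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = spk[n]
        y_hat = w @ x_buf                                # predicted FB-mic sample
        e = fb_mic[n] - y_hat                            # modelling error
        w += mu * e * x_buf / (x_buf @ x_buf + eps)      # NLMS update
    return w
```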
  • Step 604 Obtain the ear canal modeling ED inv transfer function and the mapping relationship H inv between the SP inv path and the ED inv transfer function based on the obtained personalized data.
  • the earphone when the switch of the earphone is triggered, the earphone is in a working state, and the earphone can collect personalized data of the wearer.
  • the headset can obtain personalized data of the wearer through voice interaction.
  • the headset may include buttons, and the wearer's personalized data is collected based on the wearer's operation of the buttons.
  • the headset can use other input devices (such as a display or a touch keyboard): the headset is connected to the input device and can obtain the wearer's personalized data through the input device, or the input device can obtain the wearer's personalized data and send it to the headset.
  • the embodiments of the present application do not specifically limit the manner in which the earphone collects the wearer's personalized data.
  • the headset wearing status in the personalization data may be determined by the headset.
  • the headset may include a direction sensor and a gyroscope sensor, so that the headset can determine the posture information of the headset according to the data of the direction sensor and the gyroscope sensor, and the headset can determine the tightness of the headset and whether the wearing posture of the headset has changed.
  • the headset may also include an acceleration sensor; according to the data of the acceleration sensor, the headset can determine whether it is being carried by the wearer and whether the wearer is in a motion state. That is to say, the wearer's personalized data may be determined by the earphone according to the data of its own sensors, or may be obtained through interaction between the earphone and the wearer.
  • the earphone collects the wearer's personalized data, and corrects the mapping relationship H inv between the SP inv path and the ED inv function domain according to the personalized data.
  • the preset audio signal played by the earphone is coupled with the wearer's ear canal, and the earphone can obtain the response information of the current wearer coupling the wearer's ear canal information, so that the earphone can model the real-time transfer function ED cur (z).
  • the earphone plays a preset audio signal
  • the earphone obtains the personalized data of the earphone wearer
  • the earphone can generate the real-time SP cur (z) path and ED cur (z) transfer function for the wearer.
  • the real-time ED cur (z) transfer function can reflect the relationship between the sound pressure signals at the ERP and the DRP. Since the FB microphone in the headset can collect the audio signal at the ERP, the headset can determine the sound pressure signal at the ERP of the ear canal according to that audio signal.
  • the earphone plays an audio signal, and the earphone can adjust the sound pressure signal at the DRP according to the ED cur (z) transfer function and the sound pressure signal at the ERP.
  • the purpose of optimizing the ANC and/or HT function of the headset is achieved, and the sound effect of the audio signal played by the headset is improved, so as to provide the wearer with a good listening experience.
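The bullets above say that the earphone adjusts the sound pressure at the DRP from the ED cur (z) model and the ERP measurement. A minimal sketch of that idea, assuming ED cur is available as an FIR impulse response (the broadband gain strategy and all names are illustrative assumptions):

```python
import numpy as np
from scipy.signal import lfilter

def estimate_drp_from_erp(erp_signal, ed_cur_ir):
    """Predict the eardrum (DRP) sound pressure by filtering the ERP signal
    (picked up by the FB microphone) with the current ear-canal model ED_cur,
    given here as an FIR impulse response."""
    return lfilter(ed_cur_ir, [1.0], erp_signal)

def broadband_gain_correction(target_drp, predicted_drp, eps=1e-9):
    """Single broadband gain that would bring the predicted DRP level to the
    target level; a per-band version could be used for finer control."""
    return np.sqrt((np.mean(np.square(target_drp)) + eps) /
                   (np.mean(np.square(predicted_drp)) + eps))
```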
  • Step 605 Play the audio information, collect the response information of the wearer's ear canal, and update the SP cur path and the ED cur transfer function in real time.
  • the headphones can collect the sound pressure signal at the ERP in real time, and adjust the audio signal played by the headphones in real time according to the ED cur (z) transfer function to provide a good noise reduction or transparent transmission effect.
  • Step 606 Adjust the audio information played by the earphone based on the SP cur path and the ED cur transfer function updated in real time, so that the earphone realizes real-time active noise reduction and/or real-time transparent transmission.
  • the headset can also collect personalized data in real time in the process of use to update the SP cur path and the ED cur transfer function.
  • personalized data can be determined by the headset according to its own sensor data, such as the tightness of the headset and whether the wearer is in motion. For example, if the headset is worn by the wearer and the wearer is in a state of motion (such as walking or running), then, as the wearer's pace changes, the acceleration sensor in the headset can detect the motion state of the headset, and the headset can determine the wearer's state in real time according to the acceleration sensor data.
  • the movement of the wearer may affect the tightness of the earphone, and the sensor in the earphone can detect the tightness of the earphone.
  • the headset can detect the tightness of the headset in real time.
  • the earphone can determine whether the tightness of the earphone has changed greatly by collecting the sound pressure signal at the ERP. If the tightness of the earphone is changed, the earphone can adjust the ED cur transfer function in real time according to the change of the tightness of the earphone (ie, the change of the personalized data), so that the earphone can optimize the functions such as ANC and/or HT in real time.
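One simple way to detect the "large change" of tightness mentioned above is to compare successive SP-path estimates and re-derive ED cur only when they differ by more than a threshold; the 3 dB threshold and the helper name below are arbitrary illustrative assumptions.

```python
import numpy as np

def fit_changed(sp_prev, sp_new, threshold_db=3.0):
    """Return True when the average magnitude of the SP-path estimate has
    shifted by more than `threshold_db`, taken here as a sign that the
    wearing tightness (and hence the ear-canal coupling) has changed."""
    prev_db = 20 * np.log10(np.abs(np.asarray(sp_prev)) + 1e-12)
    new_db = 20 * np.log10(np.abs(np.asarray(sp_new)) + 1e-12)
    return float(np.mean(np.abs(new_db - prev_db))) > threshold_db

# typical use inside the real-time loop (all names assumed):
#   if fit_changed(sp_prev, sp_cur):
#       ed_cur = map_sp_to_ed(sp_cur)   # re-derive ED_cur from the new SP_cur
```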
  • the method provided by the embodiments of the present application will be described below for the case where the hearable device interacts with the first device and cooperates with the first device so that the hearable device is in a working state.
  • the headset can interact with the first device, and the first device can obtain the general SP db path and ED db transfer function obtained based on big data, and the mapping relationship H db between the SP db path and the ED db function domain.
  • the first device can interact with the earphone wearer to obtain personalized data of the wearer, and the first device creates an SP inv path and an ED inv transfer function for the earphone wearer.
  • the earphone can utilize the computing and data processing capabilities of the mobile phone to create the SP inv path and the ED inv transfer function for the earphone wearer.
  • the headset can communicate with the first device through the communication module.
  • the headset can utilize the computing and data processing capabilities of the remote device to create the SP inv path and the ED inv transfer function for the headset wearer.
  • the listening device is an earphone
  • the first device may be a mobile phone
  • the earphone establishes a connection through a communication method such as Bluetooth or WLAN
  • the earphone can receive an audio signal from the mobile phone and play the audio signal.
  • the headset may include a communication module, so that the headset can establish a communication connection with a mobile phone, a computer, and the like.
  • the headset establishes a communication connection with the mobile phone by means of short-range communication (eg, Bluetooth, WLAN, NB-IoT, etc.). In this way, the headset can interact with the mobile phone, and the mobile phone can collect the wearer's personalized data.
  • the earphone can play the test audio signal, collect the response information coupled with the wearer's ear canal information, and the earphone can send the response information to the mobile phone.
  • the mobile phone can obtain the SP inv path according to the response information, and correct the mapping relationship H inv between the SP inv path and the ED inv function domain according to the personalized data, and obtain the ED inv transfer function.
  • the mobile phone can transmit the SP inv path and ED inv transfer function for the current wearer to the headset, so that the headset can adjust the playback audio signal for the current wearer, optimizing the ANC and/or HT functions of the headset.
  • the SP db path and the ED db transfer function obtained based on the big data, and the mapping relationship H db between the SP db path and the ED db function domain can be preset in the headset. After the headset establishes a communication connection with the mobile phone, the headset sends the SP db path and the ED db transfer function obtained based on the big data, and the mapping relationship H db between the SP db path and the ED db function domain to the mobile phone.
  • the headset sends the download address to the mobile phone, and the mobile phone can access the download address, and download the SP db path and ED db transfer function based on big data, as well as the mapping of the SP db path and the ED db function domain. relation H db .
  • a radio frequency module may be included in the headset, so that the headset can interact with a remote server or cloud device.
  • the SP db path and the ED db transfer function obtained based on the big data, and the mapping relationship H db between the SP db path and the ED db function domain can be set on the remote server or cloud device.
  • the headset can interact with the wearer, collect the wearer's personalized data, and transmit the collected personalized data to the remote server through the radio frequency module.
  • the remote server can create the wearer's SP inv path and ED inv transfer function based on the personalized data of the headset wearer.
  • the remote server can send the obtained SP inv path and ED inv transfer function of the wearer to the headset, so that the headset can adjust the played audio signal for the current wearer during audio playback, providing a good active noise reduction and transparent transmission function.
  • the method provided by the embodiments of the present application is described by taking the listening device as an earphone and the first device as a mobile phone as an example.
  • FIG. 7 is a flowchart of a method for optimizing a function of a hearable device provided by an embodiment of the present application. As shown in FIG. 7 , the method includes steps 701 to 709 .
  • the earphone can create and obtain the SP inv path and the ED inv transfer function for the wearer according to the personalized data.
  • the mobile phone creates the SP inv path and the ED inv transfer function for the wearer.
  • step 702, step 703, and step 708 in the embodiment of the present application are the same as step 601, step 602, and step 605 in the foregoing embodiment.
  • Step 701 Establish a communication connection between the headset and the mobile phone.
  • the communication connection between the headset and the mobile phone is established by using a Bluetooth connection.
  • when the Bluetooth functions of the mobile phone and the headset are both turned on and the headset and the mobile phone are successfully connected via Bluetooth, the headset can exchange data with the mobile phone through Bluetooth.
  • Step 702 The earphone is worn by the wearer, and the active noise reduction and/or transparent transmission function of the earphone is turned on.
  • the earphone is connected to the mobile phone through Bluetooth, and the mobile phone can send control information to the earphone, and the control information is used to control the state of the function provided by the earphone.
  • a Bluetooth connection is established between the headset and the mobile phone, and the mobile phone can display a control interface of the headset, and the control interface includes switch controls for functions in the headset. For example, switch controls for active noise reduction and switch controls for transparent transmission.
  • the mobile phone receives the trigger operation of the active noise reduction switch control by the wearer, and the mobile phone sends the control information to enable the active noise reduction function to the headset.
  • the earphone is worn by the wearer, the earphone includes buttons, and the buttons on the earphone can be used to enable the function of the earphone.
  • the headset includes a button for the active noise reduction function and a button for the transparent transmission function. The button for the active noise reduction function on the headset is triggered, and the headset activates the active noise reduction function.
  • Step 703 The earphone plays the preset audio signal, and collects the response information of the wearer's ear canal, and the response information is used to create the SP inv path.
  • the preset audio signal played by the earphone may be an audio signal pre-stored in the earphone.
  • the audio signal is an audio signal sent by the mobile phone to the headset.
  • the mobile phone can send a preset audio signal to the headset, and the headset plays the preset audio signal.
  • the earphone plays a preset audio signal for the earphone to collect the feedback response information of the ear canal, so that the SP inv path can be created according to the response information.
  • Step 704 The headset transmits the collected response information to the mobile phone.
  • the headset transmits the collected response information to the mobile phone, so that the mobile phone can process the response information, and generate the SP inv path according to the response information.
  • Step 705 The mobile phone receives the response information transmitted by the headset, and collects the wearer's personalized data, and the personalized data is used for ED inv modeling.
  • the mobile phone may display a personalized data collection interface, and the mobile phone may acquire information input by the wearer on the personalized data collection interface, so that the mobile phone may acquire the wearer's personalized information.
  • the personalized data may include whether the earphones worn by the wearer include earmuffs, the size of the earmuffs, the tightness of the earphones, and the like.
  • Step 706 The mobile phone creates the SP inv path of the wearer's ear canal according to the obtained response information, and corrects the mapping relationship H inv between the SP inv path and the ED inv function domain according to the personalized data to obtain the ED inv transfer function.
  • the mobile phone obtains the SP db path and ED db transfer function according to the big data, and the mapping relationship H db between the SP db path and the ED db function domain, and obtains the SP inv path and ED inv transfer function personalized for the wearer.
  • the specific implementation of this step is the same as that of the above-mentioned steps 603 and 604; for details, refer to steps 603 and 604, which are not repeated here.
  • Step 707 The mobile phone transmits the generated ED inv transfer function for the wearer to the earphone, and sends audio data to the earphone.
  • the earphone can adjust the played audio signal according to the ED inv transfer function, so as to satisfy the function of active noise reduction or transparent transmission.
  • Step 708 The earphone adjusts the played audio signal according to the ED inv transfer function, collects the response information of the wearer's ear canal, and sends the response information to the mobile phone.
  • Step 709 The mobile phone uses the response information to update the SP inv path and the ED inv transfer function, and transmits the updated ED transfer function to the headset, so that the headset can perform active noise reduction and/or transparent transmission in real time.
  • the mobile phone obtains the SP inv path and the ED inv transfer function according to the personalized data and response information.
  • FIG. 8 is a schematic diagram of the SP inv path curve and the ED inv transfer function curve obtained by the mobile phone according to real-time data in the process of using the headset.
  • the SP curve shown in FIG. 8 is the power gain on the SP path collected by the headset in different frequency bands of the audio file currently being played by the headset.
  • the ED curve is the change in power gain between the sound pressure at the ERP and that at the DRP while the headset is playing the audio file.
  • the modeling switch of the ED inv transfer function can be controlled by the wearer during use of the earphone, the wearer can trigger the modeling switch, and the earphone obtains the wearer's personalized data to create the ED inv transfer function.
  • the wearer can also not trigger the modeling switch, and the headset will not obtain the wearer's personalized data, nor will the ED inv transfer function be created.
  • FIG. 9A is a schematic diagram of an application scenario in which a mobile phone and a headset are used together.
  • 801 denotes a human ear
  • 802 denotes an earphone
  • 803 denotes a mobile phone.
  • the earphone 802 is worn on the human ear 801, and the mobile phone 803 can be connected to the earphone 802 through Bluetooth.
  • the display interface of the mobile phone 803 shows that the ANC function is on, and the switch for creating the ED transfer function is off.
  • FIG. 9B is a schematic diagram of an ANC algorithm architecture set in a mobile phone or a headset.
  • the headset can collect the response signal of the reference microphone, the response signal at the ERP, and DL represents the ED transfer function and SP path obtained based on big data.
  • Ref represents the response information collected by the feedforward microphone
  • W ff (Z) represents the response information on the feedback path obtained according to the response information collected by Ref.
  • W fb (Z) represents the response information on the primary path collected by the earphone.
  • SPK represents the audio signal played by the speaker of the headset.
  • ED(z) is set to 1 (the personalized ED transfer function is not created in this case).
  • the feedforward microphone collects the response information Ref to obtain W ff (Z).
  • the earphone can collect the sound pressure signal at ERP and transmit it to the calculator.
  • SP(z) transmits the ED transfer function and SP path obtained by DL to the calculator to obtain the SP inv path and ED inv transfer function of the current wearer.
  • the earphone adjusts the audio signal of the speaker in the earphone according to the obtained W ff (Z), W fb (Z) and DL.
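FIG. 9B itself is not reproduced here, so the exact signal flow is not visible. As a hedged sketch of a generic hybrid ANC combination of the two filters named above (W ff applied to the FF-microphone reference and W fb applied to the ERP residual), the loudspeaker drive could be formed as follows; this is a textbook-style arrangement, not a statement of the architecture actually used in the application.

```python
import numpy as np
from scipy.signal import lfilter

def anc_speaker_signal(ref, erp_residual, w_ff, w_fb):
    """Hybrid ANC combination: a feed-forward branch (filter w_ff applied to
    the FF-microphone reference `ref`) plus a feedback branch (filter w_fb
    applied to the residual picked up at the ERP), with the sign chosen so
    that the loudspeaker output opposes the incoming noise."""
    ff_branch = lfilter(w_ff, [1.0], ref)
    fb_branch = lfilter(w_fb, [1.0], erp_residual)
    return -(ff_branch + fb_branch)
```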
  • if the ED transfer function for the wearer is not considered and noise reduction is only achieved at the entrance of the ear canal, then the noise reduction depth at the ERP is better than the noise reduction depth at the DRP.
  • FIG. 10A is a schematic diagram of an application scenario in which a mobile phone and a headset are used together.
  • 801 denotes a human ear
  • 802 denotes an earphone
  • 803 denotes a mobile phone.
  • the earphone 802 is worn on the human ear 801, and the mobile phone 803 can be connected to the earphone 802 through Bluetooth.
  • the display interface of the mobile phone 803 shows that the ANC function is turned on, and the switch for creating the ED transfer function is turned on.
  • FIG. 10B is a schematic diagram of an ANC algorithm architecture set in a mobile phone or a headset.
  • the headset can collect the response signal of the reference microphone, the response signal at the ERP, and DL represents the ED transfer function and SP path obtained based on big data.
  • W ff (Z) represents the response information on the feedback path collected by the earphone
  • W fb (Z) represents the response information on the primary path collected by the earphone.
  • the headset plays an audio signal
  • the SP(z) detection module and the ED(z) estimation module work synchronously to collect relevant parameters for creating the SP path, and update the relevant parameters of the SP path in real time.
  • the SP(z) detection module is used to detect the related data of the SP path
  • the SP(z) update module is used to update the related data of the SP path.
  • SP(z) can update the SP path in real time according to the data of the SP(z) update module.
  • SP(z) can get the real-time SP cur (z) and transmit SP cur (z) to the operator.
  • the calculator can also obtain the response information of the ERP to obtain parameters for generating the ED transfer function.
  • the ED(z) estimation module is used to estimate the current sound pressure signal at the DRP, and the ED(z) update module estimates the ED db transfer function according to the sound pressure signal at the DRP.
  • the ED(z) module generates a real-time ED transfer function based on the ED(z) update module and data from the operator.
  • the ED(z) update module is used to update the parameters in the ED cur (z) transfer function to obtain W fb (Z).
  • the earphone can adjust the audio signal of the speaker in the earphone according to W ff (Z), W fb (Z) and DL.
  • the ED transfer function for the wearer is considered to achieve noise reduction at the DRP. Therefore, the noise reduction degree at the DRP is better than that at the ERP.
  • FIG. 11 is a schematic diagram of the comparison between the noise reduction effect of creating the ED inv transfer function and the noise reduction effect of closing the creation of the ED inv transfer function in the process of active noise reduction.
  • the value of the noise reduction depth corresponding to the ANC curve after the ED is turned on is smaller, and the noise reduction effect after the ED is turned on is better. Therefore, compared with noise reduction at the ERP, noise reduction at the DRP using the ED transfer function is more effective.
  • compared with active noise reduction at the ERP, the method provided by the embodiment of the present application has a greater noise reduction depth and a wider bandwidth, and achieves a better noise reduction effect when the ED transfer function is activated.
  • modeling through the ED transfer function can bring a better transparent transmission effect.
  • FIG. 12A is a schematic diagram of an application scenario in which a mobile phone and a headset are used together.
  • 801 denotes a human ear
  • 802 denotes an earphone
  • 803 denotes a mobile phone.
  • the earphone 802 is worn on the human ear 801, and the mobile phone 803 can be connected to the earphone 802 through Bluetooth.
  • the display interface of the mobile phone 803 shows that the HT function is on, and the switch for creating the ED transfer function is off.
  • FIG. 12B is a schematic diagram of an ANC algorithm architecture set in a mobile phone or a headset.
  • the earphone can collect the response signal of the reference microphone, the response signal at the ERP, and DL represents the ED db transfer function and SP db path obtained based on big data.
  • W ff (Z) represents the response information on the feedback path collected by the earphone
  • W fb (Z) represents the response information on the primary path collected by the earphone.
  • the feedforward microphone collects the response information Ref, and the response information W ff (Z) of the earphone is obtained from Ref.
  • the earphone can collect the sound pressure signal at ERP and transmit it to the calculator.
  • SP(z) transmits the ED transfer function and SP path obtained by DL to the calculator to obtain the SP inv path and ED inv transfer function of the current wearer.
  • the earphone can obtain the response information W fb (Z) on the primary path according to the SP inv path and the ED inv transfer function.
  • the earphone adjusts the audio signal of the speaker in the earphone according to the obtained W ff (Z), W fb (Z) and DL, so that the earphone realizes the transparent transmission function.
  • the ED transfer function for the wearer is not considered, and only the transparent transmission effect at the ERP can be guaranteed, so it can be determined that the transparent transmission bandwidth at the ERP is better than that at the DRP.
  • FIG. 13A is a schematic diagram of an application scenario in which a mobile phone and a headset are used together.
  • 801 denotes a human ear
  • 802 denotes an earphone
  • 803 denotes a mobile phone.
  • the earphone 802 is worn on the human ear 801, and the mobile phone 803 can be connected to the earphone 802 through Bluetooth.
  • the display interface of the mobile phone 803 shows that the HT function is in an on state, and the switch for creating the ED transfer function is in an on state.
  • FIG. 13B is a schematic diagram of an ANC algorithm architecture set in a mobile phone or a headset.
  • the earphone can collect the response signal of the reference microphone, the response signal at the ERP, and DL represents the ED db transfer function and SP db path obtained based on big data.
  • W ff (Z) represents the response information on the feedback path collected by the earphone
  • W fb (Z) represents the response information on the primary path collected by the earphone.
  • the earphone plays a preset audio signal, and the HT function of the earphone is turned on.
  • the earphone can collect the sound pressure signal at ERP and transmit it to the calculator.
  • SP(z) transmits the ED transfer function and SP path obtained by DL to the calculator to obtain the SP inv path and ED inv transfer function of the current wearer.
  • the earphone can obtain the response information W fb (Z) on the primary path according to the SP inv path and the ED inv transfer function.
  • the earphone obtains Ref through feed-forward microphone acquisition, and the ED(z) module obtains ED(z) according to the Ref collected in real time and the relevant parameters for creating the ED function.
  • the earphone adjusts the audio signal of the speaker in the earphone according to the obtained W ff (Z), W fb (Z) and DL.
  • the SP(z) detection module is in a working state, and the response data is collected to create the SP path.
  • the SP(z) detection module can update SP cur (z) in real time according to the currently collected data.
  • the ED(z) estimation module obtains an estimate of ED cur (z) according to the personalized nonlinear mapping function H inv obtained by offline training and SP cur (z), and then the ED(z) update module updates the ED cur (z) parameters in the system.
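The mapping H inv is only characterised functionally in the text (it turns the current SP cur (z) into an estimate of ED cur (z)). One concrete, deliberately simple realisation, given purely as an assumption for illustration, is a nearest-neighbour lookup over the wearer's personalised database entries:

```python
import numpy as np

def ed_from_sp(sp_cur, personalized_entries):
    """personalized_entries: list of (sp_response, ed_response) pairs already
    restricted to the wearer's earmuff size / wearing posture. Returns the ED
    response of the entry whose SP response is closest to the current SP_cur
    estimate -- one possible concrete form of the mapping H_inv."""
    sp_mag = np.abs(np.asarray(sp_cur))
    _, ed_cur = min(personalized_entries,
                    key=lambda pair: np.linalg.norm(np.abs(np.asarray(pair[0])) - sp_mag))
    return ed_cur
```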
  • in the process of transparently transmitting sound, the earphone can restore the audio signal at the DRP, which improves the transparent transmission effect at the DRP. Therefore, the transparent transmission bandwidth at the DRP point is wider than the transparent transmission bandwidth at the ERP point.
  • FIG. 14 is a schematic diagram of the change of the real-time transparent transmission value when the headset enables the transparent transmission function and the ED transfer function creation module is turned on and off.
  • in FIG. 14 , taking the waveform of the audio signal heard by the human ear when no earphone is worn as the reference standard, the sound signal heard by the human ear when the ED transfer function module is turned on is closer to the audio signal received by the human ear when no earphone is worn. The closer the sound pressure signal at the DRP is to the sound pressure when no headphones are worn, the better the transparent transmission effect.
  • when the ED transfer function creation module is turned on, the sound pressure transparently transmitted by the earphone is closer to the sound pressure signal at the human ear DRP when the earphone is not worn. Therefore, when the ED transfer function creation module is enabled, the transparent transmission effect of the headset is better, the bandwidth of the transparent transmission is wider, and the transparent transmission bandwidth at the DRP point is wider than that at the ERP point.
  • the listenable device is an earphone
  • the first device is a cloud device as an example to describe the method provided by the embodiments of the present application. As shown in FIG. 15 , the method includes steps 901 to 909 .
  • Step 901 The headset establishes a connection with the cloud device.
  • the headset includes a radio frequency module, and the headset can establish a communication connection with the cloud device through the radio frequency module, so as to achieve the purpose of data transmission between the headset and the cloud device.
  • the cloud device is provided with big data to obtain the SP db path and the ED db transfer function, and the mapping relationship H db between the SP db and the ED db function domain.
  • the headset establishes communication with the cloud device, and the message sent by the headset to the cloud device includes the identifier of the headset, so that the cloud device can perform modeling for the headset to obtain the SP inv path and ED inv transfer function of the wearer of the headset.
  • Step 902 The earphone is worn by the wearer, and the active noise reduction and/or transparent transmission of the earphone is turned on.
  • Step 903 The earphone plays a preset audio signal, and collects the response information of the wearer's ear canal, and the response information is used to create the SP path.
  • Step 904 The headset transmits the collected response information to the cloud device, the headset acquires the wearer's personalized data, and sends the personalized data to the cloud device.
  • Step 905 The cloud device receives the response information and personalized data transmitted by the headset, and the personalized data is used for ED inv modeling.
  • Step 906 The cloud device creates the SP inv path of the wearer's ear canal according to the obtained response information, and corrects the mapping relationship H inv between the SP inv path and the ED inv function domain according to the personalized data to obtain the ED inv transfer function.
  • Step 907 The cloud device transmits the generated ED inv transfer function for the wearer to the headset.
  • Step 908 The headset adjusts the played audio signal according to the ED inv transfer function, collects the response information of the wearer's ear canal, and transmits the response information to the cloud device.
  • Step 909 The cloud device uses the response information to update the SP inv path and the ED inv transfer function, and transmits the updated ED inv transfer function to the headset, so that the headset can implement real-time noise reduction and/or real-time transparent transmission.
  • the hearable device is an earphone.
  • when the hearable device is another type of device, the above method can also be used; details are not repeated here.
  • the above-mentioned hearable device includes corresponding hardware structures and/or software modules for executing each function.
  • the embodiments of the present application can be implemented in hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods for each specific application to implement the described functions, but such implementations should not be considered beyond the scope of the embodiments of the present application.
  • the electronic device may be divided into functional modules according to the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. It should be noted that, the division of modules in the embodiments of the present application is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • Embodiments of the present application further provide an electronic device, including: one or more processors and one or more memories.
  • the one or more memories are coupled to the one or more processors and are used to store computer program code, the computer program code including computer instructions which, when executed by the one or more processors, cause the electronic device to execute the above related method steps, so as to implement the method for optimizing the function of the hearable device in the above embodiments.
  • Embodiments of the present application further provide a chip system, where the chip system includes at least one processor and at least one interface circuit.
  • the processor and interface circuits may be interconnected by wires.
  • an interface circuit may be used to receive signals from other devices, such as the memory of an electronic device.
  • an interface circuit may be used to send signals to other devices, such as a processor.
  • the interface circuit may read the instructions stored in the memory and send the instructions to the processor. When executed by the processor, the instructions can cause the electronic device to perform each step in the above-described embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
  • Embodiments of the present application further provide a computer storage medium, where the computer storage medium includes computer instructions, when the computer instructions are executed on the above-mentioned electronic device, the electronic device is made to perform various functions or steps performed by the mobile phone in the above-mentioned method embodiments .
  • Embodiments of the present application further provide a computer program product, which, when the computer program product runs on a computer, enables the computer to perform various functions or steps performed by the mobile phone in the above method embodiments.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place, or may be distributed to multiple different places . Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • Each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
  • The above-mentioned integrated units may be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a readable storage medium.
  • The technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes: a USB flash drive (U disk), a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disc, or other media that can store program code.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Headphones And Earphones (AREA)

Abstract

Embodiments of the present invention relate to a method for optimizing the functions of a hearable device, and to a hearable device, in the field of acoustic technology. With this method, when the hearable device is in a working state, the effect of the active noise cancellation function or of the pass-through function of the hearable device can be improved, thereby providing a better experience for the wearer of the hearable device. The method comprises the following steps: when an audio signal is played by the hearable device, acquiring response information; sending the response information and audio information to a first device, the first device generating an SP according to the response information and the audio information; the first device generating an ED transfer function according to the SP and acquired personalized data, and sending the ED transfer function to the hearable device; and the hearable device adjusting the played audio signal according to the ED transfer function. In this way, the noise reduction depth of the active noise cancellation of the hearable device and/or the transmitted sound pressure signal are adjusted, so as to achieve the goal of optimizing the functions of the hearable device.
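The abstract does not expand the abbreviations SP and ED, nor does it fix any implementation, so the following is only a minimal sketch of the described flow under stated assumptions: the "response information" is taken to be the signal captured by an in-ear microphone while the hearable plays a known audio signal, "SP" is read as a secondary-path transfer function estimated on the first device from the played signal and that response, and the "ED transfer function" is read as a corrective transfer function derived from the SP and per-wearer (personalized) data, which the hearable then applies to the played audio. All function names, parameters, and the H1 spectral estimator below are illustrative choices, not taken from this publication.

```python
import numpy as np
from scipy import signal

FS = 48_000        # assumed sample rate of the hearable
NPERSEG = 1024     # assumed Welch segment length

def estimate_sp(played: np.ndarray, response: np.ndarray):
    """First device: H1 estimate of the secondary path, SP(f) = Sxy(f) / Sxx(f)."""
    _, sxx = signal.welch(played, fs=FS, nperseg=NPERSEG)          # auto-spectrum of played audio
    f, sxy = signal.csd(played, response, fs=FS, nperseg=NPERSEG)  # cross-spectrum played -> in-ear response
    return f, sxy / np.maximum(sxx, 1e-12)

def derive_ed(sp: np.ndarray, personalized_target: np.ndarray, reg: float = 1e-3):
    """First device: combine SP with per-wearer target data into ED(f).
    A regularized inversion of SP drives the in-ear response toward the wearer's target."""
    return personalized_target * np.conj(sp) / (np.abs(sp) ** 2 + reg)

def apply_ed(audio: np.ndarray, ed: np.ndarray) -> np.ndarray:
    """Hearable: turn ED(f), sampled on the rfft bins 0..FS/2, into an FIR filter
    and adjust the played audio signal with it."""
    fir = np.fft.irfft(ed, n=NPERSEG)
    return signal.lfilter(fir, [1.0], audio)

# Usage with stand-in data: 'played' is the known audio signal, 'response' the
# in-ear capture, and the all-ones target means "flat response at the wearer's ear".
played = np.random.default_rng(0).standard_normal(FS)
response = signal.lfilter([0.6, 0.3], [1.0], played)   # stand-in acoustic path
_, sp = estimate_sp(played, response)
ed = derive_ed(sp, personalized_target=np.ones_like(sp))
adjusted = apply_ed(played, ed)
```

In this reading, the first device (for example, a phone paired with the hearable) carries the heavier spectral estimation, while the hearable only applies the returned correction filter, which mirrors the split of work between the first device and the hearable described in the abstract.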
PCT/CN2021/134629 2020-12-10 2021-11-30 Procédé d'optimisation des fonctions d'aide auditive et aides auditives WO2022121743A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011435355.5A CN114630223B (zh) 2020-12-10 2020-12-10 Method for optimizing the function of a hearable device, and hearable device
CN202011435355.5 2020-12-10

Publications (1)

Publication Number Publication Date
WO2022121743A1 true WO2022121743A1 (fr) 2022-06-16

Family

ID=81896232

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134629 WO2022121743A1 (fr) 2020-12-10 2021-11-30 Procédé d'optimisation des fonctions d'aide auditive et aides auditives

Country Status (2)

Country Link
CN (1) CN114630223B (fr)
WO (1) WO2022121743A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130208909A1 (en) * 2010-09-14 2013-08-15 Phonak Ag Dynamic hearing protection method and device
CN107426660A (zh) * 2016-04-08 2017-12-01 Oticon A/S Hearing aid comprising a directional microphone system
CN109996165A (zh) * 2017-12-29 2019-07-09 Oticon A/S Hearing device comprising a microphone adapted to be located at or in the ear canal of a user
US20200252734A1 (en) * 2017-01-31 2020-08-06 Widex A/S Method of operating a hearing aid system and a hearing aid system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE502006004146D1 (de) * 2006-12-01 2009-08-13 Siemens Audiologische Technik Hearing aid with interference noise suppression and corresponding method
US9648410B1 (en) * 2014-03-12 2017-05-09 Cirrus Logic, Inc. Control of audio output of headphone earbuds based on the environment around the headphone earbuds
US10511915B2 (en) * 2018-02-08 2019-12-17 Facebook Technologies, Llc Listening device for mitigating variations between environmental sounds and internal sounds caused by the listening device blocking an ear canal of a user
EP3660835B1 (fr) * 2018-11-29 2024-04-24 AMS Sensors UK Limited Procédé de réglage d'un système audio activé à annulation de bruit et système audio activé à annulation de bruit
CN111935589B (zh) * 2020-09-28 2021-02-12 Shenzhen Goodix Technology Co., Ltd. Active noise reduction method and apparatus, electronic device, and chip

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130208909A1 (en) * 2010-09-14 2013-08-15 Phonak Ag Dynamic hearing protection method and device
CN107426660A (zh) * 2016-04-08 2017-12-01 Oticon A/S Hearing aid comprising a directional microphone system
US20200252734A1 (en) * 2017-01-31 2020-08-06 Widex A/S Method of operating a hearing aid system and a hearing aid system
CN109996165A (zh) * 2017-12-29 2019-07-09 Oticon A/S Hearing device comprising a microphone adapted to be located at or in the ear canal of a user

Also Published As

Publication number Publication date
CN114630223A (zh) 2022-06-14
CN114630223B (zh) 2023-04-28

Similar Documents

Publication Publication Date Title
CN107690119B (zh) Binaural hearing system configured to localize a sound source
CN111095944B (zh) Earbud-type earphone device, ear-cup-type headphone device, and method
EP3016410B1 (fr) Measurement device and measurement system
CN103988485A (zh) Measurement device, measurement system, and measurement method
CN114727212A (zh) Audio processing method and electronic device
Shimokura et al. Simulating cartilage conduction sound to estimate the sound pressure level in the external auditory canal
US11991499B2 (en) Hearing aid system comprising a database of acoustic transfer functions
US20240078991A1 (en) Acoustic devices and methods for determining transfer functions thereof
US10092223B2 (en) Measurement system
WO2022121743A1 (fr) Procédé d'optimisation des fonctions d'aide auditive et aides auditives
CN116033312B (zh) Earphone control method and earphone
CN115086851A (zh) Method and apparatus for measuring the bone-conduction transfer function of the human ear, terminal device, and medium
CN207518802U (zh) Neck-worn voice interaction earphone
CN218772357U (zh) Earphone
CN116744169B (zh) Earphone device, sound signal processing method, and wearing-fit test method
WO2023093412A1 (fr) Active noise cancellation method and electronic device
CN109729471A (zh) ANC noise reduction apparatus for a neck-worn voice interaction earphone
WO2023160286A1 (fr) Noise reduction parameter adaptation method and apparatus
US20230054213A1 (en) Hearing system comprising a database of acoustic transfer functions
US20240114296A1 (en) Hearing aid comprising a speaker unit
CN111213390B (zh) Sound transducer
JP6234081B2 (ja) Measurement device
CN115942173A (zh) Method for determining an HRTF, and hearing device

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21902450

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 21902450

Country of ref document: EP

Kind code of ref document: A1