CN112164381A - Voice wearable device and audio data processing method thereof

Voice wearable device and audio data processing method thereof

Info

Publication number: CN112164381A
Authority: CN (China)
Prior art keywords: audio data, module, audio, voice, wearing
Legal status (assumed, not a legal conclusion): Pending
Application number: CN202010908250.0A
Other languages: Chinese (zh)
Inventors: 柳江, 彭轩
Current Assignee: Shenzhen Miaoyan Technology Co ltd
Original Assignee: Shenzhen Miaoyan Technology Co ltd
Application filed by Shenzhen Miaoyan Technology Co ltd
Priority: CN202010908250.0A

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K: SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00: Methods or devices for transmitting, conducting or directing sound in general; methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16: Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/18: Methods or devices for transmitting, conducting or directing sound
    • G10K11/26: Sound-focusing or directing, e.g. scanning
    • G10K11/34: Sound-focusing or directing, e.g. scanning using electrical steering of transducer arrays, e.g. beam steering
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03: characterised by the type of extracted parameters
    • G10L25/18: the extracted parameters being spectral information of each sub-band
    • G10L25/27: characterised by the analysis technique
    • G10L25/48: specially adapted for particular use
    • G10L25/51: specially adapted for particular use for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention provides a voice wearable device and an audio data processing method thereof. The audio data processing method comprises the following steps: step 101: acquiring first audio data from an audio providing device; step 102: synchronously with step 101, acquiring wearing state information of the voice wearable device; step 103: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data; step 104: outputting the second audio data. The voice wearable device processes audio data with the wearer's current wearing state information as reference and outputs an audio signal matched to that state; it can appropriately start active noise reduction according to the ambient sound intensity and the wearing coupling state, improving audio clarity and wearing comfort.

Description

Voice wearable device and audio data processing method thereof
Technical Field
The invention belongs to the technical field of smart wearables, and particularly relates to a voice wearable device and an audio data processing method thereof.
Background Art
With the development and maturation of smart devices and wearables, more and more wearable devices carry an audio output function: wireless earphones, wired earphones, hearing-assistance earphones of head-mounted, rear-mounted, ear-mounted, earplug and bone-conduction types, VR helmet devices, smart glasses, and other wearables with audio functions, all with largely fixed audio output characteristics. Because their mechanical form is fixed, these devices offer limited adjustability, making it difficult for one or a few device models to fit most users' head and/or ear shapes. In some cases the device couples loosely to the head/ear: the wearer feels the fit is loose, the audio output channel is open or semi-open to the auditory system, and the listener perceives a changed audio frequency response or intruding ambient noise. In other cases the device couples too tightly: the wearer feels excessive pressure on the head and/or ear, and the listener perceives a changed frequency response or a markedly increased output volume. Whether the coupling is too loose or too tight, the user cannot truly hear the original audio, and the experience falls short of expectations. There is therefore a need for a technology that adapts devices to a variety of users, improves the user experience, and lets more people hear the original audio faithfully.
From the above analysis, the prior art has the following problems:
First: the coupling is sometimes not reasonable, so the audio frequency response and volume output by the voice wearable device change, degrading the user experience.
Second: the fit of the voice wearable device cannot adapt to the variety of users' head shapes and/or ear shapes.
Third: ambient noise reduction is not personalized and adapted to each wearer's actual wearing coupling condition, so the audio experience is poor.
Disclosure of Invention
The invention aims to provide a voice wearable device and an audio data processing method thereof that improve audio clarity and wearing comfort according to the ambient sound intensity and/or the wearing coupling state.
The invention provides a voice wearing device which is worn on the head of a wearer and comprises an audio processing system and a sensor module for monitoring the coupling state of the voice wearing device and the head shape and/or the ear shape of the wearer, wherein the audio processing system comprises a wearing state acquiring module for acquiring wearing state information of the voice wearing device according to data of the coupling state monitored by the sensor module, a first audio acquiring module for acquiring first audio data in an audio providing device, a first processing module for acquiring second audio data by adjusting the first audio data and the volume according to the wearing state information, and a first output module for outputting the second audio data; the audio providing device is a built-in unit of the voice wearing device or an external terminal device independent of the voice wearing device.
Furthermore, the audio processing system further comprises a second audio acquisition module for acquiring third audio data of the surrounding environment of the voice wearable device, a second processing module for adjusting the third audio data and acquiring fourth audio data according to the wearing state information detected by the sensor module, a third processing module for synthesizing the second audio data and the fourth audio data and acquiring fifth audio data, and a second output module for outputting the fifth audio data.
Further, the audio processing system further comprises an energy monitoring module for monitoring the volume output of the audio data and a protection module for compressing and/or expanding the audio data.
Further, the first processing module includes an analog/digital conversion module, an audio framing module, a time domain to frequency domain conversion module, a frequency domain signal processing module, a frequency domain to time domain conversion module, an audio recombination module, a digital/analog conversion module, a noise reduction network module, a feature signal detection module, and a weighting parameter extraction module.
Further, the sensor module includes a posture detection sensor and a wearing state sensor; the wearing state sensor includes a coupling degree detection sensor, which is located on the inner side of the voice wearable device and arranged at the position with the largest coupling contact area with the head shape and/or ear shape.
Further, the voice wearing device further comprises a voice module, a storage module, an operation module and a communication module.
The invention also provides an audio data processing method of the voice wearing device, which comprises the following steps:
step 101: acquiring first audio data from an audio providing device;
step 102: synchronously with step 101, acquiring wearing state information of the voice wearable device;
step 103: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 104: outputting the second audio data.
The invention also provides an audio data processing method of the voice wearing device, which comprises the following steps:
step 201: acquiring first audio data from an audio providing device;
step 202: synchronously with step 201, acquiring wearing state information of the voice wearable device;
step 203: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 204: outputting the second audio data;
step 205: synchronously with step 201, acquiring third audio data of the environment around the voice wearable device;
step 206: synchronously with step 203, adjusting the third audio data and the volume according to the wearing state information to obtain fourth audio data;
step 207: synthesizing the second audio data and the fourth audio data to obtain fifth audio data;
step 208: outputting the fifth audio data.
The invention also provides an audio data processing method of the voice wearing device, which comprises the following steps:
step 301: acquiring first audio data from an audio providing device;
step 302: synchronously with step 301, acquiring wearing state information of the voice wearable device;
step 303: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 304: monitoring the volume output energy of the second audio data and judging whether it exceeds a preset hearing threshold range;
step 305: compressing/expanding the second audio data;
step 306: outputting the second audio data or the compressed/expanded second audio data.
The invention also provides an audio data processing method of the voice wearing device, which comprises the following steps:
step 401: acquiring first audio data from an audio providing device;
step 402: synchronously with step 401, acquiring wearing state information of the voice wearable device;
step 403: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 404: outputting the second audio data;
step 405: synchronously with step 401, acquiring third audio data of the environment around the voice wearable device;
step 406: synchronously with step 403, adjusting the third audio data and the volume according to the wearing state information to obtain fourth audio data;
step 407: synthesizing the second audio data and the fourth audio data to obtain fifth audio data;
step 408: monitoring the volume output energy of the fifth audio data and judging whether it exceeds a preset hearing threshold range;
step 409: compressing/expanding the fifth audio data;
step 410: outputting the fifth audio data or the compressed/expanded fifth audio data.
In the voice wearable device and audio data processing method of the invention, the device processes audio data with the wearer's current wearing state information as reference and outputs an audio signal matched to that state. When the wearing coupling is tight, the device can turn down the low-frequency components of the audio and moderately reduce the output volume; when the wearing coupling is loose, it can correspondingly raise the low-frequency components and moderately increase the output volume, thereby automatically adjusting the audio frequency response and output volume according to the wearing coupling state. It can also appropriately start active noise reduction according to the ambient sound intensity and the wearing coupling state, improving audio clarity and wearing comfort.
Drawings
The present invention will be further described in the following detailed description of preferred embodiments, which is to be read in connection with the accompanying drawings.
FIG. 1 is a schematic diagram of a speech wearing system according to an embodiment of the present invention;
fig. 2 is a functional module schematic diagram of a voice wearing device of the voice wearing system shown in fig. 1;
fig. 3 is a schematic view of sub-functional modules of the first processing module of the speech-worn device shown in fig. 2 in an embodiment;
fig. 4 is a schematic diagram of sub-functional modules of a frequency domain signal processing module of the first processing module of the speech-worn device shown in fig. 3;
fig. 5 is a flowchart illustrating an audio data processing method of an embodiment of a voice-worn device of the voice-worn system shown in fig. 1;
FIG. 6 is a flowchart illustrating an audio data processing method of another embodiment of the speech-worn device shown in FIG. 1;
FIG. 7 is a flowchart illustrating an audio data processing method of another embodiment of the speech-worn device shown in FIG. 1;
FIG. 8 is a flowchart illustrating an audio data processing method of another embodiment of the speech-worn device shown in FIG. 1;
fig. 9(a) is a plan view of a wheatstone bridge type embodiment of the wearing state detection sensor of the voice worn device;
fig. 9(b) is a side view of one wheatstone bridge type embodiment of the wearing state detection sensor of the voice worn device;
fig. 9(c) is a side view of a stressed condition of one wheatstone bridge type embodiment of the wearing state detection sensor of the voice worn device;
FIG. 10 is an equivalent circuit diagram of the Wheatstone bridge type piezoresistive coupling degree detecting sensor embodiment shown in FIGS. 9(a) to 9(c) in a stressed state;
fig. 11(a) is a main circuit diagram of a speech-worn device based on a wheatstone bridge type piezoresistive coupling degree detection sensor;
fig. 11(b) is a circuit diagram of a piezoresistive coupling degree detection sensor in the embodiment of the speech-worn device.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention and do not represent the actual structure of a product. In addition, to keep the drawings concise and understandable, components having the same structure or function are in some drawings only schematically illustrated or only partially labeled. In this document, "one" covers not only "exactly one" but also "more than one".
The technical solution of the present invention is described in detail with specific examples below.
The invention discloses a voice wearing system serving as wearing equipment of an Internet of things terminal, and belongs to the field of intelligent information equipment.
As shown in fig. 1, the voice wearing system includes a voice wearing device 22 worn on the head of a wearer 20, a terminal device 21, and a communication link 23 connecting the voice wearing device 22 and the terminal device 21. Wherein the communication link 23 includes, but is not limited to, a wired link and a wireless link. The voice wearing device 22 is worn around the head and the ears of the wearer 20, the form of the voice wearing device 22 includes, but is not limited to, a head-wearing type, a rear-hanging type, an ear-plug type, a bone-conduction type wireless earphone, a wired earphone, a hearing-assisted hearing earphone, a VR helmet device, smart glasses, a wearable device with an audio function, and the like, fig. 1 illustrates that the voice wearing device 22 is a rear-hanging type wireless earphone, and the form of the voice wearing device 22 shown in fig. 1 is not a limitation of the patent. The terminal device 21 includes, but is not limited to, a mobile phone, a watch, a computer, a player, a radio, a television, a hearing aid/hearing aid, or other devices with voice output function. The voice wearing device 22 can be used in cooperation with the terminal device 21, or can be used independently, for example, the voice wearing device 22 is an earphone/smart glasses/helmet with a music playing function, an earphone with a radio function, a hearing aid, or the like.
Fig. 2 is a schematic diagram illustrating functional modules of the speech-worn device 22, where the speech-worn device 22 includes a sensor module 111, a speech module 112, a storage module 113, an operation module 114, a communication module 115, and an audio processing system 120.
The sensor module 111 is used to monitor the coupling state between the voice wearable device 22 and the head shape and/or ear shape. The sensor module 111 includes a posture detection sensor and a wearing state sensor; the wearing state sensor includes a coupling degree detection sensor, which is located on the inner side of the voice wearable device 22 and arranged at the position with the largest coupling contact area with the head shape and/or ear shape.
The voice module 112 is used for interconversion between the sound wave signal and the electric signal, and the voice module 112 includes a microphone, a receiver, an audio input and/or output interface, a corresponding signal filtering processing circuit unit, and the like.
The storage module 113 is used to store program codes, status information, data results, audio data and other data of the voice-worn device 22.
The operation module 114 is used to run the algorithms and logic of the audio processing system 120 and to compute and process the various audio data, sensor data and external responses.
The communication module 115 is used to establish a communication link with the terminal device 21 and to carry human-computer interaction with the wearer 20; the communication module 115 supports both wired and wireless links. Through the communication module 115, the voice wearable device 22 can obtain the data stream of the terminal device 21 or transmit its own data stream to the terminal device 21.
In a preferred embodiment, the audio processing system 120 is composed of a plurality of modules, each module is a specific program code segment for implementing a certain function, and specifically includes a wearing state acquiring module 121, a first audio acquiring module 122, a first processing module 123 and a first output module 124.
The wearing state obtaining module 121 obtains wearing state information of the voice wearing device 22 through filtering analysis and processing according to the coupling state data monitored by the sensor module 111.
The first audio obtaining module 122 is configured to obtain first audio data from an audio providing apparatus, where the obtaining manner includes, but is not limited to, a wired and/or wireless link manner. Wherein the audio providing device can be a built-in unit of the voice wearing device 22, such as a built-in audio playing unit, a built-in radio receiving unit, a built-in hearing aid unit, etc.; the audio providing device may also be an external terminal device 21 independent of the speech-worn device 22, such as a mobile phone, a watch, a computer, a player, a radio, a television, a hearing aid/hearing aid or other devices with speech output function. The first audio data may be audio content such as music, speech, hearing assistance speech, broadcast speech, synthesized audio, and the like.
The first processing module 123 is configured to adjust a frequency response curve and a corresponding volume of the first audio data acquired by the first audio acquisition module 122 according to the wearing state information detected by the sensor module 111 of the speech wearing device 22, so as to obtain second audio data.
The adjustment can be performed in several ways (a sketch follows this list):
Mode 1: adjust the frequency response curve of the first audio data itself, amplifying and/or attenuating the signal amplitude in specific frequency bands; the amplitude over the whole frequency response range may also be scaled up or down proportionally.
Mode 2: adjust the output data of the audio providing device: through communication with the audio providing device, adjust the frequency response curve of its output data, amplifying and/or attenuating the signal amplitude in specific frequency bands; the amplitude over the whole frequency response range may also be scaled proportionally.
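As a minimal sketch of mode 1 (not the patent's implementation), assuming a mono float buffer and hypothetical band/gain parameters derived from the wearing state:

```python
# Illustrative sketch only: "mode 1" adjustment of the first audio data.
# The band edges and gain values are assumptions, not patent parameters.
import numpy as np

def adjust_first_audio(x, fs, band_gains, global_gain=1.0):
    """x: 1-D time-domain buffer; band_gains: ((f_lo, f_hi), gain) pairs."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    for (f_lo, f_hi), gain in band_gains:
        band = (freqs >= f_lo) & (freqs < f_hi)
        spectrum[band] *= gain        # amplify/attenuate a specific band
    spectrum *= global_gain           # equal-ratio scaling of the whole range
    return np.fft.irfft(spectrum, n=len(x))

# Example: loose coupling -> moderately raise the low-frequency components.
# x2 = adjust_first_audio(x1, 48000, band_gains=[((20, 250), 1.4)], global_gain=1.1)
```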
The first output module 124 is configured to output the second audio data or the compressed/expanded second audio data to the speech module 112 for converting the electrical signal into the sound wave signal.
In another preferred embodiment, the audio processing system 120 further includes a second audio acquisition module 125, a second processing module 126, a third processing module 127, and a second output module 128.
The second audio acquisition module 125 is configured to acquire third audio data of the environment around the voice wearable device 22, over a wired and/or wireless link, among other ways. The third audio data is a microphone signal, which may be analog or digital. The second audio acquisition module 125 may capture the third audio data through one or more microphones, or through a microphone array with filtering and amplification, as sketched below.
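A minimal capture sketch under stated assumptions: simultaneous microphone frames are combined by naive averaging, and the band-pass range and gain are illustrative values, not patent parameters:

```python
# Sketch of third-audio-data capture from a microphone array with filtering
# and amplification; the combination and filter parameters are assumptions.
import numpy as np
from scipy.signal import butter, lfilter

def capture_ambient(mic_frames, fs, gain=2.0):
    """mic_frames: 2-D array (n_mics, n_samples) of simultaneous signals."""
    mono = mic_frames.mean(axis=0)                     # naive array combination
    b, a = butter(4, [50, 8000], btype="band", fs=fs)  # band-limit the capture
    return gain * lfilter(b, a, mono)                  # amplified third audio data
```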
The second processing module 126 is configured to adjust the third audio data and the volume of the surrounding environment of the speech-worn device 22 obtained by the second audio obtaining module 125 according to the wearing state information detected by the sensor module 111 of the speech-worn device 22, so as to obtain fourth audio data, where the fourth audio data includes optimized surrounding environment sounds.
The adjustment can be done in several ways (a decision sketch follows this list):
Case 1: if the wearing coupling is tight and the full-band energy of the third audio data is low, the amplitude of the third audio data over the whole frequency response range is scaled down proportionally, possibly to zero, to obtain the fourth audio data.
Case 2: if the wearing coupling is tight but the low-frequency energy of the third audio data is high, the high-frequency amplitude of the third audio data is reduced to zero and the high low-frequency energy is scaled down proportionally to obtain the fourth audio data.
Case 3: if the wearing coupling is loose, the signal amplitude of the third audio data in specific frequency bands is amplified and/or attenuated according to the loose region and its area; the amplitude over the whole frequency response range may also be scaled proportionally.
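A hedged sketch of the three cases; the energy threshold, the low/high split frequency and the gain values are illustrative assumptions, not patent parameters:

```python
# Case selection for the second processing module; thresholds are assumed.
import numpy as np

def adjust_ambient(x3, fs, coupling_tight, split_hz=500.0, quiet_thresh=1e-3):
    spectrum = np.fft.rfft(x3)
    freqs = np.fft.rfftfreq(len(x3), d=1.0 / fs)
    low = freqs < split_hz
    full_energy = float(np.mean(np.abs(spectrum) ** 2))
    low_energy = float(np.sum(np.abs(spectrum[low]) ** 2))
    if coupling_tight and full_energy < quiet_thresh:   # case 1: tight, quiet
        spectrum *= 0.0                                 # scale down to zero
    elif coupling_tight and low_energy > 0.5 * np.sum(np.abs(spectrum) ** 2):
        spectrum[~low] = 0.0                            # case 2: drop highs
        spectrum[low] *= 0.3                            # scale down loud lows
    else:                                               # case 3: loose fit
        spectrum[low] *= 1.5                            # band-specific gain
        spectrum *= 0.9                                 # optional global scaling
    return np.fft.irfft(spectrum, n=len(x3))            # fourth audio data
```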
The third processing module 127 is configured to synthesize the second audio data and the fourth audio data to obtain fifth audio data. The fifth audio data contains components of the ambient sound but with a phase opposite to that of the original ambient sound, so that after the audio propagation path the auditory system finally hears clean audio with little or no ambient sound.
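A minimal sketch of the anti-phase synthesis, assuming the fourth audio data is already time-aligned with the ambient sound at the ear; a real active noise reduction path would also model the acoustic transfer function, which is not shown:

```python
import numpy as np

def synthesize_fifth(second, fourth):
    """Fifth audio data: program audio plus phase-inverted ambient component."""
    n = min(len(second), len(fourth))
    return second[:n] - fourth[:n]   # subtraction = opposite-phase ambient sound
```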
A second output module 128, configured to output the fifth audio data or the compressed/expanded fifth audio data to the speech module 112, so as to perform conversion from an electrical signal to a sound wave signal.
In another preferred embodiment, the audio processing system 120 further includes an energy monitoring module 129 and a protection module 130.
The energy monitoring module 129 is configured to monitor the volume output energy of the second audio data and the fifth audio data and to judge whether the output energy exceeds a preset hearing threshold curve range; the protection module 130 is turned on when the audio volume output energy exceeds the preset curve range and persists for a preset time.
The protection module 130 is configured to compress and/or expand the second audio data and the fifth audio data, for example by proportionally scaling the amplitude over the whole frequency response range.
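A sketch of the threshold-plus-duration trigger and the proportional compression; the threshold, hold window and compression ratio are illustrative assumptions:

```python
import numpy as np

class EnergyGuard:
    """Energy monitoring (129) plus protection (130), per audio frame."""
    def __init__(self, threshold=0.25, hold_frames=50, ratio=0.5):
        self.threshold = threshold      # preset hearing threshold (assumed)
        self.hold_frames = hold_frames  # "preset time" expressed in frames
        self.ratio = ratio              # equal-ratio reduction when triggered
        self.over_count = 0

    def process(self, frame):
        energy = float(np.mean(frame ** 2))   # volume output energy of the frame
        self.over_count = self.over_count + 1 if energy > self.threshold else 0
        if self.over_count >= self.hold_frames:   # sustained overshoot only
            return frame * self.ratio             # compress the whole range
        return frame                              # otherwise pass through
```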
It should be noted that the wearer 20 may select, through the communication module 115 of the voice wearable device 22, which embodiment mode the audio processing system 120 operates in, or even completely shut down the algorithmic processing of the device 22 and directly link the first audio acquisition module 122 to the first output module 124.
In another preferred embodiment, as shown in FIG. 3, the first processing module 123 in one embodiment comprises a plurality of sub-functional modules. The first processing module 123 includes an analog/digital conversion module 123A, an audio framing module 123B, a time domain to frequency domain conversion module 123C, a frequency domain signal processing module 123D, a frequency domain to time domain conversion module 123E, an audio recombination module 123F, a digital/analog conversion module 123G, a noise reduction network module 123H, a feature signal detection module 123I, and a weighting parameter extraction module 123J.
The analog/digital conversion module 123A is configured to perform digital processing on the first audio data to obtain first audio data of a digital signal. Specifically, the analog/digital conversion module 123A may configure parameters such as a sampling rate and a channel delay. It should be noted that, if the audio input signal is already a digital signal, the input signal may skip the analog/digital conversion module 123A and directly enter the audio framing module 123B.
The audio framing module 123B is configured to perform a segmentation process on the digitized first audio data according to a time axis to obtain segmented digitized first audio data. It should be noted that the time length of the segment here is configurable.
The time domain to frequency domain conversion module 123C is configured to perform time domain to frequency domain conversion on the segmented digitized first audio data to obtain a first frequency domain audio signal corresponding to the digitized first audio data. It should be noted that, in this embodiment, the time domain to frequency domain conversion module 123C may convert the digitized first audio data into the corresponding first frequency domain audio signal FFTS1 through a Fast Fourier Transform (FFT).
The frequency domain signal processing module 123D is configured to perform band-limited filtering on the first frequency domain audio signal FFTS1 in sub-bands and to apply corresponding weighted gain compensation control to obtain a second frequency domain audio signal FFTS2. It should be noted that, in this embodiment, the first frequency domain audio signal FFTS1 may be split into a plurality of sub-bands by a mel filter bank; as shown in fig. 4, based on the weighting parameters provided by the weighting parameter extraction module 123J, each sub-band (sub-band gain compensation 1, 2, 3, ..., N) dynamically adjusts the volume of the first frequency domain audio signal FFTS1, and the sub-band signals are then combined to form a new frequency response curve, giving the second frequency domain audio signal FFTS2.
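A hedged sketch of the per-sub-band weighted gain compensation; rectangular sub-bands stand in for the mel filter bank, and the band edges and weights are assumptions:

```python
import numpy as np

def subband_gain_compensation(ffts1, fs, n_fft, band_edges_hz, weights):
    """ffts1: rfft of one frame; weights: one gain factor per sub-band."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    ffts2 = np.zeros_like(ffts1)
    for (f_lo, f_hi), w in zip(band_edges_hz, weights):
        band = (freqs >= f_lo) & (freqs < f_hi)
        ffts2[band] += w * ffts1[band]   # weighted copy of this sub-band
    return ffts2                         # combined new frequency response

# Example with four assumed bands:
# bands = [(0, 250), (250, 1000), (1000, 4000), (4000, 12000)]
# ffts2 = subband_gain_compensation(ffts1, 48000, 1024, bands, [1.3, 1.0, 0.9, 0.8])
```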
The frequency domain to time domain conversion module 123E is configured to perform frequency domain to time domain conversion on the second frequency domain audio signal FFTS2 to obtain a corresponding segmented digitized second time domain audio signal IFFTS2. It should be noted that, in this embodiment, the frequency domain to time domain conversion module 123E employs the Inverse Fast Fourier Transform (IFFT).
The audio recombination module 123F is configured to recombine the segmented digitized second time domain audio signal IFFTS2 into a digitized second audio signal that is continuous on the time axis.
The digital-to-analog conversion module 123G is configured to recover the digitized second audio signal into an analog audio signal to output a second audio signal.
The noise reduction network module 123H is configured to filter the sensor data detected in the wearing state, removing and/or reducing irrelevant noise signals and correspondingly enhancing the useful information. In this embodiment, a Kalman filtering method is applied at the front end to reduce the influence of wearer-induced noise.
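A minimal scalar Kalman filter sketch for smoothing one coupling-pressure channel; the process and measurement noise variances are illustrative assumptions:

```python
class ScalarKalman:
    """Constant-value Kalman filter for one coupling-pressure channel."""
    def __init__(self, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
        self.q, self.r = q, r    # process / measurement noise variances (assumed)
        self.x, self.p = x0, p0  # state estimate and its variance

    def update(self, z):
        self.p += self.q                  # predict step
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with the new sensor sample
        self.p *= (1.0 - k)
        return self.x

# kf = ScalarKalman(); smoothed = [kf.update(z) for z in raw_pressure_samples]
```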
The characteristic signal detection module 123I is configured to process the noise-reduced sensor data and extract the relevant characteristic signal matching parameters. It should be noted that, in this embodiment, the extracted characteristic signals include the coupling pressure value, the coupling area, the cavity space, and the like.
The weighting parameter extraction module 123J calculates a corresponding weighting parameter curve and corresponding weighting factors from the extracted wearing state characteristic parameters. In this embodiment, the weighting factors are a series of parameter sets corresponding to frequency response curves.
The specific workflow of the first processing module 123 is as follows: the wearing state information (specifically, the sensor data detected in the wearing state) passes in sequence through the noise reduction network module 123H, the characteristic signal detection module 123I and the weighting parameter extraction module 123J into the frequency domain signal processing module 123D; meanwhile, the audio signal (specifically, the first audio data) passes in sequence through the analog/digital conversion module 123A, the audio framing module 123B and the time domain to frequency domain conversion module 123C into the frequency domain signal processing module 123D; after the frequency domain signal processing module 123D processes both inputs, the result passes in sequence through the frequency domain to time domain conversion module 123E, the audio recombination module 123F and the digital/analog conversion module 123G, and the audio signal is output.
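An end-to-end sketch of the two paths joining at the frequency domain stage, reusing the ScalarKalman and subband_gain_compensation helpers from the sketches above; the frame length, the coupling threshold and the weight sets are assumptions:

```python
import numpy as np

def first_processing_module(x1, fs, sensor_samples, frame_len=1024,
                            bands=((0, 250), (250, 1000), (1000, 8000))):
    """x1: float time-domain first audio data; returns second audio data."""
    kf = ScalarKalman()                                # sensor path: 123H
    pressure = [kf.update(z) for z in sensor_samples]  # 123I: pressure feature
    tight = float(np.mean(pressure)) > 0.5             # assumed coupling measure
    weights = [0.8, 1.0, 1.0] if tight else [1.4, 1.0, 0.9]  # 123J (assumed)

    out = np.zeros_like(x1)                            # audio path: 123B..123F
    for i in range(0, len(x1) - frame_len + 1, frame_len):
        frame = x1[i:i + frame_len]                    # 123B framing
        ffts1 = np.fft.rfft(frame)                     # 123C FFT
        ffts2 = subband_gain_compensation(ffts1, fs, frame_len, bands, weights)
        out[i:i + frame_len] = np.fft.irfft(ffts2, n=frame_len)  # 123E/123F
    return out
```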
Figs. 5-8 are flowcharts of audio data processing methods of various preferred embodiments of the voice wearable device 22. The methods are executed by the program module code segments in the audio processing system 120 shown in fig. 2; it should be noted that the flowcharts in the embodiments do not limit the order in which the steps are executed.
The invention also discloses an audio data processing method of the voice wearable device 22, which is based on sensor technology and signal processing technology and adopts a method of combining perception and digital signal processing.
Fig. 5 is a flow chart diagram of an audio data processing method of a preferred embodiment of the speech-worn device 22.
A method of processing audio data of a speech-worn device 22, comprising the steps of:
step 101: acquiring first audio data from an audio providing device;
step 102: synchronously with step 101, acquiring wearing state information of the voice wearable device;
step 103: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 104: outputting the second audio data.
For step 101, in an embodiment, the first audio acquisition module acquires first audio data from the audio providing device through a wired and/or wireless link; the first audio data may be audio content such as music, voice calls, hearing-assistance speech, broadcast speech, and synthesized audio.
For step 102, in the embodiment, the wearing state acquisition module 121 detects the wearing state through the coupling degree detection sensor, and the wearing state can be inferred by analyzing the corresponding sensor data.
In step 103, in the embodiment, the speech-worn device 22 calculates the wearing state to obtain the weighting parameter set, and then the first processing module 123 adjusts the frequency response curve and the amplitude of the first audio data according to the weighting parameter set.
In step 104, in an embodiment, the first output module 124 outputs second audio data, and the second audio data is transmitted to the voice module 112 of the voice wearable device 22 to perform conversion from an electrical signal to a sound wave signal.
Because users' heads and ears differ, the wearing state of the voice wearable device 22 also varies greatly, and the wearing state influences the perceived frequency response: the tighter the coupling, the better the low-frequency response and, for the same output volume, the louder the perceived volume.
Fig. 6 is a flowchart of an audio data processing method of another preferred embodiment of the voice wearable device 22. In a first practical situation, if the wearing coupling is poor, the auditory system easily perceives additional ambient sound; in a second, even if the fit is relatively tight, if the ambient sound is too loud, especially with high low-frequency energy, the auditory system may also perceive additional ambient sound. To obtain a clearer, more comfortable audio experience, the adaptive ambient-noise processing function must be turned on autonomously and appropriately.
An audio data processing method of a voice-worn device 22 comprises the following steps:
step 201: acquiring first audio data from an audio providing device;
step 202: synchronously with step 201, acquiring wearing state information of the voice wearable device;
step 203: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 204: outputting the second audio data;
step 205: synchronously with step 201, acquiring third audio data of the environment around the voice wearable device;
step 206: synchronously with step 203, adjusting the third audio data and the volume according to the wearing state information to obtain fourth audio data;
step 207: synthesizing the second audio data and the fourth audio data to obtain fifth audio data;
step 208: outputting the fifth audio data.
In this embodiment, steps 201 to 204 are the same as steps 101 to 104, respectively, and thus the description will not be repeated.
For step 205, in an embodiment, the second audio acquisition module 125 acquires third audio data of the environment around the voice wearable device; the third audio data may be collected from the surroundings by a microphone or a microphone array and may include audio from any sound source in the environment, such as speech, engine noise, sirens, impacts, and the like.
For step 206, in an embodiment, the second processing module 126 adjusts the frequency response curve and the amplitude of the third audio data according to the degree of tightness of the wearing state coupling, the frequency component and the energy of the third audio data, so as to obtain fourth audio data, where the fourth audio data includes the optimized ambient sound.
For step 207, in an embodiment, the third processing module 127 performs a synthesis process on the second audio data and the fourth audio data to obtain fifth audio data. The fifth audio data contains components of the ambient sound, but has a phase opposite to that of the original ambient sound, and finally reduces the perception interference of the ambient sound to the auditory system after passing through the audio propagation path.
For step 208, in an embodiment, the second output module 128 outputs fifth audio data. The fifth audio data is transmitted to the voice module 112 of the voice wearable device 22 for converting the electric signal into the sound wave signal.
Fig. 7 is a flow chart of an audio data processing method of another preferred embodiment of the speech-worn device 22. In the embodiment, fig. 7 is that a function of protecting the auditory system is added on the basis of fig. 5, so as to avoid hearing damage to the auditory system caused by too large sound amplitude in a frequency band.
A method for processing audio data of a voice wearable device 22 comprises the following specific steps:
step 301: acquiring first audio data from an audio providing device;
step 302: synchronously with step 301, acquiring wearing state information of the voice wearable device;
step 303: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 304: monitoring the volume output energy of the second audio data and judging whether it exceeds a preset hearing threshold range;
step 305: compressing/expanding the second audio data; in an embodiment, the protection module proportionally scales down the amplitude over the whole frequency response range.
step 306: outputting the second audio data or the compressed/expanded second audio data.
In this embodiment, steps 301 to 303 are the same as steps 101 to 103, respectively, and thus the description will not be repeated.
For step 304, in an embodiment, the energy monitoring module 129 monitors the volume output energy of the second audio data to determine whether the volume output energy exceeds a preset hearing threshold range; it should be noted that the protection module 130 needs to be turned on only when the volume output energy of the second audio is greater than a preset curve range and lasts for a preset time.
For step 305, in an embodiment where the protection module 130 compresses/expands the second audio data, the protection module 130 performs an equal scaling of the amplitude within the overall frequency response range.
For step 306, in an embodiment, the first output module 124 outputs the second audio data or the compressed/expanded second audio data. The second audio data or the compressed/expanded second audio data is transmitted to the voice module 112 of the voice wearable device 22, so as to convert the electric signal into the sound wave signal.
Fig. 8 is a schematic flow chart of an audio data processing method of another preferred embodiment of the speech-worn device 22. In the embodiment, fig. 8 is that a function of protecting the auditory system is added on the basis of fig. 6, so as to avoid hearing damage to the auditory system caused by too large sound amplitude in a frequency band.
A method for processing audio data of a voice wearable device 22 comprises the following specific steps:
step 401: acquiring first audio data from an audio providing device;
step 402: synchronously with step 401, acquiring wearing state information of the voice wearable device;
step 403: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 404: outputting the second audio data;
step 405: synchronously with step 401, acquiring third audio data of the environment around the voice wearable device;
step 406: synchronously with step 403, adjusting the third audio data and the volume according to the wearing state information to obtain fourth audio data;
step 407: synthesizing the second audio data and the fourth audio data to obtain fifth audio data;
step 408: monitoring the volume output energy of the fifth audio data and judging whether it exceeds a preset hearing threshold range;
step 409: compressing/expanding the fifth audio data;
step 410: outputting the fifth audio data or the compressed/expanded fifth audio data.
In this embodiment, steps 401 to 407 are the same as steps 201 to 207, respectively, and the description is not repeated.
For step 408, in this embodiment, the energy monitoring module 129 monitors the volume output energy of the fifth audio data to determine whether the volume output energy exceeds a preset hearing threshold range. It should be noted that the protection module needs to be turned on only when the volume output energy of the fifth audio is greater than a preset curve range and lasts for a preset time.
For step 409, in this embodiment, the protection module 130 compresses/expands the fifth audio data; the protection module 130 proportionally reduces the amplitude over the whole frequency response range.
For step 410, in this embodiment, the second output module 128 outputs the fifth audio data or the compressed/expanded fifth audio data, which is transmitted to the voice module 112 of the voice wearable device 22 for conversion from an electrical signal to a sound wave signal.
Figs. 9(a) to 9(c) are schematic structural views of a Wheatstone bridge embodiment of the wearing state detection sensor; in the embodiment, a piezoresistive Wheatstone bridge sensor is preferred. Fig. 9(a) is a top view of a structural unit of the piezoresistive Wheatstone bridge sensor, where 101 is the substrate material of the sensor, 102 denotes two hollowed-out regions of the substrate material 101, and 103 denotes a piezoresistive material unit disposed on the substrate material 101 between the two hollowed-out regions 102. The four piezoresistive material units 103 of the Wheatstone bridge are disposed on the upper and lower sides of the substrate material 101 and are, respectively, a first piezoresistive material unit 1031, a second piezoresistive material unit 1032, a third piezoresistive material unit 1033 and a fourth piezoresistive material unit 1034, with resistances R1, R2, R3 and R4.
As shown in fig. 9(b), the first and third piezoresistive material units R1(1031) and R3(1033) are disposed below the substrate material 101, the second and fourth piezoresistive material units R2(1032) and R4(1034) are disposed above the substrate material 101, the first and second piezoresistive material units R1(1031) and R2(1032) are symmetrically disposed, and the third and fourth piezoresistive material units R3(1033) and R4(1034) are symmetrically disposed.
As shown in FIG. 9(c), when an external pressure F is applied to the substrate material 101, the first piezoresistive material unit R1 and the third piezoresistive material unit R3 are stretched and their resistance increases, while the second piezoresistive material unit R2 and the fourth piezoresistive material unit R4 are compressed and their resistance decreases. The greater the pressure F, the more the resistances of the corresponding piezoresistive material units increase or decrease. The piezoresistive material units R1, R2, R3, R4 are connected as a Wheatstone bridge, so when an external pressure F acts on the bridge the output voltage changes accordingly, as shown in fig. 10.
FIG. 10 is an equivalent circuit diagram of the Wheatstone bridge type piezoresistive coupling degree detection sensor embodiment in a stressed state. As shown in fig. 10(a), when the bridge is not subjected to an external force, the resistances R1, R2, R3 and R4 remain close and stable, so the divider voltage V1 at bridge output point 2 and the divider voltage V2 at bridge output point 4 remain stable, and the bridge output differential voltage V = V1 - V2 remains stable. As shown in fig. 10(b), when an external pressure F acts on the bridge, R1 and R3 increase while R2 and R4 decrease, so V1 falls while V2 rises, and the output differential voltage V = V1 - V2 changes by a correspondingly larger amount. With the double-sided cantilever layout shown in fig. 9, the detection sensitivity of the piezoresistive Wheatstone bridge is significantly improved.
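As a hedged sketch of the bridge arithmetic, assume equal rest resistance R in all four arms, stretched arms rising to R + ΔR, compressed arms falling to R - ΔR, supply voltage V_s, and the divider topology below (the exact arm wiring is an assumption, since fig. 10 is not reproduced here):

```latex
% Assumed full-bridge topology: V1 taps the R1-R2 divider, V2 taps the R4-R3 divider.
\[
V_1 = V_s\,\frac{R_2}{R_1 + R_2} = V_s\,\frac{R-\Delta R}{2R},\qquad
V_2 = V_s\,\frac{R_3}{R_4 + R_3} = V_s\,\frac{R+\Delta R}{2R}
\]
\[
V = V_1 - V_2 = -\,V_s\,\frac{\Delta R}{R},\qquad |V| = V_s\,\frac{\Delta R}{R}
\]
% All four arms are active, which is why the double-sided (full-bridge) layout
% is markedly more sensitive than a single-arm configuration.
```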
Fig. 11(a) and 11(b) are circuit diagrams of an embodiment of a speech-worn device based on a wheatstone bridge type piezoresistive coupling degree detection sensor. Fig. 11(a) is a main circuit diagram of the embodiment of the speech-worn device 22. The processor chip U1 is B300, the memory chip U3 is M95512, and the processor chip U1 and the memory chip U3 are communicated through an SPI interface; crystal X1 is responsible for providing a 38.4MHz clock to processor U1; the low-power receiver U2 can be directly and electrically connected to the RCVR interface of the processor chip U1, and it should be noted that if the high-power receiver is driven, a power amplifier needs to be added between the high-power receiver and the processor chip U1; an analog/digital conversion interface AI0 built in the processor chip U1 is electrically connected to one channel of the first audio module; the analog/digital conversion interface AI1 built in the processor chip U1 is electrically connected to a microphone for collecting ambient sound; an analog/digital conversion interface AI4 built in the processor chip U1 is electrically connected to the output of the sensor circuit unit; the SDA _ I2C/SCL _ I2C interface of the processor chip U1 is electrically connected to the sensor unit for controlling and dynamically configuring parameters of the sensor unit; the GPIO3/GPIO4 interface of the processor chip U1 is used for controlling the power supply of the sensor unit; the TX/RX interface of the processor chip U1 is used to communicate with other peripherals.
Fig. 11(b) is a circuit diagram of a piezoresistive coupling degree detection sensor in the embodiment of the speech-worn device. U7-U12 are 6 piezoresistive Wheatstone bridge sensors, and all the piezoresistive Wheatstone bridge sensors are electrically connected with an amplification chip U4; the amplification chip U4 is XR10910, the OUT interface of the chip is electrically connected with the AI4 interface of the processor chip U1, and the processor chip U1 obtains the original analog data of the sensor through the OUT interface of the U4; the SDA/SCL interface of the chip U4 is electrically connected with the SDA _ I2C/SCL _ I2C interface of the processor chip U1, and the processor chip U1 can conveniently select the data of any one piezoresistive Wheatstone bridge sensor through the I2C; the PMOS tube Q5/Q6 is used for controlling the power supply of the chip U4, and the overall energy consumption of the system can be reduced by reasonably controlling the power supply time sequence.
It should be noted that the circuit diagrams of the embodiments of the speech-worn device based on the wheatstone bridge type piezoresistive coupling degree detection sensor shown in fig. 11(a) and 11(b) are only used for processing data information of one channel, and if data information of two or more channels needs to be processed, two or more identical circuit configuration units are needed.
For ease of understanding, the following description will be given by way of example with reference to fig. 1 to 11 (b).
As shown in fig. 1, when the wearer 20 wears the voice wearable device 22, the device, using sensing combined with digital signal processing built on sensor and signal processing technology, automatically monitors the wearer's wearing state information and the surrounding ambient sound as a precondition reference, processes the audio data, and outputs an audio signal matching the current state. It adaptively adjusts the output audio frequency response and volume and adaptively starts active noise reduction according to the ambient sound intensity and the wearing coupling state, ultimately improving audio clarity and wearing comfort. For example, when the wearer 20 listens to music in a library or office: if the wearing coupling is relatively tight, the voice wearable device 22 turns down the low-frequency components of the output audio and moderately scales down the amplitude over the whole frequency response range; if the coupling is relatively loose, it correspondingly raises the low-frequency components and moderately scales up the amplitude over the whole range. As another example, when the wearer 20 listens to music on an airplane or a high-speed train, the loud ambient sound easily interferes with the auditory system, particularly when the voice wearable device 22 is not closely coupled to the head and/or ear, or even forms an open cavity with the wearer 20, where ambient sound is more noticeable and varies with the wearer's head and/or ear shape; by detecting the ambient sound intensity and the wearing coupling state, the invention can moderately start the active noise reduction operation and markedly improve audio clarity and wearing comfort.
It should be particularly noted that, in an embodiment, the wearing coupling state information between the voice wearing device 22 and the wearer 20 may be transmitted to the cloud server through the terminal device 21, and by performing data mining and analysis on the wearing state information of the mass voice wearing device 22 in the cloud server, a plurality of common earphones more conforming to the ergonomic characteristics may be evaluated and designed for people with different head types and/or ear types characteristics, so as to provide a better criterion for considering both the cost and the wearing comfort.
In the voice wearable device and audio data processing method of the invention, the device processes audio data with the wearer's current wearing state information as reference and outputs an audio signal matched to that state. When the wearing coupling is tight, the device can turn down the low-frequency components of the audio and moderately reduce the output volume; when the wearing coupling is loose, it can correspondingly raise the low-frequency components and moderately increase the output volume, achieving automatic adjustment of the audio frequency response and output volume according to the wearing coupling state. Furthermore, the active noise reduction operation can be appropriately started according to the ambient sound intensity and the wearing coupling state, improving audio clarity and wearing comfort.
It should be noted that the above embodiments can be freely combined as necessary. The above description is only a preferred embodiment of the present invention, but the present invention is not limited to the details of the above embodiment, and it should be noted that, for those skilled in the art, it is possible to make various modifications and alterations without departing from the principle of the present invention, and it should be understood that these modifications, alterations and equivalents should be regarded as the protection scope of the present invention.

Claims (10)

1. A voice wearing device is worn on the head of a wearer and is characterized by comprising an audio processing system and a sensor module for monitoring the coupling state of the voice wearing device and the head shape and/or the ear shape of the wearer, wherein the audio processing system comprises a wearing state acquiring module for acquiring wearing state information of the voice wearing device according to data of the coupling state monitored by the sensor module, a first audio acquiring module for acquiring first audio data in an audio providing device, a first processing module for acquiring second audio data by adjusting the first audio data and the volume according to the wearing state information, and a first output module for outputting the second audio data; the audio providing device is a built-in unit of the voice wearing device or an external terminal device independent of the voice wearing device.
2. The voice wearable device according to claim 1, wherein the audio processing system further comprises: a second audio acquiring module for acquiring third audio data of the environment surrounding the voice wearable device; a second processing module for adjusting the third audio data according to the wearing state information detected by the sensor module to obtain fourth audio data; a third processing module for synthesizing the second audio data and the fourth audio data to obtain fifth audio data; and a second output module for outputting the fifth audio data.
3. The voice wearable device according to claim 1 or 2, wherein the audio processing system further comprises an energy monitoring module for monitoring the volume output energy of the audio data and a protection module for compressing and/or expanding the audio data.
4. The voice wearable device according to claim 1, wherein the first processing module comprises an analog-to-digital conversion module, an audio framing module, a time-domain-to-frequency-domain conversion module, a frequency-domain signal processing module, a frequency-domain-to-time-domain conversion module, an audio recombination module, a digital-to-analog conversion module, a noise reduction network module, a feature signal detection module, and a weighting parameter extraction module.
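Claim 4's module chain maps onto a conventional frame-based, frequency-domain processing loop. The sketch below shows only the digital-domain stages (framing, time-to-frequency conversion, frequency-domain weighting, frequency-to-time conversion, overlap-add recombination); the window, hop size, and `weight_fn` hook, which stands in for the noise reduction network / feature signal detection / weighting parameter extraction modules, are assumptions for illustration.

```python
import numpy as np

def process_frames(samples, sample_rate, frame_len=512, hop=256, weight_fn=None):
    """Framing -> FFT -> frequency-domain weighting -> IFFT -> overlap-add."""
    samples = np.asarray(samples, dtype=float)
    window = np.hanning(frame_len)
    out = np.zeros(len(samples) + frame_len)
    norm = np.zeros_like(out)
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len] * window       # audio framing
        spectrum = np.fft.rfft(frame)                           # time -> frequency domain
        if weight_fn is not None:
            spectrum = weight_fn(spectrum, sample_rate)         # frequency-domain processing
        out[start:start + frame_len] += np.fft.irfft(spectrum, n=frame_len) * window
        norm[start:start + frame_len] += window ** 2            # track window overlap
    return (out / np.maximum(norm, 1e-12))[:len(samples)]      # audio recombination
```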
5. The voice wearable device according to claim 1 or 2, wherein the sensor module comprises a posture detection sensor and a wearing state sensor, the wearing state sensor comprises a coupling degree detection sensor, and the coupling degree detection sensor is located on the inner side of the voice wearable device at the position where the coupling area with the head shape and/or ear shape is largest.
6. The voice wearable device according to claim 1 or 2, further comprising a voice module, a storage module, an operation module, and a communication module.
7. An audio data processing method of a voice wearable device is characterized by comprising the following steps:
step 101: acquiring first audio data in an audio providing device;
step 102: in synchronization with step 101, acquiring wearing state information of the voice wearable device;
step 103: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 104: outputting the second audio data.
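Read as pseudocode, steps 101-104 form a simple acquire-adjust-output loop. The sketch below assumes hypothetical `audio_source`, `wear_sensor`, and `output_sink` interfaces and reuses the `adjust_for_coupling` sketch from earlier; none of these names come from the patent.

```python
def process_once(audio_source, wear_sensor, output_sink, sample_rate=48_000):
    first_audio = audio_source.read_frame()       # step 101: first audio data
    coupling = wear_sensor.read_coupling()        # step 102: wearing state, read in sync
    second_audio = adjust_for_coupling(first_audio, sample_rate, coupling)  # step 103
    output_sink.write(second_audio)               # step 104: output second audio data
```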
8. An audio data processing method of a voice wearable device is characterized by comprising the following steps:
step 201: acquiring first audio data in an audio providing device;
step 202: in synchronization with step 201, acquiring wearing state information of the voice wearable device;
step 203: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 204: outputting the second audio data;
step 205: in synchronization with step 201, acquiring third audio data of the environment surrounding the voice wearable device;
step 206: in synchronization with step 203, adjusting the third audio data and the volume according to the wearing state information to obtain fourth audio data;
step 207: synthesizing the second audio data and the fourth audio data to obtain fifth audio data;
step 208: outputting the fifth audio data.
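Steps 205-207 add an ambient-sound path and mix it with the playback path. In the sketch below the "fourth audio data" comes from a deliberately naive stand-in for a real active-noise-reduction filter (a coupling-scaled, phase-inverted copy of the ambient frame); the scaling rule and names are assumptions.

```python
import numpy as np

def process_with_ambient(first_audio, ambient_audio, coupling, sample_rate=48_000):
    """Assumes first_audio and ambient_audio are equal-length frames."""
    second = adjust_for_coupling(first_audio, sample_rate, coupling)  # steps 201-203
    # Step 206: the looser the fit, the more ambient sound leaks in, so
    # the more anti-noise is mixed (naive inversion, not a real ANC filter).
    fourth = -(1.0 - coupling) * np.asarray(ambient_audio, dtype=float)
    fifth = second + fourth                                           # step 207
    return fifth                                                      # step 208: output
```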
9. An audio data processing method of a voice wearable device is characterized by comprising the following steps:
step 301: acquiring first audio data in an audio providing device;
step 302: in synchronization with step 301, acquiring wearing state information of the voice wearable device;
step 303: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 304: monitoring the volume output energy of the second audio data, and judging whether the volume output energy exceeds a preset hearing threshold range;
step 305: if the threshold range is exceeded, compressing and/or expanding the second audio data;
step 306: outputting the second audio data or the compressed/expanded second audio data.
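Steps 304-306 describe a level-protection stage. One plausible realization is an RMS check against a preset range, compressing above the ceiling and expanding below the floor; the decibel figures below are placeholders rather than values from the patent.

```python
import numpy as np

def protect_level(frame, floor_db=-60.0, ceil_db=-6.0):
    frame = np.asarray(frame, dtype=float)
    rms = np.sqrt(np.mean(np.square(frame))) + 1e-12  # step 304: measure output energy
    level_db = 20.0 * np.log10(rms)
    if level_db > ceil_db:                            # above threshold range: compress
        return frame * 10.0 ** ((ceil_db - level_db) / 20.0)
    if level_db < floor_db:                           # below threshold range: expand
        return frame * 10.0 ** ((floor_db - level_db) / 20.0)
    return frame                                      # step 306: output unchanged
```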
10. An audio data processing method of a voice wearable device is characterized by comprising the following steps:
step 401: acquiring first audio data in an audio providing device;
step 402: in synchronization with step 401, acquiring wearing state information of the voice wearable device;
step 403: adjusting the first audio data and the volume according to the wearing state information to obtain second audio data;
step 404: outputting the second audio data;
step 405: in synchronization with step 401, acquiring third audio data of the environment surrounding the voice wearable device;
step 406: in synchronization with step 403, adjusting the third audio data and the volume according to the wearing state information to obtain fourth audio data;
step 407: synthesizing the second audio data and the fourth audio data to obtain fifth audio data;
step 408: monitoring the volume output energy of the fifth audio data, and judging whether the volume output energy exceeds a preset hearing threshold range;
step 409: if the threshold range is exceeded, compressing and/or expanding the fifth audio data;
step 410: outputting the fifth audio data or the compressed/expanded fifth audio data.
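Claim 10 simply chains the ambient mix of claim 8 with the level protection of claim 9; composing the two earlier (hypothetical) sketches gives the whole pipeline.

```python
def process_full(first_audio, ambient_audio, coupling, sample_rate=48_000):
    fifth = process_with_ambient(first_audio, ambient_audio, coupling, sample_rate)  # steps 401-407
    return protect_level(fifth)                                                      # steps 408-410
```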
CN202010908250.0A 2020-09-02 2020-09-02 Voice wearable device and audio data processing method thereof Pending CN112164381A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010908250.0A CN112164381A (en) 2020-09-02 2020-09-02 Voice wearable device and audio data processing method thereof

Publications (1)

Publication Number Publication Date
CN112164381A (en) 2021-01-01

Family

ID=73858666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010908250.0A Pending CN112164381A (en) 2020-09-02 2020-09-02 Voice wearable device and audio data processing method thereof

Country Status (1)

Country Link
CN (1) CN112164381A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102118667A (en) * 2009-12-31 2011-07-06 歌尔声学股份有限公司 Unsealed earplug-type headset, and device and method for enhancing voice of receiving end
CN103348697A (en) * 2010-12-10 2013-10-09 沃福森微电子股份有限公司 Active noise cancelling ear phone system
CN104702763A (en) * 2015-03-04 2015-06-10 乐视致新电子科技(天津)有限公司 Method, device and system for adjusting volume
CN205028650U * 2015-08-25 2016-02-10 牛跃华 Noise processing device that interferes with sound propagation using phase-inverted audio
CN108737923A (en) * 2018-05-22 2018-11-02 Oppo广东移动通信有限公司 Volume adjusting method and related product
CN213547789U (en) * 2020-09-02 2021-06-25 深圳市妙严科技有限公司 Voice wearable device and system thereof

Similar Documents

Publication Publication Date Title
US11812223B2 (en) Electronic device using a compound metric for sound enhancement
CN107071647B Sound collection method, system and device
US7580536B2 (en) Sound enhancement for hearing-impaired listeners
CN109493877B (en) Voice enhancement method and device of hearing aid device
EP2744225B1 (en) Hearing instrument and method of identifying an output transducer of a hearing instrument
AU2004301961B2 (en) Sound enhancement for hearing-impaired listeners
US10034087B2 (en) Audio signal processing for listening devices
CN104754462A (en) Automatic regulating device and method for volume and earphone
KR20190065602A (en) Digital hearing device using bluetooth circuit and digital signal processing
KR20110011394A Wireless hearing aid capable of controlling output by frequency range
CN213547789U (en) Voice wearable device and system thereof
CN111800699B (en) Volume adjustment prompting method and device, earphone equipment and storage medium
CN111491234B (en) Headset noise reduction earphone
CN112164381A (en) Voice wearable device and audio data processing method thereof
CN105979461A (en) Hearing aid earphone based on intelligent terminal and system thereof
CN207518801U Remote music playing device for a neck-worn voice interaction earphone
CN113099338A Intelligently controlled noise-reduction audio chip and wireless earphone
JP5502166B2 (en) Measuring apparatus and measuring method
CN207518797U Voice control optimization device for a neck-worn voice interaction earphone
CN207995324U Neck-worn voice interaction earphone
CN111954116A Bluetooth headset and volume self-adaptation method thereof
CN104954962A (en) Hairpin bone conduction hearing aid with voice communication function
JP2002062886A (en) Voice receiver with sensitivity adjusting function
CN113794963B (en) Speech enhancement system based on low-cost wearable sensor
US20240205588A1 (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination