CN113965864A - Intelligent interaction method and device for hearing aid - Google Patents
- Publication number: CN113965864A (application CN202111142603.1A)
- Authority: CN (China)
- Prior art keywords: audio data, mode, instruction, setting, information
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04R25/603 — Mounting or interconnection of mechanical or electronic switches or control elements of hearing aid parts
- G10L15/02 — Feature extraction for speech recognition; selection of recognition unit
- G10L15/08 — Speech classification or search
- G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L25/51 — Speech or voice analysis techniques specially adapted for comparison or discrimination
- G10L2015/223 — Execution procedure of a spoken command
- H04R2225/61 — Aspects relating to mechanical or electronic switches or control elements
Abstract
The invention provides an intelligent interaction method and device for a hearing aid, applied to a hearing aid terminal and comprising the following steps: acquiring instruction information, the instruction information being a language sound signal with a set tone and set vocabulary; searching for a related program according to the instruction information, and broadcasting state information that the related program is not executable if no related program exists for the instruction information; and if the related program is executable, adjusting the operation mode according to the related program, where the operation mode at least comprises a Bluetooth mode, a translation mode, an interaction mode, a fitting mode, a navigation mode, and a start/stop mode.
Description
Technical Field
The invention relates to the field of hearing aids, in particular to an intelligent interaction method and device for a hearing aid.
Background
For people with hearing impairment, a hearing aid is indispensable, but some traditional hearing aids amplify noise while improving sound reception, giving the wearer a poor experience. Audio separation technology is well suited to the field of voice interaction and can address this industry pain point. The existing core audio separation technology has many application scenarios; when applied to the field of hearing aids, it makes the sound heard by hearing-impaired users cleaner and purer, rather than excessively loud, which is what hearing-impaired users have begun to complain about;
in addition to the above application, audio separation technology is widely used in various interactive scenarios. As for the interactive function of hearing aids, most hearing aid products on the market realize it through a connected display terminal or smart terminal, which is highly inconvenient when the user adjusts the hearing aid, especially when switching scenes in hearing aid mode. An intelligent interactive hearing aid method and device combining audio separation technology is therefore urgently needed.
Summary of the Invention
In view of the defects in the prior art, the invention provides an intelligent interaction method and device for a hearing aid, so as to reduce the difficulty of interaction between the hearing aid terminal and the user.
According to a first aspect of the embodiments of the present disclosure, there is provided an intelligent interaction method for a hearing aid, applied to a hearing aid terminal, including:
acquiring instruction information, wherein the instruction information is a language sound signal with a set tone and set vocabulary;
searching for a related program according to the instruction information, and broadcasting state information that the related program is not executable if no related program exists for the instruction information;
if the related program is executable, adjusting an operation mode according to the related program, wherein the operation mode at least comprises a Bluetooth mode, a translation mode, an interaction mode, a fitting mode, a navigation mode, and a start/stop mode.
In one embodiment, obtaining instruction information, the instruction information being a language sound signal with a set tone and set vocabulary, includes:
receiving audio data that may contain the instruction information, and playing the audio data after denoising it, wherein the audio data comprises a language sound signal, an environment sound signal, and a noise sound signal;
during this process, extracting feature information from the audio data and matching the extracted feature information into subsequences;
and performing a search based on the obtained subsequences to obtain the instruction data corresponding to the audio data.
In one embodiment, extracting the feature information from the audio data during this process and matching the extracted feature information into subsequences includes:
acquiring stored language sound signal characteristic data containing the set tone, the set vocabulary, and the set instruction, wherein the set vocabulary is an uncommon term and the set tone is the sound characteristic of a set person;
caching the audio data and determining whether a set instruction exists in the audio data; if not, not executing the program that matches the set instruction with the subsequences;
if so, determining whether the set instruction satisfies the set tone and set vocabulary characteristics; if so, matching the set instruction with the subsequences; and if not, not executing the program that matches the set instruction with the subsequences.
In one embodiment, the audio data is not played while the related program adjusts the operation mode.
According to a second aspect of the embodiments of the present disclosure, the present invention provides a hearing aid intelligent interaction device, applied to a hearing aid terminal, including:
the receiving module is used for acquiring instruction information, wherein the instruction information is a language sound signal with set tone and set vocabulary;
the searching module is used for searching a related program according to the instruction information, and broadcasting the state information that the related program cannot be executed if the instruction information does not have the related program;
and the execution module is used for adjusting the operation mode according to the related program if the related program is executable, wherein the operation mode at least comprises a Bluetooth mode, a translation mode, an interaction mode, a fitting mode, a navigation mode and a start/stop mode.
In one embodiment, the receiving module includes:
the acquisition module is used for receiving audio data that may contain the instruction information, denoising the audio data, and then playing it, wherein the audio data comprises a language sound signal, an environment sound signal, and a noise sound signal;
the extraction module is used for extracting feature information from the audio data during this process and matching the extracted feature information into subsequences;
and the matching module is used for performing a search based on the obtained subsequences to obtain the instruction data corresponding to the audio data.
In one embodiment, the extraction module includes:
the acquisition module is used for acquiring stored language sound signal characteristic data containing the set tone, the set vocabulary, and the set instruction, wherein the set vocabulary is an uncommon term and the set tone is the sound characteristic of a set person;
the cache module is used for caching the audio data and determining whether a set instruction exists in the audio data, and if not, not executing the program that matches the set instruction with the subsequences;
and the verification module is used for determining, if a set instruction exists, whether it satisfies the set tone and set vocabulary characteristics; if so, matching the set instruction with the subsequences; and if not, not executing the program that matches the set instruction with the subsequences.
In one embodiment, the audio data is not played while the related program adjusts the operation mode.
According to a third aspect of the disclosed embodiments, the present invention provides a hearing aid smart interaction device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the steps of the above method.
According to a fourth aspect of the embodiments of the present disclosure, the present invention provides a computer-readable storage medium having stored thereon a computer program, the computer program being executed by a processor to perform the steps of the above method.
According to the above technical solution, the intelligent interaction method and device for a hearing aid provided by the invention have the following beneficial effects: by extracting the set person's tone and the set vocabulary, the hearing aid realizes the activation operation and searches for the related program according to the set instruction, so that the hearing aid can jump to any one of the Bluetooth mode, the translation mode, the interaction mode, the fitting mode, the navigation mode, and the start/stop mode without interacting through a display terminal or smart terminal, reducing the difficulty of interaction between the hearing-impaired person and the hearing aid; the verification arrangement ensures that the hearing aid does not misjudge during the hearing-impaired person's conversations, guaranteeing the normal performance of the hearing aid.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
To illustrate the embodiments of the invention more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. In the drawings, elements or parts are not necessarily drawn to actual scale.
Fig. 1 is a flowchart of a hearing aid intelligent interaction method provided by the present invention;
fig. 2 is a flowchart of step S12 in a hearing aid intelligent interaction method provided by the present invention;
fig. 3 is a flowchart of step S22 in a hearing aid intelligent interaction method provided by the present invention;
fig. 4 is a block diagram of a hearing aid intelligent interaction device provided by the present invention;
fig. 5 is a partial flowchart of the operation of a hearing aid intelligent interaction device provided by the present invention;
fig. 6 is a block diagram of another hearing aid intelligent interaction device provided by the present invention.
Detailed Description
Embodiments of the technical solution of the present invention will be described in detail below with reference to the drawings. The following examples are only intended to illustrate the technical solution more clearly; they therefore serve only as examples, and the protection scope of the invention is not limited thereby.
Fig. 1 is a flowchart of an intelligent interaction method for a hearing aid according to the present invention, which is applied to a hearing aid terminal; the terminal can present information such as pictures, videos, short messages, and WeChat messages. The terminal may be any device having a display screen, such as a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, or personal digital assistant. The hearing aid intelligent interaction method provided by this embodiment, as shown in fig. 1, is applied to a hearing aid terminal and includes the following steps S11-S13:
in step S11, acquiring instruction information, which is a language sound signal having a set tone and a set vocabulary;
in the implementation mode, the instruction information is a language sound self-sent by the hearing-impaired person, the language sound content is instruction content, when the hearing-impaired person communicates with surrounding characters, the language sound sent by the surrounding characters cannot be recognized by the system, and set words, such as specific words of 'wood', 'moxa' and the like, need to be repeated before and after the language sound content to activate the language sound content, so that system misjudgment is avoided;
in step S12, searching for a related program according to the instruction information, and if there is no related program in the instruction information, broadcasting state information that the related program is not executable;
optionally, when no related program can be found for the speech content of the instruction information, the system broadcasts "invalid input content" according to preset audio information and the current operation mode remains unchanged; after the related program has been found, the system broadcasts "program running";
in step S13, if the related program is executable, adjusting an operation mode according to the related program, where the operation mode at least includes a bluetooth mode, a translation mode, an interaction mode, a fitting mode, a navigation mode, and a start/stop mode;
in this implementation there is no need to interact through a display terminal or smart terminal, which reduces the difficulty of interaction between the hearing-impaired person and the hearing aid. For example, after entering the interaction mode, the system collects the user's speech information and searches for the corresponding voice information to play, so that man-machine conversation can be realized with a high degree of intelligence.
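The lookup-and-broadcast behaviour of steps S12-S13 can be sketched as a dispatch table; the mode keys and broadcast strings below are assumptions for illustration only.

```python
# Hypothetical mapping from recognized instruction content to operation modes.
MODES = {
    "bluetooth": "Bluetooth mode",
    "translation": "translation mode",
    "interaction": "interaction mode",
    "fitting": "fitting mode",
    "navigation": "navigation mode",
    "power": "start/stop mode",
}

def dispatch(instruction: str) -> str:
    """Search for the related program and return the broadcast message."""
    mode = MODES.get(instruction)
    if mode is None:
        return "invalid input content"    # no related program exists
    return f"program running: {mode}"     # related program is executable
```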
In one embodiment, as shown in fig. 2, in step S12, obtaining instruction information, the instruction information being a language sound signal having a set tone and a set vocabulary, includes the following steps S21-S23:
in step S21, receiving audio data that may include the instruction information, and playing the audio data after denoising the audio data, where the audio data includes a language sound signal, an environment sound signal, and a noise sound signal;
optionally, the audio data is a composite of surrounding sounds, and the noise component is eliminated by a filter, which effectively improves the recognizability, clarity, and comfort of the sound;
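The noise-elimination step can be sketched with a simple moving-average filter over the sample stream; a real hearing aid would use a tuned digital filter, so treat this purely as an illustrative stand-in.

```python
def moving_average_denoise(samples, window=5):
    """Suppress high-frequency noise with a moving-average filter.

    Each output sample is the mean of its neighbourhood, shrunk
    at the edges; window size is an assumed tuning parameter.
    """
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out
```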
in step S22, during this process, feature information is extracted from the audio data and the extracted feature information is matched into subsequences;
in step S23, a search is performed based on the obtained subsequence to obtain instruction data corresponding to the audio data;
optionally, the hearing aid terminal searches for the feature information in the audio data to obtain the corresponding instruction data; the "speech content" expressed by the hearing-impaired person must be completely consistent with the stored sound content. For example, if the hearing-impaired person says "ai, bluetooth connection" while the stored sound content is "bluetooth listening", the contents do not match, the expected mode cannot be switched to, and only "invalid input content" is broadcast.
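The exact-consistency requirement can be sketched as a dictionary lookup in which any deviation from the stored sound content yields no instruction data; the stored phrases and instruction names below are invented for illustration.

```python
# Hypothetical stored sound contents mapped to instruction data.
STORED = {
    "ai, bluetooth connection": "enter_bluetooth_mode",
    "ai, start translation": "enter_translation_mode",
}

def lookup_instruction(spoken: str):
    """Exact match only: a near miss such as 'bluetooth listening'
    returns None, so the terminal broadcasts 'invalid input content'."""
    return STORED.get(spoken)
```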
In one embodiment, as shown in fig. 3, in step S22, the process of extracting the feature information of the audio data and matching the extracted feature information into subsequences includes the following steps S31-S33:
in step S31, stored language sound signal characteristic data containing the set tone, the set vocabulary, and the set instruction are acquired, the set vocabulary being an uncommon term and the set tone being the sound characteristic of a set person;
optionally, the set tone includes at least the hearing-impaired person himself, and the sound characteristics of attending physicians, family members, and friends can also be enrolled; the activation program runs only when such a person speaks the set vocabulary. To avoid interference with normal life, the set vocabulary is concise and uncommon in daily speech;
in step S32, the audio data is buffered and it is determined whether a set instruction exists in the audio data; if not, the program that matches the set instruction with the subsequence is not executed;
in step S33, if a set instruction exists, it is determined whether it satisfies the set tone and set vocabulary characteristics; if so, the set instruction is matched with the subsequence; if not, the program that matches the set instruction with the subsequence is not executed;
in this implementation, only the speech before and after a set person utters the set vocabulary is analyzed, so the hearing aid does not misjudge during the hearing-impaired person's conversations; the normal performance of the hearing aid is guaranteed, and the technical defect of hearing aid voice interaction is overcome.
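Steps S31-S33 amount to a two-stage check: instruction presence, then tone and vocabulary verification. The sketch below assumes an upstream recognizer that labels each buffered segment with a speaker identity and a word list; the enrolled identities and set vocabulary are illustrative assumptions.

```python
ENROLLED_TONES = {"wearer", "physician", "family"}  # assumed set persons
SET_VOCABULARY = {"wood", "moxa"}                   # assumed set vocabulary

def verify_segment(segment) -> bool:
    """Accept a segment only if a set instruction is present AND it
    satisfies both the set tone and the set vocabulary characteristics."""
    if not segment.get("instruction"):            # no set instruction: skip
        return False
    tone_ok = segment.get("speaker") in ENROLLED_TONES
    vocab_ok = bool(SET_VOCABULARY & set(segment.get("words", ())))
    return tone_ok and vocab_ok
```

Only segments that pass both checks are matched with the subsequences, which is what prevents ordinary conversation from triggering a mode change.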
In one embodiment, the audio data is not played while the related program adjusts the operation mode;
optionally, before the hearing aid terminal switches the operation mode, and especially when adjusting the scene hearing-aid system, the audio playback device of the hearing aid terminal is briefly turned off so that the hearing-impaired person does not hear the mode-switching noise; after the new mode runs stably, the audio playback device is turned on again.
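This muting behaviour can be sketched as follows; the class and attribute names are assumptions, and the "runs stably" condition is abstracted into a comment.

```python
class HearingAidAudio:
    """Sketch: playback is briefly turned off while the operation
    mode changes, so the wearer never hears switching noise."""

    def __init__(self):
        self.playing = True
        self.mode = "default"

    def switch_mode(self, new_mode: str):
        self.playing = False   # turn off the audio playback device
        self.mode = new_mode   # run the related program
        # ... once the new mode runs stably ...
        self.playing = True    # restore playback; the noise went unheard
```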
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 4 is a block diagram of a hearing aid intelligent interaction device provided by the present invention, which may be implemented as part or all of an electronic device through software, hardware or a combination of both. As shown in fig. 4, the apparatus, applied to a hearing aid terminal, includes:
the receiving module 121 is configured to obtain instruction information, where the instruction information is a language sound signal with a set tone and a set vocabulary;
the searching module 122 is configured to search for a related program according to the instruction information, and if the instruction information does not have the related program, broadcast state information that the related program is not executable;
the execution module 123 is configured to adjust an operation mode according to the related program if the related program is executable, where the operation mode at least includes a bluetooth mode, a translation mode, an interaction mode, a fitting mode, a navigation mode, and a power on/off mode.
Through the hearing aid's extraction of the set person's tone and the set vocabulary, the present disclosure realizes the activation operation and searches for the related program according to the set instruction, so that the hearing aid can jump to any one of the Bluetooth mode, the translation mode, the interaction mode, the fitting mode, the navigation mode, and the start/stop mode without interacting through a display terminal or smart terminal, reducing the difficulty of interaction between the hearing-impaired person and the hearing aid; the verification arrangement ensures that the hearing aid does not misjudge during the hearing-impaired person's conversations, guaranteeing its normal performance.
In an embodiment, as shown in fig. 4, the receiving module 121 includes:
the acquisition module 131 is configured to receive audio data that may include the instruction information, perform noise cancellation on the audio data, and play the audio data, where the audio data includes a language sound signal, an environment sound signal, and a noise sound signal;
an extracting module 132, configured to perform feature information extraction on the audio data in a process, and match the extracted feature information with a subsequence;
a matching module 133, configured to perform a search based on the obtained subsequence to obtain instruction data corresponding to the audio data.
In one embodiment, as shown in fig. 4, the extracting module 132 includes:
an acquisition module 141, configured to acquire stored language sound signal characteristic data including the set tone, the set vocabulary, and the set instruction, where the set vocabulary is an uncommon term and the set tone is the sound characteristic of a set person;
a cache module 142, configured to cache the audio data and determine whether a set instruction exists in the audio data, and if not, not execute the program that matches the set instruction with the subsequence;
and a verification module 143, configured to determine, if a set instruction exists, whether it satisfies the set tone and set vocabulary characteristics; if so, to match the set instruction with the subsequence; and if not, to not execute the program that matches the set instruction with the subsequence.
In one embodiment, the audio data is not played while the related program adjusts the operation mode.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The embodiment of the present disclosure further provides a hearing aid intelligent interaction device:
fig. 6 is a block diagram illustrating a hearing aid smart interaction device 800 according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; it may also detect a change in the position of the device 800 or of one of its components, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. An intelligent interaction method for a hearing aid, applied to a hearing aid terminal, the method comprising:
acquiring instruction information, wherein the instruction information is a speech sound signal having a set tone and a set vocabulary;
searching for a program associated with the instruction information, and if no associated program exists for the instruction information, broadcasting status information indicating that the associated program is not executable;
if the associated program is executable, adjusting an operation mode according to the associated program, wherein the operation mode comprises at least a Bluetooth mode, a translation mode, an interaction mode, a fitting mode, a navigation mode, and a power on/off mode.
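The control flow of claim 1 can be sketched as a small dispatch routine: a recognized voice instruction is looked up in a table of associated programs, and the device either switches its operation mode or broadcasts a "not executable" status. This is a hypothetical illustration, not the patented implementation; the instruction strings and the table contents are invented for the example.

```python
# Hypothetical sketch of the claim-1 flow: look up a program for a recognized
# voice instruction, then either switch the operation mode or broadcast a
# "not executable" status. All names and table entries are illustrative.

MODES = {"bluetooth", "translation", "interaction", "fitting", "navigation", "power"}

# Illustrative instruction-to-mode table; the patent does not specify one.
PROGRAM_TABLE = {
    "open bluetooth": "bluetooth",
    "translate": "translation",
    "start fitting": "fitting",
}

def handle_instruction(instruction_text: str) -> str:
    """Return the resulting mode, or a broadcast status string."""
    mode = PROGRAM_TABLE.get(instruction_text)
    if mode is None:
        # No associated program: broadcast status information instead.
        return "status: related program is not executable"
    assert mode in MODES
    return f"mode: {mode}"
```

A real hearing aid terminal would route the broadcast string through its speaker as synthesized speech rather than returning it to a caller.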
2. The method according to claim 1, wherein acquiring the instruction information, the instruction information being a speech sound signal having the set tone and the set vocabulary, comprises:
receiving audio data that may contain the instruction information, and playing the audio data after denoising it, wherein the audio data comprises a speech sound signal, an environmental sound signal, and a noise signal;
during this process, extracting feature information from the audio data and matching the extracted feature information into subsequences;
searching based on the obtained subsequences to obtain instruction data corresponding to the audio data.
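The extract-then-search steps of claim 2 can be sketched as follows. This is a toy illustration under stated assumptions: real feature extraction would produce acoustic features such as MFCC frames, whereas here each frame yields a single symbol, and the "instruction index" is an invented lookup table keyed by feature subsequences.

```python
# Hypothetical sketch of claim 2: extract feature frames from buffered audio,
# slide a window over them to form subsequences, and search an index of known
# instructions. The symbolic "features" are purely illustrative.

def extract_features(audio_frames):
    # Stand-in feature extractor: one symbol per frame.
    return [frame["symbol"] for frame in audio_frames]

def to_subsequences(features, width=2):
    # Slide a fixed-width window over the feature stream.
    return [tuple(features[i:i + width]) for i in range(len(features) - width + 1)]

def lookup_instruction(subsequences, instruction_index):
    # Return the first instruction whose key subsequence appears in the audio.
    for subseq in subsequences:
        if subseq in instruction_index:
            return instruction_index[subseq]
    return None  # no instruction data found for this audio
```

The windowing step mirrors the claim's "matching the extracted feature information into subsequences"; the final lookup mirrors "searching based on the obtained subsequences".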
3. The method of claim 2, wherein extracting feature information from the audio data during the process and matching the extracted feature information into subsequences comprises:
acquiring stored speech-signal feature data containing the set tone, the set vocabulary, and a set instruction, wherein the set vocabulary consists of uncommon words and the set tone is the voice characteristic of a set speaker;
caching the audio data and determining whether the set instruction is present in the audio data; if not, not executing the matching of the set instruction against the subsequences;
if so, determining whether the set instruction satisfies the set tone and the set vocabulary characteristics; if it does, matching the set instruction against the subsequences; if it does not, not executing the matching of the set instruction against the subsequences.
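The gating logic of claim 3 (only match when the set instruction is present and both the set tone and set vocabulary check out) can be sketched as a simple predicate. This is a minimal sketch: the field names (`words`, `tone`, `set_instruction`, and so on) and the equality-based tone check are invented stand-ins for real speaker and keyword verification.

```python
# Hypothetical sketch of the claim-3 check: matching against the subsequences
# runs only when the cached audio contains the set (wake) instruction AND the
# speaker's tone and the set (uncommon) vocabulary both match the stored
# feature data. All field names are illustrative.

def should_match(audio, stored):
    """Decide whether to run subsequence matching for this audio buffer."""
    if stored["set_instruction"] not in audio["words"]:
        return False  # no set instruction present: skip matching entirely
    tone_ok = audio["tone"] == stored["set_tone"]          # speaker check
    vocab_ok = stored["set_vocabulary"] in audio["words"]  # uncommon-word check
    return tone_ok and vocab_ok
```

Requiring an uncommon vocabulary word and a specific speaker's tone together reduces accidental activations from ordinary conversation, which is the apparent intent of the claim.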
4. The method of claim 2, wherein the audio data is not played while the associated program is adjusting the operation mode.
5. An intelligent interaction apparatus for a hearing aid, applied to a hearing aid terminal, the apparatus comprising:
a receiving module, configured to acquire instruction information, wherein the instruction information is a speech sound signal having a set tone and a set vocabulary;
a searching module, configured to search for a program associated with the instruction information, and to broadcast status information indicating that the associated program is not executable if no associated program exists for the instruction information;
an execution module, configured to adjust an operation mode according to the associated program if the associated program is executable, wherein the operation mode comprises at least a Bluetooth mode, a translation mode, an interaction mode, a fitting mode, a navigation mode, and a power on/off mode.
6. The apparatus of claim 5, wherein the receiving module comprises:
a collection module, configured to receive audio data that may contain the instruction information, and to play the audio data after denoising it, wherein the audio data comprises a speech sound signal, an environmental sound signal, and a noise signal;
an extraction module, configured to extract feature information from the audio data during this process and to match the extracted feature information into subsequences;
a matching module, configured to search based on the obtained subsequences to obtain instruction data corresponding to the audio data.
7. The apparatus of claim 6, wherein the extraction module comprises:
an acquisition module, configured to acquire stored speech-signal feature data containing the set tone, the set vocabulary, and a set instruction, wherein the set vocabulary consists of uncommon words and the set tone is the voice characteristic of a set speaker;
a caching module, configured to cache the audio data and determine whether the set instruction is present in the audio data, and if not, to not execute the matching of the set instruction against the subsequences;
a checking module, configured to determine, if the set instruction is present, whether it satisfies the set tone and the set vocabulary characteristics; if it does, to match the set instruction against the subsequences; if it does not, to not execute the matching.
8. The apparatus of claim 6, wherein the audio data is not played while the associated program is adjusting the operation mode.
9. An intelligent interaction apparatus for a hearing aid, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111142603.1A CN113965864A (en) | 2021-09-28 | 2021-09-28 | Intelligent interaction method and device for hearing aid |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113965864A true CN113965864A (en) | 2022-01-21 |
Family
ID=79462950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111142603.1A Pending CN113965864A (en) | 2021-09-28 | 2021-09-28 | Intelligent interaction method and device for hearing aid |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113965864A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150281856A1 (en) * | 2014-03-25 | 2015-10-01 | Samsung Electronics Co., Ltd. | Method for adapting sound of hearing aid and hearing aid and electronic device performing the same |
CN105519138A (en) * | 2013-08-20 | 2016-04-20 | 唯听助听器公司 | Hearing aid having an adaptive classifier |
CN107135452A (en) * | 2017-05-31 | 2017-09-05 | 北京小米移动软件有限公司 | Audiphone adaptation method and device |
CN109903765A (en) * | 2019-03-01 | 2019-06-18 | 西安极蜂天下信息科技有限公司 | Sound control method and device |
CN110191406A (en) * | 2019-05-27 | 2019-08-30 | 深圳市中德听力技术有限公司 | A kind of hearing aid with wireless transmission function |
CN112040383A (en) * | 2020-08-07 | 2020-12-04 | 深圳市微纳集成电路与***应用研究院 | Hearing aid device |
CN112965590A (en) * | 2021-02-03 | 2021-06-15 | 张德运 | Artificial intelligence interaction method, system, computer equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
CHEN Fei, "Design and Implementation of a Low-Power Intelligent Bluetooth Cloud-Interactive Headset", Computer Knowledge and Technology (电脑知识与技术), no. 28, 5 October 2018 (2018-10-05) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109446876B (en) | Sign language information processing method and device, electronic equipment and readable storage medium | |
CN109087650B (en) | Voice wake-up method and device | |
CN111696553B (en) | Voice processing method, device and readable medium | |
WO2021031308A1 (en) | Audio processing method and device, and storage medium | |
CN107493500A (en) | Multimedia resource player method and device | |
CN111836062A (en) | Video playing method and device and computer readable storage medium | |
CN109360549B (en) | Data processing method, wearable device and device for data processing | |
CN111063354B (en) | Man-machine interaction method and device | |
CN107135452B (en) | Hearing aid fitting method and device | |
CN106888327B (en) | Voice playing method and device | |
CN111009239A (en) | Echo cancellation method, echo cancellation device and electronic equipment | |
CN111988704B (en) | Sound signal processing method, device and storage medium | |
CN108600503B (en) | Voice call control method and device | |
CN108766427B (en) | Voice control method and device | |
CN107247794B (en) | Topic guiding method in live broadcast, live broadcast device and terminal equipment | |
CN116758896A (en) | Conference audio language adjustment method, device, electronic equipment and storage medium | |
CN113726952B (en) | Simultaneous interpretation method and device in call process, electronic equipment and storage medium | |
CN111694539B (en) | Method, device and medium for switching between earphone and loudspeaker | |
CN112866480B (en) | Information processing method, information processing device, electronic equipment and storage medium | |
CN113965864A (en) | Intelligent interaction method and device for hearing aid | |
CN112489653B (en) | Speech recognition method, device and storage medium | |
CN108491180B (en) | Audio playing method and device | |
CN108364631B (en) | Speech synthesis method and device | |
CN113825082B (en) | Method and device for relieving hearing aid delay | |
CN114245261A (en) | Real-time conversation translation method, system, earphone device and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||