CN111683317A - Prompting method and device applied to earphone, terminal and storage medium - Google Patents

Prompting method and device applied to earphone, terminal and storage medium

Info

Publication number
CN111683317A
CN111683317A
Authority
CN
China
Prior art keywords
characteristic value
preset
data
prompting
earphone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010466273.0A
Other languages
Chinese (zh)
Other versions
CN111683317B (en)
Inventor
张峰
张斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Zimi Electronic Technology Co Ltd
Original Assignee
Jiangsu Zimi Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Zimi Electronic Technology Co Ltd filed Critical Jiangsu Zimi Electronic Technology Co Ltd
Priority to CN202010466273.0A
Publication of CN111683317A
Application granted
Publication of CN111683317B
Active legal status
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1083 Reduction of ambient noise
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00 Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

The embodiment of the invention discloses a prompting method and device applied to an earphone, a terminal and a storage medium. The method comprises the following steps: collecting environmental sound data and extracting a characteristic value of the environmental sound data; and when the characteristic value is judged to match a preset characteristic value, prompting in a preset prompting mode. According to the technical solution of the embodiment of the invention, the earphone can prompt the user about audio the user cares about while isolating external noise, improving the user experience.

Description

Prompting method and device applied to earphone, terminal and storage medium
Technical Field
The embodiment of the invention relates to the technical field of earphones, and in particular to a prompting method and device applied to an earphone, a terminal and a storage medium.
Background
In recent years, against the background of the rapid global popularization of new-generation consumer electronics such as smart phones and tablet computers, earphone products, especially wireless earphones, have grown explosively. Noise reduction earphones, which isolate external noise and improve sound quality, are becoming increasingly popular.
Existing noise reduction methods eliminate external environmental noise as thoroughly as possible. Their deficiencies include at least the following: when someone calls the user by name, that information is also eliminated as ambient noise, so the earphone user cannot communicate normally with the outside, resulting in poor user experience.
Disclosure of Invention
The embodiment of the invention provides a prompting method and device applied to an earphone, a terminal and a storage medium, which can ensure that the user is prompted about audio the user cares about while external noise is isolated, improving the user experience.
In a first aspect, an embodiment of the present invention provides a prompting method applied to an earphone, where the method includes:
collecting environmental sound data and extracting a characteristic value of the environmental sound data;
and when the characteristic value is judged to be matched with the preset characteristic value, prompting in a preset prompting mode.
Optionally, the extracting the feature value of the environmental sound data includes:
converting the environment sound data into binary data according to a voice processing technology, wherein the binary data comprises the characteristic value;
correspondingly, judging that the characteristic value is matched with a preset characteristic value comprises the following steps:
and if the matching degree between the characteristic value and the preset characteristic value meets the requirement, judging that the characteristic value matches the preset characteristic value.
Optionally, the generating of the preset feature value includes:
inputting first character data through a mobile terminal which establishes communication connection with the earphone;
uploading the first character data to a server through the mobile terminal, so that the server searches the matched historical character data in the corpus according to the first character data, and generates a preset characteristic value according to the voice data corresponding to the historical character data;
and receiving the preset characteristic value fed back by the server through the mobile terminal.
Optionally, when the first text data is entered through the mobile terminal establishing communication connection with the headset, the method further includes:
and inputting first voice data corresponding to the first character data through the mobile terminal, and uploading the first voice data to the server, so that the server generates a preset characteristic value according to the voice data corresponding to the historical character data and the first voice data.
Optionally, after collecting the environmental sound data, the method further includes:
sending the environment sound data to a mobile terminal which is in communication connection with the earphone, so that the mobile terminal extracts the characteristic value of the environment sound data; or,
uploading the environmental sound data to a server through a mobile terminal which is in communication connection with the earphone, so that the server extracts the characteristic value of the environmental sound data;
correspondingly, when judging that the characteristic value is matched with the preset characteristic value, prompting by adopting a preset prompting mode comprises the following steps:
when a characteristic value matching message fed back by the mobile terminal is received, prompting in a preset prompting mode; the characteristic value matching message fed back by the mobile terminal is generated when the mobile terminal judges that the characteristic value is matched with a preset characteristic value; or,
when a characteristic value matching message fed back by the server and forwarded by the mobile terminal is received, prompting in a preset prompting mode; the characteristic value matching message fed back by the server is generated when the server judges that the characteristic value is matched with a preset characteristic value.
Optionally, the method is applied to an earphone with the active noise reduction mode turned on, or to an earphone performing passive noise reduction;
correspondingly, when applied to an earphone with the active noise reduction mode turned on, after prompting in the preset prompting mode, the method further includes:
switching the active noise reduction mode into an environment sound transparent mode; or,
switching the active noise reduction mode into an environment sound transparent mode based on a received mode switching instruction.
Optionally, the preset prompting mode is a voice prompt and/or a vibration prompt.
In a second aspect, an embodiment of the present invention further provides a prompting device applied to an earphone, where the prompting device includes:
the characteristic value extraction module is used for collecting environmental sound data and extracting the characteristic value of the environmental sound data;
and the prompting module is used for prompting by adopting a preset prompting mode when the characteristic value is judged to be matched with the preset characteristic value.
In a third aspect, an embodiment of the present invention further provides a terminal, including a memory, one or more processors, and one or more programs stored in the memory and executable on the processors, where when the one or more programs are executed by the one or more processors, the one or more processors implement the prompting method applied to an earphone according to any embodiment of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a prompting method applied to a headset according to any embodiment of the present invention.
In the embodiment of the invention, the environmental sound data around the earphone is collected and its characteristic value is extracted; whether the characteristic value of the environmental sound data matches the preset characteristic value is judged, and when it does, the earphone user is prompted in a preset prompting mode. This solves the prior-art problem that an earphone eliminates useful external information while eliminating external environmental noise: it ensures that, on the basis of isolating external noise, the user is prompted about audio the user cares about, improving the user experience.
Drawings
Fig. 1 is a schematic flowchart of a prompting method applied to an earphone according to an embodiment of the present invention;
fig. 2 is a block diagram of a prompting device applied to an earphone according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a terminal according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described through embodiments with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. Individual features recited in the embodiments may be combined to form multiple alternatives, and each numbered embodiment should not be construed as a single embodiment.
Example one
Fig. 1 is a flowchart of a prompting method applied to an earphone according to an embodiment of the present invention. The technical solution of this embodiment is applicable to scenarios in which a noise reduction earphone issues a prompt, for example when the noise reduction earphone collects the voice of someone outside calling the earphone user. The method can be executed by the prompting device applied to the earphone, which can be implemented in software and/or hardware and configured in a terminal, such as the earphone. Accordingly, as shown in fig. 1, the method comprises the following operations:
and S110, collecting the environmental sound data and extracting the characteristic value of the environmental sound data.
The earphone may use an external microphone to collect the environmental sound data, and the environmental sound data may be sound from the external environment containing the name of the earphone user, or the sound of the earphone user's name being called by a mobile terminal. The characteristic value of the environmental sound data may be binary data, a waveform characteristic of the environmental sound data, or characteristic parameters. For example, the characteristic value in the form of binary data may be obtained by sampling and quantizing the original environmental sound data through analog-to-digital conversion. The waveform characteristics of the environmental sound data can be obtained through certain transformations using a programming language such as MATLAB or C++; illustratively, the waveform characteristics can be obtained by a Fourier transform of the environmental sound data. The characteristic parameters may be MFCC parameters or LPCC parameters. The MFCC parameter extraction process includes: processing the environmental sound to obtain the signal data and sampling frequency, pre-emphasis, framing, windowing, discrete Fourier transform (computed via FFT), applying a Mel filter bank, summing the energy within each filter, taking the natural logarithm, discrete cosine transform (DCT), cepstral liftering, calculating the second and third groups of parameters (the first- and second-order differences), and obtaining the final output.
Collecting the environmental sound data and extracting its characteristic value facilitates the subsequent characteristic matching.
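Illustratively, the MFCC extraction described above can be sketched as follows. This is a minimal illustration only, not part of the claimed implementation: it assumes Python with the librosa and numpy libraries, whereas the embodiment leaves the implementation language open (e.g. MATLAB or C++).
# Minimal MFCC sketch (assumption: Python + librosa; illustration only).
import librosa
import numpy as np

def extract_mfcc_features(wav_path, n_mfcc=13):
    # Load the captured ambient sound; sr=None keeps the original sampling rate.
    signal, sr = librosa.load(wav_path, sr=None)
    # Framing, windowing, FFT, Mel filtering, the logarithm and the DCT are
    # handled internally by librosa.feature.mfcc; pre-emphasis, if desired,
    # would be applied to `signal` beforehand.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # First- and second-order differences correspond to the "second and third
    # groups of parameters" mentioned above.
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2])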
And S120, when the characteristic value is judged to be matched with the preset characteristic value, prompting in a preset prompting mode.
The characteristic value is the characteristic value of the environmental sound data in step S110, and the preset characteristic value may be a wake-up word data file, generated by the server, corresponding to the voice data of the earphone user's name, where the wake-up word data file includes the characteristic value of the earphone user's name. The characteristic value and the preset characteristic value are stored in the same file format; whether the characteristic value data and the preset characteristic value data reach a certain matching degree is judged, and if so, the characteristic value is determined to match the preset characteristic value.
Optionally, the generating step of the preset feature value includes: inputting first character data through a mobile terminal which establishes communication connection with an earphone; uploading the first character data to a server through a mobile terminal, so that the server searches the matched historical character data in the corpus according to the first character data, and generates a preset characteristic value according to the voice data corresponding to the historical character data; and receiving the preset characteristic value fed back by the server through the mobile terminal.
Text data of the earphone user's name is entered on the mobile terminal and then uploaded to the server. The server, which stores a large amount of text data, generates a wake-up word data file for the earphone user's name through a series of data operations, based on the text data uploaded by the mobile terminal and the historical text data containing the user's name stored on the server. The wake-up word data file contains the preset characteristic value of the earphone user's name, and the server transmits the preset characteristic value to the mobile terminal.
The preset characteristic value is generated by uploading the text data of the name of the earphone user to the server, so that convenience is provided for characteristic value matching.
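For illustration, the terminal-side part of this exchange can be sketched as follows. The server URL, the JSON field names and the use of Python with the requests library are assumptions made only for this sketch; the embodiment does not prescribe a particular protocol.
# Hedged sketch of the upload/feedback exchange (endpoint and field names are
# hypothetical placeholders).
import requests

SERVER_URL = "https://example-server.invalid/wakeword"  # hypothetical endpoint

def request_preset_feature_value(user_name_text):
    # The mobile terminal uploads the entered text (e.g. the user's name).
    resp = requests.post(SERVER_URL, json={"text": user_name_text}, timeout=10)
    resp.raise_for_status()
    # The server searches its corpus for matching historical text, derives the
    # wake-up word data from the corresponding voice data, and returns the
    # preset characteristic value, which the terminal forwards to the earphone.
    return resp.json()["preset_feature_value"]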
Further, when the first text data is entered through the mobile terminal establishing communication connection with the earphone, the method further comprises the following steps: the first voice data corresponding to the first character data are input through the mobile terminal, and the first voice data are uploaded to the server, so that the server generates a preset characteristic value according to the voice data corresponding to the historical character data and the first voice data.
Besides the text data of the earphone user's name, voice data of the earphone user's name can also be entered on the mobile terminal. Illustratively, the user's name Zhang San is entered on the mobile terminal and the text "Zhang San" is uploaded directly to the server; the server can generate a corpus characteristic value file from the text data "Zhang San" in combination with the corpus of "Zhang San" that the server has collected, or a complete corpus characteristic value file can be generated by combining the corpora of "Zhang San" collected separately by the server. In addition, voice data of "Zhang San" can be recorded directly on the mobile terminal, and the server generates a corpus characteristic value file by combining it with the collected corpus of "Zhang San". The generated corpus characteristic value file contains the preset characteristic value of the earphone user's name. Newly recorded voice data is used to enrich the existing corpus, making the calculation result more accurate. The embodiment of the invention does not specifically limit the way the earphone user's name information is entered. The voice data of the earphone user's name can be collected on the terminal from one or more persons, and can be collected once or several times. The server first processes the acquired text data or voice data of the earphone user's name together with the original data stored on the server. There are various data processing modes, generally divided into offline processing and real-time processing. Illustratively, offline processing is scheduled daily processing, in which the data are computed into various KPIs through data processing frameworks to establish data dimensions; the preprocessed data are imported into a Hive warehouse, and analysis statements are then developed according to requirements to obtain statistical results; finally, the processed data are visualized, which can be done with a third-party platform whose visualization effect can be adjusted at any time, or implemented through software programming. The result of this big data operation is a wake-up word data file for the earphone user's name that contains the preset characteristic value.
The preset characteristic value of the name of the earphone user can be generated by performing certain data operation on the text data or the voice data of the name of the earphone user in the server.
Optionally, the extracting the feature value of the environmental sound data includes: converting the environmental sound data into binary data according to a voice processing technology, wherein the binary data comprises a characteristic value;
correspondingly, judging that the characteristic value matches the preset characteristic value includes: if the matching degree between the characteristic value and the preset characteristic value meets the requirement, judging that the characteristic value matches the preset characteristic value.
The earphone can extract the characteristic value of the environmental sound data through a software program in the earphone body. The software program, written in a programming language such as C++ or Java, takes the environmental sound data as input, samples and quantizes it, and stores the result in a binary file, where the binary file includes the characteristic value of the environmental sound data.
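As an illustration of the sampling-and-quantization step, a minimal Python/NumPy sketch is given below; 16-bit PCM quantization is an assumption of the sketch, and the actual program may equally be written in C++ or Java as noted above.
# Minimal quantization sketch (assumption: 16-bit PCM, Python + NumPy).
import numpy as np

def quantize_to_binary(signal, out_path):
    # Normalise the sampled signal to [-1, 1] and quantize to signed 16-bit integers.
    peak = max(float(np.max(np.abs(signal))), 1e-12)
    pcm = np.round(signal / peak * 32767).astype(np.int16)
    pcm.tofile(out_path)  # raw binary file holding the characteristic value data
    return pcm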
Specifically, the characteristic value of the environmental sound data and the preset characteristic value of the earphone user's name are stored in the same data file format, for example as binary data. However, the characteristic value data of the environmental sound data and the characteristic value data of the earphone user's name may not be identical, so it is judged whether the matching degree of the corresponding binary data reaches a set value. Illustratively, an AND operation may be performed on the characteristic value data of the environmental sound data and the characteristic value data of the earphone user's name, where a result of "1" indicates that the two characteristic value data match at that position; a matching degree threshold, such as 85% or 90%, may be set, and if the matching result reaches the set standard, the characteristic value of the environmental sound data is judged to match the preset characteristic value of the earphone user's name.
Preferably, the wake-up word data generated by the server and containing the characteristic value of the earphone user's name is sent to the earphone body through the mobile terminal, and the characteristic value of the collected environmental sound data is matched, on the earphone, against the preset characteristic value in that wake-up word data.
Matching the characteristic value of the environmental sound data against the preset characteristic value of the earphone user's name is thus realized in the form of binary data, and after the match succeeds the earphone can issue a prompt to the earphone user.
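For illustration, the bitwise comparison described above can be sketched as follows; treating the matching degree as the fraction of agreeing bits between the two binary files is one reading of the AND-based comparison, made only for this sketch.
# Minimal matching sketch (assumption: matching degree = fraction of agreeing bits).
import numpy as np

def feature_values_match(feature_file, preset_file, threshold=0.85):
    # Unpack both binary files into bit arrays.
    feature_bits = np.unpackbits(np.fromfile(feature_file, dtype=np.uint8))
    preset_bits = np.unpackbits(np.fromfile(preset_file, dtype=np.uint8))
    n = max(min(len(feature_bits), len(preset_bits)), 1)
    # Accept the match when the agreement ratio reaches the preset threshold
    # (e.g. 85% or 90%).
    agreement = np.count_nonzero(feature_bits[:n] == preset_bits[:n]) / n
    return agreement >= threshold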
Optionally, after collecting the environmental sound data, the method further includes: sending the environmental sound data to a mobile terminal which is in communication connection with the earphone, so that the mobile terminal extracts the characteristic value of the environmental sound data; or uploading the environmental sound data to a server through a mobile terminal which is in communication connection with the earphone, so that the server extracts the characteristic value of the environmental sound data;
correspondingly, prompting in a preset prompting mode when the characteristic value is judged to match the preset characteristic value includes: prompting in the preset prompting mode when a characteristic value matching message fed back by the mobile terminal is received, where that message is generated when the mobile terminal judges that the characteristic value matches the preset characteristic value; or prompting in the preset prompting mode when a characteristic value matching message fed back by the server and forwarded by the mobile terminal is received, where that message is generated when the server judges that the characteristic value matches the preset characteristic value. The collected environmental sound data can be stored in the earphone body, or it can be sent to the mobile terminal, with the characteristic value of the environmental sound data extracted on the mobile terminal. In addition, the mobile terminal can judge, according to the preset characteristic value it stores, whether the extracted characteristic value matches the preset characteristic value; the judging method can be the same as that used in the earphone body. When the mobile terminal judges that the extracted characteristic value of the environmental sound data matches the preset characteristic value, it can generate a characteristic value matching message and feed it back to the earphone. When the earphone receives the characteristic value matching message fed back by the mobile terminal, it can prompt in the preset prompting mode.
In addition, the environmental sound data can be forwarded to the server through the mobile terminal, and whether the characteristic value matches the preset characteristic value is judged on the server; if the match succeeds, the server feeds a prompt message back to the mobile terminal, and the earphone user is then prompted through the earphone.
Allowing the matching to be performed at different locations makes the matching modes more flexible.
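A hedged sketch of the earphone-side flow when the matching is delegated to the mobile terminal (or, through it, to the server) is given below; the link object, its send/receive methods and the message value are hypothetical placeholders for the wireless connection described above.
# Hedged sketch of delegated matching (link API and message value are hypothetical).
def handle_ambient_sound(ambient_data, link, prompt):
    # Forward the raw ambient sound data to the connected mobile terminal, which
    # extracts the characteristic value and performs the matching, either locally
    # or by uploading the data to the server.
    link.send(ambient_data)
    message = link.receive()           # wait for the terminal's reply
    if message == "FEATURE_MATCHED":   # characteristic value matching message
        prompt()                       # voice and/or vibration prompt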
Optionally, the method is applied to an earphone with the active noise reduction mode turned on, or to an earphone performing passive noise reduction;
correspondingly, when applied to an earphone with the active noise reduction mode turned on, after prompting in the preset prompting mode, the method further includes: switching the active noise reduction mode into an environment sound transparent mode; or switching the active noise reduction mode into the environment sound transparent mode based on a received mode switching instruction.
In the active noise reduction mode, the earphone automatically turns noise reduction on when the user uses it, and the noise reduction eliminates noise from the collected environmental sound; in passive noise reduction, the user turns noise reduction on manually. In the environment sound transparent mode, the environmental sound data is not noise-reduced, so the earphone user can communicate normally with the outside. The earphone can switch the active noise reduction mode into the environment sound transparent mode automatically following the prompt given in the preset prompting mode, or switch it according to a mode switching instruction sent by the earphone user, so that the earphone user can communicate normally with the outside.
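For illustration only, the mode-switching decision described above can be sketched as follows; the mode names and the switch_instruction_received callback are assumptions of the sketch.
# Minimal mode-switching sketch (mode names and callback are assumptions).
ACTIVE_NOISE_REDUCTION = "active_noise_reduction"
AMBIENT_TRANSPARENT = "environment_sound_transparent"

def mode_after_prompt(current_mode, auto_switch, switch_instruction_received):
    # Only an earphone in the active noise reduction mode is switched.
    if current_mode != ACTIVE_NOISE_REDUCTION:
        return current_mode
    # Switch automatically, or on an explicit instruction from the user.
    if auto_switch or switch_instruction_received():
        return AMBIENT_TRANSPARENT
    return current_mode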
Switching to the environment sound transparent mode ensures that the earphone user can receive audio information normally, unaffected by the earphone's working mode.
Optionally, the preset prompting mode is a voice prompt and/or a vibration prompt.
The voice prompt may be, for example, a "ding" tone, a string of tones, or a sentence such as "you have an external call"; the embodiment of the present invention does not limit the specific voice prompt manner. The vibration prompt may be a single short vibration or a long vibration lasting several time periods; the embodiment of the present invention does not limit the vibration form.
Prompting in the preset prompting mode allows the earphone user to know at once that there is external audio information.
According to the technical solution of the embodiment of the invention, the environmental sound data is collected, its characteristic value is extracted, whether that characteristic value matches the preset characteristic value is judged, and if it does, the earphone user is prompted. This solves the prior-art problem that an earphone eliminates useful external information while eliminating external environmental noise: it ensures that, on the basis of isolating external noise, the user is prompted about audio the user cares about, improving the user experience.
Example two
Fig. 2 is a block diagram of a prompting device applied to an earphone according to a second embodiment of the present invention. The device is used for executing the prompting method applied to the earphone provided by any embodiment. The device includes:
the feature value extraction module 210 is configured to collect environmental sound data and extract a feature value of the environmental sound data;
and the prompting module 220 is configured to prompt in a preset prompting manner when the characteristic value is judged to be matched with the preset characteristic value.
Optionally, the feature value extraction module is specifically configured to:
converting the environment sound data into binary data according to a voice processing technology, wherein the binary data comprises the characteristic value;
correspondingly, the prompt module is specifically configured to:
and if the matching degree between the characteristic value and the preset characteristic value meets the requirement, judging that the characteristic value matches the preset characteristic value.
Optionally, the step of generating the preset feature value of the prompt module includes:
inputting first character data through a mobile terminal which establishes communication connection with the earphone;
uploading the first character data to a server through the mobile terminal, so that the server searches the matched historical character data in the corpus according to the first character data, and generates a preset characteristic value according to the voice data corresponding to the historical character data;
and receiving the preset characteristic value fed back by the server through the mobile terminal.
Optionally, the step of generating the preset feature value of the prompt module further includes:
and inputting first voice data corresponding to the first character data through the mobile terminal, and uploading the first voice data to the server, so that the server generates a preset characteristic value according to the voice data corresponding to the historical character data and the first voice data.
Optionally, after collecting the environmental sound data, the characteristic value extraction module further includes:
sending the environment sound data to a mobile terminal which is in communication connection with the earphone, so that the mobile terminal extracts the characteristic value of the environment sound data; or,
uploading the environmental sound data to a server through a mobile terminal which is in communication connection with the earphone, so that the server extracts the characteristic value of the environmental sound data;
correspondingly, when judging that the characteristic value is matched with the preset characteristic value, prompting by adopting a preset prompting mode comprises the following steps:
when a characteristic value matching message fed back by the mobile terminal is received, prompting in a preset prompting mode; the characteristic value matching message fed back by the mobile terminal is generated when the mobile terminal judges that the characteristic value is matched with a preset characteristic value; or,
when a characteristic value matching message fed back by the server and forwarded by the mobile terminal is received, prompting in a preset prompting mode; the characteristic value matching message fed back by the server is generated when the server judges that the characteristic value is matched with a preset characteristic value.
Optionally, the prompt module further includes:
the method is applied to the earphone which starts an active noise reduction mode or is applied to the earphone which carries out passive noise reduction;
correspondingly, when being applied to the earphone of opening the active noise reduction mode, after adopting the preset prompting mode to prompt, still include:
switching the active noise reduction mode into an environment sound transparent mode; alternatively, the first and second electrodes may be,
and switching the active noise reduction mode into an environment sound transparent mode based on the received mode switching instruction.
Optionally, the prompt module further includes:
the preset prompting mode is voice prompting and/or vibration prompting.
The prompting device applied to an earphone provided by the second embodiment of the present invention can execute the prompting method applied to an earphone provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a terminal according to a third embodiment of the present invention. Fig. 3 illustrates a block diagram of an exemplary terminal 12 suitable for use in implementing any of the embodiments of the present invention. The terminal 12 shown in fig. 3 is only an example, and should not bring any limitation to the function and the scope of use of the embodiment of the present invention. The terminal 12 is typically a terminal with an earphone prompting function.
As shown in fig. 3, the terminal 12 is embodied in the form of a general purpose computing device. The components of the terminal 12 may include, but are not limited to: one or more processors or processing units 16, a memory 28, and a bus 18 that couples the various components (including the memory 28 and the processing unit 16).
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
The terminal 12 typically includes a variety of computer readable media. Such media may be any available media that is accessible by terminal 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer device readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. The terminal 12 may further include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 3, and commonly referred to as a "hard drive"). Although not shown in FIG. 3, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk-Read Only Memory (CD-ROM), a Digital Video disk (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product 40, with program product 40 having a set of program modules 42 configured to carry out the functions of embodiments of the invention. Program product 40 may be stored, for example, in memory 28, and such program modules 42 include, but are not limited to, one or more application programs, other program modules, and program data, each of which examples or some combination may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The terminal 12 may also communicate with one or more external devices 14 (e.g., keyboard, mouse, camera, etc., and display), one or more devices that enable a user to interact with the terminal 12, and/or any devices (e.g., network card, modem, etc.) that enable the terminal 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the terminal 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), Wide Area Network (WAN), and/or a public Network such as the internet) via the Network adapter 20. As shown, the network adapter 20 communicates with the other modules of the terminal 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the terminal 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, disk array (RAID) devices, tape drives, and data backup storage devices, to name a few.
The processor 16 executes various functional applications and data processing by running a program stored in the memory 28, for example, implementing a prompting method applied to a headset according to the above embodiment of the present invention, the method includes:
collecting environmental sound data and extracting a characteristic value of the environmental sound data;
and when the characteristic value is judged to be matched with the preset characteristic value, prompting in a preset prompting mode.
Of course, those skilled in the art can understand that the processor can also implement the technical solution of the prompting method applied to the earphone provided by any embodiment of the present invention.
Example four
A fourth embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method for prompting applied to a headset, the method including:
collecting environmental sound data and extracting a characteristic value of the environmental sound data;
and when the characteristic value is judged to be matched with the preset characteristic value, prompting in a preset prompting mode.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the above method operations, and may also perform related operations in a prompting method applied to a headset provided by any embodiment of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments illustrated herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A prompting method applied to an earphone is characterized by comprising the following steps:
collecting environmental sound data and extracting a characteristic value of the environmental sound data;
and when the characteristic value is judged to be matched with the preset characteristic value, prompting in a preset prompting mode.
2. The method according to claim 1, wherein extracting the feature value of the environmental sound data comprises:
converting the environment sound data into binary data according to a voice processing technology, wherein the binary data comprises the characteristic value;
correspondingly, judging that the characteristic value is matched with a preset characteristic value comprises the following steps:
and if the matching degree between the characteristic value and the preset characteristic value meets the requirement, judging that the characteristic value matches the preset characteristic value.
3. The method according to claim 1, wherein the step of generating the preset feature value comprises:
inputting first character data through a mobile terminal which establishes communication connection with the earphone;
uploading the first character data to a server through the mobile terminal, so that the server searches the matched historical character data in the corpus according to the first character data, and generates a preset characteristic value according to the voice data corresponding to the historical character data;
and receiving the preset characteristic value fed back by the server through the mobile terminal.
4. The method according to claim 3, wherein the method further comprises, while entering the first text data via the mobile terminal that establishes the communication connection with the headset:
and inputting first voice data corresponding to the first character data through the mobile terminal, and uploading the first voice data to the server, so that the server generates a preset characteristic value according to the voice data corresponding to the historical character data and the first voice data.
5. The method of claim 1, further comprising, after collecting the environmental sound data:
sending the environment sound data to a mobile terminal which is in communication connection with the earphone, so that the mobile terminal extracts the characteristic value of the environment sound data; or,
uploading the environmental sound data to a server through a mobile terminal which is in communication connection with the earphone, so that the server extracts the characteristic value of the environmental sound data;
correspondingly, when judging that the characteristic value is matched with the preset characteristic value, prompting by adopting a preset prompting mode comprises the following steps:
when a characteristic value matching message fed back by the mobile terminal is received, prompting in a preset prompting mode; the characteristic value matching message fed back by the mobile terminal is generated when the mobile terminal judges that the characteristic value is matched with a preset characteristic value; or,
when a characteristic value matching message fed back by the server and forwarded by the mobile terminal is received, prompting in a preset prompting mode; the characteristic value matching message fed back by the server is generated when the server judges that the characteristic value is matched with a preset characteristic value.
6. The method of claim 1, applied to headphones with active noise reduction mode on, or applied to headphones with passive noise reduction;
correspondingly, when applied to an earphone with the active noise reduction mode turned on, after prompting in the preset prompting mode, the method further comprises:
switching the active noise reduction mode into an environment sound transparent mode; or,
switching the active noise reduction mode into an environment sound transparent mode based on a received mode switching instruction.
7. The method according to any one of claims 1 to 6, wherein the preset prompting mode is a voice prompt and/or a vibration prompt.
8. A prompting device applied to an earphone, comprising:
the characteristic value extraction module is used for collecting environmental sound data and extracting the characteristic value of the environmental sound data;
and the prompting module is used for prompting by adopting a preset prompting mode when the characteristic value is judged to be matched with the preset characteristic value.
9. A terminal comprising a memory, one or more processors, and one or more programs stored on the memory and executable on the processors, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the prompting method applied to an earphone as claimed in any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the prompting method applied to an earphone as claimed in any one of claims 1 to 7.
CN202010466273.0A 2020-05-28 2020-05-28 Prompting method and device applied to earphone, terminal and storage medium Active CN111683317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010466273.0A CN111683317B (en) 2020-05-28 2020-05-28 Prompting method and device applied to earphone, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010466273.0A CN111683317B (en) 2020-05-28 2020-05-28 Prompting method and device applied to earphone, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111683317A 2020-09-18
CN111683317B CN111683317B (en) 2022-04-08

Family

ID=72453187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010466273.0A Active CN111683317B (en) 2020-05-28 2020-05-28 Prompting method and device applied to earphone, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111683317B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113840205A (en) * 2021-09-26 2021-12-24 东莞市猎声电子科技有限公司 Earphone with conversation reminding function and implementation method
CN115277931A (en) * 2022-06-27 2022-11-01 北京小米移动软件有限公司 Information presentation method, information presentation device, and storage medium
WO2023005560A1 (en) * 2021-07-28 2023-02-02 Oppo广东移动通信有限公司 Audio processing method and apparatus, and terminal and storage medium
WO2023040483A1 (en) * 2021-09-15 2023-03-23 中兴通讯股份有限公司 Headphone working mode control method and apparatus, terminal, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3301950A1 (en) * 2016-04-29 2018-04-04 Huawei Technologies Co., Ltd. Method and apparatus for determining voice input anomaly, terminal, and storage medium
CN108156550A (en) * 2017-12-27 2018-06-12 上海传英信息技术有限公司 The playing method and device of headphone
CN108540661A (en) * 2018-03-30 2018-09-14 广东欧珀移动通信有限公司 Signal processing method, device, terminal, earphone and readable storage medium storing program for executing
CN110475170A (en) * 2019-07-10 2019-11-19 深圳壹账通智能科技有限公司 Control method, device, mobile terminal and the storage medium of earphone broadcast state


Also Published As

Publication number Publication date
CN111683317B (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN111683317B (en) Prompting method and device applied to earphone, terminal and storage medium
US10614803B2 (en) Wake-on-voice method, terminal and storage medium
CN103888581B (en) A kind of communication terminal and its method for recording call-information
US20240021202A1 (en) Method and apparatus for recognizing voice, electronic device and medium
CN110047481B (en) Method and apparatus for speech recognition
WO2019227580A1 (en) Voice recognition method, apparatus, computer device, and storage medium
US11783808B2 (en) Audio content recognition method and apparatus, and device and computer-readable medium
CN113488024B (en) Telephone interrupt recognition method and system based on semantic recognition
WO2023222088A1 (en) Voice recognition and classification method and apparatus
CN105489221A (en) Voice recognition method and device
CN106302933B (en) Voice information processing method and terminal
CN111128223A (en) Text information-based auxiliary speaker separation method and related device
WO2020238045A1 (en) Intelligent speech recognition method and apparatus, and computer-readable storage medium
CN107919138B (en) Emotion processing method in voice and mobile terminal
CN106713111B (en) Processing method for adding friends, terminal and server
CN107808007A (en) Information processing method and device
CN109346057A (en) A kind of speech processing system of intelligence toy for children
US8868419B2 (en) Generalizing text content summary from speech content
KR20130108173A (en) Question answering system using speech recognition by radio wire communication and its application method thereof
CN111400463B (en) Dialogue response method, device, equipment and medium
CN112259076B (en) Voice interaction method, voice interaction device, electronic equipment and computer readable storage medium
CN107403623A (en) Store method, terminal, Cloud Server and the readable storage medium storing program for executing of recording substance
CN113299309A (en) Voice translation method and device, computer readable medium and electronic equipment
CN113012683A (en) Speech recognition method and device, equipment and computer readable storage medium
CN106980640B (en) Interaction method, device and computer-readable storage medium for photos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant