CN109192209A - Voice recognition method and device - Google Patents
- Publication number
- CN109192209A (application number CN201811238628.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1101—Detecting tremor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Abstract
The application provides a voice recognition method and device. The method includes: acquiring a first electrical signal and determining a voice command from the first electrical signal, where the first electrical signal is determined from body-surface pressure information, which is in turn determined from the body vibration produced when a user utters the voice command. In this scheme, the voice command is determined from the body vibration produced when the human body speaks; because body vibration is not easily disturbed by external factors, determining the voice command from body vibration improves the accuracy of voice recognition.
Description
Technical field
This application relates to the field of mobile communication technology, and in particular to a voice recognition method and device.
Background art
A voice-controlled smart device can receive a user's voice, parse it to obtain a voice command, and then execute the corresponding function according to that command.
When an existing voice-controlled smart device receives the user's voice, it is easily disturbed by external factors, so the parsed voice command is inaccurate; alternatively, when the user speaks too quietly, the device cannot obtain the correct voice command by parsing the speech.
Summary of the invention
The application provides a voice recognition method and device to improve the accuracy of speech recognition.
In a first aspect, the application provides a voice recognition method. The method includes: acquiring a first electrical signal and determining a voice command from the first electrical signal, where the first electrical signal is determined from body-surface pressure information, and the body-surface pressure information is determined from the body vibration produced when the user utters the voice command. In this scheme, the voice command is determined from the body vibration produced when the human body speaks; because body vibration is not easily disturbed by external factors, determining the voice command from body vibration improves the accuracy of speech recognition.
In one possible implementation, the method may further include: acquiring a second electrical signal, where the second electrical signal is determined from the sound wave produced when the user utters the voice command, and determining the voice command from both the first and second electrical signals. With this solution, both the body vibration and the sound wave produced when the user utters the voice command are received, and determining the voice command from both can improve the accuracy of speech recognition.
In one possible implementation, the voice command may be determined as follows: a synthesized electrical signal is determined from the first and second electrical signals, and the voice command is determined from the synthesized electrical signal.
In one possible implementation, the voice command may be determined as follows: a first voice command is determined from the first electrical signal and a second voice command from the second electrical signal. When the two commands are identical, either one is taken as the voice command. When they differ, the semantic logic of each command is evaluated: if the semantic logic of the first voice command matches a preset rule better than that of the second voice command, the first voice command is taken as the voice command; otherwise, the second voice command is taken.
In one possible implementation, the first electrical signal may be obtained as follows: body-surface pressure information is acquired, and the first electrical signal is then determined from it. In this scheme, the body vibration produced when the human body speaks is measured to obtain body-surface pressure information, which is then converted into the first electrical signal used to determine the voice command.
In one possible implementation, the first electrical signal may be obtained as follows: the first electrical signal sent by a detection device is received, where the detection device determines the signal from the body-surface pressure information it acquires. The detection device may be worn on the body surface or held in the hand; it measures the body vibration produced when the human body speaks, obtains body-surface pressure information, and then determines the first electrical signal from that information. In this scheme, the received first electrical signal is used to determine the voice command.
In a second aspect, the application also provides a voice recognition device. The device includes a first acquisition module and a determination module. The first acquisition module is used to acquire the first electrical signal, where the first electrical signal is determined from body-surface pressure information, which is determined from the body vibration produced when the user utters the voice command. The determination module is used to determine the voice command from the first electrical signal. In this scheme, the voice command is determined from the body vibration produced when the human body speaks; because body vibration is not easily disturbed by external factors, the accuracy of speech recognition is improved.
In one possible implementation, the device may further include a second acquisition module, used to acquire a second electrical signal, where the second electrical signal is determined from the sound wave produced when the user utters the voice command. The determination module is then used to determine the voice command from the first and second electrical signals. With this solution, both the body vibration and the sound wave produced when the user utters the voice command are received, and determining the voice command from both can improve the accuracy of speech recognition.
In one possible implementation, the determination module can specifically be used to: determine a synthesized electrical signal from the first and second electrical signals, and determine the voice command from the synthesized electrical signal.
In one possible implementation, the determination module can specifically be used to: determine a first voice command from the first electrical signal and a second voice command from the second electrical signal; when the two commands are identical, take either one as the voice command; when they differ, evaluate the semantic logic of each command, and if the semantic logic of the first voice command matches a preset rule better than that of the second voice command, take the first voice command as the voice command, otherwise take the second voice command.
In one possible implementation, the first acquisition module can specifically be used to: acquire body-surface pressure information and determine the first electrical signal from it. In this scheme, the body vibration produced when the human body speaks is measured to obtain body-surface pressure information, which is then converted into the first electrical signal used to determine the voice command.
In one possible implementation, the first acquisition module can specifically be used to: receive the first electrical signal sent by a detection device, where the detection device determines the signal from the body-surface pressure information it acquires. The detection device may be worn on the body surface or held in the hand; it can measure the body vibration produced when the human body speaks, obtain body-surface pressure information, and then determine the first electrical signal from that information. In this scheme, the received first electrical signal is used to determine the voice command.
In a third aspect, an embodiment of the present invention provides a network device, including:
a memory, for storing program instructions; and
a processor, for calling the program instructions stored in the memory and, according to the obtained program, executing the method described in any embodiment of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium. The computer-readable storage medium stores computer-executable instructions, which are used to make a computer execute the method described in any embodiment of the first aspect.
Brief description of the drawings
Fig. 1 is a flow diagram of a voice recognition method provided by the present application;
Fig. 2 is a schematic diagram of a voice recognition device provided by the present application;
Fig. 3 is a schematic diagram of a smart device provided by the present application;
Fig. 4 is a schematic diagram of another smart device provided by the present application;
Fig. 5 is a structural schematic diagram of a network device provided by the present application.
Specific embodiments
To make the purposes, technical schemes, and advantages of the application clearer, the application is described in further detail below with reference to the accompanying drawings. The specific operations in the method embodiments can also be applied in the device or system embodiments. In the description of the application, unless otherwise indicated, "plurality" means two or more.
Fig. 1 shows a flow diagram of a voice recognition method provided by the present application. The method may be executed by a voice recognition device, which may be, for example, a voice-controllable smart device such as a voice-controlled television, a voice-controlled watch, or a mobile phone; it may also be a chip in any of the above smart devices, or a functional module with a speech recognition function in any of the above smart devices.
The method includes the following steps:
Step 101: acquire a first electrical signal.
The first electrical signal is determined from body-surface pressure information, which is determined from the body vibration produced when the user utters a voice command.
Step 102: determine the voice command from the first electrical signal.
In steps 101 and 102, the voice command is determined from the body vibration produced when the human body speaks. Because body vibration is not easily disturbed by external factors, determining the voice command from body vibration improves the accuracy of speech recognition.
In one possible implementation, step 101 may further include acquiring a second electrical signal, where the second electrical signal is determined from the sound wave produced when the user utters the voice command. Step 102 may then specifically include determining the voice command from the first and second electrical signals.
In one possible implementation, determining the voice command from the first and second electrical signals can be realized by the following methods:
Method one: a synthesized electrical signal is determined from the first and second electrical signals, and the voice command is determined from the synthesized electrical signal.
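The patent leaves the synthesis step of method one abstract. As a minimal sketch, assuming both electrical signals arrive as time-aligned sample sequences, a weighted average is one plausible combination; the weighting scheme and function name are illustrative assumptions, not part of the patent:

```python
def synthesize_signals(first, second, w_vibration=0.5):
    """Combine the vibration-derived and sound-wave-derived signals.

    `first` and `second` are assumed to be time-aligned, equal-length
    sample sequences; the fixed weighting is illustrative only.
    """
    if len(first) != len(second):
        raise ValueError("signals must be time-aligned and of equal length")
    w_sound = 1.0 - w_vibration
    return [w_vibration * a + w_sound * b for a, b in zip(first, second)]

# The synthesized signal would then be fed to the recognizer
# in place of either input signal alone.
combined = synthesize_signals([2, 4, 1], [0, 6, 3])   # [1.0, 5.0, 2.0]
```

Raising `w_vibration` would favor the interference-resistant vibration path, for example in noisy environments.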
Method two: a first voice command is determined from the first electrical signal and a second voice command from the second electrical signal; a synthesized voice command is then determined from the two voice commands, and this synthesized voice command is the voice command determined in step 102.
Method three: a first voice command is determined from the first electrical signal and a second voice command from the second electrical signal. When the two commands are identical, either one is taken as the voice command. When they differ, the semantic logic of each command is evaluated: if the semantic logic of the first voice command matches a preset rule better than that of the second voice command, the first voice command is taken as the voice command; otherwise, the second voice command is taken.
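The "semantic logic" matching in method three is left abstract in the patent. One way to sketch it treats the preset rules as a set of known command phrases and scores each recognized command by its best word overlap with any rule; the scoring function is an assumption, not the patent's:

```python
def rule_match_score(command, preset_rules):
    """Score how well a command's wording matches any preset rule phrase.

    Jaccard overlap of word sets is an illustrative stand-in for the
    patent's unspecified semantic-logic matching value.
    """
    words = set(command.split())
    best = 0.0
    for rule in preset_rules:
        rule_words = set(rule.split())
        best = max(best, len(words & rule_words) / len(words | rule_words))
    return best

def arbitrate(first_cmd, second_cmd, preset_rules):
    """Pick between the vibration-derived and sound-derived commands."""
    if first_cmd == second_cmd:
        return first_cmd
    if rule_match_score(first_cmd, preset_rules) > rule_match_score(second_cmd, preset_rules):
        return first_cmd
    return second_cmd

rules = ["turn on the tv", "turn off the tv", "volume up"]
chosen = arbitrate("turn on the tv", "turn on the tea", rules)   # "turn on the tv"
```

Note that ties fall through to the second (sound-derived) command, matching the patent's "otherwise" branch.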
In one possible implementation, the first electrical signal in step 101 can be obtained by the following methods:
Method one: body-surface pressure information is acquired, and the first electrical signal is then determined from it, where the body-surface pressure information is determined from the body vibration produced when the user utters the voice command. The first electrical signal is used in step 102 to determine the voice command.
As an example, when the user utters a voice command, the user's body vibrates. A functional module in the voice recognition device can detect this vibration and, from information such as its frequency and/or amplitude, obtain body-surface pressure information, which the voice recognition device then converts into the first electrical signal.
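The conversion chain in method one (vibration → body-surface pressure information → first electrical signal) is not given concretely in the patent. A hedged sketch, assuming the vibration arrives as amplitude samples and modeling each conversion as a simple linear transform:

```python
def vibration_to_pressure(vibration_samples, sensitivity=1.0):
    """Map raw vibration amplitudes to body-surface pressure readings.

    A linear sensor response is assumed purely for illustration.
    """
    return [sensitivity * v for v in vibration_samples]

def pressure_to_signal(pressure_samples, gain=2.0, offset=0.5):
    """Convert pressure readings into a first-electrical-signal waveform."""
    return [gain * p + offset for p in pressure_samples]

pressure = vibration_to_pressure([0.0, 1.0, 0.5])
signal = pressure_to_signal(pressure)   # [0.5, 2.5, 1.5]
```

In a real sensor the response would be nonlinear and frequency-dependent; the point here is only the two-stage structure the patent describes.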
Method two: the first electrical signal sent by a detection device is received, where the detection device determines the signal from the body-surface pressure information it acquires. The detection device may be worn on the body surface or held in the hand; it measures the body vibration produced when the human body speaks, obtains body-surface pressure information, and then determines the first electrical signal from that information. The first electrical signal is used in step 102 to determine the voice command.
The detection device here may be any article that can detect body vibration and ultimately obtain the first electrical signal: one worn on the body surface, such as a bracelet or a necklace, or one held in the hand, such as a mobile phone.
As an example, when the user utters a voice command, the user's body vibrates. A functional module in the detection device can detect this vibration and, from features such as its frequency and amplitude, obtain body-surface pressure information; the detection device then converts this information into the first electrical signal and sends it to the voice recognition device.
In one possible implementation, the body vibration may specifically be bone vibration.
With the above scheme, the voice command is determined from the body vibration produced when the human body speaks. Because body vibration is not easily disturbed by external factors, determining the voice command from body vibration improves the accuracy of speech recognition.
Based on the same inventive concept, Fig. 2 illustratively shows a voice recognition device provided by the present application, which can execute the flow of the voice recognition method. As shown in Fig. 2, the device includes:
a first acquisition module 201, for acquiring the first electrical signal, where the first electrical signal is determined from body-surface pressure information, which is determined from the body vibration produced when the user utters the voice command;
a second acquisition module 202, for acquiring the second electrical signal, where the second electrical signal is determined from the sound wave produced when the user utters the voice command; and
a determination module 203, for determining the voice command from the first electrical signal.
In one possible implementation, the determination module 203 can also be used to determine the voice command from the first and second electrical signals.
This voice recognition device determines the voice command from the body vibration produced when the human body speaks; because body vibration is not easily disturbed by external factors, the accuracy of speech recognition is improved.
In one possible implementation, the determination module 203 can specifically be used to: determine a synthesized electrical signal from the first and second electrical signals, and determine the voice command from the synthesized electrical signal.
In one possible implementation, the determination module 203 can specifically be used to: determine a first voice command from the first electrical signal and a second voice command from the second electrical signal, and determine a synthesized voice command from the two; the synthesized voice command is the voice command to be determined.
In one possible implementation, the determination module 203 can specifically be used to: determine a first voice command from the first electrical signal and a second voice command from the second electrical signal; when the two commands are identical, take either one as the voice command; when they differ, evaluate the semantic logic of each command, and if the semantic logic of the first voice command matches a preset rule better than that of the second voice command, take the first voice command as the voice command, otherwise take the second voice command.
In one possible implementation, the first acquisition module 201 can specifically be used to: acquire body-surface pressure information and determine the first electrical signal from it, where the body-surface pressure information is determined from the body vibration produced when the user utters the voice command. The first electrical signal is used by the determination module 203 to determine the voice command.
In one possible implementation, the first acquisition module 201 can specifically be used to: receive the first electrical signal sent by a detection device, where the detection device determines the signal from the body-surface pressure information it acquires. The detection device may be worn on the body surface or held in the hand; it measures the body vibration produced when the human body speaks, obtains body-surface pressure information, and then determines the first electrical signal from that information. The first electrical signal is used by the determination module 203 to determine the voice command.
The voice recognition method and device of the application are described in detail below with two specific examples.
Example 1
Fig. 3 is a schematic diagram of a smart device provided by the present application. The smart device may be a detection device worn on the user's body surface, such as a voice-controlled watch, or a device held by the user, such as a smartphone.
Here, the pressure sensing unit and pressure parsing unit can be used to realize the function of the first acquisition module 201, and the microphone and speech parsing unit can be used to realize the function of the second acquisition module 202.
When the user wears the smart device on the body surface, or holds it, and issues voice command A to it, the user's body produces body vibration A and sound wave A.
On the one hand, the pressure sensing unit detects body vibration A and obtains body-surface pressure information A from it, then sends the information to the pressure parsing unit; the pressure parsing unit converts body-surface pressure information A into first electrical signal A and sends that signal to the determination module.
On the other hand, the microphone receives sound wave A and sends it to the speech parsing unit; the speech parsing unit converts sound wave A into second electrical signal A and sends that signal to the determination module.
The determination module can then determine voice command A from the received first electrical signal A and second electrical signal A.
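The two parallel paths in Example 1 can be sketched as a small pipeline. The class names mirror the units of Fig. 3, while the processing itself (pass-through transforms and a lookup-based determiner) is illustrative only, since the patent does not specify the recognition algorithm:

```python
class PressureSensingUnit:
    def sense(self, body_vibration):
        # Derive body-surface pressure information from the vibration.
        return [abs(v) for v in body_vibration]

class PressureParsingUnit:
    def parse(self, pressure_info):
        # Convert pressure information into the first electrical signal.
        return tuple(pressure_info)

class SpeechParsingUnit:
    def parse(self, sound_wave):
        # Convert the sound wave into the second electrical signal.
        return tuple(sound_wave)

class DeterminationModule:
    def __init__(self, known_commands):
        # Map (first_signal, second_signal) pairs to command strings.
        self.known = known_commands

    def determine(self, first_signal, second_signal):
        return self.known.get((first_signal, second_signal), "unknown")

known = {((1, 2), (3, 4)): "command A"}
module = DeterminationModule(known)
first = PressureParsingUnit().parse(PressureSensingUnit().sense([-1, 2]))
second = SpeechParsingUnit().parse([3, 4])
result = module.determine(first, second)   # "command A"
```

All four units live in the same device here; Example 2 below splits the pressure path out into a separate detection device.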
Example 2
Fig. 4 is a schematic diagram of another smart device provided by the present application. The smart device may be a voice-controlled smart device that can be neither worn on the user's body surface nor held by the user, such as a voice-controlled television or a voice-controlled air conditioner.
Here, the first acquisition module can acquire the first electrical signal, and the microphone and speech parsing unit can be used to realize the function of the second acquisition module 202. The detection device may be, for example, a bracelet or a necklace; it includes a pressure sensing unit, a pressure parsing unit, and a communication unit, and can be worn on the user's body surface or held by the user. When the user wears or holds the detection device and issues voice command B, the user's body produces body vibration B and sound wave B.
On the one hand, the pressure sensing unit of the detection device detects body vibration B and obtains body-surface pressure information B from it, then sends the information to the pressure parsing unit of the detection device; the pressure parsing unit converts body-surface pressure information B into first electrical signal B and passes it to the communication unit, which sends first electrical signal B to the first acquisition module of the smart device.
On the other hand, the microphone of the smart device receives sound wave B and sends it to the speech parsing unit of the smart device; the speech parsing unit converts sound wave B into second electrical signal B and sends that signal to the determination module of the smart device.
The determination module can then determine voice command B from the received first electrical signal B and second electrical signal B.
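Example 2 differs from Example 1 mainly in where the pressure path runs: inside the detection device, with a communication unit carrying the first electrical signal to the smart device. A sketch of that hand-off, using JSON over an in-memory channel as a stand-in for whatever wireless link the real device would use (the patent does not name one):

```python
import json

class CommunicationUnit:
    """Carries the first electrical signal from the detection device.

    JSON over an in-memory list stands in for the real wireless link.
    """
    def __init__(self):
        self.channel = []

    def send(self, first_signal):
        self.channel.append(json.dumps({"first_signal": first_signal}))

class FirstAcquisitionModule:
    def receive(self, channel):
        # Decode the first electrical signal sent by the detection device.
        return json.loads(channel.pop(0))["first_signal"]

comm = CommunicationUnit()
comm.send([0.5, 2.5, 1.5])                               # detection device side
signal = FirstAcquisitionModule().receive(comm.channel)  # smart device side
```

The microphone path of the smart device is unchanged from Example 1, so only the first-signal hand-off is shown.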
Of course, the two examples above are only used to explain the voice recognition method and device of the application in detail, and are not intended to limit the application.
Based on the same design as the foregoing embodiments, the application also provides a network device.
Fig. 5 is a structural schematic diagram of a network device provided by the present application. As shown in Fig. 5, the network device 500 includes:
a memory 501, for storing program instructions; and
a processor 502, for calling the program instructions stored in the memory and, according to the obtained program, executing the voice recognition method described in any of the foregoing embodiments.
Based on the same design as the foregoing embodiments, the application also provides a computer storage medium. The computer-readable storage medium stores computer-executable instructions, which are used to make a computer execute the voice recognition method described in any of the foregoing embodiments.
It should be noted that the division into units in this application is schematic and is only a division by logical function; there may be other division manners in actual implementation. The functional units in this application may be integrated in one processing unit, may each exist physically alone, or two or more units may be integrated in one module. The integrated units may be realized in the form of hardware or in the form of software functional units.
The above embodiments may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized wholly or partly in the form of a computer program product. A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one web site, computer, server, or data center to another by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means. The computer-readable storage medium may be any usable medium the computer can access, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
Those skilled in the art should understand that the application may be provided as a method, a system, or a computer program product. Accordingly, the application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus realizes the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for realizing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various modifications and variations to the application without departing from the spirit and scope of the application. If these modifications and variations of the application fall within the scope of the claims of the application and their technical equivalents, the application is also intended to include them.
Claims (14)
1. A speech recognition method, comprising:
obtaining a first electric signal, wherein the first electric signal is determined according to body-surface pressure information, and the body-surface pressure information is determined according to a body vibration generated when a user issues a voice command; and
determining the voice command according to the first electric signal.
2. The method according to claim 1, further comprising:
obtaining a second electric signal, wherein the second electric signal is determined according to a sound wave generated when the user issues the voice command; and
determining the voice command according to the first electric signal and the second electric signal.
3. The method according to claim 2, wherein determining the voice command according to the first electric signal and the second electric signal comprises:
determining a synthesized electric signal according to the first electric signal and the second electric signal; and
determining the voice command according to the synthesized electric signal.
4. The method according to claim 2, wherein determining the voice command according to the first electric signal and the second electric signal comprises:
determining a first voice command according to the first electric signal;
determining a second voice command according to the second electric signal;
when the first voice command is identical to the second voice command, determining that the voice command is the first voice command or the second voice command; and
when the first voice command differs from the second voice command, evaluating the semantic logic of the first voice command and of the second voice command; if the matching value between the semantic logic of the first voice command and a preset rule is greater than the matching value between the semantic logic of the second voice command and the preset rule, determining that the first voice command is the voice command, and otherwise determining that the second voice command is the voice command.
5. The method according to any one of claims 1 to 4, wherein obtaining the first electric signal comprises:
obtaining the body-surface pressure information; and
determining the first electric signal according to the body-surface pressure information.
6. The method according to any one of claims 1 to 4, wherein obtaining the first electric signal comprises:
receiving the first electric signal sent by a detection device, wherein the first electric signal is determined by the detection device according to the body-surface pressure information it has obtained.
7. A speech recognition apparatus, comprising:
a first obtaining module, configured to obtain a first electric signal, wherein the first electric signal is determined according to body-surface pressure information, and the body-surface pressure information is determined according to a body vibration generated when a user issues a voice command; and
a determining module, configured to determine the voice command according to the first electric signal.
8. The apparatus according to claim 7, further comprising:
a second obtaining module, configured to obtain a second electric signal, wherein the second electric signal is determined according to a sound wave generated when the user issues the voice command;
wherein the determining module is configured to determine the voice command according to the first electric signal and the second electric signal.
9. The apparatus according to claim 8, wherein the determining module is specifically configured to: determine a synthesized electric signal according to the first electric signal and the second electric signal, and determine the voice command according to the synthesized electric signal.
10. The apparatus according to claim 8, wherein the determining module is specifically configured to:
determine a first voice command according to the first electric signal;
determine a second voice command according to the second electric signal;
when the first voice command is identical to the second voice command, determine that the voice command is the first voice command or the second voice command; and
when the first voice command differs from the second voice command, evaluate the semantic logic of the first voice command and of the second voice command; if the matching value between the semantic logic of the first voice command and a preset rule is greater than the matching value between the semantic logic of the second voice command and the preset rule, determine that the first voice command is the voice command, and otherwise determine that the second voice command is the voice command.
11. The apparatus according to any one of claims 7 to 10, wherein the first obtaining module is specifically configured to obtain the body-surface pressure information and determine the first electric signal according to the body-surface pressure information.
12. The apparatus according to any one of claims 7 to 10, wherein the first obtaining module is specifically configured to receive the first electric signal sent by a detection device, wherein the first electric signal is determined by the detection device according to the body-surface pressure information it has obtained.
13. A network device, comprising:
a memory, configured to store program instructions; and
a processor, configured to call the program instructions stored in the memory and, according to the obtained program, execute the method according to any one of claims 1 to 6.
14. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are used to cause a computer to execute the method according to any one of claims 1 to 6.
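The signal synthesis of claim 3 could, for example, be a weighted combination of the two channels. The sketch below assumes a simple weighted sum with a hypothetical weight `alpha`; the patent does not fix a specific combination rule, so this is only one plausible reading of "determining a synthesized electric signal".

```python
import numpy as np

def synthesize_signals(first_signal, second_signal, alpha=0.5):
    """Combine the pressure-derived (body-vibration) signal with the
    acoustic (sound-wave) signal into a single synthesized signal.

    `alpha` weights the pressure channel; the weighted sum is an
    illustrative assumption, not the patent's prescribed method.
    """
    first = np.asarray(first_signal, dtype=float)
    second = np.asarray(second_signal, dtype=float)
    n = min(first.shape[0], second.shape[0])  # align the two channels
    return alpha * first[:n] + (1.0 - alpha) * second[:n]
```

The synthesized signal would then be fed to an ordinary recognizer, per "determining the voice command according to the synthesized electric signal".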
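The arbitration logic of claims 4 and 10 can be sketched as follows. `match_score` is a hypothetical callable standing in for "the matching value between the semantic logic of a command and a preset rule"; the patent does not specify how that score is computed.

```python
def select_voice_command(first_command, second_command, match_score):
    """Arbitrate between the command recognized from the first electric
    signal (body vibration) and the command recognized from the second
    electric signal (sound wave), following the logic of claim 4.
    """
    if first_command == second_command:
        # Both channels agree: either command is the voice command.
        return first_command
    # Channels disagree: prefer the command whose semantic logic
    # better matches the preset rule.
    if match_score(first_command) > match_score(second_command):
        return first_command
    return second_command
```

For example, with a score function that rates "turn on the light" higher than a misrecognition such as "turn on the flight", the first command would be selected.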
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811238628.XA CN109192209A (en) | 2018-10-23 | 2018-10-23 | Voice recognition method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109192209A true CN109192209A (en) | 2019-01-11 |
Family
ID=64942984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811238628.XA Pending CN109192209A (en) | 2018-10-23 | 2018-10-23 | Voice recognition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109192209A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003264883A (en) * | 2002-03-08 | 2003-09-19 | Denso Corp | Voice processing apparatus and voice processing method |
JP2004020952A (en) * | 2002-06-17 | 2004-01-22 | Denso Corp | Bone conduction sound oscillation detecting element and sound recognition system |
CN1513278A (en) * | 2001-05-30 | 2004-07-14 | 艾黎弗公司 | Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors |
CN102405463A (en) * | 2009-04-30 | 2012-04-04 | 三星电子株式会社 | Apparatus and method for user intention inference using multimodal information |
US20130336500A1 (en) * | 2012-06-19 | 2013-12-19 | Kabushiki Kaisha Toshiba | Signal processing apparatus and signal processing method |
KR101686348B1 (en) * | 2015-10-08 | 2016-12-13 | 고려대학교 산학협력단 | Sound processing method |
WO2017031500A1 (en) * | 2015-08-20 | 2017-02-23 | Bodyrocks Audio Incorporated | Devices, systems, and methods for vibrationally sensing audio |
CN106601227A (en) * | 2016-11-18 | 2017-04-26 | 北京金锐德路科技有限公司 | Audio acquisition method and audio acquisition device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107408386B | Controlling an electronic device based on voice direction | |
CN107799126B (en) | Voice endpoint detection method and device based on supervised machine learning | |
CN109558512B (en) | Audio-based personalized recommendation method and device and mobile terminal | |
CN110875060A (en) | Voice signal processing method, device, system, equipment and storage medium | |
CN109427333A | Method for activating a speech-recognition service and electronic device implementing the method | |
CN108877770A | Method, device, and system for testing an intelligent voice device | |
EP3444809A1 (en) | Personalized speech recognition method, and user terminal performing the method | |
CN111124108B (en) | Model training method, gesture control method, device, medium and electronic equipment | |
CN110992963B (en) | Network communication method, device, computer equipment and storage medium | |
WO2020143512A1 (en) | Infant crying recognition method, apparatus, and device | |
CN108665895A (en) | Methods, devices and systems for handling information | |
CN110162338A | Operation method, device, and related product | |
CN102625203A (en) | Signal processing device, signal processing method, and program | |
EP2945156A1 (en) | Audio signal recognition method and electronic device supporting the same | |
US11895474B2 (en) | Activity detection on devices with multi-modal sensing | |
KR102512614B1 (en) | Electronic device audio enhancement and method thereof | |
CN111524501A (en) | Voice playing method and device, computer equipment and computer readable storage medium | |
KR20150123579A (en) | Method for determining emotion information from user voice and apparatus for the same | |
CN113763933B (en) | Speech recognition method, training method, device and equipment of speech recognition model | |
CN109410918A | Method and device for obtaining information | |
CN108364648A (en) | Method and device for obtaining audio-frequency information | |
CN109389978A (en) | Voice recognition method and device | |
CN109308900A (en) | Headphone device, speech processing system and method for speech processing | |
JP5426706B2 (en) | Audio recording selection device, audio recording selection method, and audio recording selection program | |
CN205282093U (en) | Audio player |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-01-11 |