CN107978316A - Method and device for controlling a terminal - Google Patents
Method and device for controlling a terminal
- Publication number
- CN107978316A (application number CN201711130491.1A)
- Authority
- CN
- China
- Prior art keywords
- user speech
- speech information
- terminal
- instruction
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
Abstract
The present disclosure relates to a method and device for controlling a terminal. The method includes: obtaining a voice wake-up instruction, where the voice wake-up instruction instructs a wearable device to collect user speech information; executing the voice wake-up instruction and collecting the user speech information; and sending the user speech information to a terminal wirelessly connected to the wearable device, where the user speech information is used to control the terminal to perform the operation it indicates. With the technical solution provided by the present disclosure, a wearable device can control a terminal through user speech to perform a variety of operations, realizing a variety of terminal functions, broadening the kinds of applications the wearable device can control, and bringing users a brand-new voice-based human-computer interaction experience.
Description
Technical field
The present disclosure relates to the field of terminal technology, and in particular to a method and device for controlling a terminal.
Background technology
Wearable devices such as wireless headsets apply wireless technology to hands-free earphones, freeing users from troublesome cables so the device can be worn at any time. A wearable device can establish a wireless connection with a smartphone through a wireless chip in the device; once the connection is established, the user can conveniently control the smartphone by operating the wearable device, for example to make phone calls or listen to music. However, when a wearable device controls a terminal, the user can operate a Bluetooth headset only through its buttons, so the human-computer interaction mode is limited; moreover, operating such wearable devices can control the terminal only to perform functions such as making calls and playing music, so the range of controllable applications is narrow.
Summary of the invention
The embodiments of the present disclosure provide a method and device for controlling a terminal, with which a wearable device can control a terminal through user speech to perform a variety of operations, realizing a variety of terminal functions, broadening the kinds of applications such wearable devices can control, and bringing users a brand-new voice-based human-computer interaction experience. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, a method for controlling a terminal is provided, applied to a wearable device. The method includes:
obtaining a voice wake-up instruction, where the voice wake-up instruction instructs the wearable device to collect user speech information;
executing the voice wake-up instruction and collecting the user speech information; and
sending the user speech information to a terminal wirelessly connected to the wearable device, where the user speech information is used to control the terminal to perform the operation indicated by the user speech information.
In one embodiment, the method further includes:
receiving feedback voice information returned by the terminal, where the feedback voice information notifies the user of the terminal's execution status for the user speech information; and
playing the feedback voice information.
In one embodiment, obtaining the voice wake-up instruction includes:
obtaining preset button operation information;
or,
obtaining preset wake-up voice information.
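The wearable-device-side flow of the first aspect can be sketched as follows. This is a minimal illustration only: the class, attribute, and event names are assumptions for the sketch, not part of the disclosure.

```python
class WearableDevice:
    """Minimal sketch of steps 201-203; all names here are illustrative
    assumptions, not part of the disclosure."""

    WAKE_WORD = "control"  # example preset wake-up voice

    def __init__(self, terminal_link):
        self.terminal_link = terminal_link  # stands in for the wireless link

    def got_wake_instruction(self, event):
        # Step 201: the wake-up instruction is either a preset button press
        # or a preset wake-up word heard by the microphone.
        return (event.get("button") == "voice_wake"
                or event.get("speech") == self.WAKE_WORD)

    def handle(self, event, collect_speech):
        if not self.got_wake_instruction(event):
            return None  # not a wake-up instruction: do not collect speech
        user_speech = collect_speech()          # step 202: collect user speech
        self.terminal_link.append(user_speech)  # step 203: send to the terminal
        return user_speech
```

Note how ordinary events that are neither the preset button press nor the preset wake word never trigger collection, which is the gating the disclosure relies on to separate control speech from ordinary call audio.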
According to a second aspect of the embodiments of the present disclosure, a method for controlling a terminal is provided, applied to a terminal. The method includes:
receiving user speech information sent by a wearable device wirelessly connected to the terminal, where the user speech information is used to control the terminal to perform the operation indicated by the user speech information;
determining a voice instruction in the user speech information; and
performing the operation indicated by the voice instruction.
In one embodiment, performing the operation indicated by the voice instruction includes:
determining the application type involved in the operation indicated by the voice instruction, where the application type includes a system application or a third-party application;
when the operation indicated by the voice instruction involves a system application, calling a system application interface and controlling the system application to perform the operation indicated by the voice instruction; and
when the operation indicated by the voice instruction involves a third-party application, calling a third-party application interface and controlling the third-party application to perform the operation indicated by the voice instruction.
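The dispatch between system-application and third-party-application interfaces described above can be sketched as a lookup into two registries. The registry entries and function names are illustrative assumptions, not APIs named by the disclosure.

```python
# Hypothetical interface registries; the entries are illustrative
# assumptions, not APIs named by the disclosure.
SYSTEM_APP_INTERFACES = {"dialer": lambda op: f"system dialer: {op}"}
THIRD_PARTY_APP_INTERFACES = {"wechat": lambda op: f"wechat: {op}"}

def perform(voice_instruction):
    app = voice_instruction["app"]
    op = voice_instruction["operation"]
    if app in SYSTEM_APP_INTERFACES:
        # Operation in a system application: call the system app interface.
        return SYSTEM_APP_INTERFACES[app](op)
    if app in THIRD_PARTY_APP_INTERFACES:
        # Operation in a third-party application: call its interface.
        return THIRD_PARTY_APP_INTERFACES[app](op)
    raise ValueError(f"no interface registered for app {app!r}")
```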
In one embodiment, the method further includes:
obtaining feedback voice information for the user speech information, where the feedback voice information notifies the user of the terminal's execution status for the user speech information; and
returning the feedback voice information to the wearable device.
In one embodiment, determining the voice instruction in the user speech information includes:
detecting the end of speech in the user speech information and determining the user speech before the end of speech;
sending the user speech information before the end of speech to a speech-processing cloud; and
receiving the voice instruction in the user speech information returned by the speech-processing cloud.
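One common way to detect the end of speech before forwarding the audio to the speech-processing cloud is trailing-silence detection over per-frame energy values; the disclosure does not specify a method, so the threshold and frame count below are illustrative assumptions.

```python
def detect_speech_end(frames, silence_threshold=0.01, min_silence_frames=30):
    """Trailing-silence endpoint detection over per-frame energies.
    Thresholds are illustrative assumptions, not values from the disclosure."""
    silent = 0
    for i, energy in enumerate(frames):
        if energy < silence_threshold:
            silent += 1
            if silent >= min_silence_frames:
                # End of speech found: return the speech before the silence.
                return frames[: i - min_silence_frames + 1]
        else:
            silent = 0
    return frames  # no endpoint found; treat all frames as speech
```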
In one embodiment, obtaining the feedback voice information for the user speech information includes:
monitoring, through the interface of the application involved in the operation indicated by the voice instruction, the execution status of the operation;
generating feedback text for the user speech information according to the execution status;
sending the feedback text for the user speech information to the speech-processing cloud;
receiving feedback voice information in digital-signal form, corresponding to the feedback text, returned by the speech-processing cloud; and
converting the feedback voice information in digital-signal form into feedback voice information in analog-signal form.
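The feedback path above (execution status, feedback text, cloud synthesis, digital-to-analog conversion) can be sketched end to end. The `tts_cloud` and `to_analog` callables stand in for the speech-processing cloud and the conversion stage; their names and the text template are assumptions for the sketch.

```python
def make_feedback_voice(execution_status, tts_cloud, to_analog):
    """Sketch of the feedback path; tts_cloud and to_analog stand in for
    the speech-processing cloud and the digital-to-analog conversion."""
    # Generate feedback text from the monitored execution status.
    text = f"{execution_status['operation']}: {execution_status['result']}"
    # The cloud returns feedback voice in digital-signal form.
    digital_voice = tts_cloud(text)
    # Convert the digital-signal form into analog-signal form for playback.
    return to_analog(digital_voice)
```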
According to a third aspect of the embodiments of the present disclosure, a device for controlling a terminal is provided, applied to a wearable device. The device includes:
a first obtaining module, configured to obtain a voice wake-up instruction, where the voice wake-up instruction instructs the wearable device to collect user speech information;
a collection module, configured to execute the voice wake-up instruction and collect the user speech information; and
a sending module, configured to send the user speech information to a terminal wirelessly connected to the wearable device, where the user speech information is used to control the terminal to perform the operation indicated by the user speech information.
In one embodiment, the device further includes:
a first receiving module, configured to receive feedback voice information returned by the terminal, where the feedback voice information notifies the user of the terminal's execution status for the user speech information; and
a playing module, configured to play the feedback voice information.
In one embodiment, the first obtaining module includes:
a first obtaining submodule, configured to obtain preset button operation information;
or,
a second obtaining submodule, configured to obtain preset wake-up voice information.
According to a fourth aspect of the embodiments of the present disclosure, a device for controlling a terminal is provided, applied to a terminal. The device includes:
a second receiving module, configured to receive user speech information sent by a wearable device wirelessly connected to the terminal, where the user speech information is used to control the terminal to perform the operation indicated by the user speech information;
a determining module, configured to determine a voice instruction in the user speech information; and
an execution module, configured to perform the operation indicated by the voice instruction.
In one embodiment, the execution module includes:
a determining submodule, configured to determine the application type involved in the operation indicated by the voice instruction, where the application type includes a system application or a third-party application;
a first control submodule, configured to, when the operation indicated by the voice instruction involves a system application, call a system application interface and control the system application to perform the operation indicated by the voice instruction; and
a second control submodule, configured to, when the operation indicated by the voice instruction involves a third-party application, call a third-party application interface and control the third-party application to perform the operation indicated by the voice instruction.
In one embodiment, the device further includes:
a second obtaining module, configured to obtain feedback voice information for the user speech information, where the feedback voice information notifies the user of the terminal's execution status for the user speech information; and
a returning module, configured to return the feedback voice information to the wearable device.
In one embodiment, the determining module includes:
a detection submodule, configured to detect the end of speech in the user speech information and determine the user speech information before the end of speech;
a first sending submodule, configured to send the user speech information before the end of speech to a speech-processing cloud; and
a first receiving submodule, configured to receive the voice instruction in the user speech information returned by the speech-processing cloud.
In one embodiment, the second obtaining module includes:
a monitoring submodule, configured to monitor, through the interface of the application involved in the operation indicated by the voice instruction, the execution status of the operation;
a generating submodule, configured to generate feedback text for the user speech information according to the execution status;
a second sending submodule, configured to send the feedback text for the user speech information to the speech-processing cloud;
a second receiving submodule, configured to receive feedback voice information in digital-signal form, corresponding to the feedback text, returned by the speech-processing cloud; and
a conversion submodule, configured to convert the feedback voice information in digital-signal form into feedback voice information in analog-signal form.
According to a fifth aspect of the embodiments of the present disclosure, a device for controlling a terminal is provided, applied to a wearable device, including:
a processor; and
a memory for storing processor-executable instructions;
where the processor is configured to:
obtain a voice wake-up instruction, where the voice wake-up instruction instructs the wearable device to collect user speech information;
execute the voice wake-up instruction and collect the user speech information; and
send the user speech information to a terminal wirelessly connected to the wearable device, where the user speech information is used to control the terminal to perform the operation indicated by the user speech information.
According to a sixth aspect of the embodiments of the present disclosure, a device for controlling a terminal is provided, applied to a terminal, including:
a processor; and
a memory for storing processor-executable instructions;
where the processor is configured to:
receive user speech information sent by a wearable device wirelessly connected to the terminal, where the user speech information is used to control the terminal to perform the operation indicated by the user speech information;
determine a voice instruction in the user speech information; and
perform the operation indicated by the voice instruction.
According to a seventh aspect of the embodiments of the present disclosure, a computer-readable storage medium storing computer instructions is provided, applied to a wearable device; when executed by a processor, the computer instructions implement the steps of the above method applied to the wearable device.
According to an eighth aspect of the embodiments of the present disclosure, a computer-readable storage medium storing computer instructions is provided, applied to a terminal; when executed by a processor, the computer instructions implement the steps of the above method applied to the terminal.
In the embodiments of the present disclosure, after the wearable device obtains the voice wake-up instruction that instructs it to collect user speech, it executes the voice wake-up instruction and collects the user speech information; the user speech information is then sent to the terminal wirelessly connected to the wearable device, and the user speech information controls the terminal to perform the operation it indicates. In this way, a wearable device can control a terminal through user speech to perform a variety of operations, realizing a variety of terminal functions, broadening the kinds of applications a headset can control, and bringing users a brand-new voice-based human-computer interaction experience.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a block diagram of a system implementing a method for controlling a terminal, according to an exemplary embodiment.
Fig. 2 is a flowchart of a method for controlling a terminal applied to a wearable device, according to an exemplary embodiment.
Fig. 3 is a flowchart of a method for controlling a terminal applied to a terminal, according to an exemplary embodiment.
Fig. 4 is a flowchart of a method for controlling a terminal, according to an exemplary embodiment.
Fig. 5 is a block diagram of a device for controlling a terminal applied to a wearable device, according to an exemplary embodiment.
Fig. 6 is a block diagram of a device for controlling a terminal applied to a wearable device, according to an exemplary embodiment.
Fig. 7 is a block diagram of a device for controlling a terminal applied to a wearable device, according to an exemplary embodiment.
Fig. 8 is a block diagram of a device for controlling a terminal applied to a wearable device, according to an exemplary embodiment.
Fig. 9 is a block diagram of a device for controlling a terminal applied to a terminal, according to an exemplary embodiment.
Fig. 10 is a block diagram of a device for controlling a terminal applied to a terminal, according to an exemplary embodiment.
Fig. 11 is a block diagram of a device for controlling a terminal applied to a terminal, according to an exemplary embodiment.
Fig. 12 is a block diagram of a device for controlling a terminal applied to a terminal, according to an exemplary embodiment.
Fig. 13 is a block diagram of a device for controlling a terminal applied to a terminal, according to an exemplary embodiment.
Fig. 14 is a block diagram of a device for controlling a terminal applied to a wearable device, according to an exemplary embodiment.
Fig. 15 is a block diagram of a device for controlling a terminal applied to a terminal, according to an exemplary embodiment.
Detailed description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. In the following description, when drawings are referred to, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as recited in the appended claims.
Fig. 1 is a block diagram of a system implementing a method for controlling a terminal, according to an exemplary embodiment; the following embodiments are described with reference to the system shown in Fig. 1.
Wearable-device-side embodiment:
Fig. 2 is a flowchart of a method for controlling a terminal applied to a wearable device, according to an exemplary embodiment. As shown in Fig. 2, the method is used in devices such as wearable devices and includes the following steps 201 to 203:
In step 201, a voice wake-up instruction is obtained, where the voice wake-up instruction instructs the wearable device to collect user speech information.
In step 202, the voice wake-up instruction is executed and user speech information is collected.
In step 203, the user speech information is sent to a terminal wirelessly connected to the wearable device, where the user speech information is used to control the terminal to perform the operation indicated by its content.
Here, the wearable device can be any of various wearable devices such as a smart bracelet, a smart watch, smart glasses, or a smart headset. This embodiment is described with a headset, mainly a wireless headset, as the example of the wearable device. The wireless headset can establish a wireless connection with the terminal; for example, the headset can be a Bluetooth headset and the wireless connection can be a Bluetooth connection. In this way, the headset can send user speech information to the terminal over the wireless connection, and the user speech information controls the terminal to perform various operations. It should be noted that the user speech information the headset sends to the terminal falls into two cases: one is user speech information for controlling the terminal, and the other is call voice information sent through the terminal to the other end of a call. Therefore, to distinguish the two kinds of speech, the headset collects user speech information for controlling the terminal only after obtaining the voice wake-up instruction, and then sends that user speech information to the terminal over the wireless connection to control the terminal to perform various operations.
For example, the wearable device is a Bluetooth headset. As shown in Fig. 1, the Bluetooth headset 11 includes a Bluetooth module 110, a power module 111, a key module 112, a microphone 113, a loudspeaker 114, and a micro control unit (MCU) 115. The micro control unit 115 generally controls the overall operation of the Bluetooth headset 11 and related operations such as data communication; the power module 111 supplies power to the other modules in the Bluetooth headset 11 so that the headset can run.
Here, the key module 112 includes a power key (also called an on/off key) and volume+/volume- keys. The user can press the power key to turn on the Bluetooth headset 11. After the Bluetooth headset 11 is turned on, the Bluetooth module 110 broadcasts the identification information of the Bluetooth headset 11, so that a terminal with its Bluetooth function enabled can discover the Bluetooth headset 11. The terminal sends a connection establishment request to the Bluetooth module 110 of the discovered Bluetooth headset 11, and the Bluetooth module 110 automatically replies with a connection establishment response. In this way, a wireless connection between the Bluetooth module 110 of the Bluetooth headset 11 and the terminal can be established.
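The broadcast/request/response handshake just described can be sketched abstractly. This does not model the real Bluetooth protocol; the class and method names are assumptions for illustration only.

```python
class HeadsetBluetoothModule:
    """Abstract sketch of the connection setup above; names are assumed
    for illustration and do not model the real Bluetooth protocol."""

    def __init__(self, headset_id):
        self.headset_id = headset_id
        self.connected_to = None

    def broadcast_identity(self):
        # After power-on, the module broadcasts the headset's identity.
        return {"id": self.headset_id}

    def on_connection_request(self, terminal_id):
        # The module automatically replies with a connection response.
        self.connected_to = terminal_id
        return {"accepted": True, "headset": self.headset_id}


def terminal_pair(headset, terminal_id="terminal-1"):
    advert = headset.broadcast_identity()               # terminal discovers the headset
    reply = headset.on_connection_request(terminal_id)  # terminal sends a request
    return reply["accepted"] and advert["id"] == reply["headset"]
```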
Here, after the Bluetooth headset 11 is turned on, it can obtain the voice wake-up instruction that instructs the headset to collect user speech information. For example, a separate voice wake-up button can be added to the key module 112. After the user presses the voice wake-up button on the key module 112, the key module 112 generates a voice wake-up instruction and sends it to the micro control unit 115. After the micro control unit 115 obtains the voice wake-up instruction, it can execute the instruction and control the microphone 113 to collect user speech information. After collecting the user speech information, the microphone 113 sends it to the micro control unit 115, and the micro control unit 115 controls the Bluetooth module 110 to send the user speech information to the terminal wirelessly connected to the Bluetooth headset 11. Here, the user speech information is used to control the terminal to perform the operation it indicates; for example, it can be "call Zhang San". After the terminal receives the user speech information "call Zhang San", it can perform the operation indicated by the user speech information: dialing Zhang San's number.
It should be noted that, besides controlling the terminal to make phone calls, the user speech information can control the terminal to perform other operations. For example, the user speech can also be "how is the weather today"; after receiving the user speech information "how is the weather today", the terminal can perform the operation: calling a weather application and controlling it to look up today's weather and present today's weather conditions to the user, for example by displaying them on the terminal screen or broadcasting today's weather by voice. Alternatively, the user speech can be "send a WeChat message to Wang Wu saying I will arrive in a moment"; after receiving this user speech information, the terminal can perform the operation: calling the WeChat application and controlling it to send the message "I will arrive in a moment" to Wang Wu. The headset can thus collect user speech information and control the terminal to perform various operations, realizing various functions.
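Mapping the example utterances above to applications and operations can be sketched as a toy keyword-based intent parser; in practice the disclosure delegates this to a speech-processing cloud, so the keywords and app names below are illustrative assumptions only.

```python
# Toy keyword-based intent mapping for the example utterances above;
# keywords and app names are illustrative assumptions.
def parse_instruction(user_speech):
    text = user_speech.lower()
    if "call" in text or "phone" in text:
        return {"app": "dialer", "operation": "place a call"}
    if "weather" in text:
        return {"app": "weather", "operation": "show today's weather"}
    if "wechat" in text:
        return {"app": "wechat", "operation": "send a message"}
    return {"app": None, "operation": "unrecognized"}
```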
In this embodiment, a voice wake-up instruction can be obtained, where the voice wake-up instruction instructs the wearable device to collect user speech; the voice wake-up instruction is executed and user speech information is collected; and the user speech information is sent to the terminal wirelessly connected to the wearable device, so that the user speech information controls the terminal to perform the operation it indicates. In this way, a wearable device can control a terminal through user speech information to perform a variety of operations, realizing a variety of terminal functions, broadening the kinds of applications a headset can control, and bringing users a brand-new voice-based human-computer interaction experience.
In a possible implementation, the above method for controlling a terminal can further include the following steps A1 and A2.
In step A1, feedback voice information returned by the terminal is received, where the feedback voice information notifies the user of the terminal's execution status for the user speech information.
In step A2, the feedback voice information is played.
Here, after receiving the user speech information, the terminal can determine the voice instruction in the user speech information and perform the operation the voice instruction indicates; meanwhile, the terminal can also send feedback voice information to the wearable device to report its execution status for the user speech. For example, after receiving the user speech information "call Zhang San", the terminal can perform the operation of dialing Zhang San's number and, at the same time, return the feedback voice information "calling Zhang San, please wait" to the headset. After receiving the feedback voice "calling Zhang San, please wait", the wearable device can play it; in this way, the user learns from the feedback voice information how the terminal has handled the user speech.
Here, as shown in Fig. 1, taking a Bluetooth headset as the example of the wearable device, the headset can receive the feedback voice information returned by the terminal through the Bluetooth module 110; after receiving the feedback voice information, the Bluetooth module 110 sends it to the micro control unit 115, and the micro control unit 115 controls the loudspeaker 114 to play the feedback voice information.
In this embodiment, the feedback voice information returned by the terminal can be received and played, where the feedback voice information notifies the user of the terminal's execution status for the user speech information; in this way, the user can learn from the feedback voice information how the terminal has handled the user speech and carry out follow-up actions accordingly.
In a possible implementation, step 201 in the above method for controlling a terminal can also be implemented as the following step B1.
In step B1, preset button operation information is obtained.
Here, the voice wake-up instruction can be preset button operation information, and the button operation information can be operation information of a hardware button or of a virtual button. For example, a separate voice wake-up button (either a hardware button or a virtual button) can be provided on the key module 112 of the headset, and the preset button operation information can be the information generated when the user presses the voice wake-up button. In this way, after the user presses the voice wake-up button, the key module 112 obtains the preset button operation information and sends it to the micro control unit 115, so that the micro control unit 115 can control the other modules to start steps 202 and 203. Of course, the preset button operation information can also be the information generated when the user presses the power key and the volume+ key at the same time; in that case, the headset obtains the preset button operation information once the user presses the power key and the volume+ key simultaneously. Each button described here can be a hardware button or a virtual button; this is not limited.
In this embodiment, the voice control function can be activated by a user button operation, which is efficient and convenient and has a low learning cost.
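The button wake-up logic above can be sketched as a simple match against preset key combinations. The textual key identifiers and combination set below are illustrative assumptions, not the patent's actual key-press module implementation:

```python
# Hypothetical sketch: either a dedicated voice wake-up button or a
# power + volume-up combination counts as the preset button operation
# information that starts voice capture.

WAKE_COMBOS = [
    {"voice_wake"},          # dedicated voice wake-up button
    {"power", "volume_up"},  # power key pressed together with volume-up key
]

def is_wake_operation(pressed_keys):
    """Return True if the pressed keys match a preset wake combination."""
    pressed = set(pressed_keys)
    return any(combo == pressed for combo in WAKE_COMBOS)

print(is_wake_operation(["power", "volume_up"]))  # True
```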
In one possible implementation, step 201 in the above method of controlling a terminal may also be implemented as the following step B2.
In step B2, preset wake-up voice information is obtained.
Here, the voice wake-up instruction may be preset wake-up voice information. Exemplarily, the voice information of a preset word may be used as the preset wake-up voice information; for example, the preset wake-up voice information may be the voice information of the word "control". When the user wants to control the terminal by voice through a wearable device such as a headset, the user can speak the word "control". After the headset collects the voice "control" through the microphone 113, it can send this voice to the micro-control unit 115; once the micro-control unit 115 recognizes the voice "control", it can control the microphone 113 to collect user speech information and control the Bluetooth module 110 to send the user speech information to the terminal wirelessly connected to the headset.
In this embodiment, the voice control function can be activated by the user's voice input without pressing any button, freeing the user's hands.
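The wake-word check can be sketched as follows, assuming the micro-control unit already has the collected audio transcribed to text; the normalization rule is an illustrative assumption:

```python
# Hypothetical wake-word check: compare recognized text against the
# preset wake word ("control" in the example) before starting capture.

WAKE_WORD = "control"

def should_start_capture(recognized_text):
    # Ignore case and surrounding whitespace when matching the wake word.
    return recognized_text.strip().lower() == WAKE_WORD

print(should_start_capture("Control"))  # True
```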
Terminal-side embodiment:
Fig. 3 is a flow chart of a method of controlling a terminal applied to the terminal according to an exemplary embodiment. As shown in Fig. 3, the method is used in a device such as a terminal and comprises the following steps 301 to 303:
In step 301, user speech information sent by a wearable device wirelessly connected to the terminal is received, the user speech information being used to control the terminal to perform the operation indicated by the user speech information.
In step 302, the voice instruction in the user speech information is determined.
In step 303, the operation indicated by the voice instruction is performed.
Here, when the user controls the terminal through voice information via the wearable device, the wearable device sends the controlling user speech information to the terminal. After obtaining the user speech information, the terminal can perform speech recognition on it, identify the voice instruction contained in it, and perform the operation indicated by the voice instruction. The operation may be one performed by a system application in the terminal or one performed by a third-party application in the terminal, which is not limited here. Exemplarily, after the terminal receives the user speech information "send WeChat to Wang Wu saying I will arrive shortly", it can recognize this voice and obtain the voice instruction: send WeChat to Wang Wu saying I will arrive shortly. Once the terminal recognizes this voice instruction, it can perform the indicated operation: open WeChat and send the message "I will arrive shortly" to the friend Wang Wu in the WeChat contact list.
In this embodiment, after receiving the user speech information sent by the wearable device, the terminal can determine the voice instruction in the user speech information and perform the operation it indicates. In this way, the wearable device can control the terminal to carry out various operations through user speech, realizing various terminal functions, broadening the control applications of the headset, and bringing the user a brand-new voice human-machine interaction experience.
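Steps 301 to 303 can be sketched with a toy parser that derives a voice instruction from recognized text. The string patterns and the instruction dictionary shape are illustrative assumptions, not the patent's recognition method:

```python
# Toy sketch of steps 301-303: take user speech (already as text here)
# and derive a structured voice instruction from it.

def parse_instruction(speech_text):
    if speech_text.startswith("call "):
        return {"action": "call", "contact": speech_text[len("call "):]}
    if speech_text.startswith("send wechat to "):
        rest = speech_text[len("send wechat to "):]
        contact, _, message = rest.partition(" saying ")
        return {"action": "wechat_message", "contact": contact,
                "message": message}
    return {"action": "unknown"}

instr = parse_instruction("send wechat to Wang Wu saying I will arrive shortly")
print(instr["contact"], "-", instr["message"])
```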
In one possible implementation, step 303 in the above method of controlling a terminal may be implemented as the following steps C1 to C3.
In step C1, the application type involved in the operation indicated by the voice instruction is determined, the application type including a system application or a third-party application.
In step C2, when the operation indicated by the voice instruction includes an operation in a system application, a system application interface is called to control the system application to perform the operation indicated by the voice instruction.
In step C3, when the operation indicated by the voice instruction includes an operation in a third-party application, a third-party application interface is called to control the third-party application to perform the operation indicated by the voice instruction.
Here, after recognizing the voice instruction, the terminal can first determine the application type involved in the operation indicated by the voice instruction, the application type including a system application or a third-party application. For example, if the terminal receives the user speech information "call Zhang San", the recognized voice instruction is to call Zhang San, and the terminal can determine that the calling operation indicated by the voice instruction involves a system application, namely the phone application. If the terminal receives the user speech information "send WeChat to Wang Wu saying I will arrive shortly", the recognized voice instruction is to send the WeChat message "I will arrive shortly" to the WeChat friend Wang Wu, and the terminal can determine that the WeChat-sending operation indicated by the voice instruction involves a third-party application, namely the WeChat application.
Here, when the operation indicated by the voice instruction includes an operation in a system application, the terminal can call the system application interface, send control information to the system application through that interface, and control the system application to perform the operation indicated by the voice instruction. Exemplarily, after the terminal receives the user speech information "call Zhang San", it determines that the voice instruction is to call Zhang San; the terminal then calls the phone application interface and controls the phone application to dial Zhang San's number. Alternatively, when the operation indicated by the voice instruction includes an operation in a third-party application, the terminal can call the third-party application interface, send control information to the third-party application through that interface, and control the third-party application to perform the operation indicated by the voice instruction. Exemplarily, after the terminal receives the user speech information "send WeChat to Wang Wu saying I will arrive shortly", it determines that the voice instruction is to send Wang Wu the WeChat message "I will arrive shortly"; the terminal then calls the WeChat application interface and controls the WeChat application to send Wang Wu the message "I will arrive shortly".
In this embodiment, the terminal can determine the application type involved in the operation indicated by the voice instruction, the application type including a system application or a third-party application. When the operation includes an operation in a system application, it calls a system application interface to control the system application to perform the operation indicated by the voice instruction; when the operation includes an operation in a third-party application, it calls a third-party application interface to control the third-party application to perform the operation indicated by the voice instruction. In this way, various terminals can be controlled to carry out operations in various applications, realizing various terminal functions, broadening the control applications of the headset, and bringing the user a brand-new voice human-machine interaction experience.
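The routing in steps C1 to C3 can be sketched as a lookup that sends each action through either a system application interface or a third-party application interface. The registry and the two stand-in interface functions are illustrative assumptions:

```python
# Sketch of steps C1-C3: determine the application type involved in the
# instructed operation, then route it through the matching interface.

SYSTEM_APPS = {"call": "dialer"}              # system applications
THIRD_PARTY_APPS = {"wechat_message": "wechat"}  # third-party applications

def dispatch(instruction, system_api, third_party_api):
    action = instruction["action"]
    if action in SYSTEM_APPS:
        return system_api(SYSTEM_APPS[action], instruction)
    if action in THIRD_PARTY_APPS:
        return third_party_api(THIRD_PARTY_APPS[action], instruction)
    raise ValueError("unsupported action: " + action)

log = []

def system_api(app, instruction):
    # Stand-in for calling a system application interface.
    log.append(("system", app, instruction["action"]))

def third_party_api(app, instruction):
    # Stand-in for calling a third-party application interface.
    log.append(("third_party", app, instruction["action"]))

dispatch({"action": "call", "contact": "Zhang San"}, system_api, third_party_api)
dispatch({"action": "wechat_message", "contact": "Wang Wu",
          "message": "I will arrive shortly"}, system_api, third_party_api)
print(log)
```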
In one possible implementation, the above method of controlling a terminal may also be implemented as the following steps D1 to D2.
In step D1, feedback voice information for the user speech information is obtained, the feedback voice information being used to notify the user of the terminal's execution status of the user speech information.
In step D2, the feedback voice information is returned to the wearable device.
Here, after obtaining the user speech information, the terminal can determine the voice instruction in it and perform the indicated operation. Meanwhile, according to its execution status of the voice instruction, the terminal can also generate feedback voice information for the user speech information; after obtaining the feedback voice information, the terminal returns it to the wearable device, thereby reporting to the wearable device the terminal's execution status of the user speech information. Exemplarily, after the terminal receives the user speech information "call Zhang San" sent by the headset, it can perform the operation of dialing Zhang San's number; if, during execution, the terminal finds that the contact "Zhang San" does not exist in the contact list, it can generate the feedback voice information "Contact Zhang San not found" and return it to the headset. After receiving the feedback voice "Contact Zhang San not found", the headset can play it. In this way, the user learns through the feedback voice information that Zhang San is not in the terminal's contact list; perhaps the user has not stored Zhang San's phone number in the terminal, or perhaps the stored name is not Zhang San. The user can then re-enter user speech according to the specific situation.
In this embodiment, the terminal returns feedback voice information to the wearable device, the feedback voice information being used to notify the user of the terminal's execution status of the user speech information. Through the feedback voice information, the user can clearly understand how the terminal has executed the user speech, so that the user can carry out subsequent processing.
In one possible implementation, step 301 in the above method of controlling a terminal may be implemented as the following steps E1 to E3.
In step E1, the speech endpoint of the user speech information is detected, and the user speech before the speech endpoint is determined.
In step E2, the user speech information before the speech endpoint is sent to a speech-processing cloud.
In step E3, the voice instruction in the user speech information returned by the speech-processing cloud is received.
Exemplarily, as shown in Figure 1, the terminal 12 includes a Bluetooth headset application module 121 and an audio playback module 122; the Bluetooth headset application module 121 includes a voice recording submodule 1211, a voice activity detection submodule 1212, and a speech-intent dispatch submodule 1213. The speech-processing cloud 13 includes an automatic speech recognition (ASR) module 131, a natural language processing (NLP) module 132, and a text-to-speech (TTS) module 133.
Here, in the terminal 12, the voice recording submodule 1211 of the Bluetooth headset application module 121 receives the user speech information, converts it into the required format, and sends it to the voice activity detection submodule 1212. The voice activity detection submodule 1212 can use a VAD (Voice Activity Detection) algorithm to automatically detect the endpoint at which the user speech ends and determine the user speech before the speech endpoint; the purpose is to identify and eliminate long silent periods in the user speech, so as to save transmission resources without reducing the quality of service. After the voice activity detection submodule 1212 determines the user speech before the speech endpoint, the user speech information before the endpoint can be sent to the automatic speech recognition module 131 of the speech-processing cloud 13. The automatic speech recognition module 131 can convert the vocabulary content of human speech into computer-readable input, i.e., speech text, and send the converted speech text to the natural language processing module 132. The natural language processing module 132 can perform lexical, syntactic, and semantic analysis on the speech text to obtain the speech intent of the user speech information; this speech intent is the voice instruction in the user speech. After obtaining the voice instruction, the natural language processing module 132 can return it to the speech-intent dispatch submodule 1213 of the terminal 12. After the speech-intent dispatch submodule 1213 obtains the voice instruction, i.e., the speech intent, it can call a system application interface or a third-party APP (application) interface according to the speech intent, controlling the corresponding application to perform the operation intended in the user speech. Exemplarily, after the terminal receives the user speech information "call Zhang San", the user intent is determined to be calling Zhang San; the speech-intent dispatch submodule 1213 then calls the terminal's system application interface, i.e., the phone application interface, and controls the phone application to call Zhang San. Alternatively, after the terminal receives the user speech information "send WeChat to Wang Wu saying I will arrive shortly", the user intent is determined to be sending Wang Wu the WeChat message "I will arrive shortly"; the speech-intent dispatch submodule 1213 then calls the terminal's third-party application interface, i.e., the WeChat application interface, and controls the WeChat application to send Wang Wu the message "I will arrive shortly".
In this embodiment, the terminal can detect the speech endpoint of the user speech information, determine the user speech information before the speech endpoint, send it to the speech-processing cloud, and obtain from the speech-processing cloud the voice instruction in the user speech information. Having the speech-processing cloud perform the speech processing can reduce the load on the terminal, and the cloud's processing is more accurate.
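The endpoint detection performed by the VAD submodule can be sketched with a simple energy threshold: the speech endpoint is taken as the start of the first sufficiently long run of low-energy frames. Real VAD algorithms are more sophisticated; the thresholds below are illustrative assumptions:

```python
# Minimal energy-based VAD sketch: find the endpoint after which
# frames are silence, so only the speech before it is sent to the cloud.

def detect_endpoint(frame_energies, silence_threshold=0.1,
                    min_silence_frames=3):
    """Return the index just after the last speech frame."""
    run = 0
    for i, energy in enumerate(frame_energies):
        if energy < silence_threshold:
            run += 1
            if run >= min_silence_frames:
                return i - run + 1  # speech ended where the silence run began
        else:
            run = 0
    return len(frame_energies)  # no long silence run found

frames = [0.8, 0.9, 0.7, 0.05, 0.02, 0.01, 0.0]
end = detect_endpoint(frames)
print(frames[:end])  # only the speech before the endpoint is transmitted
```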
In one possible implementation, step D1 in the above method of controlling a terminal may be implemented as the following steps D11 to D15.
In step D11, the execution status of the operation is monitored through the interface of the application involved in the operation indicated by the voice instruction.
In step D12, feedback text for the user speech information is generated according to the execution status.
In step D13, the feedback text for the user speech information is sent to the speech-processing cloud.
In step D14, the feedback voice information in digital signal form corresponding to the feedback text, returned by the speech-processing cloud, is received.
In step D15, the feedback voice information in digital signal form is converted into feedback voice information in analog signal form.
Here, after obtaining the voice instruction, i.e., the speech intent, the speech-intent dispatch submodule 1213 of the terminal can call a terminal system application interface or third-party application interface according to the speech intent, controlling the corresponding application to perform the intended operation. Meanwhile, the speech-intent dispatch submodule 1213 can monitor, through the system application interface or third-party application interface, the execution status of the operation by the system application or third-party application, thereby obtaining the terminal's execution status of the user speech information, and generate feedback text for the user speech information according to that execution status. The speech-intent dispatch submodule 1213 then sends the feedback text to the text-to-speech module 133 of the speech-processing cloud 13; the text-to-speech module 133 can convert the feedback text into feedback voice information in digital signal form and return it to the audio playback module 122 of the terminal 12. The audio playback module 122 can convert the feedback voice information in digital signal form into feedback voice information in analog signal form and send this feedback voice to the Bluetooth module 110 of the headset 11; after obtaining the feedback voice information in analog signal form, the Bluetooth module 110 forwards it to the micro-control unit 115, which controls the loudspeaker 114 to play the feedback voice information in analog signal form.
In this embodiment, the terminal can monitor the execution status of the operation through the interface of the application involved in the operation indicated by the voice instruction, generate feedback text for the user speech information according to the execution status, and send the feedback text for the user speech to the speech-processing cloud; it receives the feedback voice in digital signal form corresponding to the feedback text returned by the speech-processing cloud, and converts it into feedback voice in analog signal form. In this way, the feedback voice corresponding to the feedback text can be obtained from the speech-processing cloud, which reduces the load on the terminal, and the cloud's processing is more accurate.
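The feedback pipeline of steps D11 to D15 can be sketched with the cloud TTS and the digital-to-analog conversion stubbed out. Both stand-ins are assumptions: here the "digital" and "analog" signals are just tagged payloads standing in for real audio data:

```python
# Sketch of steps D13-D15: feedback text -> cloud TTS ("digital" signal)
# -> terminal audio playback module conversion ("analog" signal).

def cloud_tts(text):
    # Stand-in for the text-to-speech module 133 in the speech-processing
    # cloud; a real implementation would return synthesized audio.
    return {"format": "digital", "payload": text}

def digital_to_analog(digital_signal):
    # Stand-in for the terminal's audio playback module conversion.
    return {"format": "analog", "payload": digital_signal["payload"]}

feedback_text = "Contact Zhang San not found"
analog = digital_to_analog(cloud_tts(feedback_text))
print(analog["format"], "-", analog["payload"])
```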
The implementation process is described in detail below through several embodiments.
Fig. 4 is a flow chart of a method of controlling a terminal according to an exemplary embodiment. As shown in Fig. 4, the method can be realized by a wearable device, a terminal, and a speech-processing cloud, and includes steps 401 to 415.
In step 401, the wearable device obtains a voice wake-up instruction.
The voice wake-up instruction is used to instruct the wearable device to collect user speech information. Obtaining the voice wake-up instruction by the wearable device includes: obtaining preset button operation information; or obtaining preset wake-up voice information.
In step 402, the wearable device executes the voice wake-up instruction and collects user speech information.
In step 403, the wearable device sends the user speech information to the terminal wirelessly connected to the wearable device, and the terminal receives the user speech information sent by the wearable device; the user speech information is used to control the terminal to perform the operation indicated by the user speech information.
In step 404, the terminal detects the speech endpoint of the user speech information and determines the user speech information before the speech endpoint.
In step 405, the terminal sends the user speech information before the speech endpoint to the speech-processing cloud.
In step 406, the speech-processing cloud determines the voice instruction in the user speech information.
In step 407, the speech-processing cloud sends the voice instruction in the user speech information to the terminal, and the terminal receives the voice instruction in the user speech information returned by the speech-processing cloud.
In step 408, the terminal determines the application type involved in the operation indicated by the voice instruction. When the operation indicated by the voice instruction includes an operation in a system application, the terminal calls a system application interface and controls the system application to perform the operation indicated by the voice instruction; when the operation includes an operation in a third-party application, the terminal calls a third-party application interface and controls the third-party application to perform the operation indicated by the voice instruction.
In step 409, the terminal monitors the execution status of the operation through the interface of the application involved in the operation indicated by the voice instruction.
In step 410, the terminal generates feedback text for the user speech information according to the execution status.
In step 411, the terminal sends the feedback text for the user speech information to the speech-processing cloud.
In step 412, the speech-processing cloud converts the feedback text into feedback voice information in digital signal form and sends it to the terminal, and the terminal receives the feedback voice information in digital signal form corresponding to the feedback text returned by the speech-processing cloud.
In step 413, the terminal converts the feedback voice information in digital signal form into feedback voice information in analog signal form.
In step 414, the terminal returns the feedback voice information to the wearable device, and the wearable device receives the feedback voice information returned by the terminal. The feedback voice information is used to notify the user of the terminal's execution status of the user speech.
In step 415, the wearable device plays the feedback voice.
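The end-to-end flow of steps 401 to 415 can be sketched with all three parties stubbed out. The cloud recognition and terminal execution below are toy stand-ins under assumed interfaces, not the patent's actual ASR/NLP or application APIs:

```python
# End-to-end sketch: wearable captures speech (as text here), terminal
# forwards it to a stubbed speech-processing cloud, executes the returned
# instruction, and feedback text flows back for the wearable to play.

def cloud_asr_nlp(speech_text):
    # Steps 405-407: the cloud turns speech into a voice instruction.
    if speech_text.startswith("call "):
        return {"action": "call", "contact": speech_text[len("call "):]}
    return {"action": "unknown"}

def terminal_execute(instruction, contacts):
    # Steps 408-410: execute the instruction and produce feedback text.
    if instruction["action"] == "call":
        name = instruction["contact"]
        if name in contacts:
            return "Calling " + name
        return "Contact {} not found".format(name)
    return "Unsupported instruction"

def run_flow(speech_text, contacts):
    instruction = cloud_asr_nlp(speech_text)            # steps 404-407
    feedback = terminal_execute(instruction, contacts)  # steps 408-410
    return feedback  # steps 411-415: returned to and played by the wearable

print(run_flow("call Zhang San", contacts={"Wang Wu": "138-0000-0000"}))
```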
The following are device embodiments of the present disclosure, which can be used to carry out the method embodiments of the present disclosure.
Fig. 5 is a block diagram of a device for controlling a terminal applied to a wearable device according to an exemplary embodiment. The device can be implemented by software, hardware, or a combination of both as part or all of a wearable device. As shown in Fig. 5, the device for controlling a terminal includes a first acquisition module 501, a collection module 502, and a sending module 503, wherein:
the first acquisition module 501 is used to obtain a voice wake-up instruction, the voice wake-up instruction being used to instruct the wearable device to collect user speech information;
the collection module 502 is used to execute the voice wake-up instruction and collect user speech information;
the sending module 503 is used to send the user speech information to the terminal wirelessly connected to the wearable device, the user speech information being used to control the terminal to perform the operation indicated by the user speech information.
In one possible implementation, Fig. 6 is a block diagram of a device for controlling a terminal applied to a wearable device according to an exemplary embodiment. As shown in Fig. 6, the above device for controlling a terminal can be configured to further include a first receiving module 504 and a playback module 505, wherein:
the first receiving module 504 is used to receive the feedback voice information returned by the terminal, the feedback voice information being used to notify the user of the terminal's execution status of the user speech information;
the playback module 505 is used to play the feedback voice information.
In one possible implementation, Fig. 7 is a block diagram of a device for controlling a terminal applied to a wearable device according to an exemplary embodiment. As shown in Fig. 7, the first acquisition module 501 in the above device can be configured to include a first acquisition submodule 5011, wherein:
the first acquisition submodule 5011 is used to obtain preset button operation information.
In one possible implementation, Fig. 8 is a block diagram of a device for controlling a terminal applied to a wearable device according to an exemplary embodiment. As shown in Fig. 8, the first acquisition module 501 in the above device can be configured to include a second acquisition submodule 5012, wherein:
the second acquisition submodule 5012 is used to obtain preset wake-up voice information.
Fig. 9 is a block diagram of a device for controlling a terminal applied to a terminal according to an exemplary embodiment. The device can be implemented by software, hardware, or a combination of both as part or all of a terminal. As shown in Fig. 9, the device for controlling a terminal includes a second receiving module 901, a determining module 902, and an execution module 903, wherein:
the second receiving module 901 is used to receive the user speech information sent by the wearable device wirelessly connected to the terminal, the user speech information being used to control the terminal to perform the operation indicated by the user speech information;
the determining module 902 is used to determine the voice instruction in the user speech information;
the execution module 903 is used to perform the operation indicated by the voice instruction.
In one possible implementation, Figure 10 is a block diagram of a device for controlling a terminal applied to a terminal according to an exemplary embodiment. As shown in Figure 10, the execution module 903 of the above device can be configured to include a determination submodule 9031, a first control submodule 9032, and a second control submodule 9033, wherein:
the determination submodule 9031 is used to determine the application type involved in the operation indicated by the voice instruction, the application type including a system application or a third-party application;
the first control submodule 9032 is used to call a system application interface when the operation indicated by the voice instruction includes an operation in a system application, controlling the system application to perform the operation indicated by the voice instruction;
the second control submodule 9033 is used to call a third-party application interface when the operation indicated by the voice instruction includes an operation in a third-party application, controlling the third-party application to perform the operation indicated by the voice instruction.
In one possible implementation, Figure 11 is a block diagram of a device for controlling a terminal applied to a terminal according to an exemplary embodiment. As shown in Figure 11, the above device can be configured to further include a second acquisition module 904 and a return module 905, wherein:
the second acquisition module 904 is used to obtain feedback voice information for the user speech information, the feedback voice information being used to notify the user of the terminal's execution status of the user speech information;
the return module 905 is used to return the feedback voice information to the wearable device.
In one possible implementation, Figure 12 is a block diagram of a device for controlling a terminal applied to a terminal according to an exemplary embodiment. As shown in Figure 12, the determining module 902 in the above device may be configured to include a detection submodule 9021, a first sending submodule 9022, and a first receiving submodule 9023, wherein:
the detection submodule 9021 is used to detect the speech endpoint of the user speech information and determine the user speech information before the speech endpoint;
the first sending submodule 9022 is used to send the user speech information before the speech endpoint to the speech-processing cloud;
the first receiving submodule 9023 is used to receive the voice instruction in the user speech information returned by the speech-processing cloud.
In one possible implementation, Figure 13 is a block diagram of a device for controlling a terminal applied to a terminal according to an exemplary embodiment. As shown in Figure 13, the second acquisition module 904 in the above device may be configured to include a monitoring submodule 9041, a generation submodule 9042, a second sending submodule 9043, a second receiving submodule 9044, and a conversion submodule 9045, wherein:
the monitoring submodule 9041 is used to monitor the execution status of the operation through the interface of the application involved in the operation indicated by the voice instruction;
the generation submodule 9042 is used to generate feedback text for the user speech information according to the execution status;
the second sending submodule 9043 is used to send the feedback text for the user speech information to the speech-processing cloud;
the second receiving submodule 9044 is used to receive the feedback voice information in digital signal form corresponding to the feedback text returned by the speech-processing cloud;
the conversion submodule 9045 is used to convert the feedback voice information in digital signal form into feedback voice information in analog signal form.
Regarding the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related methods and will not be elaborated here.
Figure 14 is a block diagram of a device for controlling a terminal according to an exemplary embodiment; the device is suitable for devices such as wearable devices. The device 1400 includes a processing component 1411, which further comprises one or more processors, and memory resources represented by a memory 1412 for storing instructions executable by the processing component 1411, such as application programs. The application programs stored in the memory 1412 can include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1411 is configured to execute instructions so as to perform the above methods.
The device 1400 can also include a power supply component 1413 configured to perform power management for the device 1400, a communication interface 1414 configured to connect the device 1400 to other devices such as a terminal, and an input/output (I/O) interface 1415. The input/output (I/O) interface 1415 can provide an interface between the processing component 1411 and peripheral interface modules; such a peripheral interface module can be a key-press module, for example buttons. These buttons may include but are not limited to: a power button and volume buttons. The device 1400 can operate based on an operating system stored in the memory 1412, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The device 1400 can also include an audio component 1416 configured to output and/or input audio signals. For example, the audio component 1416 includes a microphone (MIC); when the device 1400 is in an operating mode, such as a voice-collecting mode, the microphone is configured to receive external audio signals. The received audio signals can be further sent via the communication interface 1414. In some embodiments, the audio component 1416 also includes a loudspeaker for outputting audio signals.
A kind of computer-readable recording medium is present embodiments provided, when the instruction in the storage medium is by device 1400
Processor perform when realize following steps:
Obtain voice and wake up instruction, the voice wakes up instruction and is used to indicate the wearable device collection user speech letter
Breath;
Perform the voice and wake up instruction, gather user speech information;
The user speech information is sent to the terminal with the wearable device wireless connection, the user speech letter
Breath is used to control the terminal to perform the operation indicated by the user speech information.
When executed by the processor, the instructions in the storage medium can also implement the following steps. The method further includes:
receiving feedback voice information returned by the terminal, the feedback voice information being used to notify of the terminal's execution status of the user speech information;
playing the feedback voice information.
When executed by the processor, the instructions in the storage medium can also implement the following steps. The obtaining of the voice wake-up instruction includes:
obtaining preset button operation information; or
obtaining preset wake-up voice information.
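The two wake-up triggers just listed (a preset button operation, or a preset wake-up phrase) could be checked with a predicate like the following. This is a hedged sketch; the trigger values and the `is_wake_instruction` helper are assumptions, not part of the disclosure.

```python
# Hypothetical preset triggers; the disclosure does not fix their values.
PRESET_BUTTON = "long_press_power"
WAKE_PHRASE = "hello assistant"


def is_wake_instruction(event_type, payload):
    """Return True when an event matches either preset wake-up trigger."""
    if event_type == "button":
        # Preset button operation information.
        return payload == PRESET_BUTTON
    if event_type == "voice":
        # Preset wake-up voice information (a fixed phrase).
        return WAKE_PHRASE in payload.lower()
    return False
```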
The disclosure also provides a device for controlling a terminal, applied to a wearable device, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
obtain a voice wake-up instruction, the voice wake-up instruction being used to instruct the wearable device to collect user speech information;
execute the voice wake-up instruction and collect the user speech information; and
send the user speech information to a terminal wirelessly connected to the wearable device, the user speech information being used to control the terminal to perform the operation indicated by the user speech information.
The processor may be further configured such that the method further includes:
receiving feedback voice information returned by the terminal, the feedback voice information being used to notify of the terminal's execution status of the user speech information; and
playing the feedback voice information.
The processor may be further configured such that the obtaining of the voice wake-up instruction includes:
obtaining preset button operation information; or
obtaining preset wake-up voice information.
Figure 15 is a block diagram of a device for controlling a terminal according to an exemplary embodiment. The device 1500 is applicable to equipment such as terminals. For example, the device 1500 can be a mobile phone, a game console, a computer, a tablet device, a personal digital assistant, or the like.
The device 1500 can include one or more of the following components: a processing component 1501, a memory 1502, a power supply component 1503, a multimedia component 1504, an audio component 1505, an input/output (I/O) interface 1506, a sensor component 1507, and a communication component 1508.
The processing component 1501 typically controls the overall operation of the device 1500, such as operations associated with display, telephone calls, data communication, camera operation, and recording operations. The processing component 1501 can include one or more processors 1520 to execute instructions, so as to complete all or part of the steps of the above method. In addition, the processing component 1501 can include one or more modules to facilitate interaction between the processing component 1501 and other components. For example, the processing component 1501 can include a multimedia module to facilitate interaction between the multimedia component 1504 and the processing component 1501.
The memory 1502 is configured to store various types of data to support operation on the device 1500. Examples of such data include instructions for any application program or method operated on the device 1500, contact data, phone-book data, messages, pictures, videos, and so on. The memory 1502 can be realized by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power supply component 1503 provides electric power for the various components of the device 1500. The power supply component 1503 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing electric power for the device 1500.
The multimedia component 1504 includes a screen providing an output interface between the device 1500 and the user. In some embodiments, the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor can not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1504 includes a front camera and/or a rear camera. When the device 1500 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 1505 is configured to output and/or input audio signals. For example, the audio component 1505 includes a microphone (MIC); when the device 1500 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals can further be stored in the memory 1502 or sent via the communication component 1508. In some embodiments, the audio component 1505 also includes a loudspeaker for outputting audio signals.
The I/O interface 1506 provides an interface between the processing component 1501 and peripheral interface modules. The above peripheral interface module may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a locking button.
The sensor component 1507 includes one or more sensors for providing state assessments of various aspects of the device 1500. For example, the sensor component 1507 can detect the open/closed state of the device 1500 and the relative positioning of components, such as the display and keypad of the device 1500; the sensor component 1507 can also detect a change in position of the device 1500 or of a component of the device 1500, the presence or absence of user contact with the device 1500, the orientation or acceleration/deceleration of the device 1500, and a change in temperature of the device 1500. The sensor component 1507 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1507 can also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1507 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1508 is configured to facilitate wired or wireless communication between the device 1500 and other equipment. The device 1500 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1508 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1508 also includes a near-field communication (NFC) module to promote short-range communication. For example, the NFC module can be realized based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1500 can be realized by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 1502 including instructions; the above instructions can be executed by the processor 1520 of the device 1500 to complete the above method. For example, the non-transitory computer-readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
This embodiment provides a computer-readable storage medium. When the instructions in the storage medium are executed by the processor of the device 1500, the following steps are implemented. The method includes:
receiving user speech information sent by a wearable device wirelessly connected to the terminal, the user speech information being used to control the terminal to perform the operation indicated by the user speech information;
determining the voice instruction in the user speech information;
performing the operation indicated by the voice instruction.
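The terminal-side steps above (receive the speech, determine the voice instruction it carries, perform the indicated operation) reduce to a small pipeline. A minimal sketch follows, with stand-in `recognize` and `execute` callables; none of these names come from the disclosure.

```python
def handle_user_speech(user_speech, recognize, execute):
    """Terminal-side pipeline: speech in, operation result out."""
    # Determine the voice instruction carried in the user speech information
    # (in the disclosure, recognition may be delegated to a speech cloud).
    instruction = recognize(user_speech)
    # Perform the operation the instruction indicates.
    return execute(instruction)


# Stand-in recognizer and executor for illustration only.
result = handle_user_speech(
    b"raw-audio-from-wearable",
    recognize=lambda audio: "open_music_player",
    execute=lambda instruction: f"executed {instruction}",
)
```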
When executed by the processor, the instructions in the storage medium can also implement the following steps. The performing of the operation indicated by the voice instruction includes:
determining the application type involved in the operation indicated by the voice instruction, the application type including a system application or a third-party application;
when the operation indicated by the voice instruction includes an operation in a system application, calling the system application interface and controlling the system application to perform the operation indicated by the voice instruction;
when the operation indicated by the voice instruction includes an operation in a third-party application, calling the third-party application interface and controlling the third-party application to perform the operation indicated by the voice instruction.
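The system-application / third-party-application routing described above can be expressed as a dispatch on application type. The sketch below assumes a hypothetical `app:action` instruction encoding and invented API stubs; the disclosure only specifies that the matching interface is called.

```python
# Hypothetical set of applications shipped with the system.
SYSTEM_APPS = {"phone", "messages", "settings"}


def call_system_api(app, action):
    # Stand-in for the system application interface.
    return ("system", app, action)


def call_third_party_api(app, action):
    # Stand-in for the third-party application interface.
    return ("third_party", app, action)


def dispatch(instruction):
    """Route an instruction to the matching application interface."""
    app, _, action = instruction.partition(":")
    if app in SYSTEM_APPS:
        return call_system_api(app, action)
    return call_third_party_api(app, action)
```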
When executed by the processor, the instructions in the storage medium can also implement the following steps. The method further includes:
obtaining feedback voice information for the user speech information, wherein the feedback voice information is used to notify of the terminal's execution status of the user speech information;
returning the feedback voice information to the wearable device.
When executed by the processor, the instructions in the storage medium can also implement the following steps. The determining of the voice instruction in the user speech information includes:
detecting the voice endpoint of the user speech information, and determining the user speech information before the voice endpoint;
sending the user speech information before the voice endpoint to a speech processing cloud;
receiving the voice instruction in the user speech information returned by the speech processing cloud.
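Voice endpoint detection, as used above to decide which portion of the user speech is sent to the speech processing cloud, is commonly done by looking for a run of low-energy frames. The following is a simplified energy-threshold sketch under assumed parameters, not the method the disclosure mandates.

```python
def speech_before_endpoint(frame_energies, threshold=500, trailing_silence=3):
    """Return the frames up to the detected voice endpoint.

    The endpoint is taken as the start of the first run of
    `trailing_silence` consecutive low-energy frames.
    """
    silent = 0
    for i, energy in enumerate(frame_energies):
        if energy < threshold:
            silent += 1
            if silent >= trailing_silence:
                # Drop the silent tail; keep only the speech before it.
                return frame_energies[: i - trailing_silence + 1]
        else:
            silent = 0
    return frame_energies  # no endpoint found; keep everything
```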
When executed by the processor, the instructions in the storage medium can also implement the following steps. The obtaining of the feedback voice information for the user speech information includes:
monitoring, through the interface of the application involved in the operation indicated by the voice instruction, the execution status of the operation;
generating feedback text for the user speech information according to the execution status;
sending the feedback text for the user speech information to the speech processing cloud;
receiving the feedback voice information in digital signal form, corresponding to the feedback text, returned by the speech processing cloud;
converting the feedback voice information in digital signal form into feedback voice information in analog signal form.
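The feedback pipeline above (monitor execution status, generate feedback text, have the cloud synthesize it, then convert digital to analog) can be outlined as follows. The helper names and the trivial `synthesize` stand-in are assumptions; real synthesis and digital-to-analog conversion happen in the cloud service and the audio hardware respectively.

```python
def build_feedback(user_request, succeeded, synthesize):
    """Turn an operation's execution status into spoken feedback."""
    # Monitored result -> feedback text describing the execution status.
    status = "completed" if succeeded else "failed"
    feedback_text = f"Your request '{user_request}' has {status}."
    # The speech processing cloud would return the synthesized speech as a
    # digital signal; a DAC later converts it to analog for playback.
    digital_audio = synthesize(feedback_text)
    return feedback_text, digital_audio


text, audio = build_feedback("play music", True, synthesize=lambda t: t.encode())
```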
The disclosure also provides a device for controlling a terminal, applied to a terminal, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receive user speech information sent by a wearable device wirelessly connected to the terminal, the user speech information being used to control the terminal to perform the operation indicated by the user speech information;
determine the voice instruction in the user speech information; and
perform the operation indicated by the voice instruction.
The processor may be further configured such that the performing of the operation indicated by the voice instruction includes:
determining the application type involved in the operation indicated by the voice instruction, the application type including a system application or a third-party application;
when the operation indicated by the voice instruction includes an operation in a system application, calling the system application interface and controlling the system application to perform the operation indicated by the voice instruction;
when the operation indicated by the voice instruction includes an operation in a third-party application, calling the third-party application interface and controlling the third-party application to perform the operation indicated by the voice instruction.
The processor may be further configured such that the method further includes:
obtaining feedback voice information for the user speech information, wherein the feedback voice information is used to notify of the terminal's execution status of the user speech information; and
returning the feedback voice information to the wearable device.
The processor may be further configured such that the determining of the voice instruction in the user speech information includes:
detecting the voice endpoint of the user speech information, and determining the user speech information before the voice endpoint;
sending the user speech information before the voice endpoint to the speech processing cloud;
receiving the voice instruction in the user speech information returned by the speech processing cloud.
The processor may be further configured such that the obtaining of the feedback voice information for the user speech information includes:
monitoring, through the interface of the application involved in the operation indicated by the voice instruction, the execution status of the operation;
generating feedback text for the user speech information according to the execution status;
sending the feedback text for the user speech information to the speech processing cloud;
receiving the feedback voice information in digital signal form, corresponding to the feedback text, returned by the speech processing cloud;
converting the feedback voice information in digital signal form into feedback voice information in analog signal form.
Other embodiments of the disclosure will readily occur to those skilled in the art after considering the specification and practicing the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or conventional technical means in the art not disclosed by the disclosure. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be appreciated that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (20)
- 1. A method of controlling a terminal, characterized in that it is applied to a wearable device, the method comprising: obtaining a voice wake-up instruction, the voice wake-up instruction being used to instruct the wearable device to collect user speech information; executing the voice wake-up instruction and collecting user speech information; sending the user speech information to a terminal wirelessly connected to the wearable device, the user speech information being used to control the terminal to perform the operation indicated by the user speech information.
- 2. The method according to claim 1, characterized in that the method further comprises: receiving feedback voice information returned by the terminal, the feedback voice information being used to notify of the terminal's execution status of the user speech information; playing the feedback voice information.
- 3. The method according to claim 1, characterized in that the obtaining of the voice wake-up instruction comprises: obtaining preset button operation information; or obtaining preset wake-up voice information.
- 4. A method of controlling a terminal, characterized in that it is applied to a terminal, the method comprising: receiving user speech information sent by a wearable device wirelessly connected to the terminal, the user speech information being used to control the terminal to perform the operation indicated by the user speech information; determining the voice instruction in the user speech information; performing the operation indicated by the voice instruction.
- 5. The method according to claim 4, characterized in that the performing of the operation indicated by the voice instruction comprises: determining the application type involved in the operation indicated by the voice instruction, the application type including a system application or a third-party application; when the operation indicated by the voice instruction includes an operation in a system application, calling the system application interface and controlling the system application to perform the operation indicated by the voice instruction; when the operation indicated by the voice instruction includes an operation in a third-party application, calling the third-party application interface and controlling the third-party application to perform the operation indicated by the voice instruction.
- 6. The method according to claim 4, characterized in that the method further comprises: obtaining feedback voice information for the user speech information, wherein the feedback voice information is used to notify of the terminal's execution status of the user speech information; returning the feedback voice information to the wearable device.
- 7. The method according to claim 4, characterized in that the determining of the voice instruction in the user speech information comprises: detecting the voice endpoint of the user speech information, and determining the user speech information before the voice endpoint; sending the user speech information before the voice endpoint to a speech processing cloud; receiving the voice instruction in the user speech information returned by the speech processing cloud.
- 8. The method according to claim 6, characterized in that the obtaining of the feedback voice information for the user speech information comprises: monitoring, through the interface of the application involved in the operation indicated by the voice instruction, the execution status of the operation; generating feedback text for the user speech information according to the execution status; sending the feedback text for the user speech information to the speech processing cloud; receiving the feedback voice information in digital signal form, corresponding to the feedback text, returned by the speech processing cloud; converting the feedback voice information in digital signal form into feedback voice information in analog signal form.
- 9. A device for controlling a terminal, characterized in that it is applied to a wearable device, the device comprising: a first obtaining module, for obtaining a voice wake-up instruction, the voice wake-up instruction being used to instruct the wearable device to collect user speech information; a collecting module, for executing the voice wake-up instruction and collecting user speech information; a sending module, for sending the user speech information to a terminal wirelessly connected to the wearable device, the user speech information being used to control the terminal to perform the operation indicated by the user speech information.
- 10. The device according to claim 9, characterized in that the device further comprises: a first receiving module, for receiving feedback voice information returned by the terminal, the feedback voice information being used to notify of the terminal's execution status of the user speech information; a playing module, for playing the feedback voice information.
- 11. The device according to claim 9, characterized in that the first obtaining module comprises: a first obtaining submodule, for obtaining preset button operation information; or a second obtaining submodule, for obtaining preset wake-up voice information.
- 12. A device for controlling a terminal, characterized in that it is applied to a terminal, the device comprising: a second receiving module, for receiving user speech information sent by a wearable device wirelessly connected to the terminal, the user speech information being used to control the terminal to perform the operation indicated by the user speech information; a determining module, for determining the voice instruction in the user speech information; an execution module, for performing the operation indicated by the voice instruction.
- 13. The device according to claim 12, characterized in that the execution module comprises: a determination submodule, for determining the application type involved in the operation indicated by the voice instruction, the application type including a system application or a third-party application; a first control submodule, for, when the operation indicated by the voice instruction includes an operation in a system application, calling the system application interface and controlling the system application to perform the operation indicated by the voice instruction; a second control submodule, for, when the operation indicated by the voice instruction includes an operation in a third-party application, calling the third-party application interface and controlling the third-party application to perform the operation indicated by the voice instruction.
- 14. The device according to claim 12, characterized in that the device further comprises: a second obtaining module, for obtaining feedback voice information for the user speech information, wherein the feedback voice information is used to notify of the terminal's execution status of the user speech information; a returning module, for returning the feedback voice information to the wearable device.
- 15. The device according to claim 12, characterized in that the determining module comprises: a detection submodule, for detecting the voice endpoint of the user speech information and determining the user speech information before the voice endpoint; a first sending submodule, for sending the user speech information before the voice endpoint to a speech processing cloud; a first receiving submodule, for receiving the voice instruction in the user speech information returned by the speech processing cloud.
- 16. The device according to claim 14, characterized in that the second obtaining module comprises: a monitoring submodule, for monitoring, through the interface of the application involved in the operation indicated by the voice instruction, the execution status of the operation; a generating submodule, for generating feedback text for the user speech information according to the execution status; a second sending submodule, for sending the feedback text for the user speech information to the speech processing cloud; a second receiving submodule, for receiving the feedback voice information in digital signal form, corresponding to the feedback text, returned by the speech processing cloud; a conversion submodule, for converting the feedback voice information in digital signal form into feedback voice information in analog signal form.
- 17. A device for controlling a terminal, characterized in that it is applied to a wearable device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: obtain a voice wake-up instruction, the voice wake-up instruction being used to instruct the wearable device to collect user speech information; execute the voice wake-up instruction and collect user speech information; and send the user speech information to a terminal wirelessly connected to the wearable device, the user speech information being used to control the terminal to perform the operation indicated by the user speech information.
- 18. A device for controlling a terminal, characterized in that it is applied to a terminal, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: receive user speech information sent by a wearable device wirelessly connected to the terminal, the user speech information being used to control the terminal to perform the operation indicated by the user speech information; determine the voice instruction in the user speech information; and perform the operation indicated by the voice instruction.
- 19. A computer-readable storage medium storing computer instructions, characterized in that it is applied to a wearable device, wherein the computer instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 3.
- 20. A computer-readable storage medium storing computer instructions, characterized in that it is applied to a terminal, wherein the computer instructions, when executed by a processor, implement the steps of the method of any one of claims 4 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711130491.1A CN107978316A (en) | 2017-11-15 | 2017-11-15 | The method and device of control terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711130491.1A CN107978316A (en) | 2017-11-15 | 2017-11-15 | The method and device of control terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107978316A true CN107978316A (en) | 2018-05-01 |
Family
ID=62013601
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711130491.1A Pending CN107978316A (en) | 2017-11-15 | 2017-11-15 | The method and device of control terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107978316A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108922537A (en) * | 2018-05-28 | 2018-11-30 | Oppo广东移动通信有限公司 | Audio identification methods, device, terminal, earphone and readable storage medium storing program for executing |
CN109065050A (en) * | 2018-09-28 | 2018-12-21 | 上海与德科技有限公司 | A kind of sound control method, device, equipment and storage medium |
CN109192207A (en) * | 2018-09-17 | 2019-01-11 | 顺丰科技有限公司 | Voice communication assembly, voice communication method and system, equipment, storage medium |
CN109274723A (en) * | 2018-08-30 | 2019-01-25 | 出门问问信息科技有限公司 | A kind of information-pushing method and device based on earphone |
CN109413268A (en) * | 2018-10-10 | 2019-03-01 | 深圳市领芯者科技有限公司 | A kind of assisting navigation software plays the methods, devices and systems of voice |
CN109448709A (en) * | 2018-10-16 | 2019-03-08 | 华为技术有限公司 | A kind of terminal throws the control method and terminal of screen |
CN109637542A (en) * | 2018-12-25 | 2019-04-16 | 圆通速递有限公司 | A kind of outer paging system of voice |
CN109767764A (en) * | 2018-12-29 | 2019-05-17 | 浙江比逊河鞋业有限公司 | A kind of intelligent children's footwear and its control method based on voice control |
CN109783733A (en) * | 2019-01-15 | 2019-05-21 | 三角兽(北京)科技有限公司 | User's portrait generating means and method, information processing unit and storage medium |
CN109862178A (en) * | 2019-01-17 | 2019-06-07 | 珠海市黑鲸软件有限公司 | A kind of wearable device and its voice control communication method |
CN109859762A (en) * | 2019-01-02 | 2019-06-07 | 百度在线网络技术(北京)有限公司 | Voice interactive method, device and storage medium |
CN111010482A (en) * | 2019-12-13 | 2020-04-14 | 上海传英信息技术有限公司 | Voice retrieval method, wireless device and computer readable storage medium |
CN111048066A (en) * | 2019-11-18 | 2020-04-21 | 云知声智能科技股份有限公司 | Voice endpoint detection system assisted by images on child robot |
CN111667827A (en) * | 2020-05-28 | 2020-09-15 | 北京小米松果电子有限公司 | Voice control method and device of application program and storage medium |
CN112969116A (en) * | 2021-02-01 | 2021-06-15 | 深圳市美恩微电子有限公司 | Interactive control system of wireless earphone and intelligent terminal |
CN113409788A (en) * | 2021-07-15 | 2021-09-17 | 深圳市同行者科技有限公司 | Voice wake-up method, system, device and storage medium |
WO2021232913A1 (en) * | 2020-05-18 | 2021-11-25 | Oppo广东移动通信有限公司 | Voice information processing method and apparatus, and storage medium and electronic device |
US12001758B2 (en) | 2020-05-18 | 2024-06-04 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Voice information processing method and electronic device |
-
2017
- 2017-11-15 CN CN201711130491.1A patent/CN107978316A/en active Pending
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108922537B (en) * | 2018-05-28 | 2021-05-18 | Oppo广东移动通信有限公司 | Audio recognition method, device, terminal, earphone and readable storage medium |
CN108922537A (en) * | 2018-05-28 | 2018-11-30 | Oppo广东移动通信有限公司 | Audio identification methods, device, terminal, earphone and readable storage medium storing program for executing |
CN109274723A (en) * | 2018-08-30 | 2019-01-25 | 出门问问信息科技有限公司 | A kind of information-pushing method and device based on earphone |
CN109274723B (en) * | 2018-08-30 | 2021-09-14 | 出门问问信息科技有限公司 | Information pushing method and device based on earphone |
CN109192207A (en) * | 2018-09-17 | 2019-01-11 | 顺丰科技有限公司 | Voice communication assembly, voice communication method and system, equipment, storage medium |
CN109065050A (en) * | 2018-09-28 | 2018-12-21 | 上海与德科技有限公司 | A kind of sound control method, device, equipment and storage medium |
CN109413268A (en) * | 2018-10-10 | 2019-03-01 | 深圳市领芯者科技有限公司 | A kind of assisting navigation software plays the methods, devices and systems of voice |
CN109448709A (en) * | 2018-10-16 | 2019-03-08 | 华为技术有限公司 | A kind of terminal throws the control method and terminal of screen |
CN109637542A (en) * | 2018-12-25 | 2019-04-16 | 圆通速递有限公司 | A kind of outer paging system of voice |
CN109767764A (en) * | 2018-12-29 | 2019-05-17 | 浙江比逊河鞋业有限公司 | A kind of intelligent children's footwear and its control method based on voice control |
CN109859762A (en) * | 2019-01-02 | 2019-06-07 | 百度在线网络技术(北京)有限公司 | Voice interactive method, device and storage medium |
CN109783733B (en) * | 2019-01-15 | 2020-11-06 | 腾讯科技(深圳)有限公司 | User image generation device and method, information processing device, and storage medium |
CN109783733A (en) * | 2019-01-15 | 2019-05-21 | 三角兽(北京)科技有限公司 | User's portrait generating means and method, information processing unit and storage medium |
CN109862178A (en) * | 2019-01-17 | 2019-06-07 | 珠海市黑鲸软件有限公司 | A kind of wearable device and its voice control communication method |
CN111048066A (en) * | 2019-11-18 | 2020-04-21 | 云知声智能科技股份有限公司 | Voice endpoint detection system assisted by images on child robot |
CN111010482A (en) * | 2019-12-13 | 2020-04-14 | 上海传英信息技术有限公司 | Voice retrieval method, wireless device and computer readable storage medium |
WO2021232913A1 (en) * | 2020-05-18 | 2021-11-25 | Oppo广东移动通信有限公司 | Voice information processing method and apparatus, and storage medium and electronic device |
US12001758B2 (en) | 2020-05-18 | 2024-06-04 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Voice information processing method and electronic device |
CN111667827A (en) * | 2020-05-28 | 2020-09-15 | 北京小米松果电子有限公司 | Voice control method and device of application program and storage medium |
CN111667827B (en) * | 2020-05-28 | 2023-10-17 | 北京小米松果电子有限公司 | Voice control method and device for application program and storage medium |
CN112969116A (en) * | 2021-02-01 | 2021-06-15 | 深圳市美恩微电子有限公司 | Interactive control system of wireless earphone and intelligent terminal |
CN113409788A (en) * | 2021-07-15 | 2021-09-17 | 深圳市同行者科技有限公司 | Voice wake-up method, system, device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107978316A (en) | The method and device of control terminal | |
CN105451111B (en) | Earphone playback control method, device and terminal | |
CN108710615B (en) | Translation method and related equipment | |
CN104168353B (en) | Bluetooth headset and its interactive voice control method | |
CN105282345B (en) | Method and device for adjusting call volume | |
CN103973544B (en) | Voice communication method, voice playing method and device | |
CN104991754B (en) | Recording method and device | |
CN106161781A (en) | Volume adjustment method and device | |
CN109360549B (en) | Data processing method, wearable device and device for data processing | |
CN104836897A (en) | Method and device for controlling terminal communication through wearable device | |
CN106791921A (en) | Live video streaming processing method and device | |
CN105224601B (en) | Method and apparatus for extracting time information | |
CN106888327B (en) | Voice playing method and device | |
CN107919124A (en) | Device wake-up method and apparatus | |
CN111696553A (en) | Voice processing method and device and readable medium | |
CN105872976A (en) | Positioning method and device | |
CN109067965A (en) | Translation method, translation device, wearable device and storage medium | |
CN108648754A (en) | Voice control method and device | |
CN106357913A (en) | Method and device for prompting information | |
CN104333641B (en) | Call method and device | |
CN105448300A (en) | Method and device for calling | |
CN108923810A (en) | Translation method and related device | |
WO2021244058A1 (en) | Process execution method, device, and readable medium | |
CN108874450A (en) | Method and device for waking up a voice assistant | |
CN106210247A (en) | Terminal control method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-05-01 |