CN108231079B - Method, apparatus, device and computer-readable storage medium for controlling electronic device - Google Patents


Info

Publication number: CN108231079B
Application number: CN201810102160.5A
Authority: CN (China)
Prior art keywords: electronic device, user, wake, command, response
Legal status: Active (the listed status is an assumption by Google and not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN108231079A (en)
Inventor: 耿雷
Current assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810102160.5A
Publication of application CN108231079A; grant published as CN108231079B


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to example embodiments of the present disclosure, methods, apparatuses, devices, and computer-readable storage media for controlling an electronic device are provided. The method includes detecting whether a user is present in an environment in which the electronic device is located based on sensed information received from a sensor. The method also includes causing the electronic device to enter a wake-up activated state in response to detecting the presence of the user, the electronic device being capable of recognizing a wake-up command of the user to the electronic device in the wake-up activated state. The method further includes, in response to recognizing the wake-up command, causing the electronic device to enter a control-active state in which the electronic device is capable of responding to the user's voice control commands. With embodiments of the present disclosure, the electronic device avoids performing unnecessary processing when it cannot receive a user command, which reduces power consumption and further lowers the device's false wake-up rate.

Description

Method, apparatus, device and computer-readable storage medium for controlling electronic device
Technical Field
Embodiments of the present disclosure relate generally to the field of intelligent interaction, and more particularly, to a method, apparatus, device, and computer-readable storage medium for controlling an electronic device.
Background
In recent years, with the rapid development of artificial intelligence technology, intelligent interactive systems, especially far-field speech interaction systems, have become one of the important interaction portals and have been widely applied to people's daily life, work, and even production processes. For example, electronic devices with voice interaction functions, such as smart home appliances, greatly facilitate people's lives owing to their wide range of applications. When a home is equipped with voice-enabled appliances such as a smart speaker, smart television, smart washing machine, or smart kitchen appliance, the user only needs to issue a specific wake-up command to wake a given appliance. Once the appliance is awake, the user can carry out further voice interaction with it to control it to perform corresponding operations.
Because a user's wake-up command may arrive at any time, the electronic device usually stays in an operating state continuously so as not to miss it. Such an operating mechanism incurs a large computational overhead for the electronic device and thus increased power consumption. In addition, the device may be falsely awakened by noise, object movement, and the like in the environment, which is also undesirable.
Disclosure of Invention
According to an example embodiment of the present disclosure, a scheme for controlling an electronic device is provided.
In a first aspect of the disclosure, a method of controlling an electronic device is provided. The method includes detecting whether a user is present in an environment in which the electronic device is located based on sensed information received from a sensor. The method also includes causing the electronic device to enter a wake-up activated state in response to detecting the presence of the user, the electronic device being capable of recognizing a wake-up command of the user to the electronic device in the wake-up activated state. The method further includes, in response to recognizing the wake-up command, causing the electronic device to enter a control-active state in which the electronic device is capable of responding to the user's voice control commands.
In a second aspect of the disclosure, an apparatus for controlling an electronic device is provided. The apparatus includes a detection module configured to detect whether a user is present in an environment in which the electronic device is located based on sensed information received from a sensor. The apparatus also includes a first state switching module configured to, in response to detecting the presence of the user, cause the electronic device to enter a wake-up activated state in which the electronic device is capable of recognizing a wake-up command of the user to the electronic device. The apparatus further includes a second state switching module configured to, in response to recognizing the wake-up command, cause the electronic device to enter a control-active state in which the electronic device is capable of responding to a voice control command of the user.
In a third aspect of the disclosure, an apparatus is provided that includes one or more processors; and a storage device for storing one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors are caused to implement a method according to the first aspect of the disclosure.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements a method according to the first aspect of the present disclosure.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow chart of a process of controlling an electronic device according to some embodiments of the present disclosure;
FIG. 3 illustrates a block diagram of an example structure of an electronic device, in accordance with some embodiments of the present disclosure;
FIG. 4 illustrates a state transition diagram of an electronic device, according to some embodiments of the present disclosure;
FIG. 5 shows a schematic block diagram of an apparatus for controlling an electronic device, in accordance with some embodiments of the present disclosure; and
FIG. 6 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its variants should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions may also appear below.
Fig. 1 illustrates a schematic diagram of an example environment 100 in which various embodiments of the present disclosure can be implemented. An electronic device 110 is placed or installed in the example environment 100. The example environment 100 also includes a user 120. User 120 interacts with electronic device 110 via an interaction link 130.
The electronic device 110 has a voice interaction function; that is, the user 120 can control the electronic device 110 to perform a corresponding operation by issuing a voice command. Thus, the electronic device 110 may also be referred to as a voice interaction device. Before the electronic device 110 will respond to voice control commands, the user 120 must first wake it up through voice, hand, head, or body gestures, and/or other means. Accordingly, the electronic device 110 may receive the wake-up command of the user 120 via the interaction link 130 through a microphone, an image capture device, and/or another receiving device. The wake-up command may be a preset specific command. After detecting this specific wake-up command, the electronic device 110 can carry out further voice interaction with the user 120.
It should be understood that the electronic device 110 with voice interaction functionality may also support other ways of interacting, such as being activated or deactivated, or controlled to perform corresponding operations, via virtual or physical buttons, a remote control, and so on. The electronic device 110 may include various smart home appliances, smart vehicle-mounted devices, robots, and other fixed or portable electronic devices with voice interaction functions; examples include, but are not limited to, smart speakers, smart televisions, smart refrigerators, smart washing machines, smart rice cookers, smart air conditioners, smart electric water heaters, smart set-top boxes, smart vehicle-mounted speakers, smart vehicle-mounted navigation devices, floor-sweeping robots, chat robots, nursing robots, and so forth. Accordingly, the environment 100 may be a room, a building, a vehicle compartment, or any other space in which the electronic device 110 is installed or placed.
As mentioned above, the wake-up manner of the electronic device 110 may include voice wake-up or gesture wake-up, etc., and the control manner of the electronic device 110 may be voice control. For convenience of description, the following description will be made by taking voice wakeup as an example. However, it should be understood that electronic devices that wake up by gestures or other means are equally suitable.
In a conventional interaction scheme, to avoid missing a user's wake-up command, the electronic device, in particular the part responsible for voice interaction, is kept in an active state at all times. For an electronic device woken and controlled by voice, its microphone continuously collects sounds in the environment, various sound processing algorithm modules process the collected audio signals, and a voice wake-up algorithm module (also called a voice wake-up engine) then identifies whether the audio signals contain the user's wake-up command. If a wake-up command is recognized, the audio signal collected by the microphone is passed to a speech recognition algorithm module (also called a speech recognition engine), which recognizes a control command for the electronic device from the audio signal so that the device can respond to it correctly. If no wake-up command is recognized, the microphone, the sound processing algorithm modules, and the voice wake-up engine continue the acquisition, processing, and recognition without interruption.
It can be seen that, because the electronic device is continuously in the working state, it continuously performs sound processing and wake-up command recognition, which increases the device's computational load and hence its power consumption. In some voice-interactive electronic devices in particular, the front-end noise reduction algorithms and voice wake-up place great demands on hardware computing power, so continuous processing incurs even greater power consumption. Furthermore, in noisy environments, for example where other audio sources are playing around the device, the false wake-up rate of the electronic device may increase. Such false wake-ups are undesirable because, after being woken up, the electronic device further performs control command recognition and related processing, such as sound source localization, beamforming, noise suppression, dereverberation, and non-linear processing, all aimed at supporting more accurate recognition of control commands. All of this processing incurs unnecessary computational and power overhead.
To address the above problems, and potentially other related problems, embodiments of the present disclosure provide an improved voice interaction process. Specifically, embodiments of the present disclosure propose a scheme for controlling an electronic device. In this scheme, a sensor is used to detect the presence of a user in the environment in which the electronic device is located. If no user is detected, the electronic device remains in a standby state. If the presence of the user is detected, the electronic device enters a state in which it can recognize a wake-up command from the user. If a wake-up command is recognized, the electronic device enters a state in which it can respond to the user's control commands. Through the use of the sensor, the electronic device avoids performing unnecessary processing when it cannot receive a user command, which reduces power consumption and further lowers the device's false wake-up rate. Moreover, by first activating only wake-up command recognition and enabling the response to control commands only after a wake-up command is recognized, power consumption can be reduced further.
Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings.
Fig. 2 shows a flowchart of a process 200 of controlling an electronic device, according to some embodiments of the present disclosure. Process 200 may be used to control a voice interaction process of electronic device 110 of fig. 1. In some embodiments, process 200 may be implemented by electronic device 110. Process 200 may also be implemented by other computing devices communicatively coupled with electronic device 110. In the following, for ease of explanation, the implementation of process 200 at electronic device 110 is discussed as an example.
At 210, the electronic device 110 detects the presence of the user 120 in the environment 100 in which the electronic device 110 is located based on the sensed information received from the sensors. According to an embodiment of the present disclosure, it is desirable to cause the electronic device 110 to perform voice interaction-related processing only when the user 120 is present in the environment 100, and to cause the electronic device 110 to enter a standby state when the user 120 is not present in the environment 100. For this purpose, the presence of the user 120 in the environment 100 is detected by means of sensors. The sensors may sense users in the environment 100 in real time.
Sensors may be placed in the environment 100 and communicatively coupled with the electronic device 110 to communicate sensed information related to the environment 100. In some embodiments, the sensor may be integrated into the electronic device 110. A low power sensor may be selected for sensing the presence of a user. Thus, although the sensor senses the user information in the environment 100 in real time, it does not cause excessive power consumption.
The sensor may be any sensing device capable of detecting a specific object (e.g., a human body) in the environment. In some embodiments, the sensor may be a human-body infrared sensor, such as an infrared emitter-receiver pair or a pyroelectric infrared sensor, which determines the presence of a human user by sensing the infrared radiation of the human body. In some embodiments, the sensor may instead be a Doppler sensor, which detects that a human body is within a predetermined range of the sensor (which is also the distance from the electronic device 110 when the sensor is integrated with it) and thereby determines the presence of the user. In some embodiments, the electronic device 110 may receive raw sensing data from the sensor and determine whether the user 120 is present by analyzing that data. In still other embodiments, the electronic device 110 may receive from the sensor a sensing indication that explicitly indicates the presence or absence of the user 120.
Depending on the type of sensor, the presence or absence of the user 120 may be determined by various detection techniques. The criterion for judging that the user 120 is present may be, for example, that the user 120 newly appears in the environment 100, that the user 120 is moving within the environment 100, or that the user 120 is within a predetermined distance range of the electronic device 110 (or the sensor). These criteria depend on the sensor type and/or the accuracy requirements and may be chosen according to the actual situation. In some embodiments, multiple sensors or different types of sensors may be used to sense information about the environment 100 so as to determine the presence of the user 120 more accurately. It should be understood that the scope of the embodiments of the disclosure is not limited in this respect.
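The presence criteria above can be sketched as a simple fusion rule. The following is an illustrative sketch, not from the patent: the function name, the choice of a PIR motion flag plus a Doppler range reading as inputs, and the 3 m default range are all assumptions made for illustration.

```python
from typing import Optional

def user_present(pir_motion: bool,
                 doppler_distance_m: Optional[float],
                 max_range_m: float = 3.0) -> bool:
    """True if either criterion is met: motion is sensed by the
    infrared sensor, or the Doppler sensor reports a body within
    the predetermined range (None means no body detected)."""
    if pir_motion:
        return True
    return doppler_distance_m is not None and doppler_distance_m <= max_range_m
```

In practice the thresholds and the fusion logic (e.g., requiring both sensors to agree for higher accuracy) would be chosen per the sensor types and accuracy requirements, as the paragraph above notes.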
The sensor detects sensed information about the environment 100 in real time. Based on this sensed information, the electronic device 110 determines at 220 whether the presence of a user is detected. If the presence of the user 120 is not detected, the electronic device 110 remains in a standby state. In the standby state, the electronic device 110 performs no processing related to voice interaction until the presence of the user 120 is detected: to conserve power and prevent false wake-ups, none of the hardware or software modules associated with voice interaction in the electronic device 110 are operational.
If the presence of the user 120 is detected, then at 230 the electronic device 110 enters a wake-up activated state, in which it is able to recognize a wake-up command from the user 120. If process 200 is implemented by a computing device other than the electronic device 110, that device may, upon detecting the presence of the user 120, send a trigger command to the electronic device 110 to cause it to enter the wake-up activated state.
According to an embodiment of the present disclosure, in the wake-up active state, the electronic device 110 is only partially activated. Specifically, in the wake-up active state, the electronic device 110 can only recognize whether the user 120 issues a wake-up command to the electronic device 110. At this time, the hardware and/or software module in the electronic device 110 responsible for the wake command identification will be activated. For example, in the case where the electronic device 110 is activated by a voice command, the microphone array of the electronic device 110 (or one of the microphones of the microphone array) will be activated to capture audio signals in the environment 100, the sound processing algorithm module will be activated to process the captured audio signals (e.g., perform front end noise reduction, echo cancellation, etc.), and the voice wake-up engine will be activated to identify whether a wake-up command is present in the audio signals. The wake-up command may be a particular sentence, word, phrase, or other particular sound pattern.
In the wake-up active state, the electronic device 110 will continue to operate to recognize the wake-up command. At 240, the electronic device 110 determines whether a wake command is recognized. If a wake command is not recognized, the electronic device 110 will continue to remain in the wake-up active state and continue to recognize the wake command from the audio signal of the user 120. If a wake-up command is recognized, at 250, the electronic device 110 enters a control active state in which the electronic device 110 is able to respond to voice control commands of the user 120.
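The decisions at 220 and 240 can be sketched as a small state-transition function. This is an illustrative simplification (the names are ours, not the patent's): it treats each pass through the loop as one step, and it returns to standby as soon as the user is absent, ignoring the grace period the description introduces later.

```python
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()         # no voice-interaction processing runs
    WAKE_ACTIVE = auto()     # only wake-up command recognition runs
    CONTROL_ACTIVE = auto()  # full response to voice control commands

def step(state: State, user_present: bool, wake_word_heard: bool) -> State:
    """One pass through the decisions at 220 and 240 of process 200."""
    if not user_present:          # decision at 220 fails: back to standby
        return State.STANDBY
    if state is State.STANDBY:    # user detected: partially activate
        return State.WAKE_ACTIVE
    if state is State.WAKE_ACTIVE and wake_word_heard:
        return State.CONTROL_ACTIVE   # decision at 240 succeeds
    return state                  # otherwise remain in the current state
```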
According to an embodiment of the present disclosure, in the control activated state, the electronic device 110 is fully activated. Hardware and/or software modules associated with voice interaction in electronic device 110 are operational. For example, a microphone array of the electronic device 110 will be activated to capture audio signals in the environment 100, and various sound processing algorithm modules will be activated to process the captured audio signals (e.g., perform front-end noise reduction, echo cancellation, sound source localization, beamforming, noise suppression, dereverberation, and non-linear processing, etc.) so as to be able to recognize voice control commands from the audio signals from the user 120. In addition, the electronic device 110 can also perform corresponding operations according to the recognized voice control command. For example, if electronic device 110 is a smart speaker and the recognized voice control command is "play song A," then electronic device 110 will play song A through the speaker.
In some embodiments, the electronic device 110 may also use the sensor's sensed information to determine whether to switch from the control activated state back to the standby state. In particular, the electronic device 110 may detect, based on additional sensed information received from the sensor, whether the user 120 is no longer in the environment 100 (e.g., the user 120 has left the environment 100). As mentioned above, the sensor senses information related to the environment 100 in real time. If, after the electronic device 110 enters the wake-up activated state or the control activated state, the sensed information indicates that the user 120 is no longer in the environment 100, the electronic device 110 enters the standby state. In the standby state, the electronic device 110 can neither recognize a wake-up command nor respond to a voice control command; it therefore performs no voice-interaction-related processing, which serves the goals of reducing both power consumption and the false wake-up rate.
In some embodiments, the electronic device 110 enters the standby state only after detecting that the user 120 has been absent from the environment 100 for a predetermined period of time. In this way, unnecessary operational complexity due to repeated state switching can be avoided, as can the degraded user experience caused by intermittent voice interaction. For example, if the user 120 leaves the environment 100 only briefly and returns within a short time, the electronic device 110 can remain in the control activated state or the wake-up activated state and can quickly be woken by the user 120 or respond to a voice control command of the user 120. The predetermined period of time may be any length of time, preset or configured by the user 120.
In some embodiments, the electronic device 110 may keep running an application that was launched in response to a voice control command, even after detecting that the user 120 is no longer in the environment 100. For example, while in the control activated state, the electronic device 110 launches an application to perform an operation (e.g., play a song or video, or perform another action) in response to a voice control command of the user 120. The application may continue running after the user 120 is detected to have left the environment 100. This avoids service interruptions caused by the user 120 temporarily leaving the environment 100; if the user 120 leaves and returns after a brief period, process 200 need not be repeated to relaunch the application, which further improves service continuity and user experience.
In some embodiments, the electronic device 110 may stop the application after its particular operation completes. Additionally or alternatively, the electronic device 110 may stop the application upon detecting that the user 120 has been absent from the environment 100 for a predetermined period of time. This period may be the same as or different from the predetermined period for entering the standby state, and may likewise be any time period preset or configured by the user 120. In one example, if the user 120 has been absent from the environment 100 for a first period of time, the electronic device 110 may enter the standby state while keeping the launched application running; if the presence of the user 120 is still not detected after a further period of time, the electronic device 110 may stop the application. In another example, the electronic device 110 may enter the standby state and stop the application at the same time, after the user 120 has been absent for some predetermined period; in that case, the application remains running until the standby state is entered.
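The two timeouts in the paragraphs above (one for entering standby, a possibly longer one for stopping the application) can be sketched as an absence tracker. This is an illustrative sketch: the class name, method API, and the default 30 s / 120 s values are assumptions, not the patent's.

```python
import time

class AbsenceTimers:
    """Tracks how long the user has been absent and reports two
    independent timeouts: entering standby and stopping a running app."""

    def __init__(self, standby_after_s: float = 30.0,
                 stop_app_after_s: float = 120.0):
        self.standby_after_s = standby_after_s
        self.stop_app_after_s = stop_app_after_s
        self.absent_since = None  # None while the user is present

    def update(self, user_present: bool, now: float = None) -> dict:
        """Call on each sensor reading; returns which actions are due."""
        now = time.monotonic() if now is None else now
        if user_present:
            self.absent_since = None  # user returned: reset both timers
            return {"enter_standby": False, "stop_app": False}
        if self.absent_since is None:
            self.absent_since = now   # absence just began
        absent_for = now - self.absent_since
        return {
            "enter_standby": absent_for >= self.standby_after_s,
            "stop_app": absent_for >= self.stop_app_after_s,
        }
```

With the defaults above, a 40 s absence would trigger standby while keeping the application alive, matching the first example in the paragraph; setting both periods equal reproduces the second example.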
The control process for the electronic device 110 is described above. FIG. 3 illustrates an example block diagram of an electronic device 110 with voice interaction functionality in accordance with an embodiment of this disclosure. In fig. 3, hardware and software modules related to voice interaction functions in the electronic device 110 are shown. How to control the operation of the respective modules of the electronic device 110 according to an embodiment of the present disclosure will be described with reference to the specific example of fig. 3.
As shown in fig. 3, the electronic device 110 includes a microphone array 310, comprising one or more microphones, for capturing sound in the environment 100 and converting it into an audio signal. The electronic device 110 also includes a voice wake-up engine 330 for identifying whether a particular wake-up command is present in the audio signals collected by the microphone array 310. If a wake-up command is present, the electronic device 110 is woken to perform further voice interaction. The electronic device 110 further includes a voice recognition engine 350 for recognizing voice control commands from the audio signals collected by the microphone array 310 after the wake-up command, so that the electronic device 110 can perform corresponding operations in response to those commands.
To improve the recognition performance of the audio signals by the voice wake-up engine 330 and the voice recognition engine 350, the electronic device 110 optionally further comprises a sound processing module 320 for performing front-end noise reduction, echo cancellation, etc. on the sound from the microphone array 310. The electronic device 110 may also include a sound post-processing module 340 for performing further sound processing, such as echo cancellation, sound source localization, beamforming, noise suppression, dereverberation, and nonlinear processing. The voice wake-up engine 330 may recognize a wake-up command from the audio signal processed by the sound processing module 320. Upon recognizing the wake-up command, the voice wake-up engine 330 instructs the sound post-processing module 340 to perform sound source localization and beamforming. The speech recognition engine 350 may recognize a speech control command from the audio signal processed through the sound processing module 320 and the sound post-processing module 340 to respond to the recognized speech control command.
According to an embodiment of the present disclosure, the electronic device 110 further comprises a sensor 302 for sensing the environment 100 in real time and providing the electronic device 110 with sensed information. The electronic device 110 detects the presence of the user 120 in the environment 100 based on the sensed information from the sensor 302. If the user 120 is not present, the electronic device 110 is in the standby state: the microphone array 310 does not collect sound, and none of the sound processing module 320, the sound post-processing module 340, the voice wake-up engine 330, or the voice recognition engine 350 performs processing. When the user 120 is detected, the electronic device 110 enters the wake-up activated state, in which the microphone array 310, the voice wake-up engine 330, and the sound processing module 320 perform their corresponding processing. In some embodiments, only one microphone of the microphone array 310 may be activated. If a wake-up command is recognized, the electronic device 110 enters the control activated state, in which the sound post-processing module 340 and the voice recognition engine 350 are also activated to perform their corresponding processing.
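The per-state activation just described can be summarized as a mapping from device state to the set of running modules; computing set differences then tells a controller which modules to start or stop on each transition. This is an illustrative sketch: the module names follow Fig. 3, but the dictionary layout and function are our own.

```python
# Which voice-interaction modules of Fig. 3 run in each device state
# (sensor 302 runs in every state and is therefore omitted).
ACTIVE_MODULES = {
    "standby": set(),
    "wake_activated": {"microphone_array_310", "sound_processing_320",
                       "voice_wakeup_engine_330"},
    "control_activated": {"microphone_array_310", "sound_processing_320",
                          "voice_wakeup_engine_330",
                          "sound_post_processing_340",
                          "voice_recognition_engine_350"},
}

def modules_to_toggle(old_state: str, new_state: str):
    """Return (to_start, to_stop) sets for a state transition."""
    old, new = ACTIVE_MODULES[old_state], ACTIVE_MODULES[new_state]
    return new - old, old - new
```

For instance, moving from the wake-up activated state to the control activated state starts only the post-processing module and the recognition engine, matching the partial-then-full activation the description emphasizes.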
It should be understood that fig. 3 only shows some of the hardware and software modules of electronic device 110 that are relevant to voice interaction. Electronic device 110 may also include hardware and/or software modules for supporting other functions. In some embodiments, electronic device 110 may also include more, fewer, or different hardware and/or software modules to implement voice interaction functionality, depending on the design of the voice interaction functionality of electronic device 110. Embodiments of the present disclosure are not limited in scope in this respect.
In the example of fig. 3, the electronic device 110 supports a wake-up command issued by the voice of the user 120. In other embodiments, if the electronic device 110 supports the user 120 issuing the wake-up command in other ways, the electronic device 110 may also include corresponding software and/or hardware modules to implement this functionality. For example, if the electronic device 110 is designed to accept a wake-up command issued by gestures of the hand, face, or whole body of the user 120, the electronic device 110 may also include an image capture device (e.g., a camera) to capture an image of the environment 100, and the voice wake-up engine 330 may determine whether the user 120 issued the wake-up command by recognizing a particular gesture from the image.
Fig. 4 illustrates a state transition diagram 400 of the electronic device 110, according to some embodiments of the present disclosure. As shown in fig. 4, when the user 120 is not present in the environment 100, the electronic device 110 is in a standby state 410. As long as the sensed information of the sensor continues to indicate that the user 120 is not present, the electronic device 110 remains in the standby state 410. When the sensed information of the sensor indicates the presence of the user 120, the electronic device 110 enters the wake-up activated state 420. In this state 420, the electronic device 110 is able to recognize a wake-up command issued by the user 120 to the electronic device 110. If no wake-up command is recognized, the electronic device 110 remains in the wake-up activated state 420. If no wake-up command is recognized and the sensed information of the sensor indicates that the user 120 is no longer in the environment 100, the electronic device 110 returns to the standby state 410. If a wake-up command is recognized, the electronic device 110 enters the control activated state 430 from the wake-up activated state 420. In the state 430, the electronic device 110 is able to respond to voice control commands of the user 120. The electronic device 110 may remain in the control activated state 430 until the sensor detects that the user 120 is no longer present in the environment 100, in which case the electronic device 110 enters the standby state 410.
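The transitions of diagram 400 can be written as a pure function of the current state and the two inputs: user presence (from the sensor) and wake-command recognition. The following is a minimal sketch; the state names follow the reference numerals in the paragraph above.

```python
from enum import Enum, auto


class State(Enum):
    STANDBY = auto()         # 410: no user present; voice components off
    WAKE_ACTIVATED = auto()  # 420: user present; wake-word detection on
    CONTROL_ACTIVATED = auto()  # 430: wake command heard; full voice control


def next_state(state, user_present, wake_command_recognized=False):
    """Return the next state of diagram 400 given the current inputs."""
    if not user_present:
        # From any state, absence of the user returns the device to standby.
        return State.STANDBY
    if state is State.STANDBY:
        return State.WAKE_ACTIVATED        # presence detected by sensor
    if state is State.WAKE_ACTIVATED and wake_command_recognized:
        return State.CONTROL_ACTIVATED     # wake-up command recognized
    return state                           # otherwise remain in place
```

Note that the only way back to standby is via the sensor reporting the user's absence, matching the description that the device may remain in the control activated state until the user leaves.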
Fig. 5 shows a schematic block diagram of an apparatus 500 for controlling an electronic device according to an embodiment of the present disclosure. Apparatus 500 may be implemented within electronic device 110 or may be implemented independently of electronic device 110. As shown in fig. 5, the apparatus 500 includes a detection module 510 configured to detect whether a user is present in an environment in which the electronic device is located based on sensed information received from the sensor. The apparatus 500 further includes a first state switching module 520 configured to cause the electronic device to enter a wake-up activated state in response to detecting the presence of the user, in which the electronic device is capable of recognizing a wake-up command of the user to the electronic device. The apparatus 500 further includes a second state switching module 530 configured to, in response to recognizing the wake-up command, cause the electronic device to enter a control-active state in which the electronic device is capable of responding to a voice control command of the user.
In some embodiments, the sensor may comprise at least one of: a Doppler sensor, an infrared pair tube (an infrared emitter-detector pair), and an infrared pyroelectric sensor.
In some embodiments, the detection module 510 may be further configured to detect whether the user is no longer in the environment based on additional sensed information received from the sensor. The apparatus 500 may further include a third state switching module configured to, in response to detecting that the user is no longer in the environment, cause the electronic device to enter a standby state in which the electronic device is unable to recognize the wake-up command or to respond to the voice control command.
In some embodiments, the third state switching module may be further configured to cause the electronic device to enter the standby state in response to detecting that the user has been absent from the environment for a first predetermined period of time.
In some embodiments, the apparatus 500 may further include an application maintenance module configured to, in response to detecting that the user is no longer in the environment, keep running an application in the electronic device that was launched in response to the voice control command.
In some embodiments, the apparatus 500 may further include an application stopping module configured to stop the running of the application in response to detecting that the user has been absent from the environment for a second predetermined period of time.
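The two absence timeouts (entering standby after a first predetermined period, stopping the launched application after a second) can be sketched as one decision function. The concrete values `t1` and `t2` are placeholders chosen for illustration; the disclosure only calls them the first and second predetermined periods.

```python
def absence_actions(absence_seconds, t1=10.0, t2=60.0):
    """Decide device behavior as the user's absence lengthens.

    t1 and t2 are illustrative stand-ins for the first and second
    predetermined periods of time in the embodiments above (t1 <= t2).
    """
    actions = []
    if absence_seconds >= t1:
        # After the first period, the device drops into the standby state.
        actions.append("enter_standby")
    if absence_seconds < t2:
        # The application launched by a voice control command keeps running,
        # even while the device itself is in standby.
        actions.append("keep_app_running")
    else:
        # Only after the longer second period is the application stopped.
        actions.append("stop_app")
    return actions
```

With these values, a user who steps away for 30 seconds finds the device back in standby but the music (say) still playing; only after a full minute is the application stopped.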
In some embodiments, the electronic device may include one of: a smart household appliance, a smart vehicle-mounted device, and a robot.
In some embodiments, the sensor may be integrated in the electronic device.
Fig. 6 illustrates a schematic block diagram of an example device 600 that can be used to implement embodiments of the present disclosure. Device 600 may be used to implement an apparatus for controlling electronic device 110 of fig. 1, or may be used to implement electronic device 110. As shown, device 600 includes a Central Processing Unit (CPU) 601 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 602 or loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Processing unit 601 performs the various methods and processes described above, such as process 200. For example, in some embodiments, process 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by CPU 601, one or more steps of process 200 described above may be performed. Alternatively, in other embodiments, CPU 601 may be configured to perform process 200 in any other suitable manner (e.g., by way of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a System on Chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (18)

1. A method of controlling an electronic device, comprising:
based on sensed information received from a sensor, detecting, while the electronic device is in a standby state, whether a user is present in an environment in which the electronic device is located, wherein in the standby state components of the electronic device related to voice interaction are not activated to recognize a wake-up command of the user to the electronic device or to respond to a voice control command;
in response to detecting the presence of the user, causing the electronic device to enter a wake-up activated state from the standby state, wherein in the wake-up activated state a component for wake-up command recognition among the voice interaction related components is activated to recognize the wake-up command of the user to the electronic device; and
in response to recognizing the wake-up command while the electronic device is in the wake-up activated state, causing the electronic device to enter a control activated state from the wake-up activated state, wherein in the control activated state the voice interaction related components are fully activated to respond to the voice control command of the user.
2. The method of claim 1, wherein the sensor comprises at least one of: a Doppler sensor, an infrared pair tube, and an infrared pyroelectric sensor.
3. The method of claim 1, further comprising:
detecting whether the user is no longer in the environment based on additional sensed information received from the sensor; and
causing the electronic device to enter the standby state from the wake-up activated state or the control activated state in response to detecting that the user is no longer in the environment.
4. The method of claim 3, wherein causing the electronic device to enter the standby state comprises:
causing the electronic device to enter the standby state in response to detecting that the user has been absent from the environment for a first predetermined period of time.
5. The method of claim 3, further comprising:
in response to detecting that the user is no longer in the environment, keeping an application running in the electronic device that was launched in response to the voice control command.
6. The method of claim 5, further comprising:
in response to detecting that the user has been absent from the environment for a second predetermined period of time, stopping execution of the application.
7. The method of claim 1, wherein the electronic device comprises one of: a smart household appliance, a smart vehicle-mounted device, and a robot.
8. The method of claim 1, wherein the sensor is integrated in the electronic device.
9. An apparatus for controlling an electronic device, comprising:
a detection module configured to detect, based on sensed information received from a sensor, whether a user is present in an environment in which the electronic device is located while the electronic device is in a standby state, wherein in the standby state components of the electronic device related to voice interaction are not activated to recognize a wake-up command of the user to the electronic device or to respond to a voice control command;
a first state switching module configured to, in response to detecting the presence of the user, cause the electronic device to enter a wake-up activated state from the standby state, wherein in the wake-up activated state a component for wake-up command recognition among the voice interaction related components is activated to recognize the wake-up command of the user to the electronic device; and
a second state switching module configured to, in response to recognizing the wake-up command while the electronic device is in the wake-up activated state, cause the electronic device to enter a control activated state from the wake-up activated state, wherein in the control activated state the voice interaction related components are fully activated to respond to the voice control command of the user.
10. The apparatus of claim 9, wherein the sensor comprises at least one of: a Doppler sensor, an infrared pair tube, and an infrared pyroelectric sensor.
11. The apparatus of claim 9, wherein the detection module is further configured to detect whether the user is no longer in the environment based on additional sensed information received from the sensor, the apparatus further comprising:
a third state switching module configured to cause the electronic device to enter the standby state from the wake-up activated state or the control activated state in response to detecting that the user is no longer in the environment.
12. The apparatus of claim 11, wherein the third state switching module is further configured to cause the electronic device to enter the standby state in response to detecting that the user has been absent from the environment for a first predetermined period of time.
13. The apparatus of claim 11, further comprising:
an application maintenance module configured to, in response to detecting that the user is no longer in the environment, keep running an application in the electronic device that was launched in response to the voice control command.
14. The apparatus of claim 13, further comprising:
an application stopping module configured to stop execution of the application in response to detecting that the user has been absent from the environment for a second predetermined period of time.
15. The apparatus of claim 9, wherein the electronic device comprises one of: a smart household appliance, a smart vehicle-mounted device, and a robot.
16. The apparatus of claim 9, wherein the sensor is integrated in the electronic device.
17. An apparatus, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
18. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the method according to any one of claims 1-8.
CN201810102160.5A 2018-02-01 2018-02-01 Method, apparatus, device and computer-readable storage medium for controlling electronic device Active CN108231079B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810102160.5A CN108231079B (en) 2018-02-01 2018-02-01 Method, apparatus, device and computer-readable storage medium for controlling electronic device


Publications (2)

Publication Number Publication Date
CN108231079A CN108231079A (en) 2018-06-29
CN108231079B true CN108231079B (en) 2021-12-07

Family

ID=62670318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810102160.5A Active CN108231079B (en) 2018-02-01 2018-02-01 Method, apparatus, device and computer-readable storage medium for controlling electronic device

Country Status (1)

Country Link
CN (1) CN108231079B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110277094A (en) * 2018-03-14 2019-09-24 阿里巴巴集团控股有限公司 Awakening method, device and the electronic equipment of equipment
CN109166575A (en) * 2018-07-27 2019-01-08 百度在线网络技术(北京)有限公司 Exchange method, device, smart machine and the storage medium of smart machine
CN108972592A (en) * 2018-08-09 2018-12-11 北京云迹科技有限公司 Intelligent awakening method and device for robot
CN109240107B (en) * 2018-09-30 2022-07-19 深圳创维-Rgb电子有限公司 Control method and device of electrical equipment, electrical equipment and medium
CN109358751A (en) * 2018-10-23 2019-02-19 北京猎户星空科技有限公司 A kind of wake-up control method of robot, device and equipment
CN109451256A (en) * 2018-10-29 2019-03-08 四川文轩教育科技有限公司 A kind of network intelligence TV based on artificial intelligence
US11151993B2 (en) * 2018-12-28 2021-10-19 Baidu Usa Llc Activating voice commands of a smart display device based on a vision-based mechanism
CN109831700B (en) * 2019-02-02 2021-08-17 深圳创维-Rgb电子有限公司 Standby mode switching method and device, electronic equipment and storage medium
CN109920420A (en) * 2019-03-08 2019-06-21 四川长虹电器股份有限公司 A kind of voice wake-up system based on environment measuring
CN110677899B (en) * 2019-08-19 2024-03-29 深圳绿米联创科技有限公司 Data transmission method, device, terminal equipment and storage medium
CN110660392A (en) * 2019-10-10 2020-01-07 珠海格力电器股份有限公司 Voice awakening method, storage medium and terminal equipment
CN111182385B (en) * 2019-11-19 2021-08-20 广东小天才科技有限公司 Voice interaction control method and intelligent sound box
CN110933345B (en) * 2019-11-26 2021-11-02 深圳创维-Rgb电子有限公司 Method for reducing television standby power consumption, television and storage medium
CN113099354A (en) * 2020-01-09 2021-07-09 上海博泰悦臻电子设备制造有限公司 Method, apparatus, and computer storage medium for information processing
CN111312241A (en) * 2020-02-10 2020-06-19 深圳创维-Rgb电子有限公司 Unmanned shopping guide method, terminal and storage medium
CN111419130A (en) * 2020-03-30 2020-07-17 珠海格力电器股份有限公司 Control method and control device of dish-washing machine and dish-washing machine
CN113568497B (en) * 2020-04-28 2023-11-14 无锡小天鹅电器有限公司 Device control method, device, electronic device and computer storage medium
CN111948964B (en) * 2020-08-12 2024-04-02 深圳市月白电子科技有限公司 Method and device for switching working states of electronic equipment matched with piano
CN112259128B (en) * 2020-10-21 2023-07-28 恒玄科技(上海)股份有限公司 Audio device and voice recognition method
CN113064805A (en) * 2021-03-29 2021-07-02 联想(北京)有限公司 Control method and control device of electronic equipment
CN114678016A (en) * 2021-04-23 2022-06-28 美的集团(上海)有限公司 Device wake-up method and system, electronic device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102981491A (en) * 2012-12-12 2013-03-20 四川长虹电器股份有限公司 Intelligent home system based on cloud platform
CN103970441A (en) * 2013-01-29 2014-08-06 三星电子株式会社 Method of performing function of device and device for performing the method
CN104620314A (en) * 2012-04-26 2015-05-13 纽昂斯通讯公司 Embedded system for construction of small footprint speech recognition with user-definable constraints
CN104991458A (en) * 2015-06-30 2015-10-21 联想(北京)有限公司 Control method and electronic equipment
CN106161755A (en) * 2015-04-20 2016-11-23 钰太芯微电子科技(上海)有限公司 A kind of key word voice wakes up system and awakening method and mobile terminal up
CN106878118A (en) * 2017-01-03 2017-06-20 美的集团股份有限公司 A kind of intelligent home appliance voice control method and system
CN107103906A (en) * 2017-05-02 2017-08-29 网易(杭州)网络有限公司 It is a kind of to wake up method, smart machine and medium that smart machine carries out speech recognition
CN107564532A (en) * 2017-07-05 2018-01-09 百度在线网络技术(北京)有限公司 Awakening method, device, equipment and the computer-readable recording medium of electronic equipment
CN107643921A (en) * 2016-07-22 2018-01-30 联想(新加坡)私人有限公司 For activating the equipment, method and computer-readable recording medium of voice assistant

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8768712B1 (en) * 2013-12-04 2014-07-01 Google Inc. Initiating actions based on partial hotwords
KR102179506B1 (en) * 2013-12-23 2020-11-17 삼성전자 주식회사 Electronic apparatus and control method thereof
CN106601250A (en) * 2015-11-10 2017-04-26 刘芨可 Speech control method and device and equipment
CN107464564B (en) * 2017-08-21 2023-05-26 腾讯科技(深圳)有限公司 Voice interaction method, device and equipment


Also Published As

Publication number Publication date
CN108231079A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN108231079B (en) Method, apparatus, device and computer-readable storage medium for controlling electronic device
EP3517849B1 (en) Household appliance control method, device and system, and intelligent air conditioner
CN106910500B (en) Method and device for voice control of device with microphone array
KR102335717B1 (en) Voice control system and wake-up method thereof, wake-up device and home appliance, coprocessor
CN110060685B (en) Voice wake-up method and device
EP3923273B1 (en) Voice recognition method and device, storage medium, and air conditioner
US8666751B2 (en) Audio pattern matching for device activation
CN108198553B (en) Voice interaction method, device, equipment and computer readable storage medium
CN108806673B (en) Intelligent device control method and device and intelligent device
CN109920419B (en) Voice control method and device, electronic equipment and computer readable medium
JP2008009120A (en) Remote controller and household electrical appliance
CN111599361A (en) Awakening method and device, computer storage medium and air conditioner
CN110767225B (en) Voice interaction method, device and system
CN112489413B (en) Control method and system of remote controller, storage medium and electronic equipment
CN112130918A (en) Intelligent device awakening method, device and system and intelligent device
US11620995B2 (en) Voice interaction processing method and apparatus
KR102395013B1 (en) Method for operating artificial intelligence home appliance and voice recognition server system
US20190130898A1 (en) Wake-up-word detection
CN110933345B (en) Method for reducing television standby power consumption, television and storage medium
CN110602197A (en) Internet of things control device and method and electronic equipment
CN101446812A (en) Control method and control device of state of equipment and equipment
CN106817653B (en) Audio setting method and device
WO2023155607A1 (en) Terminal devices and voice wake-up methods
CN116705033A (en) System on chip for wireless intelligent audio equipment and wireless processing method
CN114694661A (en) First terminal device, second terminal device and voice awakening method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant