CN108899027B - Voice analysis method and device

Voice analysis method and device

Info

Publication number
CN108899027B
Authority
CN
China
Prior art keywords
voice
voice control
scene mode
control instruction
equipment
Prior art date: 2018-08-15
Legal status: Active
Application number
CN201810929740.1A
Other languages
Chinese (zh)
Other versions
CN108899027A (en)
Inventor
韩雪
王慧君
文皓
王现林
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date: 2018-08-15
Filing date: 2018-08-15
Publication date: 2021-02-26 (grant)
Application filed by Gree Electric Appliances Inc of Zhuhai
Priority to CN201810929740.1A
Publication of CN108899027A (application)
Application granted
Publication of CN108899027B (grant)
Legal status: Active

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223: Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The invention discloses a voice analysis method and a voice analysis device. The method comprises the following steps: a voice control center is connected with a plurality of voice devices; the voice control center receives a voice control instruction issued by a user and sends the instruction to the voice device corresponding to it according to a scene mode of the voice control center, so that the voice device executes an operation according to the voice control instruction. The invention solves the technical problem that the voice control process of a voice device is cumbersome because the device must first be woken with its wake-up word before a voice control instruction can be issued.

Description

Voice analysis method and device
Technical Field
The invention relates to the field of smart homes, and in particular to a voice analysis method and device.
Background
With the spread of intelligent voice devices, voice-controlled household appliances have become widely used. When voice devices are controlled by voice, each voice device has its own unique wake-up word and set of voice control instructions, and different voice devices correspond to different wake-up words and different voice control instructions. To control a voice device in daily life, a user must first wake the device with the wake-up word corresponding to that device and then issue a voice control instruction.
Because a household usually contains several voice devices, a user who controls them by voice must memorize the wake-up word and the voice control instructions of every device. The number of instructions to remember is large, which makes voice control inconvenient and slow.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
Embodiments of the invention provide a voice analysis method and a voice analysis device, which at least solve the technical problem that the voice control process of a voice device is cumbersome because the device must first be woken with a wake-up word before a voice control instruction can be issued.
According to one aspect of the embodiments of the invention, a voice parsing method is provided, applied to a voice control center that is connected with a plurality of voice devices. The method includes: receiving a voice control instruction sent by a user; and sending the voice control instruction to the voice device corresponding to the instruction according to a scene mode of the voice control center, so that the voice device executes an operation according to the voice control instruction, wherein the scene mode includes: a single scene mode binding a single device and a multi-scene mode binding multiple devices.
Further, before sending the voice control instruction to the voice device corresponding to the instruction according to the scene mode of the voice control center, the method further includes: acquiring the wake-up instructions of all voice devices connected with the voice control center; storing the wake-up instructions and voice control instructions of all the devices in a database of the voice control center; determining the voice devices bound in the scene mode selected by the user; and acquiring, from the database, the wake-up instructions and voice control instructions corresponding to the voice devices bound in the scene mode.
Further, before sending the voice control instruction to the voice device corresponding to the instruction according to the scene mode of the voice control center, the method further includes: acquiring a keyword contained in the voice control instruction, wherein the keyword includes the device name of a voice device and/or an operation instruction supported by a voice device; judging whether the keyword matches a voice device in the scene mode; sending the voice control instruction to the target voice device matching the keyword if a voice device matching the keyword exists among the voice devices in the scene mode; and sending prompt information to the user if no voice device in the scene mode matches the keyword.
Further, sending the voice control instruction to the voice device corresponding to the instruction according to the scene mode of the voice control center includes: when the scene mode of the voice control center is the single scene mode, sending the wake-up instruction corresponding to the voice device bound to the single scene mode in advance, so as to wake the voice device; and, after the voice control instruction is received, sending the voice control instruction to the voice device.
Further, sending the voice control instruction to the voice device corresponding to the instruction according to the scene mode of the voice control center includes: when the scene mode of the voice control center is the multi-scene mode, acquiring a keyword of the voice device contained in the voice control instruction, wherein the keyword includes the device name of the voice device and/or an operation instruction supported by the voice device; determining, among the multiple voice devices bound in the multi-scene mode, the target voice device matching the keyword, wherein the target voice device includes one or more voice devices; sending the wake-up instruction corresponding to the target voice device so as to wake the target voice device; and, after the target voice device is woken, sending the voice control instruction to the target voice device.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus applied to a voice control center, the voice control center being connected to a plurality of voice devices, the apparatus including: the receiving unit is used for receiving a voice control instruction sent by a user; a first sending unit, configured to send the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, so that the voice device executes an operation according to the voice control instruction, where the scene mode includes: a single scene mode binding a single device and a multi-scene mode binding multiple devices.
Further, the apparatus further comprises: the first acquisition unit is used for acquiring awakening instructions of all voice devices connected with the voice control center; the storage unit is used for storing the awakening instructions and the voice control instructions of all the preset devices into a database of the voice control center; the determining unit is used for determining the voice equipment bound in the scene mode selected by the user; and the second acquisition unit is used for acquiring the awakening instruction and the voice control instruction corresponding to the voice equipment bound in the scene mode in the database.
Further, the apparatus further comprises: a third obtaining unit, configured to obtain a keyword included in the voice control instruction before sending the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, where the keyword includes: the device name of the voice device, and/or an operation instruction supported by the voice device; the judging unit is used for judging whether the keywords are matched with the voice equipment in the scene mode; a second sending unit, configured to send the voice control instruction to a target voice device that matches the keyword if a voice device that matches the keyword exists in the voice devices in the scene mode; and a third sending unit, configured to send a prompt message to the user when there is no voice device matching the keyword in the voice devices in the scene mode.
According to another aspect of the embodiment of the present invention, there is also provided a voice control center apparatus including the voice parsing device as described above.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the voice parsing method as described above.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the voice parsing method as described above through the computer program.
In the embodiments of the invention, the voice control center is connected with a plurality of voice devices; it receives a voice control instruction sent by a user and sends the instruction to the voice device corresponding to it according to the scene mode of the voice control center, so that the voice device executes an operation according to the instruction. This achieves the technical effect of controlling voice devices quickly, and solves the technical problem that the voice control process is cumbersome because a voice device must first be woken with a wake-up word before a voice control instruction can be issued.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a voice control hub coupled to a voice device in accordance with an embodiment of the present invention;
FIG. 2 is a flow chart illustrating an alternative speech parsing method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an alternative speech analysis apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an alternative voice control hub device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Before the technical solutions of the embodiments of the invention are introduced, an application scenario is described. The voice parsing method in the embodiments of the invention is applied to a voice control center. As shown in the connection diagram of the voice control center and the voice devices in FIG. 1, the voice control center is connected with a plurality of voice devices, and the connection may be wired or wireless. The voice control center has a speech recognition system, so it can receive and recognize voice control instructions sent by a user, and it has a storage function for storing the voice control instructions and the device codes or device names of the voice devices. It should be noted that the voice control center itself may also be a voice device, which is not limited here.
According to an embodiment of the present invention, there is provided a speech parsing method, as shown in fig. 2, the method including:
s201, receiving a voice control instruction sent by a user;
s202, sending the voice control instruction to the voice equipment corresponding to the voice control instruction according to the scene mode of the voice control center, so that the voice equipment executes operation according to the voice control instruction, wherein the scene mode comprises the following steps: a single scene mode binding a single device and a multi-scene mode binding multiple devices.
It should be noted that the scene modes of the voice control center include a single scene mode and a multi-scene mode. In the single scene mode, a single voice device is bound to the voice control center, for example a single scene mode that binds only an air conditioner or only a washing machine. In the multi-scene mode, multiple voice devices are bound, for example a multi-scene mode that simultaneously binds a fan and an air conditioner, or an induction cooker, an electric cooker and a microwave oven. As a preferred technical solution, voice devices can be bound freely according to the actual needs of the user: the user presets the scene modes in the voice control center and, when using the center, directly invokes the corresponding scene mode. As another preferred technical solution, the user may set the scene mode of the voice control center directly through a voice control instruction, and may also invoke a scene mode of the voice control center through a voice control instruction.
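To make the single/multi-scene distinction concrete, the following Python sketch models a scene mode as a named binding of device names. It is a minimal illustration only; the class name, field names and example scenes are assumptions, not part of the claimed method.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SceneMode:
    """A scene mode of the voice control center: a named binding of voice devices."""
    name: str
    bound_devices: List[str] = field(default_factory=list)

    @property
    def is_single(self) -> bool:
        # A single scene mode binds exactly one device; a multi-scene mode binds several.
        return len(self.bound_devices) == 1


# Example bindings drawn from the description above.
air_conditioner_only = SceneMode("air conditioner only", ["air conditioner"])
kitchen = SceneMode("kitchen", ["induction cooker", "electric cooker", "microwave oven"])

print(air_conditioner_only.is_single, kitchen.is_single)  # True False
```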
In the technical solution of the embodiments of the invention, when a user controls other voice devices through the voice control center by voice control instructions, the voice control center may be set to stay awake at all times, or it may be set to be woken after it receives a voice control instruction from the user. Before the user controls a voice device through the voice control center, the voice control center generally traverses the device name, device code, voice control instructions and wake-up instruction of each voice device and stores them in the voice control center; it then wakes and controls the corresponding voice device according to the user's voice control instruction, so that the user does not need to wake the voice device manually.
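The traversal-and-store step described above could be sketched as below, where the voice control center keeps a small registry of each connected device's name, device code, wake-up instruction(s) and supported control instructions. The dictionary layout, function name and example values are illustrative assumptions.

```python
# Hypothetical in-memory store kept by the voice control center.
device_registry = {}


def register_device(name, code, wake_words, control_instructions):
    """Record one connected voice device's identifiers and instruction sets."""
    device_registry[name] = {
        "code": code,
        "wake_words": list(wake_words),                      # wake-up instruction(s)
        "control_instructions": list(control_instructions),  # supported operations
    }


register_device("air conditioner", "AC-01", ["air conditioner"],
                ["turn on", "turn off", "heat", "cool"])
register_device("fan", "FAN-01", ["fan", "electric fan"],
                ["turn on", "turn off", "speed up", "slow down"])
```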
It should be noted that, in the technical solution of the embodiments of the invention, several single scene modes may exist in the voice control center at the same time, each bound to one device; several multi-scene modes may also exist, and the devices bound to different multi-scene modes may partially overlap.
According to the embodiments of the invention, the voice control center is connected with a plurality of voice devices; it receives a voice control instruction sent by a user and sends the instruction to the voice device corresponding to it according to the scene mode of the voice control center, so that the voice device executes an operation according to the instruction. This achieves the technical effect of controlling voice devices quickly, and solves the technical problem that the voice control process is cumbersome because a voice device must first be woken with a wake-up word before a voice control instruction can be issued.
As a preferred technical solution, in the embodiment of the present invention, before sending the voice control instruction to the voice device corresponding to the voice control instruction according to the scene mode of the voice control center, the method further includes, but is not limited to, one of the following:
1) acquiring a wake-up instruction corresponding to the voice equipment bound in the single scene mode under the condition that the scene mode of the voice control center is the single scene mode; saving the awakening instruction and a voice control instruction corresponding to the voice equipment;
in a specific application scenario, only a single voice device is bound in a single scene mode, and therefore only the wake-up instruction and the voice control instruction of the single voice device in the single scene mode need to be acquired, and then the wake-up instruction and the voice control instruction of the single device in the single scene mode are stored. For example, in the single scene mode of the voice air conditioner, the wake-up instruction "air conditioner" of the voice air conditioner and the voice control instruction of the voice air conditioner are saved, and the voice control instruction of the voice air conditioner includes but is not limited to "turn on", "turn off", "heat", "cool", and the like.
2) Under the condition that the scene mode of the voice control center is a multi-scene mode, acquiring a plurality of awakening instructions corresponding to a plurality of voice devices bound in the multi-scene mode; saving a plurality of awakening instructions and voice control instructions of a plurality of voice devices;
in a specific application scenario, because a plurality of voice devices are bound in a multi-scene mode, a wake-up instruction and a voice control instruction of each voice device need to be traversed, and then the wake-up instruction and the voice control instruction of the plurality of voice devices in the multi-scene mode are stored in a voice control hub, so that the voice control hub can quickly wake up and control the voice devices in the subsequent use process of the voice devices. For example, in the multi-scenario mode of the air conditioner and the fan, the voice air conditioner wake-up command "air conditioner" and the voice fan wake-up commands "fan" and "fan" are stored, and it should be noted herein that the voice device may have one or more wake-up commands, which is only an example and is not limited herein. Meanwhile, the voice control instructions of the voice air conditioner, such as turning on, turning off, heating and cooling, and the voice control instructions of the voice fan, such as turning on, turning off, accelerating the wind speed and slowing down the wind speed, are obtained.
3) Acquiring the wake-up instructions of all voice devices connected with the voice control center, storing the wake-up instructions and voice control instructions of all the devices in a database of the voice control center, determining the voice devices bound in the scene mode selected by the user, and acquiring from the database the wake-up instructions and voice control instructions corresponding to the voice devices bound in the scene mode.
In a specific application scenario, to make it convenient to acquire the wake-up instructions and voice control instructions of the voice devices, the voice control center may, before the user sets a scene mode, traverse the wake-up instructions and voice control instructions of all voice devices connected with the voice control center and store them in a database of the voice control center. After the user selects a scene mode of the voice control center, the voice control instructions and wake-up instructions of the voice devices bound in that scene mode, that is, the single voice device in a single scene mode or the multiple voice devices in a multi-scene mode, are acquired from the database, so that the wake-up instructions and voice control instructions of these devices can be obtained quickly later.
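The selection step could then be sketched as follows: once the user picks a scene mode, the center pulls from its database only the wake-up and control instructions of the devices bound in that mode. It assumes the registry layout of the earlier sketch; the names are illustrative.

```python
def instructions_for_scene(bound_device_names, registry):
    """Return the wake-up and control instructions of the devices bound in a scene mode."""
    bound = {}
    for name in bound_device_names:
        entry = registry.get(name)
        if entry is not None:
            bound[name] = {
                "wake_words": entry["wake_words"],
                "control_instructions": entry["control_instructions"],
            }
    return bound


# e.g. instructions_for_scene(["induction cooker", "electric cooker", "microwave oven"], device_registry)
# would return only the entries of the devices bound in that multi-scene mode.
```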
As a preferred technical solution, in the embodiment of the present invention, before sending the voice control instruction to the voice device corresponding to the voice control instruction according to the scene mode of the voice control center, the method further includes, but is not limited to: acquiring keywords contained in a voice control instruction, wherein the keywords comprise: the device name of the voice device, and/or an operation instruction supported by the voice device; judging whether the keywords are matched with the voice equipment in the scene mode; under the condition that the voice equipment matched with the keywords exists in the voice equipment in the scene mode, sending a voice control instruction to target voice equipment matched with the keywords; and sending prompt information to the user under the condition that the voice equipment matched with the keywords does not exist in the voice equipment in the scene mode.
In a specific application scenario, the user's voice control instruction is sometimes unclear, so the instruction needs to be recognized. Specifically, a keyword in the user's voice control instruction is acquired; the keyword generally includes the device name of a voice device and/or an operation instruction supported by a voice device. For example, in the voice control instruction "turn on the air conditioner", the device name of the voice device is "air conditioner" and the operation instruction supported by the voice device is "turn on". Whether the keyword matches a voice device in the scene mode is judged by comparing the keyword of the user's instruction against the voice control instructions and wake-up instructions of the voice devices in the current scene mode stored in the voice control center. When a voice device matching the keyword exists among the voice devices in the scene mode, for example when the current scene mode of the voice control center is a single scene mode bound to the air conditioner, it can be determined that the keyword in the voice control instruction matches the voice device, and the voice control instruction is sent to the air conditioner. When no voice device bound in the scene mode matches the keyword, for example when the current scene mode of the voice control center binds only an electric cooker and the keyword "air conditioner" in the instruction does not match the device name of the electric cooker, prompt information is sent to the user to indicate that no voice device matches, so that the user can issue the voice control instruction again or change the scene mode.
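A sketch of the keyword matching just described, assuming the spoken instruction has already been recognized into text; the simple substring test stands in for whatever matching the voice control center actually performs, and the prompt is returned as a string.

```python
def match_devices(command_text, scene_devices, registry):
    """Return the bound devices whose name or supported operation appears in the command."""
    matches = []
    for name in scene_devices:
        entry = registry.get(name, {})
        name_hit = name in command_text
        operation_hit = any(op in command_text
                            for op in entry.get("control_instructions", []))
        if name_hit or operation_hit:
            matches.append(name)
    return matches


def route_or_prompt(command_text, scene_devices, registry):
    targets = match_devices(command_text, scene_devices, registry)
    if not targets:
        # No bound device matches the keyword: ask the user to repeat or change scene mode.
        return "No matching device in this scene mode; please repeat or change the scene mode."
    return targets
```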
As a preferred technical solution, in this embodiment, the sending the voice control instruction to the voice device corresponding to the voice control instruction according to the scene mode of the voice control center includes, but is not limited to: under the condition that the scene mode of the voice control center is a single scene mode, sending an awakening instruction corresponding to the voice equipment bound in the single scene mode so as to awaken the voice equipment; and after the voice equipment is awakened, sending the voice control instruction to the voice equipment.
In a specific application scenario, when the voice control center receives a voice control instruction sent by the user, the wake-up instruction and voice control instruction of the voice device in the single scene mode are obtained according to the keyword in the instruction, and the wake-up instruction is first sent to the voice device bound by the voice control center in the current single scene mode. For example, in a single scene mode bound to an air conditioner, after the user's voice control instruction "start-up" is received, the wake-up instruction of the air conditioner is first sent to the air conditioner to wake it; the air-conditioner voice control instruction corresponding to the keyword "start-up" is then obtained and sent to the air conditioner so that the air conditioner starts.
As a preferred technical solution, for the purpose of fast response, the user's voice control instruction may not be analyzed in the single scene mode: when the voice control center receives a voice control instruction sent by the user, the wake-up instruction corresponding to the bound voice device is sent directly to the voice device bound in the single scene mode to wake it, and the user's voice control instruction is then forwarded to the voice device, so that the voice device itself recognizes the instruction and executes the corresponding operation. For example, in the single scene mode of the air conditioner, after the voice control instruction "start the air conditioner" is received from the user, the wake-up instruction of the air conditioner is sent to wake the air conditioner, and the voice control instruction "start the air conditioner" is then sent to the air conditioner so that it starts. In another preferred technical solution, for voice devices with higher security requirements, after the user selects the single scene mode and before the user's voice control instruction is received, the wake-up instruction corresponding to the voice device is sent in advance to the voice device bound in the single scene mode, so that the voice device in the single scene mode is started quickly and the security requirement of the voice device is met.
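The single-scene flow, in the fast-response variant where the command is forwarded without further matching, could look like the sketch below. send_to_device is a stand-in for the center's actual wired or wireless link, and the settling delay is an assumption.

```python
import time


def send_to_device(device_name, payload):
    # Placeholder transport: a real voice control center would use its own link to the device.
    print(f"-> {device_name}: {payload}")


def dispatch_single_scene(command_text, bound_device, wake_word, wake_delay_s=0.5):
    """Wake the single bound device, then forward the user's voice control instruction."""
    send_to_device(bound_device, wake_word)     # wake-up instruction first
    time.sleep(wake_delay_s)                    # assumed time for the device to start listening
    send_to_device(bound_device, command_text)  # then the control instruction itself


dispatch_single_scene("turn on", "air conditioner", "air conditioner")
```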
As a preferred technical solution, in this embodiment, the sending the voice control instruction to the voice device corresponding to the voice control instruction according to the scene mode of the voice control center includes, but is not limited to: acquiring keywords of the voice equipment contained in the voice control instruction under the condition that the scene mode of the voice control center is a multi-scene mode, wherein the keywords comprise the equipment name of the voice equipment and/or an operation instruction supported by the voice equipment; determining target voice equipment matched with the keywords in the multiple voice equipment bound by the multi-scene mode, wherein the target voice equipment comprises one or more voice equipment; sending a wake-up instruction corresponding to the target voice equipment so as to wake up the target voice equipment; and after the voice equipment is awakened, sending the voice control instruction to the target voice equipment.
In a specific application scenario, when the scene mode of the voice control center is a multi-scene mode, for example a multi-scene mode binding an induction cooker, an electric cooker and a microwave oven, the keyword of the voice device contained in the voice control instruction is acquired. For example, if the voice control instruction is "turn on", it is recognized that the keyword contains only an operation instruction, "turn on"; it is then judged whether the keyword matches the voice devices bound in the current multi-scene mode, and it can be determined that the operation instruction matches operation instructions of the induction cooker, the electric cooker and the microwave oven, so all three are taken as target voice devices and the voice control instruction is sent to the induction cooker, the electric cooker and the microwave oven. When the voice control instruction is "heating by induction cooker", the keyword contains the device name "induction cooker" and the operation instruction "heating"; it can be judged that the device names of the electric cooker and the microwave oven do not match "induction cooker", so the induction cooker is determined to be the target voice device and the voice control instruction "heating by induction cooker" is sent to the induction cooker. In addition, when the voice control instruction is "start cooking", the keyword can be judged to be the operation instruction "cook"; since the operation instructions supported by the induction cooker and the microwave oven do not match "cook", the electric cooker is determined to be the target voice device and the voice control instruction is sent to the electric cooker so that it executes the corresponding operation.
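The multi-scene flow could be sketched as below: the keyword match selects the target devices (several of them when the command carries only an operation word), each target is woken with its own wake-up instruction, and the command is then forwarded. The registry layout and the send callback repeat the assumptions of the earlier sketches.

```python
def dispatch_multi_scene(command_text, scene_devices, registry, send):
    """Match the command against the bound devices, wake each match, then forward the command."""
    targets = []
    for name in scene_devices:
        entry = registry.get(name, {})
        if name in command_text or any(op in command_text
                                       for op in entry.get("control_instructions", [])):
            targets.append(name)
    if not targets:
        return None  # caller can prompt the user, as in the matching sketch above
    for name in targets:
        wake_words = registry.get(name, {}).get("wake_words") or [name]
        send(name, wake_words[0])   # wake-up instruction for this target
        send(name, command_text)    # then the voice control instruction
    return targets
```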
As an optional implementation, in practical application scenarios there are wireless devices that do not support voice control and only support wireless communication, such as a television controlled only by an infrared remote controller or an air conditioner with a near field communication (NFC) function. Through its own wireless communication capability, the voice control center can establish a wireless connection, for example by code pairing or NFC, with such a wireless device. Through the voice control center, the control instructions of the wireless device are traversed and stored in the voice control center, and a device name is set for the wireless device. The wireless device can then be bound into a corresponding scene mode, and voice control of the wireless device by the user is completed through the voice control center.
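A hedged sketch of how such a non-voice wireless device might be folded into the same registry: the center stores the device's native control codes under a user-chosen name so that the keyword matching above can route spoken commands to it. The pairing itself is not shown, and the infrared code values are purely illustrative assumptions.

```python
def register_wireless_device(registry, name, code, command_map):
    """Register a device without a voice function of its own, keyed by a user-chosen name.

    command_map maps spoken operation words to the device's native control codes
    (for example infrared codes); the values below are illustrative only.
    """
    registry[name] = {
        "code": code,
        "wake_words": [],                           # no wake-up word: the center drives it directly
        "control_instructions": list(command_map),  # spoken operations usable in keyword matching
        "native_codes": dict(command_map),          # what is actually sent over IR / NFC / pairing
    }


registry = {}
register_wireless_device(registry, "television", "TV-01",
                         {"turn on": "IR_POWER", "turn off": "IR_POWER", "volume up": "IR_VOL_UP"})
```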
According to the embodiments of the invention, the voice control center is connected with a plurality of voice devices, receives the voice control instruction sent by the user, and sends it to the corresponding voice device according to the scene mode of the voice control center, so that the device executes the operation. This achieves quick control of voice devices and solves the technical problem that voice control is cumbersome because a voice device must first be woken with a wake-up word before a voice control instruction can be issued.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, there is also provided a speech analysis apparatus for implementing the speech analysis method, as shown in fig. 3, the apparatus includes:
1) a receiving unit 301, configured to receive a voice control instruction sent by a user;
2) a first sending unit 302, configured to send the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, so that the voice device executes an operation according to the voice control instruction, where the scene mode includes: a single scene mode binding a single device and a multi-scene mode binding multiple devices.
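The unit structure listed above could be composed as in the following sketch, where the receiving unit and the first sending unit are thin wrappers around dispatch logic such as that sketched in Embodiment 1; the class names mirror the units but the wiring is an assumption for illustration.

```python
class ReceivingUnit:
    """Receives the voice control instruction sent by the user (here: already-recognized text)."""
    def receive(self, recognized_text):
        return recognized_text


class FirstSendingUnit:
    """Sends the instruction to the device(s) selected by the current scene mode."""
    def __init__(self, dispatch):
        self.dispatch = dispatch  # e.g. a single-scene or multi-scene dispatch function

    def send(self, command_text, *args, **kwargs):
        return self.dispatch(command_text, *args, **kwargs)


class VoiceParsingApparatus:
    """Minimal composition of the two core units described above."""
    def __init__(self, dispatch):
        self.receiving_unit = ReceivingUnit()
        self.first_sending_unit = FirstSendingUnit(dispatch)

    def handle(self, recognized_text, *args, **kwargs):
        command = self.receiving_unit.receive(recognized_text)
        return self.first_sending_unit.send(command, *args, **kwargs)
```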
As a preferable technical solution, the apparatus further includes:
1) the first acquisition unit is used for acquiring awakening instructions of all voice devices connected with the voice control center;
2) the storage unit is used for storing the awakening instructions and the voice control instructions of all the preset devices into a database of the voice control center;
3) the determining unit is used for determining the voice equipment bound in the scene mode selected by the user;
4) and the second acquisition unit is used for acquiring the awakening instruction and the voice control instruction corresponding to the voice equipment bound in the scene mode in the database.
As a preferable technical solution, the apparatus further includes:
1) a third obtaining unit, configured to obtain a keyword included in the voice control instruction before sending the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, where the keyword includes: the device name of the voice device, and/or an operation instruction supported by the voice device;
2) the judging unit is used for judging whether the keywords are matched with the voice equipment in the scene mode;
3) a second sending unit, configured to send the voice control instruction to a target voice device that matches the keyword if a voice device that matches the keyword exists in the voice devices in the scene mode;
4) and a third sending unit, configured to send a prompt message to the user when there is no voice device matching the keyword in the voice devices in the scene mode.
Optionally, the specific example in this embodiment may refer to the example described in embodiment 1 above, and this embodiment is not described herein again.
Example 3
According to an embodiment of the present invention, there is also provided a voice control center apparatus that includes the voice parsing device described above. As shown in FIG. 4, the voice control center apparatus includes:
1) a receiving unit 401, configured to receive a voice control instruction sent by a user;
2) a sending unit 402, configured to send the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, so that the voice device executes an operation according to the voice control instruction, where the scene mode includes: a single scene mode binding a single device and a multi-scene mode binding multiple devices.
As a preferable technical solution, in an embodiment of the present invention, the apparatus further includes:
1) the first acquisition unit is used for acquiring awakening instructions of all voice devices connected with the voice control center;
2) the storage unit is used for storing the awakening instructions and the voice control instructions of all the preset devices into a database of the voice control center;
3) the determining unit is used for determining the voice equipment bound in the scene mode selected by the user;
4) and the second acquisition unit is used for acquiring the awakening instruction and the voice control instruction corresponding to the voice equipment bound in the scene mode in the database.
As a preferable technical solution, in an embodiment of the present invention, the apparatus further includes:
1) a third obtaining unit, configured to obtain a keyword included in the voice control instruction before sending the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, where the keyword includes: the device name of the voice device, and/or an operation instruction supported by the voice device;
2) the judging unit is used for judging whether the keywords are matched with the voice equipment in the scene mode;
3) a second sending unit, configured to send the voice control instruction to a target voice device that matches the keyword if a voice device that matches the keyword exists in the voice devices in the scene mode;
4) and a third sending unit, configured to send a prompt message to the user when there is no voice device matching the keyword in the voice devices in the scene mode.
Optionally, the specific example in this embodiment may refer to the example described in embodiment 1 above, and this embodiment is not described herein again.
Example 4
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium includes a stored program, and when the program runs, the device on which the storage medium is located is controlled to execute the voice parsing method described above.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s1, receiving a voice control instruction sent by a user;
s2, sending the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, so that the voice device executes operation according to the voice control instruction, wherein the scene mode comprises: a single scene mode binding a single device and a multi-scene mode binding multiple devices.
Optionally, the specific example in this embodiment may refer to the example described in embodiment 1 above, and this embodiment is not described again here.
Example 5
According to an embodiment of the present invention, there is also provided an electronic device for implementing the voice parsing method, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the voice parsing method described above by means of the computer program.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s1, receiving a voice control instruction sent by a user;
s2, sending the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, so that the voice device executes operation according to the voice control instruction, wherein the scene mode comprises: a single scene mode binding a single device and a multi-scene mode binding multiple devices.
Optionally, the specific example in this embodiment may refer to the example described in embodiment 1 above, and this embodiment is not described again here.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Optionally, the specific examples in this embodiment may refer to the examples described in embodiment 1 and embodiment 2, and this embodiment is not described herein again.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (9)

1. A voice analysis method applied to a voice control center, the voice control center being connected with a plurality of voice devices, characterized by comprising the following steps:
receiving a voice control instruction sent by a user;
sending the voice control instruction to voice equipment corresponding to the voice control instruction according to a scene mode of the voice control center, so that the voice equipment executes operation according to the voice control instruction, wherein the scene mode includes: a single scene mode binding a single device and a multi-scene mode binding a plurality of devices;
wherein, sending the voice control instruction to the voice device corresponding to the voice control instruction according to the scene mode of the voice control center comprises: when the scene mode of the voice control center is a single scene mode, sending an awakening instruction corresponding to the voice equipment bound to the single scene mode in advance so as to awaken the voice equipment; after receiving the voice control instruction, sending the voice control instruction to the voice equipment;
or, under the condition that the scene mode of the voice control center is a multi-scene mode, acquiring a keyword of the voice device contained in the voice control instruction, wherein the keyword includes a device name of the voice device and/or an operation instruction supported by the voice device; determining a target voice device matched with the keyword in a plurality of voice devices bound by a multi-scene mode, wherein the target voice device comprises one or more voice devices; sending a wake-up instruction corresponding to the target voice equipment so as to wake up the target voice equipment; and after the voice equipment is awakened, sending the voice control instruction to the target voice equipment.
2. The method according to claim 1, wherein before sending the voice control instruction to the voice device corresponding to the voice control instruction according to the scene mode of the voice control center, the method further comprises:
acquiring awakening instructions of all voice devices connected with the voice control center;
storing the awakening instructions and the voice control instructions of all the preset devices into a database of the voice control center;
determining the bound voice equipment in the scene mode selected by the user;
and acquiring a wake-up instruction and a voice control instruction corresponding to the voice equipment bound in the scene mode in the database.
3. The method according to claim 2, wherein before sending the voice control instruction to the voice device corresponding to the voice control instruction according to the scene mode of the voice control center, the method further comprises:
acquiring keywords contained in the voice control instruction, wherein the keywords comprise: the device name of the voice device, and/or an operation instruction supported by the voice device;
judging whether the keywords are matched with the voice equipment in the scene mode;
sending the voice control instruction to target voice equipment matched with the keyword under the condition that the voice equipment matched with the keyword exists in the voice equipment in the scene mode;
and sending prompt information to the user under the condition that the voice equipment matched with the keywords does not exist in the voice equipment in the scene mode.
4. A voice analysis device applied to a voice control center, wherein the voice control center is connected with a plurality of voice devices, the device comprising:
the receiving unit is used for receiving a voice control instruction sent by a user;
a first sending unit, configured to send the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, so that the voice device executes an operation according to the voice control instruction, where the scene mode includes: a single scene mode binding a single device and a multi-scene mode binding a plurality of devices;
the device is used for sending a wake-up instruction corresponding to the voice equipment bound to the single scene mode in advance to wake up the voice equipment under the condition that the scene mode of the voice control center is the single scene mode; after receiving the voice control instruction, sending the voice control instruction to the voice equipment;
or, the apparatus is configured to, when a scene mode of the voice control center is a multi-scene mode, obtain a keyword of the voice device included in the voice control instruction, where the keyword includes a device name of the voice device and/or an operation instruction supported by the voice device; determining a target voice device matched with the keyword in a plurality of voice devices bound by a multi-scene mode, wherein the target voice device comprises one or more voice devices; sending a wake-up instruction corresponding to the target voice equipment so as to wake up the target voice equipment; and after the voice equipment is awakened, sending the voice control instruction to the target voice equipment.
5. The apparatus of claim 4, further comprising:
the first acquisition unit is used for acquiring awakening instructions of all voice devices connected with the voice control center;
the storage unit is used for storing the awakening instructions and the voice control instructions of all the preset devices into a database of the voice control center;
the determining unit is used for determining the voice equipment bound in the scene mode selected by the user;
and the second acquisition unit is used for acquiring the awakening instruction and the voice control instruction corresponding to the voice equipment bound in the scene mode in the database.
6. The apparatus of claim 5, further comprising:
a third obtaining unit, configured to obtain a keyword included in the voice control instruction before sending the voice control instruction to a voice device corresponding to the voice control instruction according to a scene mode of the voice control center, where the keyword includes: the device name of the voice device, and/or an operation instruction supported by the voice device;
the judging unit is used for judging whether the keywords are matched with the voice equipment in the scene mode;
a second sending unit, configured to send the voice control instruction to a target voice device that matches the keyword if a voice device that matches the keyword exists in the voice devices in the scene mode;
and a third sending unit, configured to send a prompt message to the user when there is no voice device matching the keyword in the voice devices in the scene mode.
7. A speech control center apparatus comprising the speech parsing device of any one of claims 4-6.
8. A storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, a device in which the storage medium is located is controlled to execute the voice parsing method of any one of claims 1 to 3.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the speech parsing method of any one of claims 1 to 3 by the computer program.
CN201810929740.1A, 2018-08-15 (priority), 2018-08-15 (filed): Voice analysis method and device. Granted as CN108899027B (en). Status: Active.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810929740.1A CN108899027B (en) 2018-08-15 2018-08-15 Voice analysis method and device


Publications (2)

Publication Number Publication Date
CN108899027A CN108899027A (en) 2018-11-27
CN108899027B true CN108899027B (en) 2021-02-26

Family

ID=64354479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810929740.1A Active CN108899027B (en) 2018-08-15 2018-08-15 Voice analysis method and device

Country Status (1)

Country Link
CN (1) CN108899027B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109521908B (en) * 2018-11-28 2021-12-24 广州朗国电子科技有限公司 Switching method of single-finger and multi-finger touch modes of intelligent whiteboard
CN109640217A (en) * 2018-12-19 2019-04-16 维沃移动通信有限公司 A kind of speaker control method and terminal device
CN109859752A (en) * 2019-01-02 2019-06-07 珠海格力电器股份有限公司 Voice control method, device, storage medium and voice joint control system
CN110600027B (en) * 2019-08-26 2022-12-02 深圳市丰润达科技有限公司 Voice terminal scene control method, voice terminal scene application method, voice terminal, cloud and system
CN111429917B (en) * 2020-03-18 2023-09-22 北京声智科技有限公司 Equipment awakening method and terminal equipment
CN111640434A (en) * 2020-06-05 2020-09-08 三星电子(中国)研发中心 Method and apparatus for controlling voice device
CN113284489B (en) * 2021-04-16 2024-07-09 珠海格力电器股份有限公司 Voice equipment control method and device, storage medium and voice equipment
CN115242571A (en) * 2021-04-25 2022-10-25 佛山市顺德区美的电热电器制造有限公司 Distributed voice interaction method and device, readable storage medium and household appliance
CN115250208A (en) * 2021-04-27 2022-10-28 佛山市顺德区美的电热电器制造有限公司 Control method and device, storage medium, household appliance and control equipment
CN114913851A (en) * 2022-04-19 2022-08-16 青岛海尔空调器有限总公司 Method and device for controlling voice equipment, voice equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103197571A (en) * 2013-03-15 2013-07-10 张春鹏 Control method, device and system
CN108306797A (en) * 2018-01-30 2018-07-20 百度在线网络技术(北京)有限公司 Sound control intelligent household device, method, system, terminal and storage medium
CN108320747A (en) * 2018-02-08 2018-07-24 广东美的厨房电器制造有限公司 Appliances equipment control method, equipment, terminal and computer readable storage medium
CN108337139A (en) * 2018-01-29 2018-07-27 广州索答信息科技有限公司 Home appliance voice control method, electronic equipment, storage medium and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6671379B2 (en) * 2014-10-01 2020-03-25 エクスブレイン・インコーポレーテッド Voice and connectivity platforms


Also Published As

Publication number Publication date
CN108899027A (en) 2018-11-27

Similar Documents

Publication Publication Date Title
CN108899027B (en) Voice analysis method and device
CN108006889B (en) Air conditioner control method and device
WO2021196638A1 (en) Household appliance control method and apparatus, and computer storage medium
US10264424B2 (en) Information processing method and central control device
CN108337139A (en) Home appliance voice control method, electronic equipment, storage medium and system
CN108521883A (en) Control the method and device of network insertion
CN103578472A (en) Method and device for controlling electrical equipment
CN112526892B (en) Method and device for controlling intelligent household equipment and electronic equipment
CN104898440B (en) Household electric appliance control method and device
CN113531818A (en) Running mode pushing method and device for air conditioner and air conditioner
CN111487884A (en) Storage medium, and intelligent household scene generation device and method
CN110839171A (en) Method and device for applying television screen saver and computer storage medium
CN108521355A (en) Method, intelligent terminal, household appliance and the device of self-defined voice control device
CN112925219A (en) Method and device for executing smart home scene
CN114120996A (en) Voice interaction method and device
CN104883719A (en) Method, apparatus and system of accessing to wireless local area network by wireless input device
CN103970103A (en) Intelligent home scene control method and system and scene controller
CN114415530A (en) Control method, control device, electronic equipment and storage medium
CN114253147A (en) Intelligent device control method and device, electronic device and storage medium
CN107205094B (en) Device control method and device, electronic device and terminal
CN113341738A (en) Method, device and equipment for controlling household appliance
CN113825004B (en) Multi-screen sharing method and device for display content, storage medium and electronic device
CN112781248B (en) Voice control method and device for intelligent water heater, electronic equipment and storage medium
CN110908498A (en) Gesture associated control function method and terminal equipment
CN110164426A (en) Sound control method and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant