CN112256230A - Menu interaction method and system and storage medium - Google Patents

Menu interaction method and system and storage medium

Info

Publication number
CN112256230A
Authority
CN
China
Prior art keywords
menu
voice
display
instruction
switching instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011109051.XA
Other languages
Chinese (zh)
Other versions
CN112256230B (en)
Inventor
胡子坚
孙峰
孙涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Midea Group Co Ltd
Guangdong Midea Kitchen Appliances Manufacturing Co Ltd
Original Assignee
Midea Group Co Ltd
Guangdong Midea Kitchen Appliances Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Midea Group Co Ltd, Guangdong Midea Kitchen Appliances Manufacturing Co Ltd
Priority to CN202011109051.XA
Publication of CN112256230A
Application granted
Publication of CN112256230B
Active legal-status Current
Anticipated expiration legal-status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/162Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a recipe interaction method, a recipe interaction system, and a storage medium. The recipe interaction method is used in a recipe interaction system that includes a voice device and a display device. The recipe interaction method includes: acquiring a recipe step according to a recipe start instruction, wherein the recipe step includes voice broadcast information and display information; sending the voice broadcast information to the voice device, and sending the display information to the display device; and broadcasting the recipe step by the voice device according to the voice broadcast information, and displaying the recipe step by the display device according to the display information. The recipe interaction method of the embodiments of the invention broadcasts the recipe step through the voice device and displays it through the display device, so that voice and display are handled by different devices and the user can follow the recipe more easily.

Description

Menu interaction method and system and storage medium
Technical Field
The invention relates to the field of intelligent cooking, and in particular to a recipe interaction method, a recipe interaction system, and a storage medium.
Background
In the related art, with the popularization of smart speakers, more and more households own smart speakers or other devices capable of voice interaction. Limited by factors such as price and cost, most smart speakers provide a voice interaction entry but have no screen, so in many scenarios, such as recipe guidance, the user cannot fully understand the content from voice alone.
Disclosure of Invention
Embodiments of the invention provide a recipe interaction method, a recipe interaction system, and a storage medium.
The recipe interaction method of the embodiments of the invention is used in a recipe interaction system that includes a voice device and a display device. The recipe interaction method includes: acquiring a recipe step according to a recipe start instruction, wherein the recipe step includes voice broadcast information and display information; sending the voice broadcast information to the voice device, and sending the display information to the display device; and broadcasting the recipe step by the voice device according to the voice broadcast information, and displaying the recipe step by the display device according to the display information.
In the recipe interaction method of the embodiments of the invention, the recipe step is broadcast by the voice device and displayed by the display device, so that voice and display are handled by different devices and the user can follow the recipe more easily.
In some embodiments, acquiring the recipe step according to the recipe start instruction includes: generating, by the voice device, the recipe start instruction according to an acquired voice instruction; and/or generating, by the display device, the recipe start instruction according to an acquired input instruction.
In some embodiments, acquiring the recipe step according to the recipe start instruction includes: sending the recipe start instruction to a cloud; and receiving the recipe step that the cloud synchronously sends to the voice device and the display device.
In some embodiments, the recipe interaction system includes a cooking device, and acquiring the recipe step according to the recipe start instruction includes: sending the recipe start instruction to the cooking device; and receiving the recipe step that the cooking device synchronously sends to the voice device and the display device.
In some embodiments, the recipe interaction method includes: acquiring a recipe step switching instruction, and, according to the recipe step switching instruction, synchronously sending the voice broadcast information of the switched recipe step to the voice device and the display information of the switched recipe step to the display device.
In some embodiments, acquiring the recipe step switching instruction includes: acquiring, by the voice device, a first switching instruction, and acquiring, by the display device, a second switching instruction; and when the interval between the acquired first switching instruction and second switching instruction is less than a preset duration and the first switching instruction is consistent with the second switching instruction, generating the recipe step switching instruction from the first switching instruction or the second switching instruction.
In some embodiments, the recipe interaction method includes: when the interval between the acquired first switching instruction and second switching instruction is less than the preset duration and the first switching instruction is inconsistent with the second switching instruction, ignoring the first switching instruction and the second switching instruction, or issuing a prompt indicating an instruction conflict.
In some embodiments, the voice device includes a display unit and the display device includes a voice unit, and the recipe interaction method includes: sending the voice broadcast information to the display device, and sending the display information to the voice device; and broadcasting the recipe step by the voice unit according to the voice broadcast information, and displaying the recipe step by the display unit according to the display information.
In some embodiments, the total number of voice devices and display devices is at least three, and sending the voice broadcast information to the voice device and the display information to the display device includes: determining, according to a selection instruction, the voice device to receive the voice broadcast information and the display device to receive the display information; and sending the voice broadcast information to the determined voice device, and sending the display information to the determined display device.
The recipe interaction system of the embodiments of the invention includes a controller configured to implement the steps of the recipe interaction method of any of the above embodiments.
In the recipe interaction system of the embodiments of the invention, the recipe step is broadcast by the voice device and displayed by the display device, so that voice and display are handled by different devices and the user can follow the recipe more easily.
The computer-readable storage medium of the embodiments of the invention stores a computer program which, when executed by a processor, implements the steps of the recipe interaction method of any of the above embodiments.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of a recipe interaction method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a recipe interaction system according to an embodiment of the present invention;
FIG. 3 is an interaction diagram of a recipe interaction system according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart diagram of a recipe interaction method according to an embodiment of the invention;
FIG. 5 is a block diagram of a recipe interaction system according to an embodiment of the present invention;
FIGS. 6 to 11 are schematic flow charts of a recipe interaction method according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the embodiments of the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
In the description of the embodiments of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as fixedly connected, detachably connected, or integrally connected; as mechanically connected, electrically connected, or in communication with each other; and as directly connected, indirectly connected through an intermediate medium, or in internal communication between two elements. Specific meanings of the above terms in the embodiments of the present invention can be understood by those of ordinary skill in the art according to the specific situation.
Referring to fig. 1 and fig. 2, a recipe interaction method according to an embodiment of the present invention is provided. The recipe interaction method is used in a recipe interaction system 100 that includes a voice device 10 and a display device 20, and the recipe interaction method includes:
step 01: acquiring a recipe step according to a recipe start instruction, wherein the recipe step includes voice broadcast information and display information;
step 02: sending the voice broadcast information to the voice device 10, and sending the display information to the display device 20;
step 03: broadcasting the recipe step by the voice device 10 according to the voice broadcast information, and displaying the recipe step by the display device 20 according to the display information.
The recipe interaction method according to the embodiment of the present invention can be implemented by the recipe interaction system 100 according to the embodiment of the present invention. Specifically, referring to fig. 2, the recipe interaction system 100 includes a controller 30. Step 01, step 02, and step 03 can be implemented by the controller 30; that is, the controller 30 is configured to: acquire a recipe step according to a recipe start instruction, wherein the recipe step includes voice broadcast information and display information; send the voice broadcast information to the voice device 10 and the display information to the display device 20; and cause the voice device 10 to broadcast the recipe step according to the voice broadcast information and the display device 20 to display the recipe step according to the display information.
The recipe interaction method and system 100 of the embodiment of the invention broadcast the recipe step through the voice device 10 and display it through the display device 20, so that voice and display are handled by different devices and the user can follow the recipe more easily.
It can be understood that, when a user wants to use both the voice broadcast information and the display information while cooking, the user can obtain both through the recipe interaction system 100. Specifically, the user may input a recipe start instruction, and the controller 30 may acquire a recipe step according to the recipe start instruction, where the recipe step includes voice broadcast information and display information. In some embodiments, the voice broadcast information and the display information may be associated with each other through the recipe step, and the recipe step may include related voice broadcast information and display information such as the cooking ingredients and their weights, cooking steps, expected cooking duration, a cooking knowledge base, and cooking tips. The voice broadcast information presents the recipe step through audio playback, and the display information presents the recipe step through pictures, video, and text. The controller 30 may send the voice broadcast information to the voice device 10 and the display information to the display device 20; the voice device 10 broadcasts the recipe step according to the voice broadcast information, and the display device 20 displays the recipe step according to the display information. The voice device 10 may include electronic devices that can convert electrical signals into acoustic signals, such as a Bluetooth speaker, a smart home speaker, or a loudspeaker. The display device 20 includes electronic devices with a display screen capable of presenting information, such as a smartphone, a tablet computer, a television, or a notebook computer. In the recipe interaction system 100 of the embodiment of the invention, the voice device 10 and the display device 20 are separate, so the user can obtain the voice broadcast information and the display information on different electronic devices, which makes it more convenient to follow the recipe steps in a cooking scenario. Specifically, existing voice devices and display devices that are expected to participate in the recipe interaction can be bound in advance, and the voice device 10 and the display device 20 realize recipe interaction in a linked and synchronized manner. In some embodiments, the voice device 10 and the display device 20 may each be portable electronic devices, which makes it easy for the user to place them where needed.
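As a rough illustration of the dispatch described above, the following minimal Python sketch splits a recipe step into its voice part and display part and routes each to its bound device. The RecipeStep fields, the device interfaces, and the RecipeController class are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class RecipeStep:
    index: int
    voice_broadcast: str                          # text to be spoken (ingredients, weights, timing)
    display: dict = field(default_factory=dict)   # pictures, video URLs, step text

class VoiceDevice(Protocol):
    def broadcast(self, text: str) -> None: ...

class DisplayDevice(Protocol):
    def show(self, content: dict) -> None: ...

class RecipeController:
    """Hypothetical controller 30: acquires a recipe step and dispatches it."""
    def __init__(self, recipes: dict, voice: VoiceDevice, display: DisplayDevice):
        self.recipes = recipes    # recipe name -> list of RecipeStep
        self.voice = voice
        self.display = display

    def start(self, start_instruction: dict) -> RecipeStep:
        # Step 01: acquire the first recipe step named by the start instruction.
        step = self.recipes[start_instruction["recipe_name"]][0]
        # Steps 02/03: send each part of the step to the device responsible for it.
        self.voice.broadcast(step.voice_broadcast)
        self.display.show(step.display)
        return step
```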
It will be appreciated that, in some embodiments, all of the functions of the controller 30 may be performed by the controller, processor, control board, or computer board of the voice device 10 itself.
In some embodiments, all of the functions of the controller 30 may be performed by the controller, processor, control board, or computer board of the display device 20 itself.
In some embodiments, part of the functions of the controller 30 can be implemented by the controller, processor, control board, or computer board of the voice device 10 itself, and another part of the functions of the controller 30 can be implemented by the controller, processor, control board, or computer board of the display device 20 itself.
In some embodiments, all of the functions of the controller 30 may be implemented by a separately provided control box or control terminal that includes a controller, processor, control board, or computer board. This is not particularly limited in the embodiments of the present invention.
In some embodiments, acquiring the recipe step according to the recipe start instruction further includes:
generating, by the voice device 10, the recipe start instruction according to an acquired voice instruction; and/or
generating, by the display device 20, the recipe start instruction according to an acquired input instruction.
In some embodiments, the above steps may be implemented by the controller 30; that is, the controller 30 is configured to: cause the voice device 10 to generate the recipe start instruction according to the acquired voice instruction; and/or cause the display device 20 to generate the recipe start instruction according to the acquired input instruction.
In this way, the recipe start instruction used to acquire the recipe step can be generated in two different ways, giving the user a better experience.
Specifically, in one embodiment, the recipe start instruction may be input through the voice device 10: the voice device 10 generates the recipe start instruction according to the acquired voice instruction, and the voice device 10 may include an acoustic-electric element that converts an acoustic signal into an electrical signal to generate the voice instruction. In another embodiment, the recipe start instruction may be input through the display device 20: the display device 20 generates the recipe start instruction according to the acquired input instruction, and the display device 20 may include a touch display screen and/or keys, so that the user may enter information such as a recipe name on the touch display screen or through the keys to generate the input instruction. In yet another embodiment, the recipe start instruction may be generated using both the voice device 10 and the display device 20. This is not particularly limited herein.
In one example, when the user wants to generate a recipe start instruction through the voice device 10, the user may say "please help me search for a chiffon cake recipe", so that the voice device 10 can generate the recipe start instruction according to the acquired voice instruction. When the user wants to generate a recipe start instruction through the display device 20, the input instruction may be "how to make a chiffon cake", and the display device 20 may generate the recipe start instruction according to the acquired input instruction.
Referring to fig. 3 and fig. 4 together, in some embodiments, acquiring the recipe step according to the recipe start instruction includes:
step 013: sending the recipe start instruction to the cloud 40;
step 014: receiving the recipe step that the cloud 40 synchronously sends to the voice device 10 and the display device 20.
In certain embodiments, both step 013 and step 014 may be implemented by the controller 30; that is, the controller 30 is configured to: send the recipe start instruction to the cloud 40; and receive the recipe step that the cloud 40 synchronously sends to the voice device 10 and the display device 20.
In this way, the voice device 10 and the display device 20 can obtain the information of the recipe steps through the cloud 40, realizing recipe interaction in a linked and synchronized manner.
Specifically, the controller 30 may send the recipe start instruction to the cloud 40. The cloud 40 may include a cloud server, which is highly distributed and highly virtualized, can be configured on demand, and can be flexibly adjusted. The cloud 40 can host intelligent recipe information, including voice broadcast information and display information such as cooking steps, cooking pictures, a cooking knowledge base, and tips. In some embodiments, the voice device 10 and the display device 20 may be bound in advance, and the cloud 40 may synchronously send the recipe step to the pre-bound voice device 10 and display device 20.
In the embodiment shown in fig. 4, the voice device 10 generates the recipe start instruction according to the acquired voice instruction and sends the recipe start instruction to the cloud 40. In other embodiments, the display device 20 generates the recipe start instruction according to the acquired input instruction and sends the recipe start instruction to the cloud 40.
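A minimal sketch of how such a cloud might push the same recipe step to every pre-bound device is given below. The RecipeCloud class, its dictionary-based storage, and the broadcast/show method names are assumptions for illustration; the patent only requires that the cloud synchronously sends the step to the bound devices.

```python
class RecipeCloud:
    """Illustrative cloud 40: holds recipe steps and pushes a step to pre-bound devices."""
    def __init__(self):
        self.recipes = {}        # recipe name -> list of step dicts ("voice_broadcast", "display")
        self.bound_voice = []    # pre-bound voice devices
        self.bound_display = []  # pre-bound display devices

    def bind(self, voice_devices, display_devices):
        # Binding in advance, before the recipe interaction starts.
        self.bound_voice.extend(voice_devices)
        self.bound_display.extend(display_devices)

    def handle_start(self, start_instruction):
        # Step 013: the start instruction names the recipe to look up.
        steps = self.recipes[start_instruction["recipe_name"]]
        # Step 014: push the first step to every bound device at the same time.
        self.push_step(steps[0])
        return steps[0]

    def push_step(self, step):
        for dev in self.bound_voice:
            dev.broadcast(step["voice_broadcast"])
        for dev in self.bound_display:
            dev.show(step["display"])
```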
Referring to fig. 5 and fig. 6 together, in some embodiments, the recipe interaction system 100 includes a cooking device 50, and acquiring the recipe step according to the recipe start instruction includes:
step 015: sending the recipe start instruction to the cooking device 50;
step 016: receiving the recipe step that the cooking device 50 synchronously sends to the voice device 10 and the display device 20.
In some embodiments, step 015 and step 016 may both be implemented by the controller 30; that is, the controller 30 is configured to: send the recipe start instruction to the cooking device 50; and receive the recipe step that the cooking device 50 synchronously sends to the voice device 10 and the display device 20.
In this way, the recipe step that the cooking device 50 synchronously sends to the voice device 10 and the display device 20 can be received, so that the recipe step is suited to the cooking device 50.
Specifically, the recipe interaction system 100 includes a cooking device 50, which may include devices with a cooking function such as an induction cooker, a gas range, an electric rice cooker, a pressure cooker, an air fryer, an oven, or a microwave oven. The controller 30 may send the recipe start instruction to the cooking device 50; the cooking device 50 may search its own recipe database according to the recipe name included in the recipe start instruction, or send a recipe request to the cloud 40 so that the cloud 40 returns the recipe steps it finds; the cooking device 50 may then synchronously send the acquired recipe step to the voice device 10 and the display device 20.
In some embodiments, some or all of the functions of the controller 30 may be implemented by the controller, processor, control board, or computer board of the cooking device 50 itself.
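The sketch below illustrates one way a cooking device could resolve the recipe locally and fall back to the cloud before forwarding the step to both devices. The CookingDevice class, its local_recipes dictionary, and the fallback logic are illustrative assumptions rather than behavior mandated by the patent.

```python
class CookingDevice:
    """Illustrative cooking device 50: resolves a recipe locally or via the cloud,
    then forwards the step to the voice device and the display device."""
    def __init__(self, local_recipes, cloud, voice_device, display_device):
        self.local_recipes = local_recipes  # recipe name -> list of step dicts
        self.cloud = cloud
        self.voice_device = voice_device
        self.display_device = display_device

    def handle_start(self, start_instruction):
        name = start_instruction["recipe_name"]
        # Step 015/016: search the device's own recipe database first.
        steps = self.local_recipes.get(name)
        if steps is None:
            # Fall back to requesting the recipe from the cloud.
            steps = self.cloud.recipes[name]
        step = steps[0]
        # Synchronously forward the acquired step to both devices.
        self.voice_device.broadcast(step["voice_broadcast"])
        self.display_device.show(step["display"])
        return step
```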
Referring to fig. 3 and fig. 7 together, in some embodiments, the recipe interaction method includes:
step 04: acquiring a recipe step switching instruction;
step 05: synchronously sending the voice broadcast information of the switched recipe step to the voice device 10 and the display information of the switched recipe step to the display device 20 according to the recipe step switching instruction.
In certain embodiments, both step 04 and step 05 may be implemented by the controller 30; that is, the controller 30 is configured to: acquire a recipe step switching instruction, and, according to the recipe step switching instruction, synchronously send the voice broadcast information of the switched recipe step to the voice device 10 and the display information of the switched recipe step to the display device 20.
In this way, the user can switch recipe steps more conveniently and quickly according to cooking needs and obtain the corresponding voice broadcast information and display information.
Specifically, the recipe step switching instruction may be an instruction to switch to the next step, to switch to the previous step, and so on. The recipe step switching instruction may be a voice instruction generated by the user through the voice device 10; for example, the user can say "please help me switch to the next step". The recipe step switching instruction may also be an input instruction generated by the user through the display device 20; for example, the display device 20 provides an operation interface, and the user can tap or swipe the "next step" button on that interface to switch the recipe step. The controller 30 acquires the recipe step switching instruction and, according to it, synchronously sends the voice broadcast information of the switched recipe step to the voice device 10 and the display information to the display device 20, thereby realizing recipe interaction, as illustrated in the sketch below.
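A minimal sketch of step switching, assuming a list of step dicts and "next"/"previous" instructions; the StepSwitcher class, the index bounds handling, and the field names are hypothetical conveniences, not part of the patent.

```python
class StepSwitcher:
    """Illustrative handling of a recipe step switching instruction:
    keep the current index and re-dispatch the new step to both devices."""
    def __init__(self, steps, voice_device, display_device):
        self.steps = steps            # list of step dicts ("voice_broadcast", "display")
        self.index = 0
        self.voice_device = voice_device
        self.display_device = display_device

    def switch(self, instruction):
        # Step 04: interpret the switching instruction.
        if instruction == "next" and self.index < len(self.steps) - 1:
            self.index += 1
        elif instruction == "previous" and self.index > 0:
            self.index -= 1
        # Step 05: synchronously send the switched step to both devices.
        step = self.steps[self.index]
        self.voice_device.broadcast(step["voice_broadcast"])
        self.display_device.show(step["display"])
        return self.index
```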
Referring to fig. 3 and fig. 8, in some embodiments, acquiring the recipe step switching instruction includes:
step 042: acquiring, by the voice device 10, a first switching instruction, and acquiring, by the display device 20, a second switching instruction;
step 044: when the interval between the acquired first switching instruction and second switching instruction is less than a preset duration and the first switching instruction is consistent with the second switching instruction, generating the recipe step switching instruction from the first switching instruction or the second switching instruction.
In certain embodiments, both step 042 and step 044 may be implemented by the controller 30; that is, the controller 30 is configured to: acquire, through the voice device 10, a first switching instruction and, through the display device 20, a second switching instruction; and, when the interval between the acquired first switching instruction and second switching instruction is less than the preset duration and the first switching instruction is consistent with the second switching instruction, generate the recipe step switching instruction from the first switching instruction or the second switching instruction.
In this way, inconvenience caused by conflicting switching instructions can be avoided.
Specifically, since the instruction input paths include both the voice device 10 and the display device 20, a user may input a switching instruction through the voice device 10 and, at almost the same moment, input a switching instruction through the display device 20. To avoid system conflicts, when the interval between the first switching instruction acquired by the voice device 10 and the second switching instruction acquired by the display device 20 is less than the preset duration and the two instructions are consistent, the recipe step switching instruction is generated from the first switching instruction or the second switching instruction. For example, the preset duration may be 2 seconds: when the first switching instruction acquired by the voice device 10 is a "next step" instruction and, within 2 seconds, the second switching instruction acquired by the display device 20 is also a "next step" instruction, the recipe step switching instruction may be generated from either instruction, avoiding inconvenience caused by duplicate instructions. After the recipe step switching instruction is confirmed, the voice broadcast information of the corresponding step is sent to the voice device 10 and the corresponding display information is synchronously sent to the display device 20.
Referring to fig. 9, in some embodiments, acquiring the recipe step switching instruction includes:
step 046: when the interval between the acquired first switching instruction and second switching instruction is less than the preset duration and the first switching instruction is inconsistent with the second switching instruction, ignoring the first switching instruction and the second switching instruction, or issuing a prompt indicating an instruction conflict.
In certain embodiments, step 046 may be implemented by the controller 30; that is, the controller 30 is configured to: when the interval between the acquired first switching instruction and second switching instruction is less than the preset duration and the first switching instruction is inconsistent with the second switching instruction, ignore the first switching instruction and the second switching instruction, or issue a prompt indicating an instruction conflict.
In this way, conflicts between the first switching instruction and the second switching instruction can be handled.
Specifically, when the interval between the acquired first switching instruction and second switching instruction is less than the preset duration and the first switching instruction is inconsistent with the second switching instruction, the first switching instruction and the second switching instruction can be ignored and no step switching is performed. Alternatively, a prompt indicating an instruction conflict can be issued to remind the user to confirm the first switching instruction and the second switching instruction again, which ensures the accuracy of the switching instruction used to acquire the recipe step. The prompt may be given by the voice device 10 as a spoken prompt, or by the display device 20 as prompt text, graphics, and the like.
In an example, the preset duration may be 2 seconds: when the first switching instruction acquired by the voice device 10 is a "next step" instruction and, within 2 seconds, the second switching instruction acquired by the display device 20 is a "previous step" instruction, the first switching instruction is inconsistent with the second switching instruction. Both instructions may then be ignored, with the voice device 10 continuing to broadcast the voice broadcast information of the current recipe step and the display device 20 continuing to display the display information of the current recipe step; alternatively, the voice device 10 and/or the display device 20 may issue a prompt indicating an instruction conflict.
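The arbitration described in the last two embodiments can be sketched as a small decision function. The 2-second window matches the example above, while the function name, signature, and the handling of instructions that fall outside the window are assumptions for illustration only.

```python
PRESET_DURATION = 2.0  # seconds; the 2-second window used in the example above

def resolve_switch(first_cmd, first_time, second_cmd, second_time,
                   preset=PRESET_DURATION):
    """Arbitrate a first switching instruction (from the voice device) and a
    second switching instruction (from the display device), e.g. "next"/"previous".
    Returns the recipe step switching instruction to apply, or "conflict" when
    both should be ignored (or an instruction-conflict prompt issued instead)."""
    if abs(second_time - first_time) >= preset:
        # Far apart in time: no conflict handling needed; apply the later one here.
        return second_cmd if second_time >= first_time else first_cmd
    if first_cmd == second_cmd:
        # Consistent instructions within the window: switch once.
        return first_cmd
    # Inconsistent instructions within the window: ignore both or prompt.
    return "conflict"
```

For example, resolve_switch("next", 0.0, "previous", 1.5) returns "conflict", whereas resolve_switch("next", 0.0, "next", 1.5) returns "next".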
Referring to fig. 10, in some embodiments, the voice device 10 includes a display unit and the display device 20 includes a voice unit, and the recipe interaction method includes:
step 06: sending the voice broadcast information to the display device 20, and sending the display information to the voice device 10;
step 07: broadcasting the recipe step by the voice unit according to the voice broadcast information, and displaying the recipe step by the display unit according to the display information.
In certain embodiments, both step 06 and step 07 may be implemented by the controller 30; that is, the controller 30 is configured to: send the voice broadcast information to the display device 20 and the display information to the voice device 10; and cause the voice unit to broadcast the recipe step according to the voice broadcast information and the display unit to display the recipe step according to the display information.
In one example, the voice device 10 includes a display unit, which may include a display screen; for instance, the voice device 10 may be a Bluetooth speaker with a display screen. The display device 20 includes a voice unit, which may include a loudspeaker; for instance, the display device 20 may be a smartphone with a loudspeaker and a display screen. The controller 30 can send the voice broadcast information to the display device 20 and the display information to the voice device 10, so that the voice unit broadcasts the recipe step according to the voice broadcast information and the display unit displays the recipe step according to the display information. The user can thus obtain the recipe step on different devices, which is more convenient in a cooking scenario.
In this embodiment, the user can freely choose which devices play the voice broadcast information and which show the display information as needed. For example, where the voice device 10 includes a display unit and the display device 20 includes a voice unit, the user can select the voice device 10 to play the voice broadcast information and select both the display unit of the voice device 10 and the display device 20 to show the display information simultaneously. In this way, the user can obtain the recipe step more conveniently in a cooking scenario, meeting various user preferences for how the recipe step is obtained.
Referring to fig. 11, in some embodiments, the total number of voice devices 10 and display devices 20 is at least three, and sending the voice broadcast information to the voice device 10 and the display information to the display device 20 includes:
step 022: determining, according to a selection instruction, the voice device 10 to receive the voice broadcast information and the display device 20 to receive the display information;
step 024: sending the voice broadcast information to the determined voice device 10, and sending the display information to the determined display device 20.
In certain embodiments, both step 022 and step 024 may be implemented by the controller 30; that is, the controller 30 is configured to: determine, according to a selection instruction, the voice device 10 to receive the voice broadcast information and the display device 20 to receive the display information; and send the voice broadcast information to the determined voice device 10 and the display information to the determined display device 20.
In this way, the information of the recipe steps can be flexibly obtained on various devices.
In some embodiments, the total number of voice devices 10 and display devices 20 is at least three. For example, the voice device 10 includes a Bluetooth speaker, and the display devices 20 include a smartphone and a tablet computer. According to the selection instruction, the voice device 10 to receive the voice broadcast information is determined to be the Bluetooth speaker and the display device 20 to receive the display information is determined to be the smartphone, and the cloud 40 or the cooking device 50 can send the voice broadcast information to the Bluetooth speaker and synchronously send the display information to the smartphone.
It should be noted that the voice device 10 and the display device 20 may be bound in advance or bound dynamically, which is not limited herein. Dynamic binding means that other voice devices 10 or display devices 20 can be added to the recipe interaction at any time while the recipe steps are in progress.
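The routing and binding described above can be pictured with the following sketch, where a registry maps device names to bound devices and a selection instruction picks the targets. The DeviceRegistry class, the name keys such as "bluetooth_speaker", and the dictionary-based selection instruction are all hypothetical.

```python
class DeviceRegistry:
    """Illustrative routing for three or more bound devices: a selection
    instruction names which voice device and which display device get the step."""
    def __init__(self):
        self.voice_devices = {}    # device name -> object with .broadcast(text)
        self.display_devices = {}  # device name -> object with .show(content)

    def bind(self, name, device, kind):
        # Devices may be bound in advance or dynamically, even mid-recipe.
        target = self.voice_devices if kind == "voice" else self.display_devices
        target[name] = device

    def dispatch(self, step, selection):
        # Step 022: pick the target devices from the selection instruction.
        voice = self.voice_devices[selection["voice"]]        # e.g. "bluetooth_speaker"
        display = self.display_devices[selection["display"]]  # e.g. "smartphone"
        # Step 024: send each kind of information to its selected device.
        voice.broadcast(step["voice_broadcast"])
        display.show(step["display"])
```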
It should be noted that the specific values mentioned above are only intended to illustrate the implementation of the invention in detail and should not be construed as limiting the invention. In other examples or embodiments, other values may be selected in accordance with the present invention, and this is not specifically limited herein.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the above recipe interaction methods are implemented.
For example, when the program is executed by a processor, the following steps are implemented:
step 01: acquiring a recipe step according to a recipe start instruction, wherein the recipe step includes voice broadcast information and display information;
step 02: sending the voice broadcast information to the voice device 10, and sending the display information to the display device 20;
step 03: broadcasting the recipe step by the voice device 10 according to the voice broadcast information, and displaying the recipe step by the display device 20 according to the display information.
The computer-readable storage medium may be disposed in the recipe interaction system 100 or in the cloud server, and the recipe interaction system 100 can communicate with the cloud server to obtain the corresponding program.
It will be appreciated that the computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and the like.
The controller of the recipe interaction system 100 may be a single chip integrating a processor, a memory, a communication module, and the like. The processor may refer to the processor included in the controller. The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processing module-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of embodiments of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (11)

1. A recipe interaction method for a recipe interaction system, the recipe interaction system comprising:
a voice device, and
a display device,
wherein the recipe interaction method comprises:
acquiring a recipe step according to a recipe start instruction, wherein the recipe step comprises voice broadcast information and display information;
sending the voice broadcast information to the voice device, and sending the display information to the display device; and
broadcasting, by the voice device, the recipe step according to the voice broadcast information, and displaying, by the display device, the recipe step according to the display information.
2. The recipe interaction method according to claim 1, wherein acquiring the recipe step according to the recipe start instruction comprises:
generating, by the voice device, the recipe start instruction according to an acquired voice instruction; and/or
generating, by the display device, the recipe start instruction according to an acquired input instruction.
3. The recipe interaction method according to claim 1, wherein acquiring the recipe step according to the recipe start instruction comprises:
sending the recipe start instruction to a cloud; and
receiving the recipe step synchronously sent to the voice device and the display device by the cloud.
4. The recipe interaction method according to claim 1, wherein the recipe interaction system comprises:
a cooking device,
and acquiring the recipe step according to the recipe start instruction comprises:
sending the recipe start instruction to the cooking device; and
receiving the recipe step synchronously sent to the voice device and the display device by the cooking device.
5. The recipe interaction method according to claim 1, wherein the recipe interaction method comprises:
acquiring a recipe step switching instruction; and
synchronously sending the voice broadcast information of the switched recipe step to the voice device and the display information of the switched recipe step to the display device according to the recipe step switching instruction.
6. The recipe interaction method according to claim 5, wherein acquiring the recipe step switching instruction comprises:
acquiring, by the voice device, a first switching instruction, and acquiring, by the display device, a second switching instruction; and
when an interval between the acquired first switching instruction and second switching instruction is less than a preset duration and the first switching instruction is consistent with the second switching instruction, generating the recipe step switching instruction from the first switching instruction or the second switching instruction.
7. The recipe interaction method according to claim 6, wherein the recipe interaction method comprises:
when the interval between the acquired first switching instruction and second switching instruction is less than the preset duration and the first switching instruction is inconsistent with the second switching instruction, ignoring the first switching instruction and the second switching instruction, or issuing a prompt indicating an instruction conflict.
8. The recipe interaction method according to claim 1, wherein the voice device comprises a display unit and the display device comprises a voice unit;
and the recipe interaction method comprises:
sending the voice broadcast information to the display device, and sending the display information to the voice device; and
broadcasting, by the voice unit, the recipe step according to the voice broadcast information, and displaying, by the display unit, the recipe step according to the display information.
9. The recipe interaction method according to claim 1, wherein a total number of the voice device and the display device is at least three,
and sending the voice broadcast information to the voice device and sending the display information to the display device comprises:
determining, according to a selection instruction, the voice device to receive the voice broadcast information and the display device to receive the display information; and
sending the voice broadcast information to the determined voice device, and sending the display information to the determined display device.
10. A recipe interaction system, characterized in that it comprises a controller configured to implement the steps of the recipe interaction method according to any one of claims 1-9.
11. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the recipe interaction method according to any one of claims 1 to 9.
CN202011109051.XA 2020-10-16 2020-10-16 Menu interaction method and system and storage medium Active CN112256230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011109051.XA CN112256230B (en) 2020-10-16 2020-10-16 Menu interaction method and system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011109051.XA CN112256230B (en) 2020-10-16 2020-10-16 Menu interaction method and system and storage medium

Publications (2)

Publication Number Publication Date
CN112256230A true CN112256230A (en) 2021-01-22
CN112256230B CN112256230B (en) 2024-07-05

Family

ID=74244013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011109051.XA Active CN112256230B (en) 2020-10-16 2020-10-16 Menu interaction method and system and storage medium

Country Status (1)

Country Link
CN (1) CN112256230B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107204185A (en) * 2017-05-03 2017-09-26 深圳车盒子科技有限公司 Vehicle-mounted voice exchange method, system and computer-readable recording medium
CN207778540U (en) * 2018-01-12 2018-08-28 广东万家乐厨房科技有限公司 Intelligent range hood and smart kitchen systems
CN109067997A (en) * 2018-08-31 2018-12-21 上海与德通讯技术有限公司 The method and mobile terminal of voice guidance culinary art
CN109308897A (en) * 2018-08-27 2019-02-05 广东美的制冷设备有限公司 Sound control method, module, household appliance, system and computer storage medium
CN109637531A (en) * 2018-12-06 2019-04-16 珠海格力电器股份有限公司 Voice control method and device, storage medium and air conditioner
CN110338681A (en) * 2019-08-15 2019-10-18 广东工业大学 A kind of intelligence chopping block
CN111524516A (en) * 2020-04-30 2020-08-11 青岛海信网络科技股份有限公司 Control method based on voice interaction, server and display device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114343437A (en) * 2022-01-14 2022-04-15 深圳市伊欧乐科技有限公司 Auxiliary cooking system and method based on voice recognition
CN114343437B (en) * 2022-01-14 2023-08-04 深圳市伊欧乐科技有限公司 Auxiliary cooking system and method based on voice recognition

Also Published As

Publication number Publication date
CN112256230B (en) 2024-07-05

Similar Documents

Publication Publication Date Title
KR102674808B1 (en) Audio apparatus and control method thereof
US10453331B2 (en) Device control method and apparatus
WO2017193540A1 (en) Method, device and system for playing overlay comment
US8063884B2 (en) Information processing apparatus, display control method, and program for controlling a display of the information processing apparatus based on an input received from a remote controller
CN110741651A (en) Methods, systems, and media for presenting notifications indicating recommended content
CN109379613B (en) Audio and video synchronization adjustment method, television, computer readable storage medium and system
CN105430508A (en) Video play method and device
CN109525881A (en) Sound draws synchronous method, device and equipment
KR20220068894A (en) Method and apparatus for playing audio, electronic device, and storage medium
CN106453032B (en) Information-pushing method and device, system
CN105101013A (en) Method and device for playing voice signals
EP2339834A2 (en) Display Apparatus and Method of Controlling Contents Thereof
CN112492095B (en) System, terminal, method, apparatus and storage medium for controlling terminal
CN112256230B (en) Menu interaction method and system and storage medium
CN104243607A (en) Method and device for acquiring equipment information
WO2018010338A1 (en) Display method and device
CN105185396A (en) Method and device for playing audio signal
EP4380163A1 (en) Livestreaming method and apparatus, storage medium, and electronic device
US20100111320A1 (en) Acoustic system and update method of the acoustic system
CN112449235A (en) Pairing method, pairing device and television terminal
US11050579B2 (en) Distribution destination specifying device and distribution destination specifying method
WO2017074944A1 (en) Methods and devices for switching between different tv program accompanying sounds
JP4672813B2 (en) Electronic device, electronic device system, and control method of electronic device
CN107340990B (en) Playing method and device
CN105554630A (en) Telephone receiver, audio playing method and device, electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant