WO2020029500A1 - Voice command customization method, device, apparatus and computer storage medium - Google Patents

Voice command customization method, device, apparatus and computer storage medium

Info

Publication number
WO2020029500A1
Authority
WO
WIPO (PCT)
Prior art keywords
voice command
voice
user
operation instruction
information
Prior art date
Application number
PCT/CN2018/121040
Other languages
English (en)
Chinese (zh)
Inventor
韦泽光
张玉
陈琳婷
杨煜豪
程万里
Original Assignee
珠海格力电器股份有限公司
Priority date
Filing date
Publication date
Application filed by 珠海格力电器股份有限公司
Publication of WO2020029500A1

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command

Definitions

  • the present invention relates to the field of computer technology, and in particular, to a method, an apparatus, and a device for customizing a voice command, and a computer storage medium.
  • Smart home devices bring increasing convenience to daily life, and the number of ways to control them continues to grow.
  • Users can control smart home devices by voice or through applications (Application, APP) installed on terminals.
  • For voice control, the association between the voice library and device functions is usually defined before the smart home device leaves the factory, and the user performs the corresponding voice control according to the instructions or prompts.
  • That is, both the voice set and the function instruction set are predefined, yet different users have different usage habits, so a predefined voice control scheme may not match a particular user's habits.
  • In that case the user's experience of controlling a smart home device by voice is clearly poor, and the limited functions included in the predefined scheme may not meet the user's needs, further reducing the user experience.
  • Embodiments of the present invention provide a method, an apparatus, and a device for customizing a voice command, and a computer storage medium, which are used to customize a voice control scheme and improve the user experience.
  • In a first aspect, a method for customizing a voice command includes: generating a first voice command according to collected voice information; outputting first prompt information to instruct a user to input a demonstration operation for realizing at least one function of a smart home device; generating, based on the demonstration operation, a first operation instruction for performing the operation steps of the demonstration operation; and establishing and storing an association relationship between the first voice command and the first operation instruction, so that the first operation instruction is executed when a voice command matching the first voice command is received.
  • In this scheme the user's voice information is collected to generate a voice command, a corresponding operation instruction is generated according to the user's demonstration operation, and the two are associated. Because the voice command is generated from the user's own voice information, it better matches the user's speaking habits, and because the operation instruction is obtained from the user's demonstration operation, it is not limited to the voice control instructions configured when the smart home device leaves the factory, so the scheme has a wider range of application.
  • In one possible implementation, generating the first voice command according to the collected voice information includes: collecting the voice information input by the user multiple times, extracting the common features of the collected voice information, and generating the first voice command based on the common features.
  • In one possible implementation, establishing the association relationship between the first voice command and the first operation instruction includes: executing the first operation instruction, receiving first feedback information from the user, and, if the first feedback information indicates that the result of executing the first operation instruction meets the user's requirements, storing the first voice command in association with the first operation instruction; otherwise, prompting the user to perform the demonstration operation again.
  • In this way, the operation instruction is executed once and verified, confirming whether the function it implements is consistent with the effect the user expects.
  • In one possible implementation, the method further includes: performing semantic recognition on the voice information input by the user and associating the voice command generated from that voice information with the semantic recognition result. All voice commands associated with the same semantic recognition result then trigger the same function, so that even input in a dialect or another language is supported, improving the generalization ability of speech recognition.
  • In one possible implementation, the method further includes: when a second voice command input by the user does not match any voice command in the voice command library, recording the second voice command and prompting the user to re-enter a voice command; and, if the re-entered voice command matches a voice command in the library, updating the matched voice command according to the recorded second voice command. In this way, the voice command library is updated according to the command that initially failed to match, improving its recognition capability.
  • In one possible implementation, the method further includes: outputting second prompt information to ask the user whether to set an associated operation instruction when voice commands received multiple times in succession fail to match any voice command in the voice command library; and outputting the first prompt information if the user chooses to set one.
  • In a second aspect, a voice command customization device is provided, including:
  • a generating unit configured to generate a first voice command according to the collected voice information
  • An output unit configured to output a first prompt message instructing a user to input a demonstration operation for realizing at least one function of the smart home device
  • the generating unit is further configured to generate a first operation instruction for performing an operation step in the demonstration operation process based on the demonstration operation;
  • An association unit configured to establish an association relationship between the first voice command and the first operation instruction and store the association relationship, so that the first operation instruction is executed when a voice command matching the first voice command is received.
  • Optionally, the generating unit is specifically configured to: collect the voice information input by the user multiple times, extract the common features of the collected voice information, and generate the first voice command based on the common features.
  • Optionally, the association unit is specifically configured to: execute the first operation instruction, receive first feedback information from the user, and, if the first feedback information indicates that the result of executing the first operation instruction meets the user's requirements, store the first voice command in association with the first operation instruction; otherwise, prompt the user to perform the demonstration operation again.
  • Optionally, the device further includes a semantic recognition unit, configured to: perform semantic recognition on the collected voice information and associate the first voice command with the semantic recognition result.
  • Optionally, the device further includes an update unit, configured to: record a second voice command when it does not match any voice command in the voice command library and the user is prompted to re-enter a voice command; and update the voice command that matches the re-entered voice command according to the recorded second voice command.
  • Optionally, the output unit is further configured to: output second prompt information for asking the user whether to set an associated operation instruction when voice commands received multiple times in succession fail to match any voice command in the voice command library; and output the first prompt information if the user chooses to set one.
  • In a third aspect, a voice command customization device is provided, including:
  • at least one processor; and
  • a memory connected in communication with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the method according to the first aspect.
  • In a fourth aspect, a computer storage medium is provided.
  • the computer storage medium stores computer instructions, and when the computer instructions are run on a computer, the computer is caused to execute the method according to the first aspect.
  • FIG. 1 is a schematic flowchart of a voice command customization method according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of correlating a semantic recognition result with a voice command according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a voice control process according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a voice command customization device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a voice command customization device according to an embodiment of the present invention.
  • As noted above, both the voice set and the function instruction set in a conventional voice control scheme are predefined, but different users have different usage habits, so the predefined voice control scheme may not match a particular user's habits.
  • In that case the experience of controlling a smart home device by voice is clearly poor, and the limited functions included in the predefined scheme may not meet the user's needs, further reducing the user experience.
  • In view of this, embodiments of the present invention provide a method, an apparatus, and a device for customizing a voice command, and a computer storage medium.
  • A user's voice information is collected to generate a voice command, a corresponding operation instruction is generated according to the user's demonstration operation, and the two are associated.
  • Because the voice command is generated from the user's own voice information, it better matches the user's speaking habits, and because the operation instruction is obtained from the user's demonstration operation, it is not limited to the voice control instructions configured when the smart home device leaves the factory, giving the scheme a wider range of application.
  • an embodiment of the present invention provides a method for customizing a voice command.
  • The method may be performed by a device provided with a voice control module (hereinafter referred to as the device), such as a smart home device or a terminal.
  • The smart home device may be, for example, a smart air conditioner, a smart gas stove, a smart TV, or a smart refrigerator.
  • The terminal may be a device such as a mobile phone or a tablet computer (PAD) on which an APP for controlling smart home devices can be installed. The flow of the method is described below.
  • Step 101 Generate a first voice command according to the collected voice information.
  • When the user wants to customize a voice control scheme, a new voice control scheme may be created in the smart home device or the APP.
  • The smart home device or APP may provide a visual operation interface for the user, and the interface prompts the user step by step to complete the customization process.
  • First, a new voice command needs to be generated, so the user is prompted to enter voice information. After the user speaks, the smart home device or the terminal on which the APP is installed collects the voice information entered by the user.
  • a microphone is generally provided in the smart home device or the terminal on which the APP is installed, so the smart home device or the terminal on which the APP is installed can collect voice information input by the user through the microphone.
  • a first voice command can be generated according to the voice information input by the user.
  • Specifically, the voice information input by the user may be collected multiple times, the voice information collected multiple times is analyzed to extract its common features, and the first voice command is then generated according to the common features.
  • For example, if the voice information collected from the user three times is "turn on the air conditioner", "help me turn on the air conditioner", and "please turn on the air conditioner", the common feature of the three inputs is "turn on the air conditioner", so the segment "turn on the air conditioner" spoken by the user is taken as the common feature and used as the first voice command.
  • In practice, vector features may be extracted from each collected piece of voice information through a speech recognition model, the vector features from the multiple collections compared to obtain the common features, and the first voice command generated based on those common features.
  • During the multiple collections, after each entry the user may be prompted to input the next piece of voice information, either through text information or through voice.
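  • As a minimal, illustrative sketch of this common-feature idea (not part of the original description), the following Python snippet assumes the collected utterances have already been transcribed to text and uses a token-level comparison in place of the vector-feature comparison described above; all names in it are hypothetical.

```python
from typing import List


def extract_common_feature(utterances: List[str]) -> str:
    """Return the longest contiguous run of words shared by every utterance."""
    token_lists = [u.lower().split() for u in utterances]
    first = token_lists[0]
    best: List[str] = []
    # Try every contiguous word span of the first utterance and keep the
    # longest one that also appears in all of the other utterances.
    for i in range(len(first)):
        for j in range(i + 1, len(first) + 1):
            span = first[i:j]
            if len(span) <= len(best):
                continue
            joined = " ".join(span)
            if all(joined in " ".join(tokens) for tokens in token_lists[1:]):
                best = span
    return " ".join(best)


if __name__ == "__main__":
    samples = [
        "turn on the air conditioner",
        "help me turn on the air conditioner",
        "please turn on the air conditioner",
    ]
    # Prints "turn on the air conditioner", which would become the first voice command.
    print(extract_common_feature(samples))
```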
  • Step 102 The first prompt information is output to instruct the user to input a demonstration operation for realizing at least one function of the smart home device.
  • the first prompt information may be output to prompt the user for a demonstration operation.
  • The first prompt information may be output in the form of text information, for example, the words "Please perform a demonstration operation" may be displayed on the display unit; or the first prompt information may be output by voice, for example, by having the speaker included in the smart home device or the terminal on which the APP is installed play "Please perform a demonstration operation"; of course, the two methods may also be combined.
  • Step 103 Generate a first operation instruction for performing an operation step in the demonstration operation process based on the demonstration operation.
  • The demonstration operation performed by the user demonstrates the operation steps required to implement at least one function of the smart home device.
  • After the device captures the demonstration operation, it can obtain the operation steps included in the demonstration operation and generate a first operation instruction accordingly.
  • each operation step corresponds to a function or a function instruction of the smart home device.
  • For example, during the demonstration the user may perform operations such as "turning on the air conditioner", "adjusting the wind intensity", and "adjusting the wind direction".
  • The function instructions may also be other possible function instructions, such as a gear adjustment of a smart home device or a page jump within the APP.
  • The order of steps 101 and 103 can also be swapped; that is, step 103 is performed first, and then step 101 is performed, with a prompt (similar to step 102) used to ask the user to record the voice command.
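  • The following Python sketch illustrates one way step 103 could capture a demonstration as a sequence of operation steps and bundle them into an operation instruction; the class and function names (DemoRecorder, power_on, set_fan_speed, and so on) are illustrative assumptions, not anything defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class OperationStep:
    function: str                     # e.g. "power_on", "set_fan_speed" (hypothetical codes)
    argument: Optional[str] = None


@dataclass
class OperationInstruction:
    steps: List[OperationStep] = field(default_factory=list)


class DemoRecorder:
    """Collects the operation steps the user performs during a demonstration."""

    def __init__(self) -> None:
        self._steps: List[OperationStep] = []

    def record(self, function: str, argument: Optional[str] = None) -> None:
        self._steps.append(OperationStep(function, argument))

    def finish(self) -> OperationInstruction:
        """Bundle everything recorded so far into one operation instruction."""
        return OperationInstruction(steps=list(self._steps))


if __name__ == "__main__":
    recorder = DemoRecorder()
    recorder.record("power_on")                  # "turning on the air conditioner"
    recorder.record("set_fan_speed", "high")     # "adjusting the wind intensity"
    recorder.record("set_air_direction", "up")   # "adjusting the wind direction"
    instruction = recorder.finish()
    print([(s.function, s.argument) for s in instruction.steps])
```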
  • Step 104 Execute the first operation instruction.
  • Step 105 Determine whether the result of executing the first operation instruction meets the requirements of the user.
  • After the first operation instruction is generated, it may be verified to check whether it can implement the function the user wants. The first operation instruction is therefore executed once, and after execution a prompt message is output so that the user can confirm whether the result is what was expected. After the user gives feedback, the first feedback information is received, and based on it the device determines whether the result of the first operation instruction meets the user's requirements.
  • Step 106 If the determination result of step 105 is yes, then establish an association relationship between the first voice command and the first operation instruction, and store the association relationship.
  • In this case, the first voice command may be associated with the first operation instruction to generate a new voice control scheme, and the new voice control scheme is stored. In this way, if the first voice command is received again while the smart home device is in use, the first operation instruction associated with it can be found and executed, achieving the function demonstrated by the user's operation.
  • The association between the voice command and the operation instruction may be stored in a storage unit of the smart home device, so that the smart home device can complete voice control even when there is no network, or the association may be stored on the server side, in which case the smart home device or APP obtains the operation instruction associated with the voice command input by the user from the server.
  • For example, the smart home device can send the voice command entered by the user to the server, and after the server matches the associated operation instruction, it sends the operation instruction back to the smart home device to implement voice control; or the APP may send the voice command input by the user to the server, the server sends the matched operation instruction to the APP, and the APP forwards the operation instruction to the smart home device to implement voice control.
  • Step 107 If the determination result of step 105 is No, the user is prompted to perform the demo operation again.
  • In this case, the user may be prompted to perform the demonstration operation again, and a new first operation instruction is then generated based on the redone demonstration operation.
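  • Steps 104-107 together form a verify-and-store loop; the sketch below shows that loop in Python under the assumption that the device-specific behaviour (performing the demonstration, executing the steps, collecting the user's confirmation) is supplied as callbacks, which is an illustrative simplification rather than the patent's actual interfaces.

```python
from typing import Callable, Dict, List


def customize(
    voice_command: str,
    demonstrate: Callable[[], List[str]],   # returns the recorded operation steps
    execute: Callable[[List[str]], None],   # runs the steps on the device
    confirm: Callable[[], bool],            # first feedback information from the user
    store: Dict[str, List[str]],            # stored voice-command -> operation-instruction associations
    max_attempts: int = 3,
) -> bool:
    """Return True once an association has been stored, False if the user keeps rejecting."""
    for _ in range(max_attempts):
        steps = demonstrate()                # step 103: generate the operation instruction
        execute(steps)                       # step 104: run it once
        if confirm():                        # step 105: does the result meet the user's requirements?
            store[voice_command] = steps     # step 106: store the association relationship
            return True
        # step 107: result rejected, prompt the user to demonstrate again
    return False
```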
  • In some embodiments, as shown in FIG. 2, the method further includes the following steps:
  • Step 201 Perform semantic recognition on the collected voice information, and output a semantic recognition result.
  • Users in different geographical locations use different languages in daily life; for example, people from Shanghai may be more accustomed to speaking the Shanghai dialect, and people from Sichuan to speaking the Sichuan dialect, so when customizing a new voice control scheme a user may also input a dialect.
  • Different dialects may express the same semantics, in which case the corresponding operation instructions should be the same. Therefore, in the embodiment of the present invention, after the voice information input by the user is received, semantic recognition may be performed on it and the semantic recognition result output to the user, so that the user can confirm whether the semantic recognition result is correct.
  • The semantic recognition result is generally described in a common language, for example Mandarin. During output, the semantic recognition result may be displayed on the display unit in text form, or played by voice through a speaker.
  • Step 202 Determine whether the semantic recognition result is the semantics expressed by the collected voice information.
  • The user may give feedback based on the output semantic recognition result.
  • The device may receive the second feedback information input by the user and determine, based on the second feedback information, whether the foregoing semantic recognition result matches the semantics expressed by the voice information input by the user.
  • Step 203 If the determination result of step 202 is yes, associate the first voice command with the semantic recognition result.
  • In this case, the first voice command can be associated with the semantic recognition result, and the operation instructions corresponding to all voice commands associated with the same semantic recognition result are the same. In this way, even if the voice commands generated from different dialects differ, as long as their associated semantic recognition results are the same, the corresponding operation instructions are also the same, so that voice control can support dialects and colloquial speech at the same time.
  • Step 204 If the determination result of step 202 is no, associate the corrected semantics received from the user with the first voice command.
  • In this case, the user may be prompted to input the correct semantics.
  • The user may modify the original semantic recognition result or enter the correct semantics directly, and the device then associates the received corrected semantics with the first voice command.
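  • The effect of steps 201-204 is that several surface forms share one semantics entry and the operation instruction is looked up through that shared entry; the small Python sketch below shows this with plain dictionaries, whose layout and sample data are assumptions made purely for illustration.

```python
from typing import Dict, List, Optional

# Voice commands (including dialect or other-language variants) mapped to one
# shared semantic recognition result.
command_to_semantics: Dict[str, str] = {
    "turn on the air conditioner": "power on the air conditioner",
    "开空调": "power on the air conditioner",
}

# The operation instruction is attached to the semantics, not to each command.
semantics_to_instruction: Dict[str, List[str]] = {
    "power on the air conditioner": ["power_on"],
}


def resolve(voice_command: str) -> Optional[List[str]]:
    """Look up the operation instruction through the command's associated semantics."""
    semantics = command_to_semantics.get(voice_command)
    if semantics is None:
        return None
    return semantics_to_instruction.get(semantics)


if __name__ == "__main__":
    # Both surface forms resolve to the same operation instruction.
    print(resolve("turn on the air conditioner"))
    print(resolve("开空调"))
```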
  • After the association relationship is established, it can be applied during voice control.
  • FIG. 3 shows the process of a user controlling a smart home device by voice.
  • Step 301 Receive a second voice command input by a user.
  • Step 302 Determine whether a voice command matching the second voice command exists in the voice command library.
  • When the user controls the smart home device by voice, the device receives the second voice command and matches it against the voice command library to determine whether a voice command matching the second voice command exists in the library.
  • When the second voice command is matched against the voice command library, all features included in the second voice command may be compared with the features included in each voice command in the library.
  • Step 303 If the determination result of step 302 is yes, execute the operation instruction associated with the voice command matching the second voice command.
  • When the device is a smart home device, it may directly execute the operation instruction associated with the voice command matching the second voice command; when the device is a terminal on which the APP is installed, the operation instruction associated with the matching voice command may be sent to the smart home device, so that the smart home device executes it.
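  • A rough Python sketch of steps 301-303 follows: match the incoming command against the library and, on success, execute or forward the associated operation instruction. It uses difflib's SequenceMatcher on the command text as a stand-in similarity measure, whereas the description above compares the commands' features; the threshold value is likewise an arbitrary assumption.

```python
import difflib
from typing import Dict, List, Optional


def best_match(command: str, library: Dict[str, List[str]], threshold: float = 0.75) -> Optional[str]:
    """Return the library command most similar to `command`, or None if nothing clears the threshold."""
    best, score = None, 0.0
    for known in library:
        ratio = difflib.SequenceMatcher(None, command, known).ratio()
        if ratio > score:
            best, score = known, ratio
    return best if score >= threshold else None


def handle_voice_command(command: str, library: Dict[str, List[str]]) -> bool:
    matched = best_match(command, library)
    if matched is None:
        return False                               # step 304: prompt the user to re-enter the command
    steps = library[matched]
    print(f"executing {steps}")                    # or forward the instruction to the smart home device
    return True


if __name__ == "__main__":
    lib = {"turn on the air conditioner": ["power_on"]}
    handle_voice_command("please turn on the air conditioner", lib)   # matches, executes ['power_on']
    handle_voice_command("start the washing machine", lib)            # no match, returns False
```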
  • Step 304 If the determination result in step 302 is no, the user is prompted to re-enter the voice command.
  • In this case, the device may prompt the user to input the voice command again.
  • At the same time, the device may temporarily save the second voice command.
  • Alternatively, the device may output to the user the semantic recognition results associated with one or more candidate voice commands, so that the user can confirm which voice command was intended; after the user selects and confirms, the corresponding operation instruction is executed.
  • Step 305 Determine whether a voice command matching the re-entered voice command exists in the voice command library.
  • Step 306 If the determination result of step 305 is yes, update the voice command that successfully matches the re-entered voice command according to the second voice command.
  • In this case, the voice command that successfully matches the re-entered voice command can be updated according to the second voice command, strengthening that voice command so that it covers more variations.
  • Specifically, the second voice command can be compared with the re-entered voice command, and the common features of the two extracted and stored.
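  • The update of steps 305-306 can be pictured as folding the features of the initially unmatched command into the matched library entry; in the Python sketch below the "features" are just word sets, an illustrative simplification of the feature extraction described above.

```python
from typing import Dict, Set


def update_matched_command(
    features: Dict[str, Set[str]],   # library command -> accumulated feature set (assumed layout)
    matched_command: str,
    second_command: str,
    reentered_command: str,
) -> None:
    """Add the features shared by the failed and the re-entered command to the matched entry."""
    common = set(second_command.lower().split()) & set(reentered_command.lower().split())
    features.setdefault(matched_command, set()).update(common)


if __name__ == "__main__":
    feats: Dict[str, Set[str]] = {
        "turn on the air conditioner": {"turn", "on", "air", "conditioner"},
    }
    update_matched_command(
        feats,
        matched_command="turn on the air conditioner",
        second_command="could you switch on the air conditioner",   # the command that failed to match
        reentered_command="turn on the air conditioner",            # the re-entered command that matched
    )
    print(feats["turn on the air conditioner"])
```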
  • If the voice commands input by the user multiple times in succession all fail to match, it indicates that there is no voice control scheme associated with that voice command, and second prompt information may then be output to ask the user whether to set an operation instruction associated with the voice command.
  • If the user's feedback indicates that an associated operation instruction should be set, the first prompt information is output to prompt the user to perform a demonstration operation, and after the user performs the demonstration operation, the newly generated operation instruction is associated with the common features of the voice commands entered multiple times in succession to obtain a new voice control scheme.
  • In summary, in the embodiments of the present invention the user's voice information is collected to generate a voice command, the corresponding operation instruction is generated according to the user's demonstration operation, and the two are associated.
  • Because the voice command is generated from the user's own voice information it better matches the user's speaking habits, and because the operation instruction is obtained from the user's demonstration operation it is not limited to the voice control instructions configured when the smart home device leaves the factory, giving the scheme a wider scope of application.
  • As shown in FIG. 4, an embodiment of the present invention provides a voice command customization device, including:
  • a generating unit 401 configured to generate a first voice command according to the collected voice information
  • An output unit 402 configured to output a first prompt message instructing a user to input a demonstration operation for realizing at least one function of the smart home device
  • the generating unit 401 is further configured to generate a first operation instruction for performing an operation step in the demonstration operation process based on the demonstration operation;
  • the association unit 403 is configured to establish an association relationship between the first voice command and the first operation instruction, and store the association relationship, so that when a voice command matching the first voice command is received, the first operation instruction is executed.
  • Optionally, the generating unit 401 is specifically configured to: collect the voice information input by the user multiple times, extract the common features of the collected voice information, and generate the first voice command based on the common features.
  • Optionally, the association unit 403 is specifically configured to: execute the first operation instruction, receive first feedback information from the user, and, if the first feedback information indicates that the execution result meets the user's requirements, associate the first voice command with the first operation instruction and store them; otherwise, prompt the user to perform the demonstration operation again.
  • Optionally, the device further includes a semantic recognition unit 404, configured to: perform semantic recognition on the collected voice information and associate the first voice command with the semantic recognition result.
  • Optionally, the device further includes an update unit 405, configured to: record a second voice command when it does not match any voice command in the voice command library and the user is prompted to re-enter a voice command; and update the voice command that matches the re-entered voice command according to the recorded second voice command.
  • the output unit 402 is further configured to:
  • output second prompt information for asking the user whether to set an associated operation instruction when voice commands received multiple times in succession fail to match any voice command in the voice command library; and
  • output the first prompt information if the user chooses to set one.
  • The device can be used to execute the method provided in the embodiments shown in FIGS. 1-3; therefore, for the functions that the functional modules of the device can implement, refer to the description of those embodiments, which is not repeated here. Although the semantic recognition unit 404 and the update unit 405 are shown in FIG. 4, they are not mandatory functional units and are therefore drawn with dashed lines.
  • an embodiment of the present invention provides a voice command customization device, including at least one processor 501.
  • The at least one processor 501 is configured to execute a computer program stored in the memory, so as to perform the steps of the voice command customization method provided by the embodiments shown above.
  • The at least one processor 501 may specifically include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), may be one or more integrated circuits for controlling program execution, may be a hardware circuit developed based on a field programmable gate array (FPGA), or may be a baseband processor.
  • the at least one processor 501 may include at least one processing core.
  • The device further includes a memory 502, and the memory 502 may include read-only memory (ROM), random access memory (RAM), and disk storage.
  • the memory 502 is configured to store data required when the at least one processor 501 runs.
  • the number of the memories 502 is one or more.
  • Although the memory 502 is shown in FIG. 5, it should be noted that the memory 502 is not a mandatory functional module and is therefore drawn with a dashed line in FIG. 5.
  • an embodiment of the present invention provides a computer-readable storage medium.
  • the computer-readable storage medium stores computer instructions.
  • When the computer instructions are run on a computer, the computer is caused to execute the method shown in FIGS. 1-3.
  • The computer-readable storage medium includes a universal serial bus flash drive (USB flash drive), a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other storage medium that can store program code.
  • the disclosed device and method may be implemented in other manners.
  • the device embodiments described above are only schematic.
  • The division of units is only a logical function division; in actual implementation there may be other division manners, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical or other forms.
  • Each functional unit in the embodiment of the present invention may be integrated into one processing unit, or each unit may also be an independent physical module.
  • When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the technical solutions of the embodiments of the present invention may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (for example, a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the method described in each embodiment of the present invention.
  • The foregoing storage medium includes a universal serial bus flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disc, or any other medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a voice command customization method, a device, an apparatus, and a computer storage medium, used to customize a voice control scheme and improve the user experience. The method comprises: generating a first voice command according to collected voice information (101); outputting first prompt information prompting a user to input a demonstration operation for realizing at least one function of a smart home device (102), and generating, on the basis of the demonstration operation, a first operation instruction for performing the operation steps of the demonstration operation (103); and establishing an association between the first voice command and the first operation instruction and storing the association (106), so that the first operation instruction is executed upon reception of a voice command matching the first voice command.
PCT/CN2018/121040 2018-08-06 2018-12-14 Voice command customization method, device, apparatus and computer storage medium WO2020029500A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810887444.XA CN108831469B (zh) 2018-08-06 2018-08-06 语音命令定制方法、装置和设备及计算机存储介质
CN201810887444.X 2018-08-06

Publications (1)

Publication Number Publication Date
WO2020029500A1 true WO2020029500A1 (fr) 2020-02-13

Family

ID=64153673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/121040 WO2020029500A1 (fr) 2018-08-06 2018-12-14 Voice command customization method, device, apparatus and computer storage medium

Country Status (2)

Country Link
CN (1) CN108831469B (fr)
WO (1) WO2020029500A1 (fr)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108831469B (zh) * 2018-08-06 2021-02-12 珠海格力电器股份有限公司 语音命令定制方法、装置和设备及计算机存储介质
CN109584875A (zh) * 2018-12-24 2019-04-05 珠海格力电器股份有限公司 一种语音设备控制方法、装置、存储介质及语音设备
CN109901707A (zh) * 2018-12-27 2019-06-18 安徽语讯科技有限公司 一种配置到***内的学习型***操作模块
CN109871119A (zh) * 2018-12-27 2019-06-11 安徽语讯科技有限公司 一种学习型智能语音操作方法和***
US11170774B2 (en) * 2019-05-21 2021-11-09 Qualcomm Incorproated Virtual assistant device
CN110570867A (zh) * 2019-09-12 2019-12-13 安信通科技(澳门)有限公司 一种本地新增语料的语音处理方法及***
CN110580904A (zh) * 2019-09-29 2019-12-17 百度在线网络技术(北京)有限公司 通过语音控制小程序的方法、装置、电子设备及存储介质
CN110784384B (zh) * 2019-10-16 2021-11-02 杭州九阳小家电有限公司 一种家电语音技能的生成方法及智能家电
CN111785265A (zh) * 2019-11-26 2020-10-16 北京沃东天骏信息技术有限公司 智能音箱设置方法和装置、控制方法和装置、智能音箱
CN111063353B (zh) * 2019-12-31 2022-11-11 思必驰科技股份有限公司 允许自定义语音交互内容的客户端处理方法及用户终端
CN111261158A (zh) * 2020-01-15 2020-06-09 上海思依暄机器人科技股份有限公司 一种功能菜单定制方法、语音快捷控制方法和机器人
CN113160807A (zh) * 2020-01-22 2021-07-23 广州汽车集团股份有限公司 一种语料库更新方法及其***、语音控制设备
CN111179933A (zh) * 2020-01-23 2020-05-19 珠海荣邦电子科技有限公司 一种语音控制方法、装置及智能终端
CN114067792B (zh) * 2020-08-07 2024-06-14 北京猎户星空科技有限公司 一种智能设备的控制方法及装置
CN114246450B (zh) * 2020-09-21 2024-02-06 佛山市顺德区美的电热电器制造有限公司 信息处理方法、装置、烹饪设备及计算机可读存储介质
CN114327200A (zh) * 2021-11-03 2022-04-12 珠海格力电器股份有限公司 页面展示方法、装置及设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030110040A1 (en) * 2001-12-07 2003-06-12 Creative Logic Solutions Inc. System and method for dynamically changing software programs by voice commands
CN103646646A (zh) * 2013-11-27 2014-03-19 联想(北京)有限公司 一种语音控制方法及电子设备
CN103713905A (zh) * 2013-12-29 2014-04-09 广州视源电子科技股份有限公司 一种操作步骤自定义方法、装置及***
CN106484270A (zh) * 2016-09-12 2017-03-08 深圳市金立通信设备有限公司 一种语音操作事件添加方法及终端
CN108831469A (zh) * 2018-08-06 2018-11-16 珠海格力电器股份有限公司 语音命令定制方法、装置和设备及计算机存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE364219T1 (de) * 2000-09-08 2007-06-15 Koninkl Philips Electronics Nv Spracherkennungsverfahren mit ersetzungsbefehl
CN101937693B (zh) * 2010-08-17 2012-04-04 深圳市子栋科技有限公司 基于语音命令的视音频播放方法及***
CN102842306B (zh) * 2012-08-31 2016-05-04 深圳Tcl新技术有限公司 语音控制方法及装置、语音响应方法及装置
CN105845136A (zh) * 2015-01-13 2016-08-10 中兴通讯股份有限公司 语音控制方法、装置及终端
CN105989841B (zh) * 2015-02-17 2019-12-27 上海汽车集团股份有限公司 车载语音控制方法及装置
CN105931637A (zh) * 2016-04-01 2016-09-07 金陵科技学院 一种可自定义指令识别的语音拍照***
CN108174030B (zh) * 2017-12-26 2020-11-17 努比亚技术有限公司 定制化语音控制的实现方法、移动终端及可读存储介质

Also Published As

Publication number Publication date
CN108831469B (zh) 2021-02-12
CN108831469A (zh) 2018-11-16

Similar Documents

Publication Publication Date Title
WO2020029500A1 (fr) Procédé de personnalisation de commande vocale, dispositif, appareil et support de stockage informatique
US11600265B2 (en) Systems and methods for determining whether to trigger a voice capable device based on speaking cadence
US10489112B1 (en) Method for user training of information dialogue system
US9953648B2 (en) Electronic device and method for controlling the same
US20160293168A1 (en) Method of setting personal wake-up word by text for voice control
JP4942970B2 (ja) 音声認識における動詞誤りの回復
US10811005B2 (en) Adapting voice input processing based on voice input characteristics
KR102108500B1 (ko) 번역 기반 통신 서비스 지원 방법 및 시스템과, 이를 지원하는 단말기
US20170046124A1 (en) Responding to Human Spoken Audio Based on User Input
CN102842306B (zh) 语音控制方法及装置、语音响应方法及装置
US20160328205A1 (en) Method and Apparatus for Voice Operation of Mobile Applications Having Unnamed View Elements
US20150371628A1 (en) User-adapted speech recognition
US10860289B2 (en) Flexible voice-based information retrieval system for virtual assistant
US20060253272A1 (en) Voice prompts for use in speech-to-speech translation system
CN107331400A (zh) 一种声纹识别性能提升方法、装置、终端及存储介质
WO2020024620A1 (fr) Procédé et dispositif de traitement d'informations vocales, appareil et support d'enregistrement
KR20160132748A (ko) 전자 장치 및 그 제어 방법
WO2020233363A1 (fr) Procédé et dispositif de reconnaissance vocale, appareil électronique, et support de stockage
WO2019228138A1 (fr) Procédé et appareil de lecture de musique, support d'informations et dispositif électronique
WO2019239656A1 (fr) Dispositif et procédé de traitement d'informations
WO2020135773A1 (fr) Procédé de traitement de données, dispositif et support de stockage lisible par ordinateur
KR20190001435A (ko) 음성 입력에 대응하는 동작을 수행하는 전자 장치
KR20170051994A (ko) 음성인식 디바이스 및 이의 동작 방법
WO2017092322A1 (fr) Procédé de commande d'un navigateur sur un téléviseur intelligent, et téléviseur intelligent
KR102584324B1 (ko) 음성 인식 서비스 제공 방법 및 이를 위한 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18929666

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18929666

Country of ref document: EP

Kind code of ref document: A1