CN111292744A - Voice instruction recognition method, system and computer readable storage medium - Google Patents

Voice instruction recognition method, system and computer readable storage medium

Info

Publication number
CN111292744A
CN111292744A (application CN202010074215.3A); granted as CN111292744B
Authority
CN
China
Prior art keywords
voice
message
instruction
recognition
intelligent terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010074215.3A
Other languages
Chinese (zh)
Other versions
CN111292744B (en)
Inventor
陈乙银
塞力克·斯兰穆
郑斌
胡泰东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Grey Shark Technology Co ltd
Original Assignee
Nanjing Thunder Shark Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Thunder Shark Information Technology Co Ltd filed Critical Nanjing Thunder Shark Information Technology Co Ltd
Priority to CN202010074215.3A priority Critical patent/CN111292744B/en
Publication of CN111292744A publication Critical patent/CN111292744A/en
Application granted granted Critical
Publication of CN111292744B publication Critical patent/CN111292744B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G – PHYSICS
    • G10 – MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L – SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 – Speech recognition
    • G10L 15/22 – Procedures used during a speech recognition process, e.g. man-machine dialogue
    • A – HUMAN NECESSITIES
    • A63 – SPORTS; GAMES; AMUSEMENTS
    • A63F – CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 – Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/40 – Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42 – Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/424 – Processing input control signals of video game devices involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06F – ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 – Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 – Sound input; Sound output
    • G06F 3/167 – Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G – PHYSICS
    • G10 – MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L – SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 – Speech recognition
    • G10L 15/06 – Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 – Training
    • A – HUMAN NECESSITIES
    • A63 – SPORTS; GAMES; AMUSEMENTS
    • A63F – CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 – Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/10 – Features of games characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F 2300/1081 – Input via voice recognition
    • G – PHYSICS
    • G10 – MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L – SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 – Speech recognition
    • G10L 15/22 – Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 – Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a voice instruction recognition method, a voice instruction recognition system, and a computer-readable storage medium. The voice instruction recognition method comprises the following steps: starting a voice instruction recognition script in an intelligent terminal to load a voice model within the script; when an application program in the intelligent terminal is started, starting the audio acquisition device of the intelligent terminal; acquiring instruction audio input to the audio acquisition device and recognizing the instruction audio into an instruction message; comparing the instruction message with the recognition result messages in the voice model, and matching the instruction message to the voice model when the instruction message matches a recognition result message or their similarity exceeds a similarity threshold; and executing the operation preset in the matched voice model, thereby controlling the application program. With this scheme, training the voice model and configuring preset instructions reduce both the processing time of speech semantic recognition and the power consumed during voice operation.

Description

Voice instruction recognition method, system and computer readable storage medium
Technical Field
The present invention relates to the field of voice control, and in particular to a voice instruction recognition method, a voice instruction recognition system, and a computer-readable storage medium.
Background
With the rapid spread of intelligent terminals, tablet computers and notebook computers, people increasingly depend on these devices. To operate such a device, a user generally provides input through its touch screen, for example by clicking, double-clicking or long-pressing an operation button displayed on the screen, thereby issuing operation instructions to the device.
To enrich the ways in which users can issue instructions to their devices, many device manufacturers have developed voice operation functions: the speech a user directs at the device is recognized, parsed into an operation on the device, and the corresponding operation is executed.
In the prior art, this is achieved by converting voice input into a voice command through speech recognition and then mapping that command onto a game command. In practice, a voice acquisition and recognition module together with a voice control command set must be packaged into an SDK and deeply integrated into the game module, or the input driver of the terminal device must be modified. Both approaches are costly and can only be completed through close joint development between the game vendor and the device manufacturer. Such schemes also have poor compatibility, since every game instruction must be adapted individually, and they do not consider the power consumption of speech recognition. In addition, if the speech recognition process is slow or stalls, the user's instruction input is affected.
Therefore, a new voice instruction recognition method is needed that can train a model for low-power scenario control, shorten the speech recognition and instruction conversion flow during voice instruction recognition, and improve the battery endurance of the intelligent terminal.
Disclosure of Invention
To overcome the above technical drawbacks, an object of the present invention is to provide a voice instruction recognition method, system and computer-readable storage medium that, by training a voice model and configuring preset instructions, reduce the processing time of speech semantic recognition and the power consumed during voice operation.
The invention discloses a voice instruction recognition method comprising the following steps:
starting a voice instruction recognition script in an intelligent terminal to load a voice model within the script;
when an application program in the intelligent terminal is started, starting the audio acquisition device of the intelligent terminal;
acquiring instruction audio input to the audio acquisition device, and recognizing the instruction audio into an instruction message;
comparing the instruction message with the recognition result messages in the voice model, and matching the instruction message to the voice model when the instruction message matches a recognition result message or their similarity exceeds a similarity threshold;
executing the operation preset in the matched voice model, and controlling the application program based on that operation.
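As a minimal sketch of the recognize-compare-execute core of these steps (the threshold value, the in-memory model structure, and every name below are illustrative assumptions, not part of the disclosure):

```python
from difflib import SequenceMatcher
from typing import Optional

SIMILARITY_THRESHOLD = 0.8  # assumed value; the disclosure does not fix one

# Voice model: recognition result message -> preset execution operation
voice_model = {"attack": "tap:attack_icon", "retreat": "tap:retreat_icon"}

def speech_to_text(audio: bytes) -> str:
    """Stand-in for recognizing instruction audio into an instruction message."""
    return audio.decode("utf-8")  # pretend the audio already carries its text

def handle_instruction(audio: bytes) -> Optional[str]:
    message = speech_to_text(audio)  # acquire audio, recognize the message
    for result_message, operation in voice_model.items():
        similarity = SequenceMatcher(None, message, result_message).ratio()
        if message == result_message or similarity > SIMILARITY_THRESHOLD:
            return operation  # matched: return the preset execution operation
    return None  # no recognition result message matched
```

An exact hit returns the operation immediately; a near-miss such as "attacks" still matches "attack" through the similarity test.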
Preferably, the step of starting the voice instruction recognition script in the intelligent terminal to load the voice model within the script includes:
starting the voice instruction recognition script in the intelligent terminal, and judging whether a voice model exists within the script;
when no voice model exists, forming a prompt interface in the intelligent terminal and displaying information for activating the voice message receiving function;
receiving at least one externally formed voice message;
recognizing each voice message to form at least one recognition result message, and displaying the recognition result messages on a mapping interface;
also displaying the operation units of the target application program on the mapping interface;
and associating each recognition result message with one or more operation units to form a configuration relationship, then storing that relationship.
Preferably, the step of recognizing each voice message to form at least one recognition result message and displaying it on the mapping interface comprises:
parsing the voice message and converting it into a text message;
extracting keywords from the text message;
and storing each keyword as at least one recognition result message, and sending the recognition result message to a server so that the voice model is generated on the server side.
Preferably, the step of extracting keywords from the text message comprises:
acquiring the target application program and the commonly used phrases of that program;
comparing the text message with the commonly used phrases, and extracting content in the text message that matches a common phrase or whose similarity to one exceeds a preset threshold;
and saving that content as a keyword, or replacing it with the most similar common phrase and saving that phrase as the keyword.
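A hedged sketch of this keyword-extraction step follows; the threshold value and all names are assumptions, and the disclosure does not specify how similarity is computed, so plain string similarity stands in here:

```python
from difflib import SequenceMatcher
from typing import Optional

PRESET_THRESHOLD = 0.6  # assumed; the text only says "a preset threshold"

def extract_keyword(text_message: str, common_phrases: list) -> Optional[str]:
    """Extract a keyword by comparing the text message with common phrases.

    Exactly contained content is saved directly as the keyword; otherwise the
    content is replaced by the common phrase with the closest similarity, as
    the step describes, provided that similarity exceeds the preset threshold.
    """
    for phrase in common_phrases:
        if phrase in text_message:
            return phrase
    best, best_ratio = None, 0.0
    for phrase in common_phrases:
        ratio = SequenceMatcher(None, text_message, phrase).ratio()
        if ratio > best_ratio:
            best, best_ratio = phrase, ratio
    return best if best_ratio > PRESET_THRESHOLD else None
```

A misrecognized fragment such as "attac" is thus snapped to the closest common phrase "attack" rather than discarded.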
Preferably, the step of associating each recognition result message with one or more operation units, forming the configuration relationship and storing it, includes:
receiving an external operation performed on the mapping interface, and moving the operation units across the mapping interface according to that operation;
when any operation unit is moved to the position corresponding to a recognition result message, associating that recognition result message with the operation unit;
and storing the association between each operation unit and its recognition result message as the configuration relationship of the voice model.
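One way to sketch this drag-to-associate step; the coordinates, the snap radius, and all names are illustrative assumptions:

```python
def associate_by_position(unit_positions, message_positions, snap_radius=50):
    """Link a recognition result message to each operation unit moved within
    snap_radius of that message's position on the mapping interface, and
    return the associations as the configuration relationship."""
    configuration = {}
    for message, (mx, my) in message_positions.items():
        for unit, (ux, uy) in unit_positions.items():
            if abs(mx - ux) <= snap_radius and abs(my - uy) <= snap_radius:
                configuration.setdefault(message, []).append(unit)
    return configuration
```

Dropping the "attack_icon" operation unit near the on-screen position of the "attack" recognition result message would record that association; units left elsewhere record nothing.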
Preferably, after the step of associating each recognition result message with one or more operation units and storing the formed configuration relationship, the method further comprises:
naming the configuration relationship, and downloading the voice model from the server;
renaming the voice model to the name of the configuration relationship, and storing the configuration relationship into the voice model;
and saving the voice model to a database.
Preferably, the step of executing the operation preset in the matched voice model and controlling the application program based on that operation includes:
constructing, according to the execution operation preset in the voice model, a touch event aimed at the display unit of the intelligent terminal, and sending the touch event to the control unit of the intelligent terminal;
and, based on the injection scheme of the system installed on the intelligent terminal, having the control unit generate the touch input and put it into effect, thereby forming the execution operation that controls the application program.
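On Android, one plausible realization of such an injection scheme is the shell-level `input tap x y` command. The sketch below only builds the command; the coordinate table and every name are assumptions for illustration, not the patent's actual mechanism:

```python
# Illustrative operation -> screen-coordinate table; an assumed layout
OPERATION_COORDS = {"tap:attack_icon": (930, 1580)}

def build_touch_command(operation: str) -> list:
    """Construct (but do not run) an Android `input tap` command for the
    execution operation preset in the voice model."""
    x, y = OPERATION_COORDS[operation]
    return ["input", "tap", str(x), str(y)]
```

An implementation on a suitably privileged terminal might pass this list to `subprocess.run` so that the system's input pipeline generates the touch and it takes effect.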
Preferably, the step of constructing a touch event for the display unit of the intelligent terminal according to the execution operation preset in the voice model, and sending it to the control unit of the intelligent terminal, further includes:
displaying a prompt symbol on the display unit of the intelligent terminal as a notification that the touch event was constructed successfully.
The invention also discloses a voice instruction recognition system comprising:
a script module arranged in an intelligent terminal, which, when activated, starts the voice instruction recognition script arranged within it;
a loading module that loads the voice model within the voice instruction recognition script of the script module;
an audio acquisition device that collects instruction audio when an application program in the intelligent terminal is started;
and a control unit that recognizes the instruction audio into an instruction message, compares the instruction message with the recognition result messages in the voice model, matches the instruction message to the voice model when the instruction message matches a recognition result message or their similarity exceeds a similarity threshold, and executes the operation preset in the matched voice model so as to control the application program based on that operation.
The invention also discloses a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the following steps:
starting a voice instruction recognition script in an intelligent terminal to load a voice model within the script;
when an application program in the intelligent terminal is started, starting the audio acquisition device of the intelligent terminal;
acquiring instruction audio input to the audio acquisition device, and recognizing the instruction audio into an instruction message;
comparing the instruction message with the recognition result messages in the voice model, and matching the instruction message to the voice model when the instruction message matches a recognition result message or their similarity exceeds a similarity threshold;
executing the operation preset in the matched voice model, and controlling the application program based on that operation.
Compared with the prior art, the adopted technical scheme yields the following beneficial effects:
1. the trained model supports multiple applications in the same scenario, or multiple applications across different scenarios;
2. the mapping mode is more direct, so a user can conveniently associate the trained voice model with operation instructions;
3. when the voice model is used, recognition consumes less power and less time, effectively accelerating the conversion of speech into operations;
4. after a voice instruction is recognized, the control instruction it maps to does not need to be parsed again; the parsing cost is shifted to an earlier stage through the preconfigured voice model, speeding up the conversion of voice instructions into control instructions;
5. sharing voice models improves the reusability of the voice instruction recognition system.
Drawings
FIG. 1 is a flow chart illustrating a voice command recognition method according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method for recognizing a voice command according to a further preferred embodiment of the present invention;
FIG. 3 is a flow chart illustrating a method for recognizing a voice command according to a further preferred embodiment of the present invention;
FIG. 4 is a block diagram of a voice command recognition system according to a preferred embodiment of the present invention.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon", "when", or "in response to a determination", depending on the context.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements serve only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
Referring to fig. 1, a flow chart of a voice command recognition method according to a preferred embodiment of the present invention is shown, in which the voice command recognition method includes the following steps:
s100: in an intelligent terminal, starting a voice instruction recognition script to load a voice model in the voice instruction recognition script
The intelligent terminal is used as a device for receiving voice commands and converting the voice commands into control commands for the intelligent terminal, and a voice command recognition script, such as an API based on natural language or non-natural language processing, and a voice model (or a newly-built voice model when the voice model is not stored) are stored in the voice command recognition script. For the loading of the voice model, the operation of the user on the intelligent terminal can select the correspondence of the voice model, for example, the user needs to perform voice control on a certain target application program, such as 'royal glory', 'stimulating battlefield', 'flight video', etc., when selecting the voice model, the voice model dedicated to the desired application program can be selected, that is, the voice model can be used for one application under an application scene in the intelligent terminal, and when the operations executed by the same voice command in different application programs are the same, such as 'return', 'enter a setting interface', etc., the universal voice model can be selected, or any voice model with uniform voice command conversion can be selected, that is, the voice model supports multiple applications under the same scene or multiple applications under different scenes.
S200: when an application program in the intelligent terminal is started, starting the audio acquisition device of the intelligent terminal
An application program in the intelligent terminal is started by a user operation. Once it is started, and subject to the terminal's permission settings for that application, the terminal's audio acquisition devices can be invoked, such as a microphone or earphone devices connected to the terminal by wire or wirelessly.
S300: acquiring instruction audio input to the audio acquisition device, and recognizing the instruction audio into an instruction message
The user inputs instruction audio to the intelligent terminal, for example by speaking to the terminal itself or to earphone devices connected to it by wire or wirelessly, and the audio acquisition device collects that audio. The audio acquisition device may either run silently the whole time the application is active, capturing instruction audio whenever the user speaks, or a preset button or gesture (such as a double tap or triple tap on the terminal) may be configured to start acquisition, which then begins when that button or gesture is activated. After the instruction audio is collected, the audio acquisition device forwards it to a control unit (such as a CPU), which converts the audio signal into an instruction message in text or digital form; many existing techniques perform this conversion, so it is not described in detail here.
S400: comparing the instruction message with the recognition result messages in the voice model, and matching the instruction message to the voice model when the instruction message matches a recognition result message or their similarity exceeds a similarity threshold
The control unit compares the converted instruction message with the recognition result messages in the voice model. It should be understood that when the voice model was trained, the preset training voice instructions were converted into recognition result messages, and the mapping between each recognition result message and its corresponding operation was established. After comparing the instruction message with a recognition result message, the control unit judges whether they match: for example, the instruction message contains the recognition result message, the instruction message is contained in the recognition result message, the two express the same meaning, or parts of the two overlap. Alternatively, even when the two texts differ entirely, identical or similar semantics can give them a similarity above the similarity threshold, in which case the instruction message is likewise determined to match a recognition result message in the voice model.
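The enumerated match conditions can be sketched as follows; the threshold is assumed, and the "same meaning" case would need a semantic model, so it is approximated here by string similarity only:

```python
from difflib import SequenceMatcher

def is_match(instruction_message: str, result_message: str,
             similarity_threshold: float = 0.8) -> bool:
    """S400's match test: equality, containment in either direction, or a
    string similarity above the threshold."""
    if instruction_message == result_message:
        return True
    # either message contains the other
    if (result_message in instruction_message
            or instruction_message in result_message):
        return True
    ratio = SequenceMatcher(None, instruction_message, result_message).ratio()
    return ratio > similarity_threshold
```

So an instruction message such as "please attack" still matches the recognition result message "attack" through the containment test, without any semantic analysis.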
S500: executing the operation preset in the matched voice model, and controlling the application program based on that operation
Once the instruction message is successfully mapped to the voice model, the application program is controlled by the execution operation that is preset in the voice model and mapped to the recognition result message. That is, in the voice instruction recognition method of this embodiment, the mapping chain from instruction audio to operation instruction is: instruction audio - instruction message - recognition result message in the voice model - execution operation - operation controlling the application program. The execution operation may be a specific operation inside the application, such as fast-forwarding streaming media in "Tencent Video" or controlling a hero's actions in "Honor of Kings".
With this configuration, the parsing that maps a voice message to an execution operation is completed in advance during voice model training, so the parsing step can be skipped entirely during voice instruction recognition; in other words, the flow of the voice recognition operation is simplified.
Referring to fig. 2, in a preferred embodiment, the step S100 of starting the voice command recognition script in the intelligent terminal to load the voice model in the voice command recognition script includes:
s110: starting a voice instruction recognition script in the intelligent terminal and judging whether the voice instruction recognition script has a voice model or not
After the voice command recognition script is started, whether a voice model is available to be loaded in the script is judged.
S120: when no voice model exists, forming a prompt interface in the intelligent terminal to display information for activating the voice message receiving function
Without a voice model, training is required to form one. Voice model training can be completed either on a server or on the intelligent terminal, so during training the process is presented through an interactive medium such as the server's own display screen, a display screen connected to the server, or the display screen of the intelligent terminal. When training begins, a prompt interface is formed and shown on that display screen, displaying information that the voice message receiving function is activated. This informs the user who wants to form a voice model that voice messages can now be sent to the server, the intelligent terminal, or any voice-receiving device such as a microphone connected to either, so that speech recognition and model building can start.
S130: receiving at least one externally formed voice message
The displayed prompt interface asks the user to send voice messages to the device after entering the model training interface. Following the guidance of that interface, the user may send at least one voice message to the device (the server, the intelligent terminal, or a voice-receiving device such as a connected microphone, as described above): for example, operation instruction messages in pure Chinese such as "attack", "defend", "return to base", "settings" or "retreat"; operation instruction messages in a foreign language such as "attack", "defense", "back" or "done"; or operation instruction messages made of numbers such as "666", "333" or "886".
S140: recognizing each voice message to form at least one recognition result message, and displaying the recognition result messages on a mapping interface
After the voice messages are received, speech recognition is performed on each one to form at least one recognition result message. The recognition result message may correspond to the entire received voice message: for example, the user's input "attack everywhere" yields the recognition result message "attack everywhere". It may also correspond to part of the voice message, for example "everywhere" or "attack". The recognition result message is displayed on the device's display screen, specifically on a mapping interface, to show the user how the device recognized the voice message so the user can confirm its accuracy. When the recognition result message is accurate enough (above a set threshold, or confirmed by the user), the next step can be executed; when it is not (below the threshold, or not confirmed), the user can be asked to re-input the voice message, or the message can be re-recognized, until the recognition result message is sufficiently accurate.
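The confirm-or-retry behaviour in this step might be sketched as follows; every name, the threshold, and the retry limit are assumptions:

```python
def confirm_recognition(get_voice_message, recognize,
                        accuracy_threshold=0.9, max_attempts=3):
    """Re-request the voice message until the recognition result message is
    accurate enough (S140's retry behaviour), or give up after max_attempts."""
    for _ in range(max_attempts):
        voice_message = get_voice_message()      # ask the user to (re)speak
        result_message, accuracy = recognize(voice_message)
        if accuracy >= accuracy_threshold:       # accurate enough: accept it
            return result_message
    return None                                  # still inaccurate: give up
```

Here `recognize` stands in for the device's recognizer returning a result message and a confidence score, and `get_voice_message` for the interface that collects the user's (re-)input.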
S150: operation unit for displaying target application program on mapping interface
Besides the recognition result message, the mapping interface also displays at least one target application program, that is, an application that can use the voice model and execute corresponding operations according to it, such as a game application that performs game operations in response to voice messages, or a media application that performs streaming-media control in response to voice messages. On the mapping interface, each target application may be represented by a unique, easily identified operation unit, such as the application's name or icon. That is, the mapping interface displays both the recognition result messages and the operation units corresponding to the target applications, so the user can readily see which usage scenarios a recognition result message may correspond to.
S160: associating each recognition result message with one or more operation units, forming a configuration relationship, and storing
The user can input a control instruction on the mapping interface to associate each recognition result message with one or more operation units. This forms a mapping relationship between the recognition result message and the operation units, which extends to a mapping between the recognition result message and the target application, and further to a mapping between the voice message and a specific operation in the target application; this mapping is stored as a configuration relationship. For example, suppose the recognition result message is "attack". According to the user's mapping operation, "attack" is associated with game applications such as "Honor of Kings", "Call of Duty", and "Onmyoji", so that in the formed voice model the recognized "attack" corresponds to a specific operation of each target application. Concretely, the user's mapping operation may link the recognition result message with a specific icon of the application on the mapping interface, so that the original "attack" voice message is converted into pressing the attack icon of a game application such as "Honor of Kings", "Call of Duty", or "Onmyoji".
Through this configuration, the trained model supports multiple applications in the same scenario or in different scenarios, so one voice message can be used across multiple applications, saving the storage space occupied by the voice model; moreover, the user maps voice messages to operation units more directly.
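A minimal sketch of such a "configuration relationship", in which one recognition result message is associated with operation units in several target applications, might look as follows. The application and operation-unit names are illustrative placeholders, not part of the patent.

```python
# Sketch of the S160 configuration relationship: one recognition result
# message maps to operation units in multiple target applications, so a
# single voice message can drive all of them. All names are illustrative.

from collections import defaultdict

class ConfigurationRelationship:
    def __init__(self):
        # recognition result message -> list of (application, operation unit)
        self._mapping = defaultdict(list)

    def associate(self, recognition_result, application, operation_unit):
        self._mapping[recognition_result].append((application, operation_unit))

    def operations_for(self, recognition_result):
        return list(self._mapping.get(recognition_result, []))

config = ConfigurationRelationship()
# The same "attack" result is reused across three game applications.
config.associate("attack", "Honor of Kings", "attack_button")
config.associate("attack", "Call of Duty", "fire_button")
config.associate("attack", "Onmyoji", "skill_button")
```

Because the mapping is keyed by the recognition result message rather than by application, one stored voice model entry serves every associated application, which is the space saving the paragraph above describes.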
In a further preferred embodiment, the step S140 of recognizing each voice message to form at least one recognition result message and displaying the recognition result message on a mapping interface includes:
S141: analyzing the voice message and converting it into a text message
After the voice message is received, the voice message, in the form of a voice signal, can be converted into a text message by a voice recognition module. The voice recognition module used in this embodiment may be any common speech-to-text component, such as an existing speech-recognition APK.
S142: extracting key words in the text message;
For the converted text message, the keywords in it are extracted. The extracted keywords may be the whole text message (for example, when the text message contains few words), or the text message after removing noise words and words unrelated to the operation instruction, as described above.
S143: storing the keyword as at least one recognition result message, and sending the recognition result message to the server side to generate a voice model at the server side
The obtained keyword is stored as at least one recognition result message. When the device receiving the voice message is the intelligent terminal, the intelligent terminal can send the recognition result message to the server side; after being stored at the server side, the recognition result message is converted into a common voice model.
More specifically, the step S142 of extracting the keywords in the text message includes:
S142-1: acquiring the target application program and the commonly used phrases of the target application program;
Part or all of the applications installed in the user's intelligent terminal are selected as target applications according to the user's selection operation. After a target application is selected, the commonly used phrases of that target application are obtained. Taking the target application "Honor of Kings" as an example, once it is determined to be included, commonly used messages of "Honor of Kings" can be retrieved from the network as common phrases, such as "one wave", "jungle", "return to city", and "retreat"; common messages specific to the user can also be customized as common phrases according to the user's configuration, such as "marksman follow me" and "hold back"; and each interface of "Honor of Kings" can be recognized, converting the text displayed in the interface into common phrases, such as "mall", "settings", and "hero", which are displayed directly in the target application's interface. Taking the target application "Tencent Video" as an example, once it is determined to be included, common messages of "Tencent Video" can be retrieved from the network as common phrases, such as "quit", "recommend", and "increase volume"; common messages specific to the user can be customized as common phrases according to the user's configuration, such as "fast forward 15 seconds", "rewind 30 seconds", and "next"; and the text displayed in the interface can be converted into common phrases, such as "daily recommendation", "movies", "variety shows", and "sports", which are displayed directly in the target application's interface.
S142-2: comparing the text message with the commonly used phrases, and extracting the content which is matched with the commonly used phrases or has the similarity higher than a preset threshold value in the text message;
After the common phrases are obtained, the recognized text message is compared with them. The comparison may fall into the following cases:
1) The text message completely matches a common phrase
Taking the common phrases "attack" and "return to city" as an example, when the text message converted from the voice message is "attack" or "return to city", the text message represents exactly the voice the user input to the terminal; since it completely matches a common phrase, the text message is retained in full.
2) The text message fully contains a common phrase
Taking the common phrases "attack" and "return to city" as an example, when the text message converted from the voice message is "I want to attack", "attack the other side", "I want to return to the city", or "hurry back to the city", the text message contains a complete common phrase. In this case the text message is not retained in full; instead, the common phrase contained in it is extracted as the recognition result message, saving the storage space occupied by the voice model.
3) The text message is contained in a common phrase
Taking the common phrases "fast forward 15 seconds", "rewind 30 seconds", and "play some music to set the mood" as an example, when the text message converted from the voice message is "rewind", "fast forward", or "play some music", the whole text message forms part of a common phrase, i.e., the text message is contained in the common phrase. The text message may therefore either be retained in full (keeping only "rewind", "fast forward", or "play some music"), or be automatically mapped, based on the containment relation, to the common phrase that contains it; for example, when the text message is "rewind", the extracted phrase is "rewind 30 seconds", the common phrase closest to "rewind".
4) Part of the text message matches part of a common phrase
Taking the common phrases "fast forward 15 seconds", "rewind 30 seconds", and "play some music to set the mood" as an example, when the text message converted from the voice message is "I want to fast forward", "I want to rewind", or "I want to play music", part of the text message overlaps with part of a common phrase. In this case the overlapping part of the text message and the common phrase is retained, keeping only "fast forward", "rewind", or "play music".
5) The similarity between the text message and a common phrase is higher than a threshold
Taking the common phrases "fast forward 15 seconds", "rewind 30 seconds", and "play some music to set the mood" as an example, when the text message converted from the voice message is "I want to go forward", "I want to go back", or "I want to pick a song", the text message has little or no literal overlap with the common phrases, but the control instruction it expresses is substantially the same as that expressed by a common phrase. Therefore, in this case, besides recognizing the text message itself, its expressed meaning is recognized and compared with the meaning of each common phrase. If the meanings match, the text message and the common phrase are considered to have a certain degree of similarity, and if that similarity is greater than a set threshold, the text message or the common phrase may be selectively adopted as the keyword.
S142-3: saving the content as a keyword, or modifying the content to the most similar common phrase as the keyword
In each of the above cases, the extracted content is finally saved as a keyword, or the content is modified to a common phrase, with the common phrase serving as the standard. For cases 4) and 5) above in particular, it is preferable to take the common phrase as the standard, so that the extraction and meaning-understanding of the text message are simplified: since the common phrases already exist, the analysis of their meanings can be prepared in advance, simplifying the formation of the voice model.
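As an illustration, the matching cases above can be sketched in Python as follows. The similarity measure (a difflib ratio) and the threshold value are assumptions; the embodiment does not specify them. In cases 3) to 5) this sketch always maps to the common phrase, following the preference stated in S142-3.

```python
# Sketch of keyword extraction (S142-2 / S142-3): exact match, text
# containing a common phrase, text contained in a common phrase, and
# similarity above a preset threshold. The similarity measure and the
# threshold are illustrative assumptions.

from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.6  # the "preset threshold"; value assumed

def extract_keyword(text, common_phrases):
    # Case 1: complete match, keep the text as-is.
    if text in common_phrases:
        return text
    for phrase in common_phrases:
        # Case 2: the text fully contains a common phrase; keep the phrase.
        if phrase in text:
            return phrase
        # Case 3: the text is part of a common phrase; map to that phrase.
        if text in phrase:
            return phrase
    # Cases 4 / 5: choose the most similar common phrase when the
    # similarity clears the threshold, using the phrase as the standard.
    best = max(common_phrases, key=lambda p: SequenceMatcher(None, text, p).ratio())
    if SequenceMatcher(None, text, best).ratio() >= SIMILARITY_THRESHOLD:
        return best
    return None
```

In a real system the case-5 comparison would be semantic rather than string-based, as the paragraph on expressed meaning describes; the string ratio here only stands in for that step.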
In a preferred embodiment, the step S150 of displaying the operation unit of the target application on the mapping interface further includes:
S151: acquiring the type and key frames of the target application program;
A list of the applications installed in the intelligent terminal is obtained, and the type of each application, such as game, media, social, reading, or news, is determined for the applications the user has set as target applications, or for all applications. For each target application, at least one key frame captured while the target application is activated and running is also acquired, for example a display frame of the target application's start interface, of the operation interface entered after start, or of the most frequently used interface.
S152: extracting, from the key frames, part or all of the operation units for operating the target application program
After the key frames are acquired, some or all of the operation units corresponding to operations of the target application are extracted from them. For example, in a given key frame, the operation units may include an attack key, a defense key, and a skill key that are always displayed in the foreground, as well as a direction key, an indication key, and a guidance key that are displayed after the user touches the display screen.
In another preferred embodiment, the step S160 of associating each recognition result message with one or more operation units and saving after forming the configuration relationship includes:
S161: receiving an external operation executed on the mapping interface, and moving the position of the operation unit within the mapping interface according to the external operation;
The operation units are displayed on the mapping interface to inform the user which operations within the target application can be mapped into the voice model. After identifying an operation unit, the user applies an external operation to the display screen, such as a long press, click, or double click. When the user then moves the contact part, such as a finger or stylus, across the display screen, the operation unit moves along with it, changing the position of the operation unit within the mapping interface.
S162: associating the recognition result message with the operation unit when any operation unit moves to a position corresponding to a recognition result message;
The mapping interface also displays the recognition result messages, and a blank area can be arranged beside each recognition result message for establishing the mapping relationship. For example, if one or more operation units are moved into the blank area and held there for a certain time, the operation units are associated with that recognition result message. Thus, when any operation unit is moved, by the user's operation, to the position corresponding to a recognition result message, and the user's contact part then leaves the display screen, the final position of the operation unit is determined; if that final position corresponds to the recognition result message, the recognition result message is associated with the operation unit.
S163: and storing the association relationship between each operation unit and the recognition result message as the configuration relationship of the voice model.
After a recognition result message has been associated with operation units, the association relationship between each operation unit and the recognition result message is stored. If there is a next keyword, or a recognition result message corresponding to a next keyword, the configuration can continue with the next operation unit or recognition result message.
In a preferred embodiment, after the step S160 of associating each recognition result message with one or more operation units and storing the formed configuration relationship, the method further includes the following steps:
S170: naming the configuration relationship, and downloading the voice model from the server side;
Each saved configuration relationship is named according to the user's operation. The name may combine the target application and the operation of the voice model, such as "Honor of Kings attack" or "Call of Duty heal"; alternatively, several configurations may be packaged and saved together, named only after the target application. A native voice model is then downloaded from the server side.
S180: modifying the name of the voice model into the name of the configuration relationship, and storing the configuration relationship into the voice model;
After the native voice model is received, its name can be modified to the name of the configuration relationship, and the configuration relationship is saved into the voice model. Finally, the voice model is stored in a database, and the configuration interface or mapping interface is closed.
Referring to fig. 3, in a preferred embodiment, the step S500 of executing the predetermined execution operation in the matched speech model includes:
s510: according to execution operation preset in the voice model, a touch event aiming at a display unit of the intelligent terminal is constructed, and the touch event is sent to a control unit of the intelligent terminal;
The recognition result message in the mapped voice model is determined; according to the execution operation preset for that recognition result message, the coordinate positions on the intelligent terminal's display screen at which the operation needs to be executed are determined, and a touch event for the display unit is constructed based on those coordinate positions. Preferably, the construction of the touch event can be shown to the user interactively: for example, a prompt symbol, such as a ripple animation effect, or a prompt sound effect is presented on the display unit of the intelligent terminal as a notification that the touch event was constructed successfully, informing the user that the voice instruction has been recognized and mapped into a touch event.
S520: based on the injection scheme of the installation system of the intelligent terminal, the control unit generates touch control and takes effect to form an execution operation control application program
Touch events from different sources (the user's own touches, and touches mapped from a handle connected to the intelligent terminal by wire or wirelessly) are merged into an updated touch event, so that the intelligent terminal's multi-touch experience is preserved. Using an injection scheme of the intelligent terminal's installed system, such as an Android injection scheme, the control unit then generates the touch for the operation unit in the application and makes it take effect, finally forming the execution operation that controls the application.
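The construction and injection of such a touch event can be sketched as follows. This is a hedged illustration: real injection is platform-specific (for example Android input injection) and is stubbed out here, and all names and coordinates are assumptions.

```python
# Sketch of S510/S520: the execution operation preset in the voice model
# carries the display-screen coordinates to touch; a touch event is
# constructed for the display unit and handed to the control unit, which
# makes it take effect. Injection is stubbed; names are illustrative.

def build_touch_event(operation):
    """Construct a tap event from the coordinates preset in the voice model."""
    x, y = operation["coordinates"]
    return {"type": "tap", "x": x, "y": y}

def inject(event, control_unit):
    # The control unit makes the touch take effect in the running
    # application; a notification signal (e.g. a ripple animation) could
    # also be triggered here, as the embodiment suggests.
    control_unit.append(event)
    return event

# Example: the matched voice model maps "attack" to a screen position.
voice_model = {"attack": {"coordinates": (960, 840)}}
control_unit = []  # stand-in for the terminal's control unit
event = inject(build_touch_event(voice_model["attack"]), control_unit)
```

On an actual Android terminal the stubbed `inject` would be replaced by the system's input-injection mechanism, and merging with concurrent user or handle touches would happen before injection, as described above.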
Referring to fig. 4, a voice instruction recognition system is shown, which includes: a script module, disposed in an intelligent terminal, which, when activated, starts the voice instruction recognition script disposed in it; a loading module, which loads a voice model in the voice instruction recognition script of the script module; an audio acquisition device, which acquires instruction audio when an application in the intelligent terminal is started; and a control unit, which recognizes the instruction audio into an instruction message, compares the instruction message with the recognition result messages in the voice model, matches the instruction message to the voice model when the instruction message matches a recognition result message or the similarity is greater than a similarity threshold, and executes the execution operation preset in the matched voice model, so as to control the application based on the execution operation.
When the loading module has no voice model, the voice instruction recognition system further includes a voice model training module, comprising: a prompting unit, which forms a prompt interface and displays information for activating the voice message receiving function; a receiving unit, which receives at least one externally formed voice message; a recognition unit, which recognizes each voice message to form at least one recognition result message and displays the recognition result message on a mapping interface; an interaction unit, which forms the mapping interface and displays the operation units of the target application on it; and an association unit, which associates each recognition result message with one or more operation units to form a configuration relationship and then stores it. In a preferred embodiment, the association unit comprises: a moving element, connected to the interaction unit, which receives an external operation executed on the mapping interface and moves the position of the operation element within the mapping interface according to the external operation; a display element, which highlights the operation element while it is moving; an association element, which associates the recognition result message with the operation element when any operation element moves to the position corresponding to a recognition result message; and a storage element, which stores the association relationship between each operation element and the recognition result message as the configuration relationship of the voice model.
In an embodiment, a computer-readable storage medium is also disclosed, on which a computer program is stored. When executed by a processor, the computer program performs the steps of: starting a voice instruction recognition script in an intelligent terminal to load a voice model in the voice instruction recognition script; when an application in the intelligent terminal is started, starting the audio acquisition device of the intelligent terminal; acquiring instruction audio input to the audio acquisition device, and recognizing the instruction audio into an instruction message; comparing the instruction message with the recognition result messages in the voice model, and matching the instruction message to the voice model when the instruction message matches a recognition result message or the similarity is greater than a similarity threshold; and executing the execution operation preset in the matched voice model, and controlling the application based on the execution operation.
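The matching step at the heart of these claims, comparing the instruction message against the recognition result messages in the loaded voice model, can be sketched as follows. The similarity measure and threshold value are illustrative assumptions; the claims only require "matched, or similarity greater than a similarity threshold".

```python
# Sketch of the claimed matching step: the recognized instruction message is
# compared with each recognition result message in the voice model, and a
# match occurs on equality or when similarity exceeds the similarity
# threshold. The difflib ratio and the threshold value are assumptions.

from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.75  # value assumed

def match_instruction(instruction_message, voice_model):
    """voice_model: dict of recognition result message -> execution operation."""
    for result_message, operation in voice_model.items():
        similarity = SequenceMatcher(None, instruction_message, result_message).ratio()
        if instruction_message == result_message or similarity > SIMILARITY_THRESHOLD:
            return operation  # preset execution operation of the matched model
    return None

model = {"attack": "press_attack_key", "return to city": "press_recall_key"}
```

The returned execution operation would then drive the touch-event construction and injection described earlier.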
The intelligent terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile intelligent terminals such as a mobile phone, a smartphone, a notebook computer, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a navigation device, as well as fixed terminals such as a digital TV and a desktop computer. In the following, the terminal is assumed to be an intelligent terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed terminals.
It should be noted that the embodiments of the present invention are described by way of preferred examples and not by way of limitation, and those skilled in the art can make modifications and variations to the embodiments described above without departing from the spirit of the invention.

Claims (10)

1. A voice command recognition method, comprising the steps of:
starting a voice instruction recognition script in an intelligent terminal to load a voice model in the voice instruction recognition script;
when an application program in the intelligent terminal is started, starting the audio acquisition equipment of the intelligent terminal;
acquiring instruction audio input to the audio acquisition equipment, and recognizing the instruction audio into an instruction message;
comparing the instruction message with a recognition result message in a voice model, and matching the instruction message with the voice model when the instruction message is matched with the recognition result message or the similarity is greater than a similarity threshold value;
executing the preset execution operation in the matched voice model, and controlling the application program based on the execution operation.
2. The voice instruction recognition method according to claim 1,
in an intelligent terminal, the step of starting a voice instruction recognition script to load a voice model in the voice instruction recognition script comprises the following steps:
starting a voice instruction recognition script in the intelligent terminal, and judging whether the voice instruction recognition script has a voice model or not;
when the voice model does not exist, forming a prompt interface in the intelligent terminal and displaying information for activating the voice message receiving function;
receiving at least one externally formed voice message;
recognizing each voice message to form at least one recognition result message, and displaying the recognition result message on a mapping interface;
an operation unit of the target application program is further displayed on the mapping interface;
and associating each recognition result message with one or more operation units to form a configuration relationship and then storing the configuration relationship.
3. The voice instruction recognition method according to claim 2,
the steps of recognizing each voice message to form at least one recognition result message and displaying the recognition result message on a mapping interface comprise:
analyzing voice messages and converting the voice messages into text messages;
extracting key words in the text message;
and storing the keyword as at least one recognition result message, and sending the recognition result message to a server side so as to generate a voice model at the server side.
4. The voice instruction recognition method according to claim 3,
the step of extracting the keywords in the text message comprises the following steps:
acquiring a target application program and commonly used phrases of the target application program;
comparing the text message with the commonly used phrases, and extracting the content which is matched with the commonly used phrases or has the similarity higher than a preset threshold value in the text message;
and saving the content as a keyword or modifying the content to the common expression with the closest similarity as the keyword.
5. The voice instruction recognition method according to claim 2,
associating each recognition result message with one or more operation units, and saving after forming a configuration relationship, wherein the step comprises the following steps:
receiving an external operation executed on the mapping interface, and moving the position of the operation unit within the mapping interface according to the external operation;
associating the recognition result message with the operation unit when any operation unit moves to a position corresponding to a recognition result message;
and storing the association relationship between each operation unit and the recognition result message as the configuration relationship of the voice model.
6. The voice instruction recognition method according to claim 2,
associating each recognition result message with one or more operation units, and after the step of storing after forming a configuration relationship, further comprising the following steps:
naming the configuration relationship, and downloading the voice model from the server side;
modifying the name of the voice model into the name of the configuration relationship, and storing the configuration relationship into the voice model;
and saving the voice model to a database.
7. The voice instruction recognition method according to claim 1,
the step of executing the preset execution operation in the matched voice model and controlling the application program based on the execution operation comprises the following steps:
according to execution operation preset in the voice model, a touch event aiming at a display unit of the intelligent terminal is constructed, and the touch event is sent to a control unit of the intelligent terminal;
based on an injection scheme of the installed system of the intelligent terminal, the control unit generates the touch control and makes it take effect, so as to form the execution operation that controls the application program.
8. The voice instruction recognition method according to claim 7,
the steps of constructing a touch event for a display unit of the intelligent terminal according to execution operation preset in the voice model and sending the touch event to a control unit of the intelligent terminal further comprise:
and displaying a prompt symbol on a display unit of the intelligent terminal to serve as a notification signal for successful construction of the touch event.
9. A voice command recognition system, comprising:
the script module is arranged in an intelligent terminal, and when the script module is activated, the voice instruction recognition script arranged in the script module is started;
the loading module loads a voice model in the voice instruction recognition script in the script module;
the audio acquisition equipment is used for acquiring instruction audio when an application program in the intelligent terminal is started;
and the control unit is used for recognizing the instruction audio into an instruction message, comparing the instruction message with the recognition result message in the voice model, matching the instruction message with the voice model when the instruction message matches the recognition result message or the similarity is greater than a similarity threshold, and executing the preset execution operation in the matched voice model so as to control the application program based on the execution operation.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of:
starting a voice instruction recognition script in an intelligent terminal to load a voice model in the voice instruction recognition script;
when an application program in the intelligent terminal is started, starting the audio acquisition equipment of the intelligent terminal;
acquiring instruction audio input to the audio acquisition equipment, and recognizing the instruction audio into an instruction message;
comparing the instruction message with a recognition result message in a voice model, and matching the instruction message with the voice model when the instruction message is matched with the recognition result message or the similarity is greater than a similarity threshold value;
executing the preset execution operation in the matched voice model, and controlling the application program based on the execution operation.
CN202010074215.3A 2020-01-22 2020-01-22 Speech instruction recognition method, system and computer readable storage medium Active CN111292744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010074215.3A CN111292744B (en) 2020-01-22 2020-01-22 Speech instruction recognition method, system and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010074215.3A CN111292744B (en) 2020-01-22 2020-01-22 Speech instruction recognition method, system and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111292744A true CN111292744A (en) 2020-06-16
CN111292744B CN111292744B (en) 2023-04-28

Family

ID=71021309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010074215.3A Active CN111292744B (en) 2020-01-22 2020-01-22 Speech instruction recognition method, system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111292744B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010063475A (en) * 2008-09-08 2010-03-25 Weistech Technology Co Ltd Device and method for controlling voice command game
CN101377797A (en) * 2008-09-28 2009-03-04 腾讯科技(深圳)有限公司 Method for controlling game system by voice
CN105204838A * 2014-06-26 2015-12-30 金德奎 Method for controlling an application program by means of mobile phone voice control software
CN106297784A * 2016-08-05 2017-01-04 Method and system for quick-response voice recognition during play on an intelligent terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Ruifeng et al.: "Design of a home robot voice human-machine interaction *** based on RSC4128" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113934146A (en) * 2020-06-29 2022-01-14 阿里巴巴集团控股有限公司 Method and device for controlling Internet of things equipment and electronic equipment
CN112732379A (en) * 2020-12-30 2021-04-30 智道网联科技(北京)有限公司 Operation method of application program on intelligent terminal, terminal and storage medium
CN112732379B (en) * 2020-12-30 2023-12-15 智道网联科技(北京)有限公司 Method for running application program on intelligent terminal, terminal and storage medium
CN112767916A (en) * 2021-02-05 2021-05-07 百度在线网络技术(北京)有限公司 Voice interaction method, device, equipment, medium and product of intelligent voice equipment
CN112767916B (en) * 2021-02-05 2024-03-01 百度在线网络技术(北京)有限公司 Voice interaction method, device, equipment, medium and product of intelligent voice equipment

Also Published As

Publication number Publication date
CN111292744B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
US10832674B2 (en) Voice data processing method and electronic device supporting the same
US11582337B2 (en) Electronic device and method of executing function of electronic device
CN108121490B (en) Electronic device, method and server for processing multi-mode input
KR102414122B1 (en) Electronic device for processing user utterance and method for operation thereof
EP3392877B1 (en) Device for performing task corresponding to user utterance
US20160328205A1 (en) Method and Apparatus for Voice Operation of Mobile Applications Having Unnamed View Elements
CN111292744B (en) Speech instruction recognition method, system and computer readable storage medium
CN110457214B (en) Application testing method and device and electronic equipment
EP3603040B1 (en) Electronic device and method of executing function of electronic device
CN112970059A (en) Electronic device for processing user words and control method thereof
CN104184890A (en) Information processing method and electronic device
KR20190115356A (en) Method for Executing Applications and The electronic device supporting the same
US11151995B2 (en) Electronic device for mapping an invoke word to a sequence of inputs for generating a personalized command
CN112286486B (en) Operation method of application program on intelligent terminal, intelligent terminal and storage medium
CN112732379B (en) Method for running application program on intelligent terminal, terminal and storage medium
US20220270604A1 (en) Electronic device and operation method thereof
US20230081558A1 (en) Electronic device and operation method thereof
KR20140111574A (en) Apparatus and method for performing an action according to an audio command
CN112219235A (en) System comprising an electronic device for processing a user's speech and a method for controlling speech recognition on an electronic device
CN111326145B (en) Speech model training method, system and computer readable storage medium
CN113987142A (en) Voice intelligent interaction method, device, equipment and storage medium with virtual doll
CN113470614A (en) Voice generation method and device and electronic equipment
KR20220118818A (en) Electronic device and operation method thereof
KR20200077936A (en) Electronic device for providing reaction response based on user status and operating method thereof
CN112951232B (en) Voice input method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230315

Address after: 518055 1501, Building 1, Chongwen Park, Nanshan Zhiyuan, No. 3370, Liuxian Avenue, Fuguang Community, Taoyuan Street, Nanshan District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Grey Shark Technology Co.,Ltd.

Address before: 210022 Room 601, block a, Chuangzhi building, 17 Xinghuo Road, Jiangbei new district, Nanjing City, Jiangsu Province

Applicant before: Nanjing Thunder Shark Information Technology Co.,Ltd.

GR01 Patent grant