CN111243580A - Voice control method, device and computer readable storage medium - Google Patents

Voice control method, device and computer readable storage medium

Info

Publication number
CN111243580A
CN111243580A (application CN201811433203.4A)
Authority
CN
China
Prior art keywords
interface
control
package name
application program
file
Prior art date
Legal status
Granted
Application number
CN201811433203.4A
Other languages
Chinese (zh)
Other versions
CN111243580B (en)
Inventor
孙向作
Current Assignee
TCL Research America Inc
Original Assignee
TCL Research America Inc
Priority date
Filing date
Publication date
Application filed by TCL Research America Inc filed Critical TCL Research America Inc
Priority to CN201811433203.4A
Publication of CN111243580A
Application granted
Publication of CN111243580B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention belongs to the field of artificial intelligence and provides a voice control method, a voice control device, and a computer-readable storage medium. The voice control method comprises the following steps: acquiring voice content input by a user and parsing keywords from it; searching a preset database for control display text matching the keywords and extracting the corresponding interface package name from the preset database as the target interface package name, where the preset database contains the correspondence between control display text and interface package names; and starting the interface corresponding to the target interface package name. This improves the flexibility of voice control and avoids the coupling problems caused by direct integration with third-party applications.

Description

Voice control method, device and computer readable storage medium
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a voice control method, a voice control device and a computer readable storage medium.
Background
Currently, voice control technology is widely used to control various types of terminal devices more conveniently. When controlling a terminal device by voice, in order for speech recognition and semantic understanding to preferentially hit the control functions on the device's current interface, the system needs to know which application and interface the user is currently using and the important information displayed on that interface, such as the text shown on the interface's controls.
However, in the prior art, accomplishing this requires the speech recognition module to be integrated and adapted with each third-party application. The workload and difficulty of this integration are considerable, so it is hard for the speech recognition module to control all applications. Moreover, because the coupling between the speech recognition module and each application is strong, subsequent application updates may break the speech recognition module's control over the application, making later maintenance difficult as well.
Disclosure of Invention
In view of this, embodiments of the present invention provide a voice control method, an apparatus, and a computer-readable storage medium, to solve the problem that the speech recognition module in existing voice control methods has difficulty controlling some third-party applications.
A first aspect of an embodiment of the present invention provides a voice control method, including: analyzing keywords from voice content input by a user; searching control display characters matched with the keywords in a preset database, extracting an interface package name corresponding to the control display characters from the preset database as a target interface package name, wherein the preset database contains the corresponding relation between the control display characters and the interface package name; and starting an interface corresponding to the target interface package name.
A second aspect of an embodiment of the present invention provides a voice control apparatus, including: the first acquisition module is used for analyzing keywords from voice content input by a user; the searching module is used for searching the control display characters matched with the keywords in a preset database, extracting the interface packet name corresponding to the control display characters from the preset database as a target interface packet name, and enabling the preset database to contain the corresponding relation between the control display characters and the interface packet name; and the starting module is used for starting the interface corresponding to the target interface package name.
A third aspect of an embodiment of the present invention provides a voice control apparatus, including: memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method provided by the first aspect of an embodiment of the present invention are implemented when the computer program is executed by the processor.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method provided by the first aspect of the embodiments of the present invention.
Compared with the prior art, the embodiments of the invention have the following beneficial effects: keywords are parsed from the voice content input by the user, control display text matching the keywords is searched in a database generated from the interface data of applications, and the interface package name corresponding to the control display text is extracted to determine the interface the user wants to start; that interface is then started. In this way the voice control module controls an application's interfaces without being adapted to and integrated with the third-party application, which improves the flexibility of voice control and avoids the coupling problems caused by direct integration with third-party applications.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of an implementation of a voice control method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of an interface provided by an embodiment of the present invention;
FIG. 3 is a flow chart of generating the preset database according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a specific implementation of the preset database generation method S302 according to an embodiment of the present invention;
fig. 5 is a block diagram of a voice control apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a voice control apparatus according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 shows an implementation flow of a voice control method provided by an embodiment of the present invention, which is detailed as follows:
in S101, a voice content input by a user is acquired, and a keyword is parsed from the voice content.
In the embodiment of the invention, the voice content input by the user can be converted into corresponding text by a speech recognition module. Illustratively, Android's RecognizerIntent-based speech recognizer can be used to recognize the voice content input by the user and generate the keywords.
In S102, the control display text matched with the keyword is searched in a preset database, and the corresponding interface package name is extracted from the preset database according to the control display text and is used as the target interface package name, where the preset database includes a corresponding relationship between the control display text and the interface package name.
In the embodiment of the invention, the speech recognition module does not need to be directly adapted to and integrated with third-party applications, and the keywords it parses do not need to be fed into a specific application; instead, data matching the keywords is looked up in a preset database, and the corresponding interface package name is found from that data. It can be understood that an interface package name is the name of an interface package; among the files stored on the terminal device, each interface package name corresponds to a unique interface package, and the interface package contains the code for starting an application and jumping to the corresponding interface, so the interface corresponding to an interface package name can be started by executing the code in its interface package.
It should be noted that the data in the preset database in the embodiment of the present invention is generated in advance by analyzing the relevant files of installed applications; the specific analysis process is described in detail in the following embodiments. It should be emphasized that the preset database contains at least two types of data, namely control display text and interface package names, and stores the correspondence between the two.
The control display text refers to the text displayed on a control of an interface. For example, as shown in fig. 2, the current interface is a "TV guard" interface containing at least five controls, each of which displays text, such as "one-key acceleration". This text helps the user select the corresponding control. It can be understood that after the user clicks the control displaying "one-key acceleration", the current interface may jump to another interface to execute the one-key acceleration function, or the function may be executed without an interface jump.
Each interface corresponds to an interface package name; for example, the "TV guard" interface has its own corresponding interface package name.
In the embodiment of the present invention, since the preset database includes the corresponding relationship between the control display text and the interface package name, when the keyword is known, the control display text matched with the keyword can be found out from the preset database, and then the interface package name corresponding to the control display text is found out and is used as the target interface package name. For example: if the keyword 'one-key acceleration' is analyzed from the voice input by the user, the control display characters matched with the 'one-key acceleration' are searched in a preset database, and then the interface package name corresponding to the control display characters is determined.
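The lookup in S102 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the preset database is modeled as a plain dict, and the package names and the substring-based matching rule are invented for the example.

```python
# Toy model of the preset database: control display text -> interface
# package name. The names below are illustrative only.
PRESET_DB = {
    "one-key acceleration": "com.example.guard.AccelerateActivity",
    "garbage cleaning": "com.example.guard.CleanActivity",
}

def find_target_package(keyword):
    """Return the interface package name whose control display text
    matches the parsed keyword, or None if nothing matches."""
    for display_text, package_name in PRESET_DB.items():
        # Simple bidirectional substring match stands in for whatever
        # matching logic a real implementation would use.
        if keyword in display_text or display_text in keyword:
            return package_name
    return None
```

With the keyword "one-key acceleration" parsed from the user's speech, `find_target_package` returns the corresponding target interface package name; an unmatched keyword yields `None`.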
In S103, an interface corresponding to the target interface package name is started.
It can be understood that, as described above, since the interface package includes the code for starting an application and entering the corresponding interface, the interface corresponding to the target interface package name can be started by executing the code in the interface package corresponding to the target interface package name.
In the embodiment of the invention, keywords are parsed from the voice content input by the user, control display text matching the keywords is searched in a database generated from the interface data of applications, and the interface package name corresponding to the control display text is extracted to determine the interface the user wants to start; that interface is then started. In this way the voice control module controls an application's interfaces without being adapted to and integrated with the third-party application, which improves the flexibility of voice control and avoids the coupling problems caused by direct integration with third-party applications.
The above embodiments mention that the preset database plays an important role in implementing the voice control method. This embodiment describes how the preset database is generated; fig. 3 shows the generation flow of the preset database provided by an embodiment of the present invention, detailed as follows:
in S301, a layout file of each interface in an installed application is obtained, where the layout file of the interface includes control data of each control in the interface, and the control data includes control display text.
As is well known, in the Android system each interface corresponds to a layout file, and the layout file contains data for multiple elements of the interface, including the control data of each control. The control data includes the control type (such as text-display controls, button controls, image-display controls, and so on), control ID, control width, control height, control display text, and so on. Take the following pseudo-code from a layout file as an example:
<Button
android:id="@+id/btnPre"
android:layout_width="124dip"
android:layout_height="37dip"
android:text="one-key acceleration" />
It indicates that the control's type is a button control; the control ID is btnPre; the control width is 124 units; the control height is 37 units; and the control display text is "one-key acceleration".
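Extracting such control data from a layout file can be sketched with a standard XML parser. This is an illustrative example, not the patent's code: a real Android layout declares the "android" namespace, so the toy layout below does the same to parse as plain XML.

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"

# Toy layout fragment mirroring the pseudo-code above.
LAYOUT = """<Button xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/btnPre"
    android:layout_width="124dip"
    android:layout_height="37dip"
    android:text="one-key acceleration" />"""

def parse_control(xml_text):
    """Return the control data fields discussed in the text."""
    elem = ET.fromstring(xml_text)
    attr = lambda name: elem.get("{%s}%s" % (ANDROID_NS, name))
    return {
        "type": elem.tag,              # e.g. Button (a button-class control)
        "id": attr("id"),              # e.g. @+id/btnPre
        "width": attr("layout_width"),
        "height": attr("layout_height"),
        "text": attr("text"),          # the control display text
    }
```

Running `parse_control(LAYOUT)` yields the same five fields the paragraph above reads off the pseudo-code.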
Specifically, the application file of the application is searched out under a preset directory and decompiled to generate an application decompilation file. The application decompilation file contains the interface files of all interfaces of the application, and an interface file contains the file ID of the interface's layout file; the layout file corresponding to that file ID is then retrieved.
In the embodiment of the present invention, during Android system startup an application management service, PackageManagerService, is started. This service can scan a preset directory in the system to find the application file of each installed application, that is, the files with the .apk suffix.
On the one hand, the interface name and interface package name of each interface contained in an application are obtained through the application management service; on the other hand, the application file is decompiled with the apktool tool in the Android system to generate an application decompilation file, which contains the Smali code of the application. While the application decompilation file is generated, a corresponding Smali directory is created according to the hierarchical structure of the application file, and every class in the application file has its own Smali file under that directory. Notably, the application decompilation file includes the interface files of all interfaces of the application, and an interface file contains the file ID of the interface's layout file.
Illustratively, assume the name of an interface is com.sunxz.test.MainActivity. Then a Smali directory with the structure com\sunxz\test\ is generated, and under this directory a decompiled file named MainActivity is created. Assume the contents of the application decompilation file are as follows:
.class public Lcom/sunxz/test/MainActivity;
.super Landroid/app/Activity;
.source "MainActivity.java"
# virtual methods
.method protected onCreate(Landroid/os/Bundle;)V
.locals 3
.parameter "savedInstanceState"
.prologue
.line 14
invoke-super {p0, p1}, Landroid/app/Activity;->onCreate(Landroid/os/Bundle;)V
.line 15
const/high16 v2, 0x7f03
invoke-virtual {p0, v2}, Lcom/sunxz/test/MainActivity;->setContentView(I)V
Here the ".class" directive on the first line specifies the class name of the current class; the ".super" directive on the second line specifies its parent class; and the ".source" directive on the third line specifies the source file name of the current class. "# virtual methods" marks a method declaration, ".parameter" is a parameter directive, ".prologue" marks the start of the code, and "invoke-virtual" is a method-call instruction. The last line of code sets the view of MainActivity: the layout indicated by the method parameter is loaded through the setContentView(I)V method. invoke-virtual is an opcode representing a method call; {p0, v2} are the registers holding the parameters; Lcom/sunxz/test/MainActivity; is the type of the object on which the method is invoked; and setContentView(I)V is the specific method called, where I means the parameter type is int and V means the return type is void. In this line of disassembled code, the registers p0 and v2 hold the MainActivity object and an int value respectively. The int value is defined by the instruction "const/high16 v2, 0x7f03" on the second-to-last line, which assigns the value 0x7f03 to register v2; through this value, the layout file with file ID 0x7f03 that the Activity loads can be determined.
As can be seen from the above example, the file ID of the layout file of any interface of an application can be found through the application's decompilation file.
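The scan just described can be sketched as a small text search over the decompiled Smali: find the constant loaded into the register that setContentView consumes. This toy version (invented for illustration, not the patent's code) only handles the const/high16 plus invoke-virtual pattern shown above; a real implementation would track registers across the whole method.

```python
import re

# Minimal Smali fragment mirroring the last two lines of the example.
SMALI = """
const/high16 v2, 0x7f03
invoke-virtual {p0, v2}, Lcom/sunxz/test/MainActivity;->setContentView(I)V
"""

def find_layout_id(smali_text):
    """Return the hex file ID passed to setContentView(I)V, or None."""
    last_const = {}  # register name -> last constant assigned to it
    for line in smali_text.splitlines():
        m = re.match(r"\s*const(?:/high16)?\s+(v\d+),\s*(0x[0-9a-fA-F]+)", line)
        if m:
            last_const[m.group(1)] = m.group(2)
        m = re.search(r"invoke-virtual\s*\{p\d+,\s*(v\d+)\}.*->setContentView\(I\)V", line)
        if m:
            return last_const.get(m.group(1))
    return None
```

On the fragment above, `find_layout_id(SMALI)` recovers "0x7f03", the layout file ID discussed in the example.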
In S302, according to the interface package name and the layout file of each interface in the installed application program, storing the corresponding relationship between the control display text and the interface package name in the preset database.
As described above, on the one hand, the application management service can obtain the interface names and interface package names of the interfaces contained in an application, including the interface package name of the interface the terminal device is currently displaying; on the other hand, the control display text of each control on an interface can be parsed from the layout file obtained in the previous step. Therefore, the correspondence between control display text and interface package names can be established.
Notably, in the embodiment of the present invention, the interface package name corresponding to a piece of control display text is the interface package name of the interface that the terminal device displays after the user clicks the control on which that text is shown.
Specifically, the installed applications are started one by one; the operations shown in fig. 4 are performed on the started application until all controls of all its interfaces have been selected, and then the operations shown in fig. 4 are performed on the next installed application.
Fig. 4 shows a flowchart of a specific implementation of the preset database generation method S302 according to an embodiment of the present invention, which is detailed as follows:
in S3021, the interface package name of the current interface is obtained as the first interface package name, the layout file of the current interface is extracted, and whether all the controls in the layout file are selected is determined.
It will be appreciated that the "current interface" is the interface the terminal device is displaying. For example, after an application is started, its main interface is entered first, so the current interface is the main interface; if some control on the main interface is operated, another interface may be entered, and the current interface is then no longer the main interface.
By the method in the embodiment, the layout file of each interface can be acquired, and naturally, the layout file of the current interface can also be acquired. As described above, the layout file of an interface includes control data of a plurality of controls, and in the embodiment of the present invention, each control in the layout file is selected one by one as a selected control for subsequent calculation, so that a control in a layout file may be selected or unselected.
In S3022, if all the controls in the layout file are selected, returning to the previous interface of the current interface in the application program, and re-executing S3021;
it will be appreciated that since there is operation of the simulated click control in a subsequent step, the current interface may not already be the main interface of the application, but rather a sub-interface of several layers below the main interface. If the current interface has no previous interface in the application program, the current interface is proved to be the main interface, and according to the overall logic of fig. 4, if all the controls in the main interface of the application program have been selected, all the controls of all the interfaces in the application program are proved to be selected, as described above, the logic described in fig. 4 will be skipped, and then the operation shown in fig. 4 is performed on another installed application program.
Notably, the "current interface" naturally changes after returning to the previous interface.
In S3023, if all the controls in the layout file are not selected, selecting one unselected control in the layout file as a selected control.
As described above, in the embodiment of the present invention, since each control in the layout file is selected one by one as the selected control, an unselected control is selected as the selected control from the layout file in this step.
In S3024, the control display text of the selected control is extracted from the layout file, and after the selected control is clicked in a simulated manner, the interface package name of the current interface is obtained again as the second interface package name.
As can be seen from the description in the above embodiments, the control display text of the control included in an interface may be extracted from the layout file of the interface.
In the embodiment of the present invention, there are two possibilities after the simulated click on the selected control: either the current interface jumps to another, new interface, in which case the first and second interface package names differ; or the current interface stays unchanged, in which case the first and second interface package names are the same.
Optionally, the focus on the display screen is set on the selected control and a click command is sent programmatically, thereby simulating a click on the selected control.
In S3025, it is determined whether the first interface package name is the same as the second interface package name.
In S3026, if the first interface packet name is the same as the second interface packet name, the process returns to S3021.
In S3027, if the first interface package name is different from the second interface package name, storing the correspondence between the control display text of the selected control and the second interface package name in the preset database, and returning to execute S3021.
It can be understood that, by the above method, a corresponding relationship between the control display text and the interface package name can be established, and specifically, the interface package name is an interface package name of an interface entered after clicking the control corresponding to the control display text.
Optionally, the control display text, control type, control width, and control height of the selected control are stored in the preset database. It can be understood that these data specify the display effect of the control on the screen: the control display text prompts the user to click the corresponding control by showing text on it; the control type specifies the kind of control, for example a text-display control, a button control, or an image-display control; and the control width and height indicate the size of the control's display area on the screen.
Further, in another embodiment of the present invention, a monitoring function is provided in the terminal device. On the one hand, if an application is detected to have been uninstalled, the interface package names of all interfaces of the uninstalled application are taken as selected interface package names, and all data in the preset database containing those selected interface package names are deleted.
On the other hand, if a newly installed application is detected, the correspondence between the control display text of each control of each interface in the new application and the interface package names is generated by the method described in the above embodiments and added to the preset database.
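The two monitoring branches can be sketched as follows, with the preset database again modeled as a dict from control display text to interface package name. All names and package prefixes here are illustrative, not from the patent.

```python
def on_app_uninstalled(db, removed_interface_packages):
    """Delete every entry whose interface package name belongs to the
    uninstalled application."""
    removed = set(removed_interface_packages)
    # Collect keys first so the dict is not mutated while iterating.
    for text in [t for t, pkg in db.items() if pkg in removed]:
        del db[text]

def on_app_installed(db, new_entries):
    """Merge the correspondences generated for a newly installed app."""
    db.update(new_entries)
```

A usage example: uninstalling a video app removes its entries while the TV-guard entries survive, and installing a new app simply merges its generated correspondences in.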
Fig. 5 shows a block diagram of the voice control apparatus provided by an embodiment of the present invention, corresponding to the voice control method described in the above embodiments. For convenience of description, only the parts related to the embodiment of the present invention are shown.
Referring to fig. 5, the apparatus includes:
a first obtaining module 501, configured to obtain a voice content input by a user, and analyze a keyword from the voice content;
the searching module 502 is configured to search a preset database for control display characters matched with the keywords, and extract a corresponding interface package name from the preset database according to the control display characters as a target interface package name, where the preset database includes a corresponding relationship between the control display characters and the interface package name;
the starting module 503 is configured to start an interface corresponding to the target interface package name.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a layout file of each interface in the installed application program, wherein the layout file of the interface comprises control data of each control in the interface, and the control data comprises control display characters;
and the storage module is used for storing the corresponding relation between the control display characters and the interface package names into the preset database according to the interface package names and the layout files of all the interfaces in the installed application program.
Optionally, the second obtaining module includes:
the decompiling sub-module is used for searching out an application program file of the application program under a preset directory, performing decompiling on the application program file and generating an application decompiling file, wherein the application decompiling file comprises interface files of all interfaces of the application program, and the interface files comprise file IDs of layout files of the interfaces;
and the calling submodule is used for calling the layout file corresponding to the file ID.
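As a rough illustration of what the decompiling and calling sub-modules produce: a decompiled Android layout file is an XML tree whose text-bearing controls carry their display text in the `android:text` attribute. The sketch below extracts that text with the Python standard library; the sample layout markup is a made-up example, and real decompiler output may differ in structure.

```python
# Extract the control display text of each control from a (decompiled) layout XML.
import xml.etree.ElementTree as ET

# Attributes in the android: namespace appear fully qualified in ElementTree.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def extract_control_texts(layout_xml):
    root = ET.fromstring(layout_xml)
    # Walk every element in document order; keep only controls that display text.
    return [e.get(ANDROID_NS + "text") for e in root.iter()
            if e.get(ANDROID_NS + "text") is not None]

SAMPLE_LAYOUT = """<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android">
    <Button android:text="Play" />
    <TextView android:text="Now Playing" />
</LinearLayout>"""
```

Note that text resolved from string resources at run time would not appear literally in the layout file; a full implementation would also have to resolve resource references, which this sketch omits.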
Optionally, the storage module is specifically configured to:
starting the installed application programs one by one, and performing the following operations on each started application program until all controls of all interfaces in the application program have been selected:
S1: acquiring the interface package name of the current interface as a first interface package name, extracting the layout file of the current interface, and judging whether all controls in the layout file have been selected; if not, selecting one unselected control in the layout file as the selected control and executing step S2; if all the controls in the layout file have been selected, returning to the previous interface of the current interface in the application program and re-executing the operations of acquiring the interface package name of the current interface as the first interface package name, extracting the layout file of the current interface, and judging whether all controls in the layout file have been selected;
S2: extracting the control display text of the selected control from the layout file, simulating a click on the selected control, and then re-acquiring the interface package name of the current interface as a second interface package name;
S3: if the first interface package name is the same as the second interface package name, returning to execute step S1;
S4: if the first interface package name is different from the second interface package name, storing the correspondence between the control display text of the selected control and the second interface package name into the preset database, and returning to execute step S1.
Optionally, the apparatus further comprises:
a monitoring execution module, configured to, if it is monitored that an application program has been uninstalled, take the interface package names of all interfaces included in the uninstalled application program as selected interface package names and delete the entries in the preset database containing the selected interface package names.
Fig. 6 is a schematic diagram of a voice control apparatus according to an embodiment of the present invention. As shown in fig. 6, the voice control apparatus of this embodiment includes: a processor 60, a memory 61, and a computer program 62, such as a voice control program, stored in the memory 61 and executable on the processor 60. When executing the computer program 62, the processor 60 implements the steps in the voice control method embodiments described above, such as steps S101 to S103 shown in fig. 1. Alternatively, when executing the computer program 62, the processor 60 implements the functions of the modules/units in the above-mentioned apparatus embodiments, for example, the functions of the modules 501 to 503 shown in fig. 5.
The voice control apparatus 6 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or other computing device. The voice control apparatus may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the voice control apparatus 6 and does not constitute a limitation on it; the apparatus may include more or fewer components than those shown, combine some components, or use different components. For example, the voice control apparatus may also include input/output devices, network access devices, buses, and the like.
The processor 60 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the voice control apparatus 6, such as a hard disk or memory of the voice control apparatus 6. The memory 61 may also be an external storage device of the voice control apparatus 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the voice control apparatus 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the voice control apparatus 6. The memory 61 is used to store the computer program and other programs and data required by the voice control apparatus, and may also be used to temporarily store data that has been output or is to be output.

It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be distributed among different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as separate products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A voice control method, comprising:
analyzing keywords from voice content input by a user;
searching a preset database for control display text matching the keywords, and extracting the corresponding interface package name from the preset database according to the control display text as a target interface package name, wherein the preset database comprises the correspondence between control display text and interface package names;
and starting an interface corresponding to the target interface package name.
2. The voice control method of claim 1, further comprising, prior to the obtaining of the voice content input by the user:
obtaining a layout file of each interface in an installed application program, wherein the layout file of the interface comprises control data of each control in the interface, and the control data comprises control display characters;
and storing the corresponding relation between the control display characters and the interface package names into the preset database according to the interface package names and the layout files of all the interfaces in the installed application program.
3. The voice control method according to claim 2, wherein the obtaining of the layout file of each interface in the installed application program comprises:
searching out an application program file of the application program under a preset directory, performing decompiling on the application program file, and generating an application decompiling file, wherein the application decompiling file comprises interface files of all interfaces of the application program, and the interface files comprise file IDs of layout files of the interfaces;
and calling a layout file corresponding to the file ID.
4. The voice control method according to claim 2, wherein the storing the correspondence between the control display text and the interface package name into the preset database according to the interface package name and the layout file of each interface in the installed application program comprises:
starting the installed application programs one by one, and executing the following operations on the started application programs until all controls of all interfaces in the application programs are selected:
S1: acquiring an interface package name of a current interface as a first interface package name, extracting a layout file of the current interface, and judging whether all controls in the layout file have been selected;
S2: if not all the controls in the layout file have been selected, selecting one unselected control in the layout file as a selected control, extracting the control display text of the selected control from the layout file, simulating a click on the selected control, and then re-acquiring the interface package name of the current interface as a second interface package name;
S3: if the first interface package name is different from the second interface package name, storing the correspondence between the control display text of the selected control and the second interface package name into the preset database, and returning to execute step S1.
5. The voice control method of claim 4, further comprising:
if all the controls in the layout file are selected, returning to the previous interface of the current interface in the application program, re-executing the operation of acquiring the interface package name of the current interface as the first interface package name, extracting the layout file of the current interface, and judging whether all the controls in the layout file are selected.
6. The voice control method of claim 4, further comprising:
and if the first interface package name is the same as the second interface package name, returning to execute step S1.
7. The voice control method of claim 1, further comprising:
and if it is monitored that the application program has been uninstalled, taking the interface package names of all interfaces included in the uninstalled application program as selected interface package names, and deleting the entries in the preset database containing the selected interface package names.
8. The voice control method of claim 4, further comprising:
and if the first interface package name is different from the second interface package name, storing the control display characters, the control types, the control widths and the control heights of the selected controls into the preset database.
9. A voice control apparatus, comprising:
the first acquisition module is used for analyzing keywords from voice content input by a user;
the searching module is used for searching the control display characters matched with the keywords in a preset database, extracting corresponding interface packet names from the preset database according to the control display characters to serve as target interface packet names, and the preset database comprises the corresponding relation between the control display characters and the interface packet names;
and the starting module is used for starting the interface corresponding to the target interface package name.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the speech control method according to any one of claims 1 to 8.
CN201811433203.4A 2018-11-28 2018-11-28 Voice control method, device and computer readable storage medium Active CN111243580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811433203.4A CN111243580B (en) 2018-11-28 2018-11-28 Voice control method, device and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN111243580A true CN111243580A (en) 2020-06-05
CN111243580B CN111243580B (en) 2023-06-09

Family

ID=70879177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811433203.4A Active CN111243580B (en) 2018-11-28 2018-11-28 Voice control method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111243580B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347277A (en) * 2020-10-28 2021-02-09 同辉佳视(北京)信息技术股份有限公司 Menu generation method and device, electronic equipment and readable storage medium
CN112783550A (en) * 2021-01-25 2021-05-11 维沃软件技术有限公司 Application program management method and device
WO2022012579A1 (en) * 2020-07-14 2022-01-20 维沃移动通信有限公司 Message display method, apparatus, and electronic device

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200910201A (en) * 2007-08-29 2009-03-01 Inventec Corp System and method thereof for switching a display interface
CN103645906A (en) * 2013-12-25 2014-03-19 上海斐讯数据通信技术有限公司 Method and system for realizing interface re-layout based on fixed interface layout document
CN103885783A (en) * 2014-04-03 2014-06-25 深圳市三脚蛙科技有限公司 Voice control method and device of application program
CN103970514A (en) * 2013-01-28 2014-08-06 腾讯科技(深圳)有限公司 Information acquisition method and device for Android application program installation package
JP2014146260A (en) * 2013-01-30 2014-08-14 Fujitsu Ltd Voice input/output database search method, program and device
CN104599669A (en) * 2014-12-31 2015-05-06 乐视致新电子科技(天津)有限公司 Voice control method and device
CN105138357A (en) * 2015-08-11 2015-12-09 中山大学 Method and device for implementing mobile application operation assistant
CN105957530A (en) * 2016-04-28 2016-09-21 海信集团有限公司 Speech control method, device and terminal equipment
CN106293600A (en) * 2016-08-05 2017-01-04 三星电子(中国)研发中心 A kind of sound control method and system
US20170031652A1 (en) * 2015-07-29 2017-02-02 Samsung Electronics Co., Ltd. Voice-based screen navigation apparatus and method
US20180090142A1 (en) * 2016-09-27 2018-03-29 Fmr Llc Automated software execution using intelligent speech recognition
CN107948698A (en) * 2017-12-14 2018-04-20 深圳市雷鸟信息科技有限公司 Sound control method, system and the smart television of smart television
CN108009078A (en) * 2016-11-01 2018-05-08 腾讯科技(深圳)有限公司 A kind of application interface traversal method, system and test equipment
CN108109618A (en) * 2016-11-25 2018-06-01 宇龙计算机通信科技(深圳)有限公司 voice interactive method, system and terminal device
US20180167490A1 (en) * 2016-12-14 2018-06-14 Dell Products, Lp System and method for automated on-demand creation of and execution of a customized data integration software application
CN108364644A (en) * 2018-01-17 2018-08-03 深圳市金立通信设备有限公司 A kind of voice interactive method, terminal and computer-readable medium



Also Published As

Publication number Publication date
CN111243580B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN106980504B (en) Application program development method and tool and equipment thereof
CN109376166B (en) Script conversion method, script conversion device, computer equipment and storage medium
CN109002510B (en) Dialogue processing method, device, equipment and medium
WO2021017735A1 (en) Smart contract formal verification method, electronic apparatus and storage medium
US20160357519A1 (en) Natural Language Engine for Coding and Debugging
CN111243580B (en) Voice control method, device and computer readable storage medium
US9594845B2 (en) Automating web tasks based on web browsing histories and user actions
JP2020042784A (en) Method and apparatus for operating intelligent terminal
WO2019169723A1 (en) Test case selection method, device and equipment, and computer-readable storage medium
CN111385633B (en) Resource searching method based on voice, intelligent terminal and storage medium
CN108197024B (en) Embedded browser debugging method, debugging terminal and computer readable storage medium
CN108415998B (en) Application dependency relationship updating method, terminal, device and storage medium
KR20180129623A (en) Apparatus for statically analyzing assembly code including assoxiated multi files
CN114036439A (en) Website building method, device, medium and electronic equipment
CN111200744B (en) Multimedia playing control method and device and intelligent equipment
CN109902726B (en) Resume information processing method and device
EP3519964B1 (en) Electronic apparatus for recording debugging information and control method thereof
US11449313B2 (en) System and method applied to integrated development environment
CN111385661A (en) Method and terminal for controlling full-screen playing through voice
CN108959646B (en) Method, system, device and storage medium for automatically verifying communication number
CN111151008B (en) Verification method and device for game operation data, configuration background and medium
RU2595763C2 (en) Method and apparatus for managing load on basis of android browser
CN113761588A (en) Data verification method and device, terminal equipment and storage medium
CN112711435A (en) Version updating method, version updating device, electronic equipment and storage medium
CN110888690A (en) Application starting method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL Corp.

GR01 Patent grant