CN113157964A - Method and device for searching data set through voice and electronic equipment - Google Patents
- Publication number
- CN113157964A (application number CN202110261641.2A)
- Authority
- CN
- China
- Prior art keywords
- voice
- data set
- search term
- template
- recognition result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
Abstract
The application belongs to the technical field of computers and provides a method, an apparatus, and an electronic device for searching a data set by voice. The method comprises the following steps: acquiring a voice message whose content satisfies a target search term template; determining a voice recognition result corresponding to the voice message based on a voice recognition model; and determining, among a plurality of data sets under a database, a target data set corresponding to the voice recognition result. In this way, the target data set can be found efficiently and accurately through the user's voice search operation.
Description
Technical Field
The present application belongs to the field of computer technologies, and in particular, to a method and an apparatus for searching a data set by voice, and an electronic device.
Background
In the BI (Business Intelligence) field, data sets are generally used to provide or describe data externally; for example, raw data is cleaned and then output as a data set for use by subsequent processes or systems. However, a typical BI system contains a large number of data sets of different types. Even though data sets are usually organized and stored in folders, selecting the desired data set from many folders is time-consuming, and it is especially difficult to ensure that the search results are comprehensive.
Therefore, how to improve search efficiency while keeping search results comprehensive is a difficult problem that the industry currently needs to solve.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, an apparatus, and an electronic device for searching a data set by voice, so as to solve the problems of low accuracy and low efficiency of data set search in the prior art.
A first aspect of an embodiment of the present application provides a method for voice-searching a data set, including: acquiring a voice message whose content satisfies a target search term template; determining a voice recognition result corresponding to the voice message based on a voice recognition model; and determining, among a plurality of data sets under a database, a target data set corresponding to the voice recognition result.
A second aspect of an embodiment of the present application provides an apparatus for voice searching a data set, including: an acquisition unit configured to acquire a voice message whose content satisfies a target search term template; a recognition unit configured to determine a voice recognition result corresponding to the voice message based on a voice recognition model; a determination unit configured to determine a target data set corresponding to the voice recognition result among a plurality of data sets under a database.
A third aspect of embodiments of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the steps of the method as described above.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to implement the steps of the method as described above.
Compared with the prior art, the embodiment of the application has the advantages that:
by recognizing, with the voice recognition model, a voice message that satisfies the target search term template, high accuracy can be guaranteed, and the target data set corresponding to the voice recognition result can be found among the large number of data sets in the database. As long as the user performs voice interaction according to a search term template, the system can efficiently and automatically locate the target data set in the database.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of an example of a method of voice searching a data set of an embodiment of the present application;
FIG. 2 illustrates a flow diagram of an example of determining speech recognition results based on a speech recognition model according to an embodiment of the application;
FIG. 3 shows a flow diagram of an example of updating a set of search term templates according to an embodiment of the application;
FIG. 4 is a flow chart of an example of a method of voice searching a data set of an embodiment of the present application;
FIG. 5 is a block diagram illustrating an example of an apparatus for searching a data set by voice according to an embodiment of the present application;
fig. 6 is a schematic diagram of an example of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In particular implementations, the electronic devices described in embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the devices described above are not portable communication devices, but rather desktop computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads).
In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. However, it should be understood that the electronic device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The electronic device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the electronic device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
FIG. 1 shows a flow diagram of an example of a method of voice-searching a data set according to an embodiment of the application. The method may be executed by any of various devices having processing capability, such as a computer or a processor.
As shown in fig. 1, in step 110, a voice message is obtained, where the content of the voice message satisfies a target search term template. Illustratively, the user may speak information about the desired data set to the computer; for example, the user may say "I want to find a certain file under a certain folder".
In addition, the target search term template is a specific text expression template corresponding to the voice requirement issued by the user; for example, it may take the text form "find a data set of type XXX" to satisfy the user's search requirement. Further, it should be noted that the system may contain multiple search term templates, and a voice message may satisfy any one of them.
In step 120, a speech recognition result corresponding to the speech message is determined based on the speech recognition model. Illustratively, text information corresponding to the voice message under the target search term template is determined based on the voice recognition model, and a recognition result is obtained.
In step 130, a target data set corresponding to the speech recognition result is determined among a plurality of data sets under a database. Illustratively, a huge number of data sets are stored in the database, and the specific data set required by the user is found based on the speech recognition result. In this way, the system can quickly and accurately find the specific data set required by the user from the recognition of the user's voice.
In some examples of the embodiment of the present application, the database may use a file-tree structure, which enables fast lookup of the corresponding data set.
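The file-tree lookup mentioned above can be sketched as a depth-first search over nested folders. This is a minimal illustration only; the folder names, data set names, and dict-based tree structure are hypothetical, not taken from the patent.

```python
# Sketch of a file-tree index over data sets: folders are nested dicts,
# data sets are leaves keyed by name (structure is illustrative).
def find_dataset(tree, name, path=()):
    """Depth-first search for a data set by name; returns its path or None."""
    for key, value in tree.items():
        if isinstance(value, dict):      # sub-folder: recurse into it
            found = find_dataset(value, name, path + (key,))
            if found is not None:
                return found
        elif key == name:                # leaf: a data set entry
            return path + (key,)
    return None

db = {
    "finance": {"annual bill": "excel", "q1 report": "sql"},
    "hr": {"headcount": "api"},
}
print(find_dataset(db, "annual bill"))  # ('finance', 'annual bill')
```

Because each folder is visited at most once, the lookup cost is linear in the number of tree nodes rather than in the number of spoken-query variants.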
In some embodiments, a display module may also be provided for the voice search system to display the target data set after it has been determined from the voice recognition result. In an application scenario, the user searches for a specific data set through a voice message, and the data set is fed back to the user in real time on the display.
In some examples of embodiments of the present application, the speech recognition model includes a sentence recognition model module and a keyword recognition model module.
Regarding the construction of the speech recognition model: a sentence recognition model can be built with a grammar-tree/dependency-tree algorithm. A large amount of training data is prepared in advance, comprising all term data in a term set plus a certain amount of sentence data outside the term set; the sentence recognition model is trained on this data, and the word2vec algorithm is then used for sentence similarity comparison. Here, the term set defines a series of established expressions for describing a data set, for example:
- type relationship: find a data set of type "XXX";
- folder relationship: find a data set under the "XXX" folder;
- file name relationship: find a data set named "XXX";
- field relationship: find a data set containing the "XXX" and "XXX" fields;
- historical file name relationship: find a data set previously named "XXX".
Each of these relationships may also have a negated counterpart, such as finding data sets that are *not* of type "XXX", and the sentence models/terms may be enriched or combined. First, the data sets are classified by business into Excel data sets, SQL data sets, tag data sets, API data sets, combined data sets, and so on. Then, for each existing data set, its type information, folder information, file name information, and field information are recorded.
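The word2vec-based sentence similarity comparison above requires a trained embedding model; as a hedged stand-in, the sketch below uses tiny hand-made word vectors (the vocabulary and vector values are invented for illustration) and averages them into sentence vectors compared by cosine similarity, which is the usual shape of such a comparison.

```python
import math

# Toy stand-in for word2vec sentence similarity: each word maps to a
# hand-made 2-D vector (real use would load trained embeddings).
VECS = {
    "find": [1.0, 0.1], "lookup": [0.9, 0.2],
    "dataset": [0.2, 1.0], "file": [0.3, 0.9],
    "weather": [-0.8, 0.1],
}

def sentence_vec(words):
    """Sum the word vectors (unknown words contribute zero)."""
    v = [0.0, 0.0]
    for w in words:
        for i, x in enumerate(VECS.get(w, [0.0, 0.0])):
            v[i] += x
    return v

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

s1 = cosine(sentence_vec(["find", "dataset"]), sentence_vec(["lookup", "file"]))
s2 = cosine(sentence_vec(["find", "dataset"]), sentence_vec(["weather"]))
assert s1 > s2  # a paraphrase scores higher than an unrelated sentence
```

The same comparison, with real word2vec vectors, is what lets the sentence model match a user utterance against each term template even when the wording differs.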
In addition, a keyword model may be constructed using the DTW (Dynamic Time Warping) algorithm to recognize, in the user's input speech, the keywords that are not covered by the term data in the term set or by the additional sentence data outside the term set.
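DTW aligns two sequences of unequal length and returns an accumulated distance; a minimal version is shown below. Real keyword spotting would compare acoustic feature frames (e.g., MFCCs) per word template rather than the 1-D integer sequences used here for clarity.

```python
# Minimal DTW distance between two sequences (1-D for clarity).
def dtw(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    # d[i][j]: best cost aligning a[:i] with b[:j]
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

assert dtw([1, 2, 3], [1, 2, 3]) == 0.0
# A time-stretched copy still aligns cheaply; a different word does not.
assert dtw([1, 2, 3], [1, 2, 2, 3]) < dtw([1, 2, 3], [5, 6, 7])
```

In the keyword model, one such template sequence would be stored per trained word, and the word whose template yields the lowest DTW distance to the input frames wins.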
FIG. 2 shows a flow diagram of an example of determining a speech recognition result based on a speech recognition model according to an embodiment of the application.
As shown in fig. 2, in step 210, a target search term template matching the voice message is determined from a preset search term template set based on the sentence recognition model module. Illustratively, various text expression templates are prepared in advance according to the voice requirements users may issue, and the sentence recognition model module finds the target search term template that matches the voice message. Here, the search term templates include, but are not limited to, "a data set named XXX", "a data set of type XXX", "a data set under the XXX folder", and so on. It should be understood that the term templates may be classified according to the business functions or attributes of different data sets, where the business function information may be tag function information, API function information, SQL function information, etc. corresponding to the data sets. For example, term templates may be constructed for Excel data sets, SQL data sets, tag data sets, API data sets, combined data sets, and other types respectively. Specifically, each search term template in the set may be constructed from the content attribute information and/or business function information of the data sets in the database, where the content attribute information includes at least one of: file type relationship, file storage location, current file name, historical file name, and text information in the file.
In step 220, a search keyword in the voice message is identified based on the keyword recognition model module. Illustratively, the search keyword is the descriptive keyword in the voice message that fills the target search term template; the keyword recognition model module recognizes it as describing a specific data set, so the resulting keyword can be matched against the target search term template. For example, in the voice message "find the data set named abc", "abc" is the search keyword.
In step 230, the target search term template is populated with the search keyword to obtain the corresponding speech recognition result. Illustratively, the identified descriptive keyword of a specific data set is filled into the target search term template to obtain one complete speech recognition result. For example, if the target search term template is "find a data set of type XXX" and the descriptive keyword is "Excel", filling the keyword into the template yields the complete speech recognition result "find a data set of type Excel". Unlike the fixed, habitual search terms of traditional search, the search terms here can be set manually at will; for example, a data set can also be searched by a historical name, such as a previous file name that the user still remembers.
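The template-filling step above amounts to substituting the keyword into the template's placeholder. A minimal sketch, using "XXX" as the placeholder exactly as in the examples (the function name is illustrative):

```python
# Fill a matched search term template with the recognized keyword.
def fill_template(template, keyword):
    """Replace the XXX placeholder with the descriptive keyword."""
    return template.replace("XXX", keyword)

result = fill_template('find a data set of type XXX', "Excel")
print(result)  # find a data set of type Excel
```

A template with two field placeholders ("containing the XXX and XXX fields") would need positional filling instead of a blanket replace, e.g. by splitting on the placeholder and interleaving the keywords.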
In some embodiments, the speech input by the user is recognized by the sentence recognition model module and the keyword model module, and the match is considered successful if the thresholds T1 and T2 of the two model modules are reached, respectively. A search is then performed according to the matched content, and the result is provided to the user.
The result of sentence recognition matching is: find a data set named "XXX";
the result of keyword recognition matching is: annual bill;
the matched data set is: the data set named "annual bill".
Specifically, the voice input by the user may be "find the data set named annual bill", and the corresponding term template can be identified from the preset search term template set through the sentence recognition model. Here, the search term template set may, for example, be: "find a data set of type XXX", "find a data set under the XXX folder", "find a data set named XXX", "find a data set containing the XXX and XXX fields", "find a data set previously named XXX", "find a data set not of type XXX", "find a data set not under the XXX folder", "find a data set not named XXX", "find a data set not containing the XXX and XXX fields", and "find a data set not previously named XXX".
In some cases, a corresponding sentence recognition model may be trained for each of the above search term templates, and the user's speech is input into each model, yielding the following matching rates: "find a data set of type XXX": 0.23; "find a data set under the XXX folder": 0.25; "find a data set named XXX": 0.89; "find a data set containing the XXX and XXX fields": 0.06; "find a data set previously named XXX": 0.47; "find a data set not of type XXX": 0.13; "find a data set not under the XXX folder": 0.20; "find a data set not named XXX": 0.6; "find a data set not containing the XXX and XXX fields": 0.04; "find a data set not previously named XXX": 0.26.
In addition, assuming a threshold of 0.75, it can be determined that the matching search term template is "find a data set named XXX".
The keywords in the voice message may then be identified using the keyword model module. For example, the pronunciations of commonly used words (e.g., 3000 common Chinese characters) are trained with the DTW algorithm described above; the speech is input into each model and the matching rates are calculated. Suppose the recognition result is "search, find, name, year, degree, account, list, number, data, set"; removing the words already covered by the sentence model leaves "year, degree, account, list", whose characters form the keyword "annual bill".
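The keyword-recovery step in the "annual bill" walk-through amounts to subtracting the template's own words from the recognized syllables. A minimal sketch (the English word lists stand in for the recognized Chinese characters, and the set of template words is an assumption for illustration):

```python
# Recover the keyword: recognized units minus those covered by the template.
def extract_keyword(recognized, template_words):
    """Keep units not belonging to the matched template, in original order."""
    leftover = [w for w in recognized if w not in template_words]
    return " ".join(leftover)

recognized = ["search", "find", "name", "year", "degree",
              "account", "list", "number", "data", "set"]
template_words = {"search", "find", "name", "number", "data", "set"}
print(extract_keyword(recognized, template_words))  # year degree account list
```

In the Chinese original the four leftover characters concatenate directly into the keyword (年度账单, "annual bill"); the space-join here is only for legibility.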
FIG. 3 shows a flow diagram of an example of updating a set of search term templates according to an embodiment of the application.
As shown in FIG. 3, in step 310, a template setting instruction is obtained, wherein the template setting instruction includes a desired search term template. In an example, when the user finds that a search term template required by his business or application scenario does not exist in the system, he can issue a corresponding template setting instruction through an interactive operation.
In step 320, the search term template set is updated based on the desired search term template. Illustratively, when a new category of speech expression appears, a new text expression template is created and added to the original search term template set, i.e., the set is updated. In this way, the search system can better meet users' diverse search requirements.
In an application scenario, when a user has a new speech expression for the search requirement of a specific data set, a new text expression template is created based on it. For example, if the original templates do not support voice search by folder, a new term template such as "I want to search the D data set under the D folder" can be set to meet the business requirement.
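Steps 310 and 320 reduce to appending a user-supplied template to the template set. A minimal sketch (function and variable names are illustrative); ignoring duplicates keeps repeated setting instructions harmless:

```python
# Update the search term template set with a user-supplied template.
def update_templates(templates, new_template):
    """Append the desired template unless it is already present."""
    if new_template not in templates:
        templates.append(new_template)
    return templates

templates = ['find a data set of type XXX', 'find a data set named XXX']
update_templates(templates, 'find a data set under the XXX folder')
print(len(templates))  # 3
update_templates(templates, 'find a data set named XXX')  # duplicate: no change
print(len(templates))  # 3
```

In a full system each newly added template would also trigger training of its corresponding sentence recognition model, as described for the preset templates above.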
FIG. 4 is a flow chart of an example of a method of voice searching a data set in an embodiment of the present application.
As shown in fig. 4, in step 410, a set of search term templates is prepared in advance based on the text forms of voice search requirements. Illustratively, when users need to search for specific data sets, they issue voice requests; based on these requests, various search term templates meeting the corresponding requirements are prepared in advance and together form the search term template set. This facilitates matching a voice message to a specific search term template.
In step 420, a voice message is obtained and recognized based on the voice recognition model to obtain a recognition result. Illustratively, the voice recognition model includes a sentence recognition model and a keyword recognition model: when the voice message is acquired, the sentence recognition model identifies the matching search term template, the keyword recognition model identifies the keyword that completes the template, and together the two yield the complete information of the specific data set required by the voice message, i.e., the recognition result.
In step 430, the target data set is found under the database based on the identification result. Illustratively, based on the complete information of a particular data set, a search is performed in the database to find the particular data set, i.e., the target data set.
In step 440, the target data set is displayed. Illustratively, once the target data set is located, it is displayed. Therefore, the process of finding the target data set among the large number of data sets in the database can be completed efficiently through the user's voice interaction, and the search result is fed back in real time on the display, ensuring a good voice search experience.
In the embodiment of the application, the search term template set is preset and used to train the voice recognition model, so that the model can better recognize the result corresponding to the voice message, making the search process simple and accurate. In addition, based on the voice recognition result, the target data set is determined among a large number of data sets under the database, ensuring that the search results are comprehensive.
Fig. 5 is a block diagram of an example of an apparatus for voice searching a data set according to an embodiment of the present application.
As shown in fig. 5, the apparatus for voice searching a data set includes an acquisition unit 510, a recognition unit 520, and a determination unit 530.
An obtaining unit 510 configured to obtain a voice message whose content satisfies the target search term template.
A recognition unit 520 configured to determine a voice recognition result corresponding to the voice message based on a voice recognition model.
A determining unit 530 configured to determine a target data set corresponding to the voice recognition result among a plurality of data sets under a database.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Fig. 6 is a schematic diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 6, the electronic device 6 of this embodiment includes: a processor 610, a memory 620, and a computer program 630 stored in the memory 620 and executable on the processor 610. The processor 610, when executing the computer program 630, implements the steps in the above-described voice search data set method embodiments, such as steps 110 to 130 shown in fig. 1. Alternatively, the processor 610, when executing the computer program 630, implements the functions of each module/unit in each device embodiment described above, for example, the functions of units 510 to 530 shown in fig. 5.
Illustratively, the computer program 630 may be partitioned into one or more modules/units that are stored in the memory 620 and executed by the processor 610 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 630 in the electronic device 6. For example, the computer program 630 may be divided into an acquisition program module, a recognition program module, and a determination program module, each of which functions specifically as follows:
an acquisition program module configured to acquire a voice message whose content satisfies a target search term template;
a recognition program module configured to determine a voice recognition result corresponding to the voice message based on a voice recognition model;
a determining program module configured to determine a target data set corresponding to the speech recognition result among a plurality of data sets under a database.
The electronic device 6 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The electronic device may include, but is not limited to, the processor 610 and the memory 620. Those skilled in the art will appreciate that fig. 6 is merely an example of the electronic device 6 and does not constitute a limitation of it; the device may include more or fewer components than shown, combine some components, or use different components. For example, the electronic device may also include input-output devices, network access devices, buses, etc.
The Processor 610 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 620 may be an internal storage unit of the electronic device 6, such as a hard disk or memory of the electronic device 6. The memory 620 may also be an external storage device of the electronic device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash Card provided on the electronic device 6. Further, the memory 620 may include both an internal storage unit and an external storage device of the electronic device 6. The memory 620 is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, each embodiment is described with its own emphasis; for parts not described or illustrated in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as separate products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, such as a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium. It should be noted that the content contained in the computer-readable medium may be increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present application and are intended to be included within its protection scope.
Claims (10)
1. A method of voice searching a data set, comprising:
acquiring a voice message, wherein the content of the voice message meets a target search term template;
determining a voice recognition result corresponding to the voice message based on a voice recognition model;
determining a target data set corresponding to the voice recognition result in a plurality of data sets under a database.
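The three steps of claim 1 (acquire a voice message, determine a recognition result, match a target data set) can be sketched minimally as follows. The recognizer stub and the data set names are hypothetical illustrations, since the claim does not fix any concrete API:

```python
# Minimal sketch of claim 1's flow. The recognizer is stubbed out:
# in practice it would be a trained speech recognition model.
def recognize(voice_message: str) -> str:
    """Stand-in for the speech recognition model; here the 'audio'
    is already text, so recognition is the identity function."""
    return voice_message

def search_data_set(voice_message: str, data_sets: dict):
    """Return the name of the data set whose name appears in the
    recognition result, or None if nothing matches."""
    result = recognize(voice_message)
    for name in data_sets:
        if name in result:
            return name
    return None

# Hypothetical data sets under a database.
data_sets = {"sales_2020": None, "users": None}
print(search_data_set("open the sales_2020 data set", data_sets))  # → sales_2020
```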
2. The method of claim 1, wherein the speech recognition model comprises a sentence recognition model module and a keyword recognition model module, and wherein determining the speech recognition result corresponding to the speech message based on the speech recognition model comprises:
determining a target search term template matched with the voice message from a preset search term template set based on the sentence recognition model module;
identifying a search keyword in the voice message based on the keyword identification model module;
and filling the target search term template based on the search keyword to obtain a corresponding voice recognition result.
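The two recognition modules of claim 2 could be emulated as below, with regular expressions standing in for the sentence recognition model and the keyword recognition model. The template strings and the `{keyword}` slot syntax are assumptions for illustration, not something the claim prescribes:

```python
import re

# Hypothetical search term templates; "{keyword}" marks the slot
# that the keyword recognition module fills in.
TEMPLATES = ["open the {keyword} data set", "show files about {keyword}"]

def match_template(utterance: str):
    """Emulate the sentence recognition module (pick the matching
    template) and the keyword recognition module (extract the keyword).
    Returns (template, keyword) or None."""
    for template in TEMPLATES:
        pattern = re.escape(template).replace(r"\{keyword\}", "(.+)")
        m = re.fullmatch(pattern, utterance)
        if m:
            return template, m.group(1)
    return None

def fill(template: str, keyword: str) -> str:
    """Fill the target search term template to get the recognition result."""
    return template.format(keyword=keyword)
```

A matched utterance such as `"open the sales data set"` yields the first template with the keyword `"sales"`, and filling the template reproduces the normalized recognition result.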
3. The method of claim 2, wherein the method further comprises:
obtaining a template setting instruction, wherein the template setting instruction comprises a desired search term template;
updating the set of search term templates based on the desired search term template.
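Claim 3's template setting instruction might be handled as below; the dictionary shape of the instruction and the `{keyword}` placeholder are illustrative assumptions:

```python
# Hypothetical handling of a template setting instruction (claim 3):
# the user supplies a desired search term template and the preset
# search term template set is updated with it. A set keeps the
# update idempotent.
search_term_templates = {"open the {keyword} data set"}

def apply_template_setting_instruction(instruction: dict) -> None:
    """instruction carries the user's desired search term template."""
    search_term_templates.add(instruction["desired_template"])

apply_template_setting_instruction(
    {"desired_template": "find the {keyword} table"})
```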
4. The method of claim 1, wherein after determining the target data set corresponding to the speech recognition result, the method further comprises:
and displaying the target data set.
5. The method of claim 2, wherein each search term template in the set of search term templates is constructed according to content attribute information and/or business function information of a set of data in the database.
6. The method of claim 4, wherein the content attribute information comprises at least one of: file type relationship, file storage location, current file name, historical file name, text information in the file.
7. The method of claim 1, wherein the database is a file tree structure.
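Claim 7 states the database is a file tree structure. A nested dict can stand in for such a tree; a search then walks it recursively for a data set whose name matches the recognized keyword. All names below are illustrative, not from the patent:

```python
# Hypothetical file tree database (claim 7): directories map to
# nested dicts, files map to None.
tree = {
    "reports": {"sales_2020.csv": None, "sales_2021.csv": None},
    "users.csv": None,
}

def find_in_tree(tree: dict, keyword: str, path: str = ""):
    """Depth-first search; returns the path of the first node whose
    name contains the keyword, or None."""
    for name, child in tree.items():
        full = f"{path}/{name}"
        if keyword in name:
            return full
        if isinstance(child, dict):
            hit = find_in_tree(child, keyword, full)
            if hit:
                return hit
    return None
```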
8. An apparatus for voice searching a data set, comprising:
an acquisition unit configured to acquire a voice message whose content satisfies a target search term template;
a recognition unit configured to determine a voice recognition result corresponding to the voice message based on a voice recognition model;
a determination unit configured to determine a target data set corresponding to the voice recognition result among a plurality of data sets under a database.
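The three units of claim 8 map naturally onto a small class; everything below (the class name, the stubbed recognizer) is a hypothetical sketch rather than the patent's implementation:

```python
# Hypothetical sketch of the apparatus of claim 8: an acquisition unit,
# a recognition unit (stubbed: input is already text), and a
# determination unit that searches the data sets under a database.
class VoiceSearchApparatus:
    def __init__(self, data_sets):
        self.data_sets = data_sets  # the determination unit's search space

    def acquire(self, voice_message):
        # acquisition unit: obtain the voice message
        return voice_message

    def recognize(self, voice_message):
        # recognition unit: stand-in for the speech recognition model
        return voice_message

    def determine(self, recognition_result):
        # determination unit: pick the matching target data set
        for name in self.data_sets:
            if name in recognition_result:
                return name
        return None

    def search(self, voice_message):
        return self.determine(self.recognize(self.acquire(voice_message)))

apparatus = VoiceSearchApparatus({"logs": None, "metrics": None})
```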
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110261641.2A CN113157964A (en) | 2021-03-10 | 2021-03-10 | Method and device for searching data set through voice and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113157964A true CN113157964A (en) | 2021-07-23 |
Family
ID=76886702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110261641.2A Pending CN113157964A (en) | 2021-03-10 | 2021-03-10 | Method and device for searching data set through voice and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113157964A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113822014A (en) * | 2021-11-19 | 2021-12-21 | 北京明略昭辉科技有限公司 | Code material storage method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106227774A (en) * | 2016-07-15 | 2016-12-14 | 海信集团有限公司 | Information search method and device |
CN106601236A (en) * | 2016-12-22 | 2017-04-26 | 北京云知声信息技术有限公司 | Speech recognition method and apparatus |
CN111291158A (en) * | 2020-01-22 | 2020-06-16 | 北京猎户星空科技有限公司 | Information query method and device, electronic equipment and storage medium |
CN111552457A (en) * | 2020-03-30 | 2020-08-18 | 深圳壹账通智能科技有限公司 | Statement identification-based front-end development page construction method and device and storage medium |
Worldwide Applications (1)

- 2021-03-10: CN application CN202110261641.2A, published as CN113157964A (en), status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9836524B2 (en) | Internal linking co-convergence using clustering with hierarchy | |
US9836508B2 (en) | External linking based on hierarchical level weightings | |
US11361030B2 (en) | Positive/negative facet identification in similar documents to search context | |
CN108920543B (en) | Query and interaction method and device, computer device and storage medium | |
CN112668320B (en) | Model training method and device based on word embedding, electronic equipment and storage medium | |
CN112597182A (en) | Data query statement optimization method and device, terminal and storage medium | |
EP3617910A1 (en) | Method and apparatus for displaying textual information | |
EP3961426A2 (en) | Method and apparatus for recommending document, electronic device and medium | |
CN112181386B (en) | Code construction method, device and terminal based on software continuous integration | |
CN111553556A (en) | Business data analysis method and device, computer equipment and storage medium | |
CN111814481B (en) | Shopping intention recognition method, device, terminal equipment and storage medium | |
CN114297143A (en) | File searching method, file displaying device and mobile terminal | |
CN113157964A (en) | Method and device for searching data set through voice and electronic equipment | |
CN111602129B (en) | Smart search for notes and ink | |
CN112989011B (en) | Data query method, data query device and electronic equipment | |
CN111753199B (en) | User portrait construction method and device, electronic device and medium | |
CN113918630A (en) | Data synchronization method and device, computer equipment and storage medium | |
CN113761213A (en) | Data query system and method based on knowledge graph and terminal equipment | |
CN111475467A (en) | File management method, cloud file management system and terminal | |
CN111310016A (en) | Label mining method, device, server and storage medium | |
CN114417791A (en) | Method, device and equipment for generating presentation | |
CN117033601A (en) | Intelligent question-answering method, device, equipment and medium based on network system | |
CN117992576A (en) | Method, device and program product for identifying required articles | |
CN114154072A (en) | Search method, search device, electronic device, and storage medium | |
CN111858829A (en) | Method and device for sorting multiple intents and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210723