CN113836270A - Big data processing method and related product - Google Patents

Big data processing method and related product

Info

Publication number
CN113836270A
Authority
CN
China
Prior art keywords
text information
output
voice data
output result
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111140975.0A
Other languages
Chinese (zh)
Inventor
刘德荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Gelonghui Information Technology Co Ltd
Original Assignee
Shenzhen Gelonghui Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Gelonghui Information Technology Co Ltd filed Critical Shenzhen Gelonghui Information Technology Co Ltd
Priority to CN202111140975.0A priority Critical patent/CN113836270A/en
Publication of CN113836270A publication Critical patent/CN113836270A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application provides a big data processing method and a related product, wherein the method comprises the following steps: the method comprises the steps that terminal equipment obtains big data information, and type analysis is carried out on the big data information to determine a first type of the big data information; when the terminal equipment determines that the first type is voice data, voice recognition is carried out on the voice data to obtain text information; establishing a mapping relation between the text information and the voice data; the terminal device stores the mapping relation and the text information, when receiving the search condition, inquires a first text matched with the search condition in the text information according to the search condition, and calls first voice data corresponding to the first text to determine a search result of the search condition. The technical scheme provided by the application has the advantage of high user experience.

Description

Big data processing method and related product
Technical Field
The invention relates to the technical field of big data, in particular to a big data processing method and a related product.
Background
Big data, also called massive data, refers to data sets so large that they cannot be captured, managed, processed, and organized within a reasonable time by current mainstream software tools into information that helps an enterprise make more proactive business decisions.
Existing big data systems merely store voice data and cannot support fast search queries over it, which hurts the search efficiency of voice data and degrades the user experience.
Disclosure of Invention
The embodiment of the invention provides a big data processing method and a related product, which can realize rapid search of voice data and improve the experience of a user.
In a first aspect, an embodiment of the present invention provides a method for processing big data, where the method includes the following steps:
the method comprises the steps that terminal equipment obtains big data information, and type analysis is carried out on the big data information to determine a first type of the big data information;
when the terminal equipment determines that the first type is voice data, voice recognition is carried out on the voice data to obtain text information; establishing a mapping relation between the text information and the voice data;
the terminal device stores the mapping relation and the text information, when receiving the search condition, inquires a first text matched with the search condition in the text information according to the search condition, and calls first voice data corresponding to the first text to determine a search result of the search condition.
In a second aspect, a big data processing system is provided, the system comprising:
an acquisition unit configured to acquire big data information;
the processing unit is used for carrying out type analysis on the big data information to determine a first type of the big data information; when the first type is determined to be voice data, performing voice recognition on the voice data to obtain text information; establishing a mapping relation between the text information and the voice data; and storing the mapping relation and the text information, inquiring a first text matched with the search condition in the text information according to the search condition when the search condition is received, and calling first voice data corresponding to the first text to determine a search result of the search condition.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
according to the technical scheme, the terminal equipment acquires the big data information, and performs type analysis on the big data information to determine the first type of the big data information; when the end equipment determines that the first type is voice data, voice recognition is carried out on the voice data to obtain text information; establishing a mapping relation between the text information and the voice data; the terminal device stores the mapping relation and the text information, when receiving the search condition, inquires a first text matched with the search condition in the text information according to the search condition, and calls first voice data corresponding to the first text to determine a search result of the search condition. Therefore, the text information is quickly searched through the text information and the mapping relation, the matching result of the search condition is improved, the search efficiency is improved, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a terminal.
Fig. 2 is a flow chart diagram of a big data processing method.
Fig. 3 is a schematic structural diagram of a big data processing system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a terminal device. The terminal device may run an iOS system, an Android system, or another system such as HarmonyOS (Hongmeng); the application does not limit the specific system. As shown in fig. 1, the terminal device may specifically include a processor, a memory, a camera, and a display screen, where these components may be connected through a bus or in other ways; the application does not limit the specific connection manner.
Voice data is one type of big data. Owing to its particular nature, quickly locating and querying voice data is very inconvenient, so voice big data can only be marked by personal identification. But for an individual's voice data the data volume is very large, so retrieval of key information can only be completed by monitoring through an AI intelligent robot or other means; the time efficiency is very low, which affects the processing timeliness of the big data.
Referring to fig. 2, fig. 2 provides a method for processing big data, where the method is shown in fig. 2 and executed by the terminal device shown in fig. 1, and the method includes the following steps:
step S201, the terminal equipment acquires big data information, and performs type analysis on the big data information to determine a first type of the big data information;
for example, the type analysis may be determined in various ways, for example, in an alternative way, the big data information may be input into the classifier to be analyzed to determine the first type, but other ways may also be adopted, for example, in another alternative way, the first type may be determined according to the format of the big data information, for example, if the format of the big data information is determined to be MP3, MP4 format, and of course, other voice formats may also be adopted, and the specific format of the big data information is not limited herein.
Step S202, when the terminal equipment determines that the first type is voice data, voice recognition is carried out on the voice data to obtain text information; establishing a mapping relation between the text information and the voice data;
the text information obtained by performing speech recognition on the speech data may be obtained by recognition of an LSTM (Long Short-Term Memory network), but in practical applications, the text information may also be obtained by recognition of an RNN model. The present application is not limited to the specific representation of speech recognition described above.
Step S203, the terminal device stores the mapping relationship and the text information, and when receiving the search condition, queries a first text matching the search condition in the text information according to the search condition, and invokes first voice data corresponding to the first text to determine a search result of the search condition.
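The store-and-search flow of step S203 can be sketched as follows; all class and variable names are illustrative, and a real deployment would use a database with a full-text index rather than an in-memory dictionary:

```python
# Minimal sketch: store the text/voice mapping relation, then answer a search
# condition by matching the recognized text and calling up the voice data it
# maps to.

class VoiceTextStore:
    def __init__(self):
        # mapping relation: recognized text -> voice-data identifier
        self.text_to_voice = {}

    def add(self, text: str, voice_id: str):
        self.text_to_voice[text] = voice_id

    def search(self, condition: str):
        """Return (first matching text, corresponding voice data id), or None."""
        for text, voice_id in self.text_to_voice.items():
            if condition in text:
                return text, voice_id
        return None

store = VoiceTextStore()
store.add("quarterly revenue grew ten percent", "voice_0001.mp3")
print(store.search("revenue"))
# -> ('quarterly revenue grew ten percent', 'voice_0001.mp3')
```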
According to the technical scheme, the terminal equipment acquires big data information, performs type analysis on the big data information, and determines the first type of the big data information; when the terminal equipment determines that the first type is voice data, voice recognition is carried out on the voice data to obtain text information, and a mapping relation between the text information and the voice data is established; the terminal device stores the mapping relation and the text information, and when receiving a search condition, queries a first text matched with the search condition in the text information and calls first voice data corresponding to the first text to determine a search result of the search condition. Therefore, quick search is realized through the text information and the mapping relation, the matching of the search condition is improved, the search efficiency is improved, and the user experience is improved.
The LSTM can be divided into a forget gate, an input gate, and an output gate, corresponding to three calculations with the following formulas:

Forget gate: f_t = σ(h_{t-1} * X_t + b_f).

Input gate:

i_t = σ(h_{t-1} * X_t + b_i);

C'_t = tanh(h_{t-1} * X_t + b_c).

Output gate:

O_t = σ(h_{t-1} * X_t + b_O);

h_t = O_t * tanh(C_t).

where C_t = C_{t-1} * f_t + i_t * C'_t.

Above, b_f denotes the bias of the f_t function, a constant value; similarly, b_i, b_c, and b_O denote the biases of the corresponding formulas, and O_t denotes the output at time t.
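A minimal numerical sketch of one cell step, written exactly as the formulas above are given (note that these formulas multiply h_{t-1} by X_t directly and add only a bias; a standard LSTM would also apply learned weight matrices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, b_f, b_i, b_c, b_o):
    """One step of the simplified LSTM cell as written in the description."""
    f_t = sigmoid(h_prev * x_t + b_f)      # forget gate f_t
    i_t = sigmoid(h_prev * x_t + b_i)      # input gate i_t
    c_hat = np.tanh(h_prev * x_t + b_c)    # candidate cell state C'_t
    c_t = c_prev * f_t + i_t * c_hat       # C_t = C_{t-1}*f_t + i_t*C'_t
    o_t = sigmoid(h_prev * x_t + b_o)      # output gate O_t
    h_t = o_t * np.tanh(c_t)               # h_t = O_t * tanh(C_t)
    return h_t, c_t

h, c = np.zeros(4), np.zeros(4)
for x in np.random.randn(5, 4):            # five time steps of 4-dim input
    h, c = lstm_step(x, h, c, 0.1, 0.1, 0.1, 0.1)
```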
For example, if the LSTM is used to perform speech recognition on the speech data to obtain the text information, the method may specifically include:
dividing voice data into a plurality of intervals according to continuous time, forming input data of the plurality of intervals for the voice data of each interval, dividing the input data of one interval into input data of a plurality of moments, inputting the input data of the plurality of moments into an LSTM model to obtain output results of the plurality of moments, determining text information of the input data of the one interval according to the output results of the plurality of moments, and traversing the plurality of intervals to obtain text information of the plurality of intervals.
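The interval/time-step pipeline described above can be sketched as follows; `recognize_step` is a placeholder standing in for one LSTM inference step:

```python
# Split continuous voice samples into a plurality of intervals, split each
# interval into input data at a plurality of times, run the model on each
# time step, and collect the text information per interval.

def split(seq, size):
    return [seq[i:i + size] for i in range((0), len(seq), size)]

def recognize_step(frame):
    # placeholder for one LSTM inference step returning a word hypothesis
    return f"w{sum(frame) % 10}"

def transcribe(samples, interval_size=8, step_size=2):
    texts = []
    for interval in split(samples, interval_size):   # traverse the intervals
        words = [recognize_step(f) for f in split(interval, step_size)]
        texts.append(" ".join(words))                # text for this interval
    return texts

print(transcribe(list(range(16))))
```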
The text information of the input data in one interval may be determined from the output results at the multiple times in a conventional manner for obtaining text information from LSTM output results; for example, the word with the highest confidence rate is determined as the output result corresponding to each time.
For example, the method may further include:
If the highest confidence rate of the output result at one time t is lower than the confidence rate threshold, a new output result at time t is calculated according to Formula 1. If the highest confidence rate of the new output result is higher than the confidence rate threshold, the word corresponding to the highest confidence rate of the new output result is determined to be the output result at time t. If the highest confidence rate of the new output result is lower than the confidence rate threshold, the average value of the output values of the 2 output gates corresponding to the 2 highest confidence rates is selected from the t highest confidence rates of the output results at the t times before time t, and that average value replaces h_{t-1} in the output computation at time t to obtain a second output result at time t (namely by using Formula 2). If the highest confidence rate of the second output result is higher than the confidence rate threshold, the word corresponding to the highest confidence rate of the second output result is determined to be the text information at time t.
(Formula 1 and Formula 2, together with their definitions, are rendered as images in the original document and are not reproduced here.)
where h_0 is the output value of the output gate at the starting time and h_{t-1} is the output value of the output gate at time t-1; h0 and h1 are the output values of the 2 output gates corresponding to the 2 highest confidence rates selected from the t highest confidence rates of the output results at the t times before one time t.
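Since Formula 1 and Formula 2 are only available as images in the original, the following sketch captures just the control flow of the fallback described above; `recompute_formula1` and `recompute_formula2` are placeholders standing in for those formulas:

```python
# Low-confidence fallback: retry via Formula 1; if still below the threshold,
# replace h_{t-1} with the average of the output-gate values behind the 2 most
# confident earlier steps and retry via Formula 2.

def best(result):
    """(word, confidence) with the highest confidence rate in a result dict."""
    return max(result.items(), key=lambda kv: kv[1])

def decode_step(result, history, threshold, recompute_formula1, recompute_formula2):
    word, conf = best(result)
    if conf >= threshold:
        return word
    word, conf = best(recompute_formula1(result))          # Formula 1 retry
    if conf >= threshold:
        return word
    # average of the 2 output-gate values behind the 2 highest-confidence steps
    top2 = sorted(history, key=lambda s: s["confidence"], reverse=True)[:2]
    h_avg = sum(s["output_gate"] for s in top2) / 2        # replaces h_{t-1}
    word, conf = best(recompute_formula2(result, h_avg))   # Formula 2 retry
    return word if conf >= threshold else None
```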
Referring to fig. 3, fig. 3 provides a big data processing system, which includes:
an acquisition unit configured to acquire big data information;
the processing unit is used for carrying out type analysis on the big data information to determine a first type of the big data information; when the first type is determined to be voice data, performing voice recognition on the voice data to obtain text information; establishing a mapping relation between the text information and the voice data; and storing the mapping relation and the text information, inquiring a first text matched with the search condition in the text information according to the search condition when the search condition is received, and calling first voice data corresponding to the first text to determine a search result of the search condition.
In one example,
the processing unit is specifically configured to identify the voice data through the LSTM or RNN to obtain text information.
In one example,
if the voice data is identified by the LSTM to obtain the text information:
the processing unit is specifically configured to divide voice data into multiple intervals according to a continuous time, form input data of the multiple intervals for the voice data of each interval, divide the input data of one interval into input data of multiple times, input the input data of the multiple times into the LSTM model to obtain output results of the multiple times, determine text information of the input data of the one interval according to the output results of the multiple times, and traverse the multiple intervals to obtain text information of the multiple intervals.
In one example,
the processing unit is further configured to: if the highest confidence rate of the output result at one time t is lower than the confidence rate threshold, calculate a new output result at time t according to Formula 1; if the highest confidence rate of the new output result is higher than the confidence rate threshold, determine the word corresponding to the highest confidence rate of the new output result to be the output result at time t; if the highest confidence rate of the new output result is lower than the confidence rate threshold, select, from the t highest confidence rates of the output results at the t times before time t, the average value of the output values of the 2 output gates corresponding to the 2 highest confidence rates, and replace h_{t-1} in the output computation at time t with that average value to obtain a second output result at time t; and if the highest confidence rate of the second output result is higher than the confidence rate threshold, determine the word corresponding to the highest confidence rate of the second output result to be the text information at time t;
(Formula 1 and Formula 2 are rendered as images in the original document and are not reproduced here.)
where h_0 is the output value of the output gate at the starting time and h_{t-1} is the output value of the output gate at time t-1.
For example, the processing unit in the embodiment of the present application may also be configured to execute the refinement scheme, the alternative scheme, and the like of the embodiment shown in fig. 2, which are not described herein again.
An embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the large data processing methods described in the above method embodiments.
Embodiments of the present invention also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute part or all of the steps of any one of the big data processing methods described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may be performed in other orders or concurrently according to the present invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (9)

1. A big data processing method is characterized by comprising the following steps:
the method comprises the steps that terminal equipment obtains big data information, and type analysis is carried out on the big data information to determine a first type of the big data information;
when the terminal equipment determines that the first type is voice data, voice recognition is carried out on the voice data to obtain text information; establishing a mapping relation between the text information and the voice data;
the terminal device stores the mapping relation and the text information, when receiving the search condition, inquires a first text matched with the search condition in the text information according to the search condition, and calls first voice data corresponding to the first text to determine a search result of the search condition.
2. The method according to claim 1, wherein the performing speech recognition on the speech data to obtain the text information specifically comprises:
the voice data is identified by LSTM or RNN to obtain text information.
3. The method of claim 2, wherein the step of, if the text information is obtained by LSTM recognition of the voice data, specifically comprises:
dividing voice data into a plurality of intervals according to continuous time, forming input data of the plurality of intervals for the voice data of each interval, dividing the input data of one interval into input data of a plurality of moments, inputting the input data of the plurality of moments into an LSTM model to obtain output results of the plurality of moments, determining text information of the input data of the one interval according to the output results of the plurality of moments, and traversing the plurality of intervals to obtain text information of the plurality of intervals.
4. The method according to claim 3, wherein if the highest confidence rate of the output result at one time t is lower than the confidence rate threshold, a new output result at time t is obtained according to Formula 1; if the highest confidence rate of the new output result is higher than the confidence rate threshold, the word corresponding to the highest confidence rate of the new output result is determined to be the output result at time t; if the highest confidence rate of the new output result is lower than the confidence rate threshold, the average value of the output values of the 2 output gates corresponding to the 2 highest confidence rates is selected from the t highest confidence rates of the output results at the t times before time t, and h_{t-1} in the output computation at time t is replaced with that average value to obtain a second output result at time t; and if the highest confidence rate of the second output result is higher than the confidence rate threshold, the word corresponding to the highest confidence rate of the second output result is determined to be the text information at time t;
(Formula 1 is rendered as an image in the original document and is not reproduced here.)
where h_0 is the output value of the output gate at the starting time and h_{t-1} is the output value of the output gate at time t-1.
5. A big data processing system, the system comprising:
an acquisition unit configured to acquire big data information;
the processing unit is used for carrying out type analysis on the big data information to determine a first type of the big data information; when the first type is determined to be voice data, performing voice recognition on the voice data to obtain text information; establishing a mapping relation between the text information and the voice data; and storing the mapping relation and the text information, inquiring a first text matched with the search condition in the text information according to the search condition when the search condition is received, and calling first voice data corresponding to the first text to determine a search result of the search condition.
6. The system of claim 5,
the processing unit is specifically configured to identify the voice data through the LSTM or RNN to obtain text information.
7. The system of claim 6, wherein, if the text information is obtained by recognizing the voice data through the LSTM:
the processing unit is specifically configured to divide voice data into multiple intervals according to a continuous time, form input data of the multiple intervals for the voice data of each interval, divide the input data of one interval into input data of multiple times, input the input data of the multiple times into the LSTM model to obtain output results of the multiple times, determine text information of the input data of the one interval according to the output results of the multiple times, and traverse the multiple intervals to obtain text information of the multiple intervals.
8. The system of claim 7,
the processing unit is further configured to: if the highest confidence rate of the output result at one time t is lower than the confidence rate threshold, calculate a new output result at time t according to Formula 1; if the highest confidence rate of the new output result is higher than the confidence rate threshold, determine the word corresponding to the highest confidence rate of the new output result to be the output result at time t; if the highest confidence rate of the new output result is lower than the confidence rate threshold, select, from the t highest confidence rates of the output results at the t times before time t, the average value of the output values of the 2 output gates corresponding to the 2 highest confidence rates, and replace h_{t-1} in the output computation at time t with that average value to obtain a second output result at time t; and if the highest confidence rate of the second output result is higher than the confidence rate threshold, determine the word corresponding to the highest confidence rate of the second output result to be the text information at time t;
(Formula 1 is rendered as an image in the original document and is not reproduced here.)
where h_0 is the output value of the output gate at the starting time and h_{t-1} is the output value of the output gate at time t-1.
9. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to perform the method as provided in any one of claims 1-4.
CN202111140975.0A 2021-09-28 2021-09-28 Big data processing method and related product Pending CN113836270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111140975.0A CN113836270A (en) 2021-09-28 2021-09-28 Big data processing method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111140975.0A CN113836270A (en) 2021-09-28 2021-09-28 Big data processing method and related product

Publications (1)

Publication Number Publication Date
CN113836270A true CN113836270A (en) 2021-12-24

Family

ID=78970830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111140975.0A Pending CN113836270A (en) 2021-09-28 2021-09-28 Big data processing method and related product

Country Status (1)

Country Link
CN (1) CN113836270A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104714981A (en) * 2013-12-17 2015-06-17 腾讯科技(深圳)有限公司 Voice message search method, device and system
US20180358005A1 (en) * 2015-12-01 2018-12-13 Fluent.Ai Inc. System and method for implementing a vocal user interface by combining a speech to text system and a speech to intent system
CN109308896A (en) * 2017-07-28 2019-02-05 深圳光启合众科技有限公司 Method of speech processing and device, storage medium and processor
CN110415705A (en) * 2019-08-01 2019-11-05 苏州奇梦者网络科技有限公司 A kind of hot word recognition methods, system, device and storage medium
CN112150103A (en) * 2020-09-08 2020-12-29 腾讯科技(深圳)有限公司 Schedule setting method and device and storage medium
CN113270104A (en) * 2021-07-19 2021-08-17 深圳市思特克电子技术开发有限公司 Artificial intelligence processing method and system for voice


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
曹琳 (Cao Lin) et al.: "Research on Internet Processing Technology and Applications" (《互联网处理技术与应用研究》), 31 July 2020, page 69 *
葛言琭 (Ge Yanlu): "Speech Recognition Algorithm Based on Bidirectional Recurrent Neural Network", Computer Knowledge and Technology (《电脑知识与技术》), no. 10, 30 April 2020, pages 193-195 *

Similar Documents

Publication Publication Date Title
US20230334089A1 (en) Entity recognition from an image
CN104866985B (en) The recognition methods of express delivery odd numbers, apparatus and system
CN107704202B (en) Method and device for quickly reading and writing data
US11741094B2 (en) Method and system for identifying core product terms
CN107885716B (en) Text recognition method and device
CN108509407A (en) Text semantic similarity calculating method, device and user terminal
CN107133248B (en) Application program classification method and device
CN113326991A (en) Automatic authorization method, device, computer equipment and storage medium
CN109344396A (en) Text recognition method, device and computer equipment
CN111061837A (en) Topic identification method, device, equipment and medium
CN112085087A (en) Method and device for generating business rules, computer equipment and storage medium
CN113407851A (en) Method, device, equipment and medium for determining recommendation information based on double-tower model
CN108076032B (en) Abnormal behavior user identification method and device
CN111160410A (en) Object detection method and device
KR102481162B1 (en) Subscription data push method and device in the Internet of Things, the device and storage medium
CN108563648B (en) Data display method and device, storage medium and electronic device
CN108959289B (en) Website category acquisition method and device
CN115169489B (en) Data retrieval method, device, equipment and storage medium
CN113836270A (en) Big data processing method and related product
CN110827101B (en) Shop recommending method and device
CN112364181B (en) Insurance product matching degree determining method and apparatus
CN111597368A (en) Data processing method and device
CN110362603B (en) Feature redundancy analysis method, feature selection method and related device
CN110019771B (en) Text processing method and device
CN106897331B (en) User key position data acquisition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211224