CN113870846B - Speech recognition method, device and storage medium based on artificial intelligence - Google Patents

Speech recognition method, device and storage medium based on artificial intelligence

Info

Publication number
CN113870846B
CN113870846B · CN202111135001.3A
Authority
CN
China
Prior art keywords
loss
ctc
module
target task
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111135001.3A
Other languages
Chinese (zh)
Other versions
CN113870846A (en)
Inventor
罗剑
王健宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202111135001.3A priority Critical patent/CN113870846B/en
Publication of CN113870846A publication Critical patent/CN113870846A/en
Application granted granted Critical
Publication of CN113870846B publication Critical patent/CN113870846B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 - Training
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/28 - Constructional details of speech recognition systems
    • G10L15/32 - Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to artificial intelligence, and discloses a speech recognition method based on artificial intelligence, which comprises the following steps: inputting acquired training data into a speech recognition module of a preset joint recognition model, and acquiring output data of the speech recognition module and a first target task loss; inputting the output data into a loss prediction module of the joint recognition model to obtain a second target task loss of the loss prediction module; acquiring a total task loss of the joint recognition model based on the first target task loss and the second target task loss; performing iterative training on the joint recognition model based on the training data until the total task loss converges within a preset range, so as to form a trained joint recognition model; and recognizing a speech signal to be detected based on the speech recognition module in the trained joint recognition model, and acquiring a corresponding recognition result. The invention can improve the accuracy and efficiency of speech recognition.

Description

Speech recognition method, device and storage medium based on artificial intelligence
Technical Field
The present invention relates to the field of artificial intelligence technology, and in particular, to a speech recognition method, device, electronic apparatus, and computer readable storage medium based on artificial intelligence.
Background
At present, end-to-end neural networks have gradually achieved remarkable results in the Automatic Speech Recognition (ASR) task; among existing models, CTC, attention-based encoder-decoder, and hybrid CTC/attention architectures have drawn wide attention from researchers. However, these end-to-end models often suffer from low computational efficiency due to the complexity of their network architectures, and in addition they require large amounts of labeled speech data for training. It is well known that labeling speech data is an expensive and time-consuming task; labeling an audio clip often takes roughly ten times the duration of the clip itself. Therefore, optimizing training efficiency is necessary to improve the performance of an end-to-end speech recognition system.
Existing sampling strategies for speech recognition can be mainly divided into two methods. One is the probability-based lowest-confidence method, which assumes that the data in the data set with the lowest decoding-path probability contains the largest amount of information. However, the lowest-confidence method only considers the probability of the most likely decoding path in an audio sample, and does not consider the probabilities of all decoding paths. Another popular method is expected gradient length, which selects the samples expected to have the largest gradient length and approximates the true gradient length using a neural network. This method, however, is sensitive to outliers, because outliers generally produce large gradients. In addition, the gradient-prediction network is trained separately from the speech recognition network, so the overall system is not an end-to-end structure.
Disclosure of Invention
The invention provides a speech recognition method, a speech recognition device, an electronic device and a computer readable storage medium based on artificial intelligence, and its main purpose is to improve the efficiency and accuracy of speech recognition.
In order to achieve the above object, the present invention provides a speech recognition method based on artificial intelligence, comprising: inputting acquired training data into a speech recognition module of a preset joint recognition model, and acquiring output data of the speech recognition module and a first target task loss;
inputting the output data into a loss prediction module of the joint recognition model to obtain a second target task loss of the loss prediction module;
acquiring a total task loss of the joint recognition model based on the first target task loss and the second target task loss;
performing iterative training on the joint recognition model based on the training data until the total task loss converges within a preset range, so as to form a trained joint recognition model;
and recognizing a speech signal to be detected based on the speech recognition module in the trained joint recognition model, and acquiring a corresponding recognition result.
In addition, in an optional technical solution, the step of obtaining the output data of the speech recognition module and the first target task loss includes:
encoding the training data based on an encoder network in the speech recognition module to obtain hidden features corresponding to the training data, which are taken as the encoder output;
generating, through a decoder network in the speech recognition module and based on the encoder output, a text label sequence corresponding to the encoder output, which is taken as the decoder output;
acquiring the negative log likelihood of the real text sequence of the training data under the hidden features as the CTC loss of the speech recognition module, and determining the attention loss of the speech recognition module based on the cross entropy loss between the text label sequence and the real text sequence;
and determining a first target task loss of the speech recognition module based on the CTC loss and the attention loss.
In addition, in an optional technical solution, the CTC loss of the speech recognition module is expressed as:
L_ctc = -∑_{t=1}^{T} log P(y|h_t)
where y represents the real text sequence, h represents the hidden features, t indexes the t-th hidden feature, and P(y|h_t) represents the probability of the real text sequence given the t-th hidden feature;
the attention loss is expressed as:
L_attention = -∑_{s=1}^{S} y*_s · log P(y_s | g, ŷ_{s-1})
where y represents the real text sequence, g = (g_1, g_2, ..., g_S) represents the hidden features output by the decoder network, S represents the length of the real text sequence, y*_s represents the frequency of occurrence of the target label y_s in the real text sequence at step s of the decoder network output, and ŷ_{s-1} represents the character predicted at step s-1 of the text sequence;
the first target task loss is expressed as:
L_asr = λ·L_ctc + (1-λ)·L_attention
where L_ctc represents the CTC loss of the speech recognition module, L_attention represents the attention loss, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
Furthermore, in an optional solution, the step of obtaining the second target task loss of the loss prediction module includes:
inputting the encoder output of the output data into a CTC loss prediction module of the loss prediction module, and obtaining a CTC prediction loss corresponding to the training data;
inputting the decoder output of the output data into an attention loss prediction module of the loss prediction module, and obtaining an attention prediction loss corresponding to the training data;
and determining a second target task loss of the loss prediction module based on the CTC prediction loss and the attention prediction loss.
Furthermore, an optional solution is that the step of determining the second target task loss of the loss prediction module based on the CTC predicted loss and the attention predicted loss includes:
Acquiring a first error loss function corresponding to the CTC predicted loss based on the CTC predicted loss;
obtaining a second error loss function corresponding to the attention prediction loss based on the attention prediction loss;
a second target task loss of the loss prediction module is determined based on the first error loss function and the second error loss function.
In addition, in an optional technical solution, the first error loss function is expressed as:
ε_ctc = 0.5·(L_ctc - L̂_ctc)²/β, if |L_ctc - L̂_ctc| < β; otherwise ε_ctc = |L_ctc - L̂_ctc| - 0.5·β
the second error loss function is expressed as:
ε_attention = 0.5·(L_attention - L̂_attention)²/β, if |L_attention - L̂_attention| < β; otherwise ε_attention = |L_attention - L̂_attention| - 0.5·β
and the second target task loss is expressed as:
ε = λ·ε_ctc + (1-λ)·ε_attention
where L_ctc represents the CTC loss of the speech recognition module, L̂_ctc represents the CTC prediction loss, L_attention represents the attention loss, L̂_attention represents the attention prediction loss, β represents a threshold factor, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
Furthermore, in an alternative solution, the total task loss is expressed as:
L_total = λ·L_ctc + (1-λ)·L_attention + μ·(λ·ε_ctc + (1-λ)·ε_attention)
where λ·L_ctc + (1-λ)·L_attention is the first target task loss, ε = λ·ε_ctc + (1-λ)·ε_attention is the second target task loss, L_ctc represents the CTC loss of the speech recognition module, L_attention represents the attention loss, ε_ctc represents the first error loss function, ε_attention represents the second error loss function, μ represents a hyper-parameter, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
In order to solve the above problems, the present invention also provides an artificial intelligence-based voice recognition apparatus, the apparatus comprising:
the first target task loss acquisition unit is used for inputting the acquired training data into a voice recognition module of a preset joint recognition model to acquire output data of the voice recognition module and first target task loss;
A second target task loss obtaining unit, configured to input the output data into a loss prediction module of the joint identification model, so as to obtain a second target task loss of the loss prediction module;
A total task loss acquisition unit, configured to acquire a total task loss of the joint recognition model based on the first target task loss and the second target task loss;
The joint recognition model forming unit is used for carrying out iterative training on the joint recognition model based on the training data until the total task loss is converged within a preset range to form a joint recognition model;
The recognition result acquisition unit is used for recognizing the voice signal to be detected based on the voice recognition module in the joint recognition model and acquiring a corresponding recognition result.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
A memory storing at least one instruction; and
And a processor executing the instructions stored in the memory to implement the artificial intelligence based speech recognition method described above.
In order to solve the above-mentioned problems, the present invention also provides a computer-readable storage medium having stored therein at least one instruction that is executed by a processor in an electronic device to implement the artificial intelligence-based speech recognition method described above.
According to the embodiment of the invention, an active learning method based on loss prediction is used for speech recognition with the joint recognition model: the speech recognition module (ASR) performs the speech recognition task, the loss prediction module (LP) predicts the CTC and attention losses, and the predicted loss of each sample can be used as a ranking criterion. The method can therefore be applied to speech recognition tasks that need to find the most valuable samples among unlabeled samples, and can be combined with semi-supervised learning on unlabeled data, thereby further improving the training efficiency and recognition accuracy of the speech recognition module.
Drawings
FIG. 1 is a flow chart of an artificial intelligence based speech recognition method according to an embodiment of the invention;
FIG. 2 is a schematic diagram illustrating a structure of a speech recognition module and a loss prediction module according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a training structure of a joint recognition model based on artificial intelligence according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a unit module for implementing an artificial intelligence-based speech recognition device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an internal structure of an electronic device implementing an artificial intelligence-based speech recognition method according to an embodiment of the present invention;
the achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In order to solve the problems of existing speech recognition schemes, which only consider the probability of the most likely decoding path in an audio sample rather than the probabilities of all decoding paths and are very sensitive to outliers, the invention provides a speech recognition method based on artificial intelligence. The speech signal to be detected may comprise audio data of various lengths; after processing by the speech recognition module, the text information with the highest probability can be obtained, thereby realizing recognition conversion from the speech signal to a text signal.
The embodiment of the invention can acquire and process the related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The invention provides a voice recognition method based on artificial intelligence. Referring to fig. 1, a flowchart of an artificial intelligence-based speech recognition method according to an embodiment of the present invention is shown. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
In this embodiment, the artificial intelligence-based voice recognition method includes:
S100: inputting the acquired training data into a speech recognition module of a preset joint recognition model, and acquiring output data of the speech recognition module and a first target task loss;
S200: inputting the output data into a loss prediction module of the joint recognition model to obtain a second target task loss of the loss prediction module;
S300: acquiring a total task loss of the joint recognition model based on the first target task loss and the second target task loss;
S400: performing iterative training on the joint recognition model based on the training data until the total task loss converges within a preset range, so as to form a trained joint recognition model;
S500: recognizing a speech signal to be detected based on the speech recognition module in the trained joint recognition model, and acquiring a corresponding recognition result.
Specifically, the joint recognition model may take the form of a model combining ASR (Automatic Speech Recognition) and LP (Loss Prediction); that is, the joint recognition model of the present invention may also be understood as an ASR/LP joint model. Further, the preset joint recognition model includes a speech recognition module and a loss prediction module; the speech recognition module includes an encoder network and a decoder network, and the loss prediction module further includes a CTC loss prediction module and an attention loss prediction module. The task of the speech recognition module is to obtain the predicted text sentence corresponding to the training data through the encoder network and the decoder network, while the loss prediction module performs loss prediction for the predicted text sentence according to the hidden features extracted in the speech recognition module. In the above step S100, the step of obtaining the output data of the speech recognition module and the first target task loss may further include:
S110: encoding the training data based on the encoder network in the speech recognition module to obtain hidden features corresponding to the training data, which are taken as the encoder output;
S120: generating, through the decoder network in the speech recognition module and based on the encoder output, a text label sequence corresponding to the encoder output, which is taken as the decoder output; wherein the output data of the speech recognition module comprises the above-mentioned encoder output and decoder output.
S130: acquiring the negative log likelihood of the real text sequence of the training data under the hidden features as the CTC loss of the speech recognition module, and determining the attention loss of the speech recognition module based on the cross entropy loss between the text label sequence and the real text sequence;
S140: determining a first target task loss of the speech recognition module based on the CTC loss and the attention loss.
As a specific example, fig. 2 shows a schematic structure of a speech recognition module and a loss prediction module according to an embodiment of the present invention.
As shown in fig. 2, unlabeled data and part of the labeled data are input into the speech recognition module for processing as training data; a loss ranking of the corresponding text label sequences is then obtained through the loss prediction module, the top K pieces of target data are selected for manual labeling based on this ranking, and after labeling the target data are fed back into the speech recognition module for iterative training. The encoder network and the decoder network are two recurrent neural networks corresponding respectively to the input sequence and the output sequence of the training data; in practice, a special character can be appended to the end of the input and output sequences to mark the termination of a sequence, i.e. the current output sequence is terminated when a preset special character such as <eos> (end of sequence) is output.
Specifically, for a given audio sample x of the training data and its real text sequence y, the CTC loss and the attention loss can be obtained by the speech recognition module, which mainly includes two parts: θ_asr = [θ_encoder, θ_decoder], where θ_encoder represents the encoder network and θ_decoder represents the decoder network. The encoder network converts the speech feature sequence x = (x_1, x_2, ..., x_T) of the input training data into hidden features h = (h_1, h_2, ..., h_T), and the decoder network outputs a text label sequence ŷ = (ŷ_1, ŷ_2, ..., ŷ_S) according to the hidden features, where T represents the audio frame length of the speech features and S represents the length of the text label sequence.
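For concreteness, the mapping from the speech feature sequence x to the hidden features h and the decoder states g can be sketched as below. This is only a minimal illustration: the GRU-based recurrent networks, the mean-pooled context, and all dimensions are assumptions for exposition, not the exact networks of the embodiment.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """theta_encoder: maps speech features x = (x_1..x_T) to hidden features h = (h_1..h_T)."""
    def __init__(self, feat_dim=80, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, num_layers=2, batch_first=True)

    def forward(self, x):                      # x: (batch, T, feat_dim)
        h, _ = self.rnn(x)                     # h: (batch, T, hidden_dim)
        return h

class Decoder(nn.Module):
    """theta_decoder: emits a label distribution per output step from the hidden features."""
    def __init__(self, hidden_dim=256, vocab_size=5000):
        super().__init__()
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, h, max_steps):
        # Simplified: every output step attends to the mean of h (a stand-in for attention).
        context = h.mean(dim=1, keepdim=True).repeat(1, max_steps, 1)
        g, _ = self.rnn(context)               # g: (batch, S, hidden_dim) decoder states
        return self.out(g), g                  # label logits, plus g for the loss-prediction head
```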
Furthermore, the CTC loss L_ctc of the speech recognition module can be defined as the negative log likelihood of the real text sequence of the training data under the hidden features h, with h = θ_encoder(x).
In particular, the CTC loss L_ctc of the speech recognition module is expressed as:
L_ctc = -∑_{t=1}^{T} log P(y|h_t)
where y represents the real text sequence, h represents the hidden features, t indexes the t-th hidden feature, and P(y|h_t) represents the probability of the real text sequence given the t-th hidden feature.
Furthermore, the attention loss L_attention can be defined as the cross entropy loss between the text label sequence ŷ predicted under the condition of the decoder network's output hidden features g and the real text sequence, where the hidden features g = (g_1, g_2, ..., g_S) in the decoder network correspond to the hidden features h in the encoder network. The attention loss is expressed as:
L_attention = -∑_{s=1}^{S} y*_s · log P(y_s | g, ŷ_{s-1})
where y represents the real text sequence, g = (g_1, g_2, ..., g_S) represents the hidden features/states output by the decoder network, S represents the length of the real text sequence, y*_s represents the frequency of occurrence of the target label y_s in the real text sequence at step s of the decoder network output, and ŷ_{s-1} represents the character predicted at step s-1 of the text sequence.
Finally, the first target task loss is expressed as:
L_asr = λ·L_ctc + (1-λ)·L_attention
where L_ctc represents the CTC loss of the speech recognition module, L_attention represents the attention loss, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
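As an illustration of how L_ctc, L_attention and the first target task loss could be combined in code, a hedged sketch using PyTorch's built-in CTC and cross-entropy losses is shown below; the tensor shapes, the blank index and the λ value are assumptions chosen for the example rather than values fixed by the embodiment.

```python
import torch
import torch.nn.functional as F

def first_target_task_loss(ctc_log_probs, dec_logits, targets,
                           input_lengths, target_lengths, lam=0.5):
    """L_asr = lam * L_ctc + (1 - lam) * L_attention (hybrid CTC/attention objective)."""
    # CTC loss: negative log likelihood of the real text sequence y given h.
    # ctc_log_probs: (T, batch, vocab) log-probabilities from the encoder branch.
    l_ctc = F.ctc_loss(ctc_log_probs, targets, input_lengths, target_lengths,
                       blank=0, zero_infinity=True)
    # Attention loss: cross entropy between the decoder's predicted label
    # sequence and the real text sequence y.
    # dec_logits: (batch, S, vocab); targets: (batch, S) label indices.
    l_att = F.cross_entropy(dec_logits.transpose(1, 2), targets)
    return lam * l_ctc + (1.0 - lam) * l_att, l_ctc, l_att
```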
In the above step S200, the step of obtaining the second target task loss of the loss prediction module may further include:
S210: inputting the encoder output of the output data into the CTC loss prediction module of the loss prediction module, and obtaining a CTC prediction loss corresponding to the training data;
S220: inputting the decoder output of the output data into the attention loss prediction module of the loss prediction module, and obtaining an attention prediction loss corresponding to the training data;
S230: determining a second target task loss of the loss prediction module based on the CTC prediction loss and the attention prediction loss.
In an example embodiment, the CTC prediction loss may be determined based on the last layer of the hidden features h output by the encoder, and the attention prediction loss may be determined based on the hidden features g of the decoder.
Specifically, the loss prediction module θ_lp is also made up of two parts, θ_lp = [θ_ctc, θ_attention], where θ_ctc represents the CTC loss prediction module. The corresponding CTC prediction loss L̂_ctc may be defined as the CTC loss predicted for the audio sample x of the given training data: the hidden features are first obtained through the encoder network θ_encoder of the speech recognition module, the last layer of hidden features of θ_encoder is then taken as the input of θ_ctc, and the CTC prediction loss can be expressed as:
L̂_ctc = θ_ctc(h)
Similarly, the attention loss prediction module θ_attention predicts the attention loss based on the hidden features g of the decoder, i.e. the attention prediction loss can be expressed as:
L̂_attention = θ_attention(g)
Finally, the target task loss of the loss prediction module may be determined based on the CTC prediction loss and the attention prediction loss.
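A hedged sketch of the two loss-prediction heads is given below: one head maps the last-layer encoder features h to a scalar CTC-loss estimate, the other maps the decoder hidden states g to a scalar attention-loss estimate. The mean pooling, the projection sizes and the shared head class are illustrative assumptions, not the embodiment's exact design.

```python
import torch
import torch.nn as nn

class LossPredictionHead(nn.Module):
    """Predicts a scalar loss value from a sequence of hidden features."""
    def __init__(self, feat_dim=256, proj_dim=128):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(feat_dim, proj_dim), nn.ReLU(),
                                  nn.Linear(proj_dim, 1))

    def forward(self, feats):                  # feats: (batch, steps, feat_dim)
        pooled = feats.mean(dim=1)             # pool over frames / output steps
        return self.proj(pooled).squeeze(-1)   # (batch,) predicted loss values

# theta_ctc consumes the last-layer encoder features h, theta_attention the
# decoder hidden states g (shapes assumed as in the sketches above).
theta_ctc = LossPredictionHead(feat_dim=256)
theta_attention = LossPredictionHead(feat_dim=256)
```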
As a specific example, in a preferred embodiment of the present solution, in order to measure the difference between the true loss and the predicted loss, the loss may be optimized using a SmoothL1 loss error function (a smoothed version between the L1 error and the MSE error) with a threshold factor β. Specifically, step S230 may further include:
S231: acquiring a first error loss function corresponding to the CTC prediction loss based on the CTC prediction loss;
S232: acquiring a second error loss function corresponding to the attention prediction loss based on the attention prediction loss;
S233: determining a second target task loss of the loss prediction module based on the first error loss function and the second error loss function.
Specifically, the first error loss function is expressed as:
ε_ctc = 0.5·(L_ctc - L̂_ctc)²/β, if |L_ctc - L̂_ctc| < β; otherwise ε_ctc = |L_ctc - L̂_ctc| - 0.5·β
the second error loss function is expressed as:
ε_attention = 0.5·(L_attention - L̂_attention)²/β, if |L_attention - L̂_attention| < β; otherwise ε_attention = |L_attention - L̂_attention| - 0.5·β
and the second target task loss is expressed as:
ε = λ·ε_ctc + (1-λ)·ε_attention
where L_ctc represents the CTC loss of the speech recognition module, L̂_ctc represents the CTC prediction loss, L_attention represents the attention loss, L̂_attention represents the attention prediction loss, β represents a threshold factor, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
Finally, the total task loss is expressed as:
L_total = λ·L_ctc + (1-λ)·L_attention + μ·(λ·ε_ctc + (1-λ)·ε_attention)
where λ·L_ctc + (1-λ)·L_attention is the first target task loss, ε = λ·ε_ctc + (1-λ)·ε_attention is the second target task loss, L_ctc represents the CTC loss of the speech recognition module, L_attention represents the attention loss, ε_ctc represents the first error loss function, ε_attention represents the second error loss function, μ represents a hyper-parameter, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
In the above step S300, the total task loss thus mainly includes four components: the CTC loss L_ctc, the attention loss L_attention, the CTC prediction error ε_ctc, and the attention prediction error ε_attention, combined as
L_total = λ·L_ctc + (1-λ)·L_attention + μ·(λ·ε_ctc + (1-λ)·ε_attention)
where λ represents a scaling factor, 0 ≤ λ ≤ 1, and μ represents a hyper-parameter; the speech recognition module and the loss prediction module of the joint recognition model are jointly trained on the total task loss.
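The combination of the true losses, the SmoothL1 prediction errors, and the total task loss can be sketched in code as follows. The values of β, λ and μ, the use of detach() on the true losses, and the assumption that all losses are batch-averaged scalar tensors are illustrative choices, not details fixed by the embodiment.

```python
import torch
import torch.nn.functional as F

def total_task_loss(l_ctc, l_att, pred_l_ctc, pred_l_att,
                    lam=0.5, mu=1.0, beta=1.0):
    """L_total = lam*L_ctc + (1-lam)*L_att + mu*(lam*eps_ctc + (1-lam)*eps_att)."""
    # eps_ctc / eps_attention: SmoothL1 error (threshold factor beta) between
    # true and predicted losses; detach() keeps the prediction error from
    # back-propagating through the true losses themselves (an assumption).
    eps_ctc = F.smooth_l1_loss(pred_l_ctc, l_ctc.detach(), beta=beta)
    eps_att = F.smooth_l1_loss(pred_l_att, l_att.detach(), beta=beta)
    l_asr = lam * l_ctc + (1.0 - lam) * l_att        # first target task loss
    eps = lam * eps_ctc + (1.0 - lam) * eps_att      # second target task loss
    return l_asr + mu * eps
```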
As a specific example, fig. 3 shows a training structure of a joint recognition model according to an embodiment of the present invention.
As shown in fig. 3, the acoustic features of the training data are input into the encoder network, processed by a multi-head self-attention layer, and passed to an Add & Norm layer, where Add denotes a residual connection (Residual Connection) used to prevent network degradation and Norm denotes Layer Normalization, which normalizes the activation values of each layer. The result is then passed through a feed-forward module and another Add & Norm layer, and output to the CTC loss prediction module and to the multi-head self-attention layer in the decoder network. Text embedding is performed in the decoder network: during model training, part of the samples recommended by the ranking criterion can be manually labeled and then fed into the decoder network, and in application the text can take forms such as a word list. Finally, the Add & Norm layer of the decoder network outputs to a fully connected layer to obtain the probabilities of the predicted text, and the result with the highest probability is selected as the final prediction result.
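The encoder structure just described (multi-head self-attention followed by Add & Norm residual connections and a feed-forward module) corresponds closely to a standard Transformer encoder layer. A hedged sketch using PyTorch's built-in layer is shown below; the model width, number of heads, layer count and dummy input shapes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# One encoder block: multi-head self-attention -> Add & Norm -> feed-forward -> Add & Norm.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=256,           # hidden feature size (illustrative)
    nhead=4,               # number of self-attention heads
    dim_feedforward=1024,  # feed-forward module width
    batch_first=True,
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

acoustic_feats = torch.randn(8, 200, 256)    # (batch, frames, d_model) dummy features
hidden = encoder(acoustic_feats)             # hidden features fed to the CTC branch and decoder
```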
The true CTC loss and the true attention loss are understood as CTC loss and attention loss described in the above, and the predicted CTC loss and the predicted attention loss are understood as CTC predicted loss and attention predicted loss described in the above, and are not described in detail herein.
Further, in step S400, the training data include an unlabeled data set Du and a labeled data set Dl. In each iteration of the iterative training, a preset number of samples with the highest ranking evaluation values can be selected from Du for manual labeling. In addition, the CTC prediction loss L̂_ctc can be normalized by the audio frame length T of the training data, and the attention prediction loss L̂_attention can be normalized by the length S of the text label sequence, so that the two are converted to the same scale; this prevents the joint recognition model from always selecting longer audio samples for training (longer audio samples have larger loss values and therefore higher ranking evaluation values). Further, the top-K sample set Da in the data set Du can be labeled manually, and in the next iteration the joint recognition model is trained again on the data Dl ∪ Da, until the speech recognition module θ_asr converges or reaches the required performance.
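The selection step described above (normalizing the predicted CTC loss by the audio frame length T and the predicted attention loss by the label sequence length S, then picking the top-K unlabeled samples for manual labeling) might look like the following sketch; the field names, scoring weights and data structures are assumptions introduced for illustration.

```python
def rank_unlabeled_samples(samples, lam=0.5, top_k=100):
    """samples: list of dicts describing unlabeled audio clips.

    Each entry is assumed to carry:
      'id', 'pred_ctc_loss', 'pred_att_loss',
      'num_frames' (T), 'num_labels' (predicted label sequence length S).
    Returns the ids of the top_k samples with the highest normalized predicted loss.
    """
    def score(s):
        # Normalize so long audio clips do not dominate the ranking.
        ctc_term = s["pred_ctc_loss"] / max(s["num_frames"], 1)
        att_term = s["pred_att_loss"] / max(s["num_labels"], 1)
        return lam * ctc_term + (1.0 - lam) * att_term

    ranked = sorted(samples, key=score, reverse=True)
    return [s["id"] for s in ranked[:top_k]]

# The selected ids (set Da) are labeled manually and merged with the labeled
# set Dl; the joint model is then retrained on Dl ∪ Da in the next iteration.
```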
Finally, the speech signal to be detected is recognized by the speech recognition module in the trained joint recognition model, and the corresponding recognition result is acquired. The speech signal to be detected can comprise audio data of various lengths; after processing by the speech recognition module, the text information with the highest probability can be obtained, thereby realizing recognition conversion from the speech signal to a text signal. The recognition accuracy is high, the sample size required for model training is small, the labeling cost is low, and the method is suitable for various speech recognition scenarios.
FIG. 4 is a functional block diagram of an artificial intelligence based speech recognition device of the present invention.
The artificial intelligence based speech recognition apparatus 200 of the present invention may be installed in an electronic device. According to the implemented functions, the artificial intelligence based speech recognition apparatus may include a first target task loss acquisition unit 210, a second target task loss acquisition unit 220, a total task loss acquisition unit 230, a joint recognition model formation unit 240, and a recognition result acquisition unit 250. The units referred to herein, also called modules, are series of computer program segments that are stored in a memory of the electronic device, can be executed by a processor of the electronic device, and can perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
A first target task loss obtaining unit 210, configured to input the obtained training data into a voice recognition module of a preset joint recognition model, and obtain output data of the voice recognition module and a first target task loss;
A second target task loss obtaining unit 220, configured to input the output data into a loss prediction module of the joint identification model, so as to obtain a second target task loss of the loss prediction module;
A total task loss obtaining unit 230, configured to obtain a total task loss of the joint identification model based on the first target task loss and the second target task loss;
A joint recognition model forming unit 240, configured to iteratively train the joint recognition model based on the training data until the total task loss converges within a preset range, to form a joint recognition model;
The recognition result obtaining unit 250 is configured to identify the to-be-detected voice signal based on the voice recognition module in the joint recognition model, and obtain a corresponding recognition result.
Specifically, the joint recognition model may take the form of a model combining ASR (Automatic Speech Recognition) and LP (Loss Prediction); that is, the joint recognition model of the present invention may also be understood as an ASR/LP joint model. Further, the preset joint recognition model includes a speech recognition module and a loss prediction module; the speech recognition module includes an encoder network and a decoder network, and the loss prediction module further includes a CTC loss prediction module and an attention loss prediction module. The task of the speech recognition module is to obtain the predicted text sentence corresponding to the training data through the encoder network and the decoder network, while the loss prediction module performs loss prediction for the predicted text sentence according to the hidden features extracted in the speech recognition module. The first target task loss acquisition unit 210, which obtains the output data of the speech recognition module and the first target task loss, may further include:
a hidden feature acquisition unit, used for encoding the training data based on the encoder network in the speech recognition module, so as to acquire hidden features corresponding to the training data, which are taken as the encoder output;
a decoder output acquisition unit, configured to generate, through the decoder network in the speech recognition module and based on the encoder output, a text label sequence corresponding to the encoder output, which is taken as the decoder output; wherein the output data of the speech recognition module comprises the above-mentioned encoder output and decoder output.
an attention loss determination unit, configured to obtain the negative log likelihood of the real text sequence of the training data under the hidden features as the CTC loss of the speech recognition module, and to determine the attention loss of the speech recognition module based on the cross entropy loss between the text label sequence and the real text sequence;
and a first target task loss determination unit, configured to determine a first target task loss of the speech recognition module based on the CTC loss and the attention loss.
Specifically, unlabeled data and part of the labeled data are input into the speech recognition module for processing as training data; a loss ranking of the corresponding text label sequences is then obtained through the loss prediction module, the top K pieces of target data are selected for manual labeling based on this ranking, and after labeling the target data are fed back into the speech recognition module for iterative training. The encoder network and the decoder network are two recurrent neural networks corresponding respectively to the input sequence and the output sequence of the training data; in practice, a special character can be appended to the end of the input and output sequences to mark the termination of a sequence, i.e. the current output sequence is terminated when a preset special character such as <eos> (end of sequence) is output.
For a given audio sample x of the training data and its real text sequence y, the CTC loss and the attention loss can be obtained by the speech recognition module, which mainly comprises two parts: θ_asr = [θ_encoder, θ_decoder], where θ_encoder represents the encoder network and θ_decoder represents the decoder network. The encoder network converts the speech feature sequence x = (x_1, x_2, ..., x_T) of the input training data into hidden features h = (h_1, h_2, ..., h_T), and the decoder network outputs a text label sequence ŷ = (ŷ_1, ŷ_2, ..., ŷ_S) according to the hidden features, where T represents the audio frame length of the speech features and S represents the length of the text label sequence.
Furthermore, the CTC loss L_ctc of the speech recognition module can be defined as the negative log likelihood of the real text sequence of the training data under the hidden features h, with h = θ_encoder(x).
In particular, the CTC loss L_ctc of the speech recognition module is expressed as:
L_ctc = -∑_{t=1}^{T} log P(y|h_t)
where y represents the real text sequence, h represents the hidden features, t indexes the t-th hidden feature, and P(y|h_t) represents the probability of the real text sequence given the t-th hidden feature.
Furthermore, the attention loss L_attention can be defined as the cross entropy loss between the text label sequence ŷ predicted under the condition of the decoder network's output hidden features g and the real text sequence, where the hidden features g = (g_1, g_2, ..., g_S) in the decoder network correspond to the hidden features h in the encoder network. The attention loss is expressed as:
L_attention = -∑_{s=1}^{S} y*_s · log P(y_s | g, ŷ_{s-1})
where y represents the real text sequence, g = (g_1, g_2, ..., g_S) represents the hidden features/states output by the decoder network, S represents the length of the real text sequence, y*_s represents the frequency of occurrence of the target label y_s in the real text sequence at step s of the decoder network output, and ŷ_{s-1} represents the character predicted at step s-1 of the text sequence.
Finally, the first target task loss is expressed as:
L_asr = λ·L_ctc + (1-λ)·L_attention
where L_ctc represents the CTC loss of the speech recognition module, L_attention represents the attention loss, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
In the second target task loss acquisition unit 220, the units for obtaining the second target task loss of the loss prediction module may further include:
a CTC prediction loss acquisition module, used for inputting the encoder output of the output data into the CTC loss prediction module of the loss prediction module to obtain a CTC prediction loss corresponding to the training data;
an attention prediction loss acquisition module, configured to input the decoder output of the output data into the attention loss prediction module of the loss prediction module to obtain an attention prediction loss corresponding to the training data;
and a second target task loss determination module, for determining a second target task loss of the loss prediction module based on the CTC prediction loss and the attention prediction loss.
In an example embodiment, the CTC prediction loss may be determined based on the last layer of the hidden features h output by the encoder, and the attention prediction loss may be determined based on the hidden features g of the decoder.
Specifically, the loss prediction module θ_lp is also made up of two parts, θ_lp = [θ_ctc, θ_attention], where θ_ctc represents the CTC loss prediction module. The corresponding CTC prediction loss L̂_ctc may be defined as the CTC loss predicted for the audio sample x of the given training data: the hidden features are first obtained through the encoder network θ_encoder of the speech recognition module, the last layer of hidden features of θ_encoder is then taken as the input of θ_ctc, and the CTC prediction loss can be expressed as:
L̂_ctc = θ_ctc(h)
Similarly, the attention loss prediction module θ_attention predicts the attention loss based on the hidden features g of the decoder, i.e. the attention prediction loss can be expressed as:
L̂_attention = θ_attention(g)
Finally, the target task loss of the loss prediction module may be determined based on the CTC prediction loss and the attention prediction loss.
As a specific example, in a preferred embodiment of the present solution, in order to measure the difference between the true loss and the predicted loss, the loss may be optimized using a SmoothL1 loss error function (a smoothed version between the L1 error and the MSE error) with a threshold factor β. Specifically, the second target task loss determination module may further include:
A first error loss function obtaining sub-module for obtaining a first error loss function corresponding to the CTC predicted loss based on the CTC predicted loss;
A second error loss function obtaining sub-module for obtaining a second error loss function corresponding to the attention prediction loss based on the attention prediction loss;
A second target task loss determination submodule for determining a second target task loss of the loss prediction module based on the first error loss function and the second error loss function.
Specifically, the first error loss function is expressed as:
ε_ctc = 0.5·(L_ctc - L̂_ctc)²/β, if |L_ctc - L̂_ctc| < β; otherwise ε_ctc = |L_ctc - L̂_ctc| - 0.5·β
the second error loss function is expressed as:
ε_attention = 0.5·(L_attention - L̂_attention)²/β, if |L_attention - L̂_attention| < β; otherwise ε_attention = |L_attention - L̂_attention| - 0.5·β
and the second target task loss is expressed as:
ε = λ·ε_ctc + (1-λ)·ε_attention
where L_ctc represents the CTC loss of the speech recognition module, L̂_ctc represents the CTC prediction loss, L_attention represents the attention loss, L̂_attention represents the attention prediction loss, β represents a threshold factor, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
Finally, the total task loss is expressed as:
L_total = λ·L_ctc + (1-λ)·L_attention + μ·(λ·ε_ctc + (1-λ)·ε_attention)
where λ·L_ctc + (1-λ)·L_attention is the first target task loss, ε = λ·ε_ctc + (1-λ)·ε_attention is the second target task loss, L_ctc represents the CTC loss of the speech recognition module, L_attention represents the attention loss, ε_ctc represents the first error loss function, ε_attention represents the second error loss function, μ represents a hyper-parameter, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
In the above-described total task loss acquisition unit 230, the total task loss thus mainly includes four components: the CTC loss L_ctc, the attention loss L_attention, the CTC prediction error ε_ctc, and the attention prediction error ε_attention, combined as
L_total = λ·L_ctc + (1-λ)·L_attention + μ·(λ·ε_ctc + (1-λ)·ε_attention)
where λ represents a scaling factor, 0 ≤ λ ≤ 1, and μ represents a hyper-parameter; the speech recognition module and the loss prediction module of the joint recognition model are jointly trained on the total task loss.
As a specific example, the acoustic features of the training data are input into the encoder network, processed by a multi-head self-attention layer, and passed to an Add & Norm layer, where Add denotes a residual connection (Residual Connection) used to prevent network degradation and Norm denotes Layer Normalization, which normalizes the activation values of each layer. The result is then passed through a feed-forward module and another Add & Norm layer, and output to the CTC loss prediction module and to the multi-head self-attention layer in the decoder network. Text embedding is performed in the decoder network: during model training, part of the samples recommended by the ranking criterion can be manually labeled and then fed into the decoder network, and in application the text can take forms such as a word list. Finally, the Add & Norm layer of the decoder network outputs to a fully connected layer to obtain the probabilities of the predicted text, and the result with the highest probability is selected as the final prediction result.
The true CTC loss and the true attention loss are understood as CTC loss and attention loss described in the above, and the predicted CTC loss and the predicted attention loss are understood as CTC predicted loss and attention predicted loss described in the above, and are not described in detail herein.
Furthermore, in the joint recognition model forming unit 240, the training data include an unlabeled data set Du and a labeled data set Dl. In each iteration of the iterative training, a preset number of samples with the highest ranking evaluation values can be selected from Du for manual labeling. In addition, the CTC prediction loss L̂_ctc can be normalized by the audio frame length T of the training data, and the attention prediction loss L̂_attention can be normalized by the length S of the text label sequence, so that the two are converted to the same scale; this prevents the joint recognition model from always selecting longer audio samples for training (longer audio samples have larger loss values and therefore higher ranking evaluation values). Further, the top-K sample set Da in the data set Du can be labeled manually, and in the next iteration the joint recognition model is trained again on the data Dl ∪ Da, until the speech recognition module θ_asr converges or reaches the required performance.
Finally, the recognition result acquisition unit 250 recognizes the speech signal to be detected by means of the speech recognition module in the trained joint recognition model, and obtains the corresponding recognition result. The speech signal to be detected can comprise audio data of various lengths; after processing by the speech recognition module, the text information with the highest probability can be obtained, thereby realizing recognition conversion from the speech signal to a text signal. The recognition accuracy is high, the sample size required for model training is small, the labeling cost is low, and the method is suitable for various speech recognition scenarios.
Fig. 5 is a schematic structural diagram of an electronic device implementing an artificial intelligence-based speech recognition method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program stored in the memory 11 and executable on the processor 10, such as an artificial intelligence based speech recognition program 12.
The memory 11 includes at least one type of readable storage medium, including flash memory, a removable hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, or the like, provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as the code of the artificial intelligence based speech recognition program, but also for temporarily storing data that has been output or is to be output.
The processor 10 may in some embodiments be composed of integrated circuits, for example a single packaged integrated circuit, or of multiple integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the control unit of the electronic device; it connects the various components of the entire electronic device using various interfaces and lines, runs or executes the programs or modules stored in the memory 11 (e.g., the artificial intelligence based speech recognition program), and invokes the data stored in the memory 11 to perform the various functions of the electronic device 1 and process data.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection and communication between the memory 11, the at least one processor 10, and so on.
Fig. 5 shows only an electronic device with some of its components; it will be understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, which may comprise fewer or more components than shown, combine certain components, or have a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a display, or an input unit such as a keyboard (Keyboard), or a standard wired interface or wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch screen, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying the information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and do not limit the scope of the patent application to this configuration.
The artificial intelligence based speech recognition program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, may implement:
Inputting the acquired training data into a voice recognition module of a preset joint recognition model, and acquiring output data of the voice recognition module and first target task loss;
Inputting the output data into a loss prediction module of the joint identification model to obtain a second target task loss of the loss prediction module;
Acquiring a total task loss of the joint identification model based on the first target task loss and the second target task loss;
performing iterative training on the joint recognition model based on the training data until the total task loss is converged within a preset range to form a joint recognition model;
And identifying the voice signal to be detected based on the voice identification module in the joint identification model, and acquiring a corresponding identification result.
In addition, in an optional technical solution, the step of obtaining the output data of the speech recognition module and the first target task loss includes:
encoding the training data based on an encoder network in the speech recognition module to obtain hidden features corresponding to the training data, which are taken as the encoder output;
generating, through a decoder network in the speech recognition module and based on the encoder output, a text label sequence corresponding to the encoder output, which is taken as the decoder output;
acquiring the negative log likelihood of the real text sequence of the training data under the hidden features as the CTC loss of the speech recognition module, and determining the attention loss of the speech recognition module based on the cross entropy loss between the text label sequence and the real text sequence;
and determining a first target task loss of the speech recognition module based on the CTC loss and the attention loss.
In addition, in an optional technical solution, the CTC loss of the speech recognition module is expressed as:
L_ctc = -∑_{t=1}^{T} log P(y|h_t)
where y represents the real text sequence, h represents the hidden features, t indexes the t-th hidden feature, and P(y|h_t) represents the probability of the real text sequence given the t-th hidden feature;
the attention loss is expressed as:
L_attention = -∑_{s=1}^{S} y*_s · log P(y_s | g, ŷ_{s-1})
where y represents the real text sequence, g = (g_1, g_2, ..., g_S) represents the hidden features output by the decoder network, S represents the length of the real text sequence, y*_s represents the frequency of occurrence of the target label y_s in the real text sequence at step s of the decoder network output, and ŷ_{s-1} represents the character predicted at step s-1 of the text sequence;
the first target task loss is expressed as:
L_asr = λ·L_ctc + (1-λ)·L_attention
where L_ctc represents the CTC loss of the speech recognition module, L_attention represents the attention loss, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
Furthermore, in an optional solution, the step of obtaining the second target task loss of the loss prediction module includes:
inputting the encoder output of the output data into a CTC loss prediction module of the loss prediction module, and obtaining a CTC prediction loss corresponding to the training data;
inputting the decoder output of the output data into an attention loss prediction module of the loss prediction module, and obtaining an attention prediction loss corresponding to the training data;
and determining a second target task loss of the loss prediction module based on the CTC prediction loss and the attention prediction loss.
Furthermore, an optional solution is that the step of determining the second target task loss of the loss prediction module based on the CTC predicted loss and the attention predicted loss includes:
Acquiring a first error loss function corresponding to the CTC predicted loss based on the CTC predicted loss;
obtaining a second error loss function corresponding to the attention prediction loss based on the attention prediction loss;
a second target task loss of the loss prediction module is determined based on the first error loss function and the second error loss function.
In addition, in an optional technical solution, the first error loss function is expressed as:
ε_ctc = 0.5·(L_ctc - L̂_ctc)²/β, if |L_ctc - L̂_ctc| < β; otherwise ε_ctc = |L_ctc - L̂_ctc| - 0.5·β
the second error loss function is expressed as:
ε_attention = 0.5·(L_attention - L̂_attention)²/β, if |L_attention - L̂_attention| < β; otherwise ε_attention = |L_attention - L̂_attention| - 0.5·β
and the second target task loss is expressed as:
ε = λ·ε_ctc + (1-λ)·ε_attention
where L_ctc represents the CTC loss of the speech recognition module, L̂_ctc represents the CTC prediction loss, L_attention represents the attention loss, L̂_attention represents the attention prediction loss, β represents a threshold factor, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
Furthermore, in an alternative solution, the total task loss is expressed as:
L_total = λ·L_ctc + (1-λ)·L_attention + μ·(λ·ε_ctc + (1-λ)·ε_attention)
where λ·L_ctc + (1-λ)·L_attention is the first target task loss, ε = λ·ε_ctc + (1-λ)·ε_attention is the second target task loss, L_ctc represents the CTC loss of the speech recognition module, L_attention represents the attention loss, ε_ctc represents the first error loss function, ε_attention represents the second error loss function, μ represents a hyper-parameter, and λ represents a scaling factor, 0 ≤ λ ≤ 1.
Specifically, for the specific implementation of the above instructions by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not repeated herein. Further, the modules/units integrated in the electronic device 1 may be stored in a computer readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the system claims may also be implemented by a single unit or means through software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (6)

1. A speech recognition method based on artificial intelligence, the method comprising:
Inputting the acquired training data into a voice recognition module of a preset joint recognition model, and acquiring output data of the voice recognition module and first target task loss;
Inputting the output data into a loss prediction module of the joint recognition model to obtain a second target task loss of the loss prediction module;
Acquiring a total task loss of the joint recognition model based on the first target task loss and the second target task loss;
performing iterative training on the joint recognition model based on the training data until the total task loss converges within a preset range, so as to form a trained joint recognition model;
recognizing the voice signal to be detected based on the voice recognition module in the joint recognition model, and acquiring a corresponding recognition result; the step of obtaining the output data of the voice recognition module and the first target task loss comprises the following steps:
encoding the training data based on an encoder network in the voice recognition module to obtain hidden features corresponding to the training data, and outputting the hidden features as an encoder;
Outputting a text tag sequence corresponding to the encoder output as a decoder output through a decoder network in the speech recognition module based on the encoder output;
acquiring the negative log likelihood of the real text sequence of the training data under the hidden features as the CTC loss of the voice recognition module, and determining the attention loss of the voice recognition module based on the cross entropy loss between the text label sequence and the real text sequence;
Determining a first target task loss for the speech recognition module based on the CTC loss and the attention loss;
The step of obtaining the second target task loss of the loss prediction module includes:
inputting the encoder output of the output data into a CTC loss prediction module of the loss prediction module, and obtaining the CTC prediction loss corresponding to the training data;
inputting the decoder output of the output data into an attention loss prediction module of the loss prediction module, and acquiring attention prediction loss corresponding to the training data;
Determining a second target task loss of the loss prediction module based on the CTC predicted loss and the attention predicted loss;
The step of determining a second target task loss of the loss prediction module based on the CTC predicted loss and the attention predicted loss comprises:
Acquiring a first error loss function corresponding to the CTC predicted loss based on the CTC predicted loss;
obtaining a second error loss function corresponding to the attention prediction loss based on the attention prediction loss;
a second target task loss of the loss prediction module is determined based on the first error loss function and the second error loss function.
2. The artificial intelligence-based speech recognition method of claim 1,
The expression formula of the first error loss function is as follows:
The expression formula of the second error loss function is as follows:
the expression formula of the second target task loss is as follows:
ε = λ·ε_ctc + (1 − λ)·ε_attention

wherein ε_ctc and ε_attention represent the first and second error loss functions, L_ctc represents the CTC loss of the speech recognition module, L'_ctc represents the CTC prediction loss, L_attention represents the attention loss, L'_attention represents the attention prediction loss, β represents a threshold factor, λ represents a scaling factor, and 0 ≤ λ ≤ 1.
3. The artificial intelligence-based speech recognition method of claim 1,
The expression formula of the total task loss is as follows:

L_total = L + μ·ε = λ·L_ctc + (1 − λ)·L_attention + μ·[λ·ε_ctc + (1 − λ)·ε_attention]

wherein L represents the first target task loss, ε represents the second target task loss, L_ctc represents the CTC loss of the speech recognition module, L_attention represents the attention loss, ε_ctc represents the first error loss function, ε_attention represents the second error loss function, μ represents a super-parameter, λ represents a scaling factor, and 0 ≤ λ ≤ 1.
4. An apparatus for implementing an artificial intelligence based speech recognition method according to any one of claims 1-3, the apparatus comprising:
the first target task loss acquisition unit is used for inputting the acquired training data into a voice recognition module of a preset joint recognition model to acquire output data of the voice recognition module and first target task loss;
A second target task loss obtaining unit, configured to input the output data into a loss prediction module of the joint recognition model, so as to obtain a second target task loss of the loss prediction module;
A total task loss acquisition unit, configured to acquire a total task loss of the joint recognition model based on the first target task loss and the second target task loss;
The joint recognition model forming unit is used for carrying out iterative training on the joint recognition model based on the training data until the total task loss converges within a preset range, so as to form a trained joint recognition model;
The recognition result acquisition unit is used for recognizing the voice signal to be detected based on the voice recognition module in the joint recognition model and acquiring a corresponding recognition result.
5. An electronic device, the electronic device comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps in the artificial intelligence based speech recognition method according to any one of claims 1 to 3.
6. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the artificial intelligence based speech recognition method according to any one of claims 1 to 3.
CN202111135001.3A 2021-09-27 2021-09-27 Speech recognition method, device and storage medium based on artificial intelligence Active CN113870846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111135001.3A CN113870846B (en) 2021-09-27 2021-09-27 Speech recognition method, device and storage medium based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN113870846A CN113870846A (en) 2021-12-31
CN113870846B true CN113870846B (en) 2024-05-31

Family

ID=78991176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111135001.3A Active CN113870846B (en) 2021-09-27 2021-09-27 Speech recognition method, device and storage medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN113870846B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116453507B (en) * 2023-02-21 2023-09-08 北京数美时代科技有限公司 Confidence model-based voice recognition optimization method, system and storage medium
CN116631379B (en) * 2023-07-20 2023-09-26 中邮消费金融有限公司 Speech recognition method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215662A (en) * 2018-09-18 2019-01-15 平安科技(深圳)有限公司 End-to-end audio recognition method, electronic device and computer readable storage medium
CN111916067A (en) * 2020-07-27 2020-11-10 腾讯科技(深圳)有限公司 Training method and device of voice recognition model, electronic equipment and storage medium
CN113129868A (en) * 2021-03-12 2021-07-16 北京百度网讯科技有限公司 Method for obtaining speech recognition model, speech recognition method and corresponding device
CN113436620A (en) * 2021-06-30 2021-09-24 北京有竹居网络技术有限公司 Model training method, speech recognition method, device, medium and equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant