CN113420689B - Character recognition method, device, computer equipment and medium based on probability calibration

Info

Publication number: CN113420689B (granted publication of application CN202110735014.8A)
Authority: CN (China)
Prior art keywords: probability, recognition, preset, samples, calibration
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113420689A
Inventors: 洪振厚, 王健宗, 瞿晓阳
Applicant and current assignee: Ping An Technology Shenzhen Co Ltd

Classifications

    • G06F18/214: Pattern recognition; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/2415: Pattern recognition; classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. likelihood ratio or false acceptance rate versus false rejection rate
    • G06N3/04: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods

Abstract

The application belongs to the technical field of artificial intelligence and provides a character recognition method, a character recognition device, computer equipment and a computer-readable storage medium based on probability calibration. According to the method, an initial recognition image is acquired and input into a preset DARTS model, which performs character recognition on the image to obtain calibration parameters for the characters it contains. The initial recognition image is also input into a preset OCR model, which performs character recognition to obtain the character recognition Logits probability vector corresponding to the characters contained in the image. Probability calibration is then performed on the Logits probability vector according to the calibration parameters, and normalization yields the character recognition result of the characters contained in the initial recognition image. By adding this calibration of the character recognition probability, the calibration of the recognition error rate is addressed and the accuracy of character prediction in character recognition is improved.

Description

Character recognition method, device, computer equipment and medium based on probability calibration
Technical Field
The present application relates to the field of artificial intelligence technology, in particular to image detection technology, and more particularly to a text recognition method, apparatus, computer device and computer-readable storage medium based on probability calibration.
Background
In OCR (Optical Character Recognition), many application scenarios involve information extraction from certificates, for example obtaining the name printed on a provided certificate. In many such scenarios, being able to accurately recognize the name on the certificate greatly simplifies the service flow, improves efficiency, helps prevent counterfeiting and stops false information.
Although the accuracy of optical character recognition keeps improving, recognition errors still occur, so it is important to determine when and where recognition errors happen and to correct them. Conventional classifiers (e.g., SVM), however, do not automatically correct the character recognition errors that occur.
Disclosure of Invention
The application provides a character recognition method, a character recognition device, computer equipment and a computer-readable storage medium based on probability calibration, which address the technical problem that conventional technology performs no automatic calibration of character recognition errors.
In a first aspect, the present application provides a text recognition method based on probability calibration, including: acquiring an initial identification image, inputting the initial identification image into a preset DARTS model, and performing character recognition on the initial identification image to obtain calibration parameters of characters contained in the initial identification image; inputting the initial recognition image into a preset OCR model, and performing character recognition on the initial recognition image to obtain a character recognition Logits probability vector corresponding to characters contained in the initial recognition image; and carrying out probability calibration and normalization processing on the character recognition Logits probability vector according to the calibration parameters to obtain a character recognition result of the characters contained in the initial recognition image.
In a second aspect, the present application further provides a text recognition device based on probability calibration, including: the first recognition unit is used for acquiring an initial recognition image, inputting the initial recognition image into a preset DARTS model, and performing character recognition on the initial recognition image to obtain calibration parameters of characters contained in the initial recognition image; the second recognition unit is used for inputting the initial recognition image into a preset OCR model, and performing character recognition on the initial recognition image to obtain a character recognition Logits probability vector corresponding to characters contained in the initial recognition image; and the calibration recognition unit is used for carrying out probability calibration and normalization processing on the character recognition Logits probability vector according to the calibration parameters to obtain a character recognition result of the characters contained in the initial recognition image.
In a third aspect, the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the text recognition method based on probability calibration when executing the computer program.
In a fourth aspect, the present application also provides a computer readable storage medium storing a computer program, which when executed by a processor causes the processor to perform the steps of the probabilistic calibration-based text recognition method.
The application provides a character recognition method, a character recognition device, computer equipment and a computer-readable storage medium based on probability calibration. According to the method, an initial recognition image is acquired and input into a preset DARTS model, which performs character recognition to obtain the calibration parameters of the characters contained in the image; the initial recognition image is also input into a preset OCR model, which performs character recognition to obtain the character recognition Logits probability vector corresponding to those characters; and the Logits probability vector is probability-calibrated according to the calibration parameters and normalized to obtain the character recognition result. Adaptive calibration of OCR character recognition is thus realized by calibrating the error of the character recognition probability, which addresses the calibration of the recognition error rate, reduces manual intervention and improves the accuracy of character prediction in character recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a text recognition method based on probability calibration according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the overall system framework of the text recognition method based on probability calibration according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the DARTS model framework in the text recognition method based on probability calibration according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the text recognition model SRN in the text recognition method based on probability calibration according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a first sub-flowchart of the text recognition method based on probability calibration according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a second sub-flowchart of the text recognition method based on probability calibration according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a third sub-flowchart of the text recognition method based on probability calibration according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a fourth sub-flowchart of the text recognition method based on probability calibration according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a fifth sub-flowchart of the text recognition method based on probability calibration according to an embodiment of the present application;
FIG. 10 is a schematic block diagram of a text recognition device based on probability calibration according to an embodiment of the present application; and
FIG. 11 is a schematic block diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic flow chart of a text recognition method based on probability calibration according to an embodiment of the present application, and fig. 2 is a schematic overall system framework of the text recognition method based on probability calibration according to an embodiment of the present application. As shown in fig. 1 and 2, the method includes the following steps S11-S13:
S11, acquiring an initial identification image, inputting the initial identification image into a preset DARTS model, and performing character recognition on the initial identification image to obtain calibration parameters of characters contained in the initial identification image.
The DARTS model (Differentiable Architecture Search) performs character recognition on the characters contained in the initial identification image and, based on that recognition, searches a preset candidate set of neural network substructures, called a search space, for the calibration parameters describing the optimal global neural network structure. The calibration parameters comprise a weight (Weight, abbreviated W) and a bias (Bias, abbreviated B); the bias value shifts the activation function to the left or right so as to adjust the recognition result of the characters.
Specifically, neural architecture search (Neural Architecture Search, abbreviated NAS) is given a candidate set of neural network substructures called a search space and uses a preset search strategy to find the optimal global neural network structure within it. The quality (i.e., performance) of a neural network structure can be measured with preset indexes such as precision, speed, weight and bias; this search for optimal parameters in a high-dimensional space is called performance evaluation. The DARTS model converts the discrete architecture search into a continuous search over architecture weights. To realize probability calibration for character recognition, a DARTS model is introduced: the initial recognition image is acquired and input into the preset DARTS model, which performs character recognition on it and, according to the character features (which may also be called image features) of the characters it contains, outputs their calibration parameters. Referring to fig. 2 and 3, fig. 3 is a schematic diagram of the DARTS model framework in the text recognition method based on probability calibration provided in the embodiment of the present application. DARTS is composed of the five cells shown in fig. 3, namely three Normal Cells and two Reduction Cells; each Cell is composed of 4 nodes, and the lines between the nodes represent operations. The operations are 3x3 and 5x5 depthwise convolutions, 3x3 and 5x5 hole (dilated) convolutions, 3x3 max pooling and 3x3 average pooling. A Normal Cell and a Reduction Cell differ in their max pooling and average pooling, since pooling is the operation that reduces the size of the picture.
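As an illustrative aid (not part of the claimed method), the continuous relaxation that DARTS applies to the operations listed above can be sketched in Python with PyTorch. The operation set mirrors the six candidates named in the text; the simplified layer definitions (no BatchNorm/ReLU wrappers) and the class and function names are assumptions made for the sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F

def candidate_ops(channels):
    # The six candidate operations on an edge between two cell nodes.
    return nn.ModuleList([
        nn.Conv2d(channels, channels, 3, padding=1, groups=channels),  # 3x3 depthwise conv
        nn.Conv2d(channels, channels, 5, padding=2, groups=channels),  # 5x5 depthwise conv
        nn.Conv2d(channels, channels, 3, padding=2, dilation=2),       # 3x3 hole (dilated) conv
        nn.Conv2d(channels, channels, 5, padding=4, dilation=2),       # 5x5 hole (dilated) conv
        nn.MaxPool2d(3, stride=1, padding=1),                          # 3x3 max pooling
        nn.AvgPool2d(3, stride=1, padding=1),                          # 3x3 average pooling
    ])

class MixedEdge(nn.Module):
    # One line between two nodes of a cell: a softmax-weighted mixture over
    # the candidate operations, which turns the discrete choice of operation
    # into a continuous, differentiable search over architecture weights.
    def __init__(self, channels):
        super().__init__()
        self.ops = candidate_ops(channels)
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture weights

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))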
The loss function is shown in the following formulas:
f* = argmax M(f(θ*), D_valid)    Formula (1)
θ* = argmin L(f(θ), D_train)    Formula (2)
where f is the optimal function and M is the measurement mode, based on the highest accuracy found by the search; D_valid is the validation data, θ* is the optimal parameter set, and D_train is the training data.
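A minimal sketch of the alternating search that formulas (1) and (2) imply, using the common first-order approximation (an assumption; the optimizers, loss function and batch handling are also illustrative):

def search_step(model, arch_optimizer, weight_optimizer,
                train_batch, valid_batch, loss_fn):
    # Formula (2): fit the network weights theta on the training data D_train.
    x, y = train_batch
    weight_optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    weight_optimizer.step()

    # Formula (1): update the architecture weights (e.g. the alpha of each
    # MixedEdge above) so the searched structure measures best on D_valid.
    xv, yv = valid_batch
    arch_optimizer.zero_grad()
    loss_fn(model(xv), yv).backward()
    arch_optimizer.step()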
As shown in fig. 1 and fig. 2, the initial recognition Images are obtained and input into the preset DARTS model, and the preset DARTS searches for the optimal calibration probability model so as to obtain the calibration parameters of the characters contained in the initial recognition Images: the weight W (i.e., Weights) and the bias B (i.e., Bias).
S12, inputting the initial recognition image into a preset OCR model, and performing character recognition on the initial recognition image to obtain a character recognition Logits probability vector corresponding to the characters contained in the initial recognition image.
Specifically, an initial recognition image is acquired and input into a preset OCR model (Optical Character Recognition, abbreviated OCR, which may also be called character recognition). The preset OCR model may be a deep-learning sequence recognition network model (Sequence Recognition Network, abbreviated SRN). Referring to fig. 4, a schematic diagram of the text recognition SRN model in the text recognition method based on probability calibration provided in the embodiment of the present application, the SRN model shown in fig. 4 is the character recognition OCR model in the overall system framework of fig. 2 (the OCR model is not limited to the SRN model). In fig. 4, p and r are respectively the probability value and the coordinate position of a character, SOS is the start symbol and EOS is the stop symbol. In the SRN model, the CNN has 5 convolution layers and 5 pooling layers; each convolution kernel is 3x3 with stride 1, and the channel numbers are 32, 64, 128, 256 and 256 respectively. Each pooling layer is a max pooling layer with kernel size 2x2 and stride 2. The hidden layer of the BiLSTM (bidirectional LSTM) has 128 units and the hidden layer of the GRU (Gate Recurrent Unit) has 128 units. The Attention (i.e., attention layer) size is 10.
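A minimal PyTorch sketch of the backbone dimensions named above: five 3x3 convolutions with stride 1 and 32, 64, 128, 256 and 256 channels, each followed by a 2x2 max-pooling layer with stride 2, plus the 128-unit BiLSTM and GRU. The input channel count, the ReLU activations and the feature size fed to the recurrent layers are assumptions.

import torch.nn as nn

def srn_backbone(in_channels=1):
    # Five conv blocks: 3x3 kernel, stride 1, channels 32/64/128/256/256,
    # each followed by 2x2 max pooling with stride 2.
    layers, prev = [], in_channels
    for out in (32, 64, 128, 256, 256):
        layers += [
            nn.Conv2d(prev, out, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
        ]
        prev = out
    return nn.Sequential(*layers)

# Recurrent units with the hidden sizes named in the text.
encoder = nn.LSTM(input_size=256, hidden_size=128, bidirectional=True)  # BiLSTM
decoder = nn.GRU(input_size=256, hidden_size=128)                       # GRU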
The formulas for the calibration metric are as follows:
(ŷ_i, p̂_i), i = 1, ..., M    Formula (3)
conf(B_m) = (1 / |B_m|) Σ_{i ∈ B_m} p̂_i    Formula (4)
acc(B_m) = (1 / |B_m|) Σ_{i ∈ B_m} 1(ŷ_i = y_i)    Formula (5)
ECE = Σ_m (|B_m| / n) |acc(B_m) - conf(B_m)|    Formula (6)
Formula (3) is the pairing of predictive labels and prediction probabilities, where ŷ_i is a predictive label, p̂_i is the corresponding prediction probability, and M is the number of samples from the test set. Formula (4) is the average prediction probability calculation, where |B_m| is the number of label-probability pairs in bin B_m over which the average is taken. Formula (5) is the word recognition accuracy calculation (i.e., the ratio of the number of correctly recognized words to the total number recognized), where ŷ_i is the i-th predictive label and y_i is the corresponding correct label. Formula (6) is the expected calibration error calculation (Expected Calibration Error, ECE), where n is the test-set size.
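Formulas (3) to (6) amount to the usual expected calibration error computation. A numpy sketch, assuming equal-width confidence bins (the bin count of 10 is an illustrative choice):

import numpy as np

def expected_calibration_error(pred_labels, pred_probs, true_labels, num_bins=10):
    pred_labels = np.asarray(pred_labels)
    pred_probs = np.asarray(pred_probs)
    true_labels = np.asarray(true_labels)
    n = len(pred_probs)  # test-set size
    ece = 0.0
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (pred_probs > lo) & (pred_probs <= hi)
        if not in_bin.any():
            continue
        conf = pred_probs[in_bin].mean()                           # formula (4)
        acc = (pred_labels[in_bin] == true_labels[in_bin]).mean()  # formula (5)
        ece += (in_bin.sum() / n) * abs(acc - conf)                # formula (6)
    return ece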
Referring to fig. 4, normal text recognition is performed on the initial recognition image, and the text recognition Logits probability vector corresponding to the text contained in the image is obtained from the recognized text features. Here, Logits is the probability vector that has not yet entered softmax for normalization in the preset OCR model, generally the output of a fully-connected layer and the input of softmax. The softmax function, also called the normalized exponential function, is a classification function whose purpose is to present the results of multiple classes in the form of probabilities.
And S13, carrying out probability calibration and normalization processing on the character recognition Logits probability vector according to the calibration parameters to obtain a character recognition result of the characters contained in the initial recognition image.
Specifically, after the calibration parameters of the characters contained in the initial recognition image are obtained and the character recognition Logits probability vector corresponding to those characters is obtained through the preset OCR model, probability calibration is performed on the Logits probability vector according to the calibration parameters before normalization. Normalization then yields the character recognition probability corresponding to each character contained in the initial recognition image, and according to that probability the target recognition character corresponding to each image-form character is obtained, so that calibrating the error of the character recognition probability improves the accuracy of character prediction when recognition is performed based on deep learning. With continued reference to fig. 1, after the calibration parameters of the text contained in the initial identification Images, namely the weight W (i.e., Weights) and the bias B (i.e., Bias), and the text recognition Logits probability vector are obtained, the weight W, the bias B and the Logits probability vector z are input into softmax for calculation, yielding the text recognition calibration probability (Calibration Probability) after the probability calibration of the Logits vector. The calibration probability describes the probability of recognizing a text image a contained in the initial recognition image as the character a', and the image-form text is recognized as the corresponding target recognition text according to it. The preset DARTS model thus assists in correcting the probability that the preset OCR recognition model SRN makes a recognition error, improving the accuracy with which the SRN model recognizes the text of the initial recognition image.
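A minimal sketch of this calibration-then-normalization step, treating the calibration as the affine map W·z + B before softmax; the exact functional form and the vector shapes are assumptions made for illustration:

import numpy as np

def calibrated_probabilities(z, W, B):
    # Probability calibration of the Logits vector z with the searched
    # calibration parameters, followed by softmax normalization.
    logits = W @ z + B
    logits = logits - logits.max()   # for numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()

# The target recognition character is the class with the highest calibrated
# probability, e.g. alphabet[calibrated_probabilities(z, W, B).argmax()].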
According to the embodiment of the application, the initial recognition image is obtained and input into the preset DARTS model, and character recognition is performed to obtain the calibration parameters of the characters contained in the image; the image is also input into the preset OCR model, and character recognition is performed to obtain the corresponding character recognition Logits probability vectors; probability calibration is then carried out on the Logits probability vectors according to the calibration parameters, and normalization yields the character recognition results. In this way both the target recognition characters corresponding to the image-form characters contained in the initial recognition image and the character recognition probabilities of those target characters are obtained.
Referring to fig. 5, fig. 5 is a schematic diagram of a first sub-flow of a text recognition method based on probability calibration according to an embodiment of the present application. As shown in fig. 5, in this embodiment, the text recognition result includes a target recognition text corresponding to the text included in the initial recognition image and a text recognition probability of the target recognition text, and after the step of obtaining the text recognition result of the text included in the initial recognition image, the method further includes:
S14, judging whether the character recognition probability is larger than or equal to a preset first probability threshold;
s15, if the character recognition probability is greater than or equal to a preset first probability threshold, classifying the character recognition probability and target recognition characters corresponding to the character recognition probability into a preset high probability sample set;
s16, extracting a preset first number of samples from the preset high-probability sample set to serve as high-probability samples, and displaying the high-probability samples so that a user confirms the high-probability samples;
and S17, if the character recognition probability is smaller than a preset first probability threshold, not classifying the character recognition probability and the target recognition characters corresponding to the character recognition probability into a preset high probability sample set.
Specifically, a character recognition result of the characters contained in the initial recognition image is obtained; it may include the target recognition characters corresponding to those characters and the character recognition probability of each target recognition character. The higher the character recognition probability, the closer it is to the true probability and the more accurate the target recognition character; the lower the probability, the less accurate the target recognition character. The results are classified according to the character recognition probability: it is judged whether the probability is greater than or equal to a preset first probability threshold; if so, the probability and its corresponding target recognition character are classified into a preset high probability sample set; if the probability is smaller than the first threshold, it is not close enough to the true probability and they are not classified into the high probability sample set. A preset first number of samples is then extracted from the preset high probability sample set as high probability samples and displayed, so that the user confirms the high probability samples, that is, manually checks whether the samples the model considers reliable are in fact correct. Confirmed sample data can be recycled, which improves the calibration efficiency of the model and further improves its recognition efficiency.
Referring to fig. 6, fig. 6 is a schematic diagram of a second sub-flowchart of a text recognition method based on probability calibration according to an embodiment of the present application. As shown in fig. 6, in this embodiment, after the step of classifying the text recognition probability and the target recognition text corresponding to the text recognition probability into the preset high probability sample set if the text recognition probability is greater than or equal to the preset first probability threshold, the method further includes:
s18, judging whether the character recognition probability is smaller than or equal to a preset second probability threshold value;
s19, if the character recognition probability is smaller than or equal to a preset second probability threshold, classifying the character recognition probability and the target recognition characters corresponding to the character recognition probability into a preset low probability sample set;
s20, extracting a preset second number of samples from the preset low-probability sample set to serve as low-probability samples, and displaying the low-probability samples so that a user confirms the low-probability samples;
and S21, if the character recognition probability is larger than a preset second probability threshold, not classifying the character recognition probability and the target recognition characters corresponding to the character recognition probability into a preset low probability sample set.
The preset first probability threshold and the preset second probability threshold may be the same or different. When they are the same and the character recognition probability is equal to this shared threshold, the probability and its corresponding target recognition character can only be attributed to one of the two sets, either the preset low probability sample set or the preset high probability sample set.
Specifically, a character recognition result of the characters contained in the initial recognition image is obtained, which may include the target recognition characters and their character recognition probabilities, and the results are classified according to the character recognition probability. It is judged whether the probability is smaller than or equal to a preset second probability threshold; if so, the probability and its corresponding target recognition character are classified into a preset low probability sample set; if the probability is larger than the second threshold, they are not. A preset second number of samples is then extracted from the preset low probability sample set as low probability samples and displayed, so that the user confirms the low probability samples, that is, manually checks whether they are indeed recognition errors and, if so, what caused the errors.
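A sketch covering both classification sub-flows (S14 to S17 and S18 to S21); the two threshold values are illustrative assumptions:

HIGH_THRESHOLD = 0.95   # preset first probability threshold (assumed value)
LOW_THRESHOLD = 0.60    # preset second probability threshold (assumed value)

high_probability_samples = []   # preset high probability sample set
low_probability_samples = []    # preset low probability sample set

def classify_result(target_text, probability):
    sample = (target_text, probability)
    if probability >= HIGH_THRESHOLD:        # S14/S15
        high_probability_samples.append(sample)
    if probability <= LOW_THRESHOLD:         # S18/S19
        low_probability_samples.append(sample)
    # Otherwise the result joins neither set (S17/S21).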
Referring to fig. 7, fig. 7 is a schematic diagram of a third sub-flow of the text recognition method based on probability calibration according to the embodiment of the present application. As shown in fig. 7, in this embodiment, after the step of extracting a preset second number of samples from the preset low probability sample set as low probability samples and displaying the low probability samples to enable the user to confirm the low probability samples, the method further includes:
s22, judging whether the high probability sample or the low probability sample is modified;
s23, if the high probability sample or the low probability sample is modified, acquiring a modification sample corresponding to the high probability sample or the low probability sample, and carrying out character recognition again on the modification sample;
and S24, if the high-probability sample or the low-probability sample is not modified, not carrying out character recognition again on the modified sample.
Specifically, after the user checks and confirms the high probability samples or the low probability samples, the user may modify any sample that contains a recognition error. By comparing a high or low probability sample before and after the user's confirmation, the computer device can judge whether it was modified. If the sample is consistent before and after confirmation, it was not modified, the target text recognition corresponding to it contains no error, and no further processing is needed. If the sample is inconsistent before and after confirmation, it was modified; the manually checked sample data is returned to the initial training data, and cyclically training the text recognition model with such samples realizes closed-loop calibration, so that confirmed samples are fully used as training data and the recognition efficiency of the model is improved.
Referring to fig. 8, fig. 8 is a schematic diagram of a fourth sub-flow of the text recognition method based on probability calibration according to the embodiment of the present application. As shown in fig. 8, in this embodiment, the step of extracting a preset second number of samples from the preset low probability sample set as low probability samples includes:
s201, according to the text recognition probability corresponding to the low probability sample, sequencing all the low probability samples contained in the preset low probability sample set according to the sequence from small to large to obtain a low probability sample sequencing queue;
s202, according to the low-probability sample sequencing queue, extracting a preset second number of samples from the low-probability sample sequencing queue as low-probability samples according to the order from small to large.
Specifically, for the preset low probability sample set, all the low probability samples it contains may be sorted in ascending order of their text recognition probability to obtain a low probability sample sorting queue, and, following that order, the preset second number of samples with the lowest text recognition probability are extracted from the queue as the low probability samples. The lower the text recognition probability, the less accurate the target recognition text and the better the sample reflects the problems behind text recognition errors. Extracting the samples with the lowest probabilities therefore exposes, through manual confirmation, as many of the problems in the text recognition process as possible, so that the largest problems can be resolved manually and subsequent text recognition performance and efficiency are improved.
Referring to fig. 9, fig. 9 is a schematic diagram of a fifth sub-flowchart of a text recognition method based on probability calibration according to an embodiment of the present application. As shown in fig. 9, in this embodiment, the step of extracting a preset second number of samples from the preset low probability sample set as low probability samples includes:
s203, counting the number of low probability samples of the low probability samples contained in the preset low probability sample set;
s204, judging whether the number of the low probability samples is smaller than or equal to the preset second number;
s205, if the number of the low probability samples is smaller than or equal to the preset second number, acquiring all low probability samples contained in the preset low probability sample set;
s206, if the number of the low probability samples is larger than the preset second number, extracting the low probability samples one by one from the preset low probability sample set to obtain samples with the preset second number as the low probability samples.
Specifically, when a preset second number of samples is to be extracted from the preset low probability sample set as low probability samples, the number of low probability samples contained in the set can be counted in advance, and it is judged whether this count is smaller than or equal to the preset second number. If it is, all the low probability samples contained in the set are obtained directly; if it is larger, low probability samples are extracted one by one from the set until the preset second number of samples is obtained. This improves the efficiency of extracting the low probability samples.
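A sketch combining the two extraction sub-flows above (S201 to S202 and S203 to S206): sort the set in ascending order of recognition probability, then take at most the preset second number of samples; the function and variable names are assumptions:

def extract_low_probability_samples(sample_set, second_number):
    # S201: ascending sort of (text, probability) pairs by recognition probability.
    queue = sorted(sample_set, key=lambda s: s[1])
    # S203-S205: if the set is small enough, take everything.
    if len(queue) <= second_number:
        return queue
    # S202/S206: otherwise take the lowest-probability samples first.
    return queue[:second_number]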
In an embodiment, before the step of acquiring the initial identification image, the method further includes:
and acquiring an original image, and preprocessing the original image according to a preset preprocessing mode to obtain an initial identification image.
Specifically, an original image to be subjected to text recognition is obtained and preprocessed according to a preset preprocessing mode, for example brightness adjustment, contrast adjustment or pixel normalization. This improves the image quality of the initial recognition image and therefore the accuracy of text recognition; if a text recognition model is being trained, the recognition accuracy and recognition efficiency of the model are also improved. The brightness of the original image can be adjusted with the following brightness adjustment formula:
The brightness adjustment formula: I' = I^g    Formula (1)
wherein I' is the pixel value of the initial identification image obtained by preprocessing the original image, I is the pixel value of the original image, and g is the gamma value. If g is larger than 1, the initial identification image is darker than the original image; if g is smaller than 1, it is brighter. Empirically, g generally takes values in the range 0.5 to 2.
The contrast of the original image can be adjusted with the following contrast adjustment formula:
The contrast adjustment formula: I' = log(I)    Formula (2)
where I' is the pixel value of the initial identification image and I is the pixel value of the original image.
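A numpy sketch of the two adjustments, assuming pixel values scaled to [0, 1]; the log1p variant and its rescaling are numerical-convenience assumptions, not part of the formulas above:

import numpy as np

def adjust_brightness(image, g):
    # Formula (1): I' = I**g; g > 1 darkens the image, g < 1 brightens it.
    return np.power(image, g)

def adjust_contrast(image):
    # Formula (2): I' = log(I), implemented as log(1 + I) rescaled so that
    # a pixel value of 1.0 stays at 1.0.
    return np.log1p(image) / np.log(2.0)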
It should be noted that, the text recognition method based on probability calibration in the foregoing embodiments may recombine the technical features included in the different embodiments as needed to obtain a combined implementation, which is within the scope of protection claimed in the present application.
Referring to fig. 10, fig. 10 is a schematic block diagram of a text recognition device based on probability calibration according to an embodiment of the present application. Corresponding to the text recognition method based on probability calibration, the embodiment of the application also provides a text recognition device based on probability calibration. As shown in fig. 10, the probability calibration-based text recognition apparatus includes a unit for performing the above-described probability calibration-based text recognition method, and the probability calibration-based text recognition apparatus may be configured in a computer device. Specifically, referring to fig. 10, the text recognition device 100 based on probability calibration includes a first recognition unit 101, a second recognition unit 102, and a calibration recognition unit 103.
The first recognition unit 101 is configured to obtain an initial recognition image, input the initial recognition image into a preset DARTS model, and perform text recognition on the initial recognition image to obtain calibration parameters of text included in the initial recognition image;
the second recognition unit 102 is configured to input the initial recognition image into a preset OCR model, perform text recognition on the initial recognition image, and obtain a text recognition Logits probabilistic vector corresponding to text included in the initial recognition image;
and the calibration recognition unit 103 is configured to perform probability calibration and normalization processing on the text recognition Logits probabilistic vector according to the calibration parameter, so as to obtain a text recognition result of the text included in the initial recognition image.
In an embodiment, the text recognition result includes a target recognition text corresponding to the text included in the initial recognition image and a text recognition probability of the target recognition text, and the text recognition device 100 based on probability calibration further includes:
the first judging unit is used for judging whether the character recognition probability is larger than or equal to a preset first probability threshold value;
the first classifying unit is used for classifying the character recognition probability and the target recognition characters corresponding to the character recognition probability into a preset high-probability sample set if the character recognition probability is greater than or equal to a preset first probability threshold;
And the first extraction unit is used for extracting a preset first number of samples from the preset high-probability sample set to serve as high-probability samples, and displaying the high-probability samples so that a user confirms the high-probability samples.
In one embodiment, the text recognition device 100 based on probability calibration further includes:
the second judging unit is used for judging whether the character recognition probability is smaller than or equal to a preset second probability threshold value;
the second classifying unit is used for classifying the character recognition probability and the target recognition characters corresponding to the character recognition probability into a preset low-probability sample set if the character recognition probability is smaller than or equal to a preset second probability threshold;
and the second extraction unit is used for extracting a preset second number of samples from the preset low-probability sample set to serve as low-probability samples, and displaying the low-probability samples so that a user confirms the low-probability samples.
In one embodiment, the text recognition device 100 based on probability calibration further includes:
a third judging unit configured to judge whether the high probability sample or the low probability sample is modified;
The first obtaining unit is used for obtaining, if the high probability sample or the low probability sample is modified, the modification sample corresponding to the high probability sample or the low probability sample, and performing character recognition on the modification sample again.
In an embodiment, the second extraction unit includes:
the sequencing subunit is used for sequencing all the low-probability samples contained in the preset low-probability sample set according to the text recognition probability corresponding to the low-probability sample to obtain a low-probability sample sequencing queue;
and the extraction subunit is used for extracting a preset second number of samples from the low-probability sample sequencing queue as low-probability samples according to the order from small to large in the low-probability sample sequencing queue.
In an embodiment, the second extraction unit includes:
a statistics subunit, configured to count the number of low probability samples of the low probability samples included in the preset low probability sample set;
a judging subunit, configured to judge whether the number of low probability samples is less than or equal to the preset second number;
and the acquisition subunit is used for acquiring all the low-probability samples contained in the preset low-probability sample set if the number of the low-probability samples is smaller than or equal to the preset second number.
In one embodiment, the text recognition device 100 based on probability calibration further includes:
the second acquisition unit is used for acquiring an original image and preprocessing the original image according to a preset preprocessing mode to obtain an initial identification image.
It should be noted that, as those skilled in the art can clearly understand, the specific implementation process of the text recognition device and each unit based on the probability calibration may refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, the description is omitted here.
Meanwhile, the above-mentioned dividing and connecting modes of each unit in the character recognition device based on probability calibration are only used for illustration, in other embodiments, the character recognition device based on probability calibration may be divided into different units according to the needs, and different connecting sequences and modes may be adopted for each unit in the character recognition device based on probability calibration, so as to complete all or part of functions of the character recognition device based on probability calibration.
The above-described probabilistic calibration-based word recognition apparatus may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 11.
Referring to fig. 11, fig. 11 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a computer device such as a desktop computer or a server, or may be a component or part of another device.
With reference to FIG. 11, the computer device 500 includes a processor 502, a memory, and a network interface 505, which are connected by a system bus 501, wherein the memory may include a non-volatile storage medium 503 and an internal memory 504, which may also be a volatile storage medium.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, causes the processor 502 to perform a probabilistic calibration-based text recognition method as described above.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of a computer program 5032 in the non-volatile storage medium 503, which computer program 5032, when executed by the processor 502, causes the processor 502 to perform a probabilistic calibration-based word recognition method as described above.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of a portion of the architecture in connection with the present application and is not intended to limit the computer device 500 to which the present application is applied, and that a particular computer device 500 may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components. For example, in some embodiments, the computer device may include only a memory and a processor, and in such embodiments, the structure and function of the memory and the processor are consistent with the embodiment shown in fig. 11, and will not be described again.
Wherein the processor 502 is configured to execute a computer program 5032 stored in a memory to implement the steps of: acquiring an initial identification image, inputting the initial identification image into a preset DARTS model, and performing character recognition on the initial identification image to obtain calibration parameters of characters contained in the initial identification image; inputting the initial recognition image into a preset OCR model, and performing character recognition on the initial recognition image to obtain a character recognition Logits probability vector corresponding to characters contained in the initial recognition image; and carrying out probability calibration and normalization processing on the character recognition Logits probability vector according to the calibration parameters to obtain a character recognition result of the characters contained in the initial recognition image.
In an embodiment, after the step of obtaining the text recognition result of the text included in the initial recognition image, the processor 502 further realizes the following steps after implementing the text recognition result including the target recognition text corresponding to the text included in the initial recognition image and the text recognition probability of the target recognition text:
judging whether the character recognition probability is larger than or equal to a preset first probability threshold value;
If the character recognition probability is greater than or equal to a preset first probability threshold, classifying the character recognition probability and target recognition characters corresponding to the character recognition probability into a preset high probability sample set;
and extracting a preset first number of samples from the preset high-probability sample set to serve as high-probability samples, and displaying the high-probability samples so that a user confirms the high-probability samples.
In an embodiment, after implementing the step of classifying the word recognition probability and the target recognition word corresponding to the word recognition probability to a preset high probability sample set if the word recognition probability is greater than or equal to a preset first probability threshold, the processor 502 further implements the following steps:
judging whether the character recognition probability is smaller than or equal to a preset second probability threshold value;
if the character recognition probability is smaller than or equal to a preset second probability threshold value, classifying the character recognition probability and target recognition characters corresponding to the character recognition probability into a preset low probability sample set;
and extracting a preset second number of samples from the preset low-probability sample set to serve as low-probability samples, and displaying the low-probability samples so that a user confirms the low-probability samples.
In one embodiment, after implementing the step of extracting a preset second number of samples from the preset low probability sample set as low probability samples, and displaying the low probability samples, the processor 502 further implements the following steps:
judging whether the high probability sample or the low probability sample is modified;
and if the high probability sample or the low probability sample is modified, acquiring a modification sample corresponding to the high probability sample or the low probability sample, and carrying out character recognition again on the modification sample.
In one embodiment, when implementing the step of extracting the predetermined second number of samples from the predetermined low probability sample set as the low probability samples, the processor 502 specifically implements the following steps:
according to the text recognition probability corresponding to the low probability sample, all the low probability samples contained in the preset low probability sample set are ordered according to the order from small to large, and a low probability sample ordering queue is obtained;
and extracting a preset second number of samples from the low-probability sample sequencing queue as low-probability samples according to the order from small to large according to the low-probability sample sequencing queue.
In one embodiment, when implementing the step of extracting the predetermined second number of samples from the predetermined low probability sample set as the low probability samples, the processor 502 specifically implements the following steps:
counting the number of low probability samples of the low probability samples contained in the preset low probability sample set;
judging whether the number of the low probability samples is smaller than or equal to the preset second number;
and if the number of the low probability samples is smaller than or equal to the preset second number, acquiring all the low probability samples contained in the preset low probability sample set.
In one embodiment, before implementing the step of acquiring the initial identification image, the processor 502 further implements the steps of:
and acquiring an original image, and preprocessing the original image according to a preset preprocessing mode to obtain an initial identification image.
It should be appreciated that in embodiments of the present application, the processor 502 may be a central processing unit (Central Processing Unit, CPU), the processor 502 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSPs), application specific integrated circuits (Application Specific Integrated Circuit, ASICs), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will be appreciated by those skilled in the art that all or part of the flow of the method of the above embodiments may be implemented by a computer program, which may be stored on a computer readable storage medium. The computer program is executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present application also provides a computer-readable storage medium. The computer-readable storage medium may be a non-volatile or a volatile computer-readable storage medium, and it stores a computer program which, when executed by a processor, causes the processor to perform the steps of the text recognition method based on probability calibration described in the above embodiments. The computer program may also form part of a computer program product which, when run on a computer, causes the computer to perform the steps of that method.
The computer readable storage medium may be an internal storage unit of the aforementioned device, such as a hard disk or a memory of the device. It may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of the device.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, device and unit described above may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The storage medium is a physical, non-transitory storage medium, and may be, for example, a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of units is only one kind of logical function division, and there may be other division manners in actual implementation. Multiple units or components may be combined or integrated into another system, and some features may be omitted or not performed.
The order of the steps in the methods of the embodiments of the present application may be adjusted, and steps may be combined or deleted, according to actual needs. Likewise, the units in the devices of the embodiments of the present application may be combined, divided, or deleted according to actual needs. In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and comprises several instructions for causing an electronic device (which may be a personal computer, a terminal, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
While the present application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A character recognition method based on probability calibration, comprising the following steps:
acquiring an initial recognition image, inputting the initial recognition image into a preset DARTS model, and performing character recognition on the initial recognition image to obtain calibration parameters of characters contained in the initial recognition image;
inputting the initial recognition image into a preset OCR model, and performing character recognition on the initial recognition image to obtain a character recognition Logits probability vector corresponding to the characters contained in the initial recognition image;
carrying out probability calibration and normalization processing on the character recognition Logits probability vector according to the calibration parameters to obtain a character recognition result of the characters contained in the initial recognition image, wherein the calibration parameters comprise a weight and a bias, and the weight, the bias, and the character recognition Logits probability vector are input into softmax for calculation to obtain a character recognition probability after probability calibration of the character recognition Logits probability vector;
judging whether the character recognition probability is greater than or equal to a preset first probability threshold;
if the character recognition probability is greater than or equal to a preset first probability threshold, classifying the character recognition probability and target recognition characters corresponding to the character recognition probability into a preset high probability sample set;
and extracting a preset first number of samples from the preset high-probability sample set to serve as high-probability samples, and displaying the high-probability samples so that a user confirms the high-probability samples.
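As an illustrative sketch of the calibration recited in claim 1 above: feeding the weight, the bias, and the logits vector into softmax is consistent with an affine rescaling of the logits followed by normalization. The scalar form of the weight and bias below is an assumption (a per-class vector would work the same way), and the numeric values are hypothetical:

import numpy as np

def calibrated_probability(logits: np.ndarray, weight: float, bias: float) -> float:
    # Affine calibration of the raw logits (assumed scalar form).
    z = weight * logits + bias
    # Numerically stable softmax normalization.
    z = z - z.max()
    probs = np.exp(z) / np.exp(z).sum()
    # The character recognition probability is the confidence of the top class.
    return float(probs.max())

# Hypothetical values: a three-class logits vector with DARTS-derived parameters.
p = calibrated_probability(np.array([2.1, 0.3, -1.2]), weight=0.9, bias=0.05)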
2. The character recognition method based on probability calibration according to claim 1, wherein after the step of classifying the character recognition probability and the target recognition characters corresponding to the character recognition probability into the preset high probability sample set if the character recognition probability is greater than or equal to the preset first probability threshold, the method further comprises:
judging whether the character recognition probability is less than or equal to a preset second probability threshold;
if the character recognition probability is less than or equal to the preset second probability threshold, classifying the character recognition probability and target recognition characters corresponding to the character recognition probability into a preset low probability sample set;
and extracting a preset second number of samples from the preset low probability sample set to serve as low probability samples, and displaying the low probability samples so that a user confirms the low probability samples.
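Claims 1 and 2 together describe a two-threshold routing of recognized samples. A minimal sketch, with the threshold values and all identifier names chosen here purely for illustration:

def route_sample(text: str, probability: float,
                 high_set: list, low_set: list,
                 first_threshold: float = 0.95,
                 second_threshold: float = 0.60) -> None:
    # Confident results are sampled later for quick user confirmation.
    if probability >= first_threshold:
        high_set.append((text, probability))
    # Doubtful results are queued for manual review and possible correction.
    elif probability <= second_threshold:
        low_set.append((text, probability))
    # Results between the two thresholds fall outside both sample sets.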
3. The character recognition method based on probability calibration according to claim 2, wherein after the step of extracting a preset second number of samples from the preset low probability sample set as low probability samples and displaying the low probability samples so that a user confirms the low probability samples, the method further comprises:
judging whether the high probability sample or the low probability sample is modified;
and if the high probability sample or the low probability sample is modified, acquiring a modified sample corresponding to the high probability sample or the low probability sample, and carrying out character recognition on the modified sample again.
4. The character recognition method based on probability calibration according to claim 2, wherein the step of extracting a preset second number of samples from the preset low probability sample set as low probability samples comprises:
sorting all the low probability samples contained in the preset low probability sample set in ascending order of their corresponding character recognition probabilities to obtain a low probability sample sorting queue;
and extracting, in ascending order, a preset second number of samples from the low probability sample sorting queue as the low probability samples.
5. The character recognition method based on probability calibration according to claim 2, wherein the step of extracting a preset second number of samples from the preset low probability sample set as low probability samples comprises:
counting the number of low probability samples contained in the preset low probability sample set;
judging whether the number of low probability samples is less than or equal to the preset second number;
and if the number of low probability samples is less than or equal to the preset second number, acquiring all the low probability samples contained in the preset low probability sample set.
6. The character recognition method based on probability calibration according to claim 1, further comprising, before the step of acquiring the initial recognition image:
acquiring an original image, and preprocessing the original image according to a preset preprocessing mode to obtain the initial recognition image.
7. A character recognition device based on probability calibration, comprising:
the first recognition unit, used for acquiring an initial recognition image, inputting the initial recognition image into a preset DARTS model, and performing character recognition on the initial recognition image to obtain calibration parameters of characters contained in the initial recognition image;
the second recognition unit, used for inputting the initial recognition image into a preset OCR model, and performing character recognition on the initial recognition image to obtain a character recognition Logits probability vector corresponding to the characters contained in the initial recognition image;
the calibration recognition unit, used for carrying out probability calibration and normalization processing on the character recognition Logits probability vector according to the calibration parameters to obtain a character recognition result of the characters contained in the initial recognition image, wherein the calibration parameters comprise a weight and a bias, and the weight, the bias, and the character recognition Logits probability vector are input into softmax for calculation to obtain a character recognition probability after probability calibration of the character recognition Logits probability vector;
the first judging unit, used for judging whether the character recognition probability is greater than or equal to a preset first probability threshold;
the first classifying unit, used for classifying the character recognition probability and target recognition characters corresponding to the character recognition probability into a preset high probability sample set if the character recognition probability is greater than or equal to the preset first probability threshold;
and the first extraction unit, used for extracting a preset first number of samples from the preset high probability sample set to serve as high probability samples, and displaying the high probability samples so that a user confirms the high probability samples.
8. A computer device, comprising a memory and a processor coupled to the memory, wherein the memory is used for storing a computer program, and the processor is used for running the computer program to perform the steps of the method according to any one of claims 1 to 6.
9. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN202110735014.8A 2021-06-30 2021-06-30 Character recognition method, device, computer equipment and medium based on probability calibration Active CN113420689B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110735014.8A CN113420689B (en) 2021-06-30 2021-06-30 Character recognition method, device, computer equipment and medium based on probability calibration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110735014.8A CN113420689B (en) 2021-06-30 2021-06-30 Character recognition method, device, computer equipment and medium based on probability calibration

Publications (2)

Publication Number Publication Date
CN113420689A CN113420689A (en) 2021-09-21
CN113420689B CN113420689B (en) 2024-03-22

Family

ID=77717310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110735014.8A Active CN113420689B (en) 2021-06-30 2021-06-30 Character recognition method, device, computer equipment and medium based on probability calibration

Country Status (1)

Country Link
CN (1) CN113420689B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667066A (en) * 2020-04-23 2020-09-15 北京旷视科技有限公司 Network model training and character recognition method and device and electronic equipment
CN112329779A (en) * 2020-11-02 2021-02-05 平安科技(深圳)有限公司 Method and related device for improving certificate identification accuracy based on mask

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110377686B (en) * 2019-07-04 2021-09-17 浙江大学 Address information feature extraction method based on deep neural network model

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667066A (en) * 2020-04-23 2020-09-15 北京旷视科技有限公司 Network model training and character recognition method and device and electronic equipment
CN112329779A (en) * 2020-11-02 2021-02-05 平安科技(深圳)有限公司 Method and related device for improving certificate identification accuracy based on mask

Also Published As

Publication number Publication date
CN113420689A (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CN112990432B (en) Target recognition model training method and device and electronic equipment
US9070041B2 (en) Image processing apparatus and image processing method with calculation of variance for composited partial features
US7643674B2 (en) Classification methods, classifier determination methods, classifiers, classifier determination devices, and articles of manufacture
CN111753863A (en) Image classification method and device, electronic equipment and storage medium
US8687893B2 (en) Classification algorithm optimization
US20230229897A1 (en) Distances between distributions for the belonging-to-the-distribution measurement of the image
CN113221918B (en) Target detection method, training method and device of target detection model
CN113065525A (en) Age recognition model training method, face age recognition method and related device
JP5214679B2 (en) Learning apparatus, method and program
CN111274821B (en) Named entity identification data labeling quality assessment method and device
CN113902944A (en) Model training and scene recognition method, device, equipment and medium
CN116563291A (en) SMT intelligent error-proofing feeding detector
CN114519401A (en) Image classification method and device, electronic equipment and storage medium
CN112464966B (en) Robustness estimating method, data processing method, and information processing apparatus
CN113420689B (en) Character recognition method, device, computer equipment and medium based on probability calibration
CN113128518A (en) Sift mismatch detection method based on twin convolution network and feature mixing
CN110210314B (en) Face detection method, device, computer equipment and storage medium
CN109858328B (en) Face recognition method and device based on video
CN116167336A (en) Sensor data processing method based on cloud computing, cloud server and medium
CN116342851A (en) Target detection model construction method, target detection method and device
CN112446428B (en) Image data processing method and device
US11948391B2 (en) Model training method and apparatus, electronic device and readable storage medium
CN112907541B (en) Palm image quality evaluation model construction method and device
CN113076993B (en) Information processing method and model training method for chest X-ray film recognition
CN110689064B (en) Image semi-supervised classification method, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant