CN111027555A - License plate recognition method and device and electronic equipment - Google Patents


Publication number
CN111027555A
Authority
CN
China
Prior art keywords
license plate
target
model
target license
features
Prior art date
Legal status
Granted
Application number
CN201811174638.1A
Other languages
Chinese (zh)
Other versions
CN111027555B (en)
Inventor
钱华
程战战
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201811174638.1A
Publication of CN111027555A
Application granted
Publication of CN111027555B
Legal status: Active
Anticipated expiration

Classifications

    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V20/625 License plates
    • G06V30/10 Character recognition


Abstract

The application provides a license plate recognition method and device, and an electronic device. The license plate recognition method includes: determining a target license plate feature sequence according to the license plate features of the target license plate in a target license plate region, where the license plate features are license plate attribute features extracted by a convolutional neural network from an image containing the target license plate; inputting the target license plate feature sequence into an attention model, which performs character string recognition on the sequence according to model parameters trained with edit distance as the loss function and outputs the license plate number of the target license plate; and acquiring the license plate number of the target license plate output by the attention model. The method can improve the accuracy of license plate recognition.

Description

License plate recognition method and device and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to a license plate recognition method and apparatus, and an electronic device.
Background
As image processing technology has matured, license plate recognition technology has developed continuously as well.
In existing license plate recognition technology, character segmentation is usually performed on the license plate region to obtain single-character images and the position information of each character; character recognition is then performed on each single-character image, and the position information is used to assemble the characters into the whole character string of the license plate region.
However, this segment-first, recognize-single-characters-later approach performs poorly on abnormal license plate images, such as plates with adhered characters or with character spacing that is too large or too small, and the recognized license plate number is prone to problems such as character loss and character omission.
Disclosure of Invention
In view of this, the present application provides a license plate recognition method, a license plate recognition device and an electronic device, so as to improve the accuracy of license plate recognition.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of the present application, there is provided a license plate recognition method, the method including:
determining a target license plate feature sequence according to the license plate features of the target license plate in the target license plate region, wherein the license plate features are license plate attribute features extracted from an image containing the target license plate by a convolutional neural network;
inputting the target license plate feature sequence into an attention model, performing character string recognition on the target license plate feature sequence by the attention model according to model parameters trained by taking an edit distance as a loss function, and outputting the license plate number of the target license plate;
and acquiring the license plate number of the target license plate output by the attention model.
Optionally, the determining a target license plate feature sequence according to the license plate features in the target license plate region includes:
inputting the license plate features into a bidirectional LSTM network, which processes the license plate features and outputs a target license plate feature sequence in which adjacent license plate features are temporally correlated;
and acquiring the target license plate characteristic sequence output by the bidirectional LSTM network.
Optionally, the performing, by the attention model, character string recognition on the target license plate feature sequence according to model parameters trained with edit distance as the loss function, and outputting the license plate number of the target license plate region, includes:
calculating the activity value of the hidden layer in the attention model at each moment according to model parameters trained by taking the edit distance between the predicted character string and the calibrated character string as the loss function, and determining the license plate number of the target license plate according to the calculated activity values of the hidden layer at each moment.
Optionally, the calculating an activity value of a hidden layer in the attention model includes:
calculating the weight factor of each license plate feature in the target license plate feature sequence at each moment;
calculating semantic codes of all the moments according to the weight factors of all the license plate features at all the moments and the target license plate feature sequence;
and calculating the activity value of the hidden layer of the attention model at each moment based on the target license plate feature sequence and the semantic codes at each moment.
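The three calculation steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the dot-product scoring, the tanh hidden layer, and the projection matrices `W_a` and `W_h` are all assumptions, since the text does not fix these choices.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(features, h_prev, W_a, W_h):
    """One decoding moment: weight factors -> semantic code -> hidden activity.
    features: (T, D) target license plate feature sequence
    h_prev:   (H,)  hidden-layer activity value from the previous moment
    W_a, W_h: illustrative projection matrices (hypothetical shapes)."""
    # Step 1: weight factor of each license plate feature at this moment
    # (dot-product scoring is an assumption, not specified by the patent)
    scores = features @ W_a @ h_prev              # (T,)
    alpha = softmax(scores)                       # weight factors, sum to 1
    # Step 2: semantic code = weighted sum of the feature sequence
    c = alpha @ features                          # (D,)
    # Step 3: hidden-layer activity value from the semantic code and prev state
    h = np.tanh(W_h @ np.concatenate([c, h_prev]))  # (H,)
    return alpha, c, h

T, D, H = 6, 4, 5                                 # six features, toy sizes
rng = np.random.default_rng(0)
features = rng.normal(size=(T, D))
h = np.zeros(H)
W_a = rng.normal(size=(D, H))
W_h = rng.normal(size=(H, D + H))
alpha, c, h = attention_step(features, h, W_a, W_h)
```

In a full decoder this step would repeat once per output character, each moment's weight factors redistributing attention over the feature sequence.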
Optionally, the convolutional neural network, the bidirectional LSTM network, and the attention model are cascaded in a target neural network model, the target neural network model further includes a classification model, and the classification model is cascaded with an output of the convolutional neural network; the method further comprises the following steps:
inputting the license plate features into a preset classification model so that the classification model calculates confidence degrees of the license plate features of the target license plate corresponding to different license plate types;
determining the license plate type with the highest calculated confidence coefficient as the license plate type of the target license plate;
and outputting the license plate type and the license plate number as the recognition result of the target license plate.
According to a second aspect of the present application, there is provided a license plate recognition device, the device comprising:
the determining unit is used for determining a target license plate feature sequence according to each license plate feature of a target license plate in a target license plate area, wherein the license plate feature is a license plate attribute feature extracted from an image containing the target license plate by a convolutional neural network;
the recognition unit is used for inputting the target license plate feature sequence into an attention model, performing character string recognition on the target license plate feature sequence according to model parameters trained by the attention model by taking an edit distance as a loss function, and outputting the license plate number of the target license plate;
and the acquisition unit is used for acquiring the license plate number of the target license plate output by the attention model.
Optionally, the determining unit is specifically configured to input the license plate features into a bidirectional LSTM network so that the network processes them and outputs a target license plate feature sequence in which adjacent license plate features are temporally correlated, and to acquire the target license plate feature sequence output by the bidirectional LSTM network.
Optionally, the recognition unit is specifically configured to calculate the activity value of the hidden layer in the attention model at each moment according to model parameters trained by taking the edit distance between the predicted character string and the calibrated character string as the loss function, and to determine the license plate number of the target license plate according to the calculated activity values.
Optionally, the recognition unit is specifically configured, when calculating the activity value of the hidden layer in the attention model, to calculate the weight factor of each license plate feature in the target license plate feature sequence at each moment; to calculate the semantic code at each moment from the weight factors and the target license plate feature sequence; and to calculate the activity value of the hidden layer of the attention model at each moment based on the target license plate feature sequence and the semantic codes.
Optionally, the convolutional neural network, the bidirectional LSTM network, and the attention model are cascaded in a target neural network model, the target neural network model further includes a classification model, and the classification model is cascaded with an output of the convolutional neural network; the device further comprises:
the classification unit is used for inputting the license plate characteristics into a preset classification model so as to enable the classification model to calculate the confidence degrees of the license plate characteristics of the target license plate corresponding to different license plate types; determining the license plate type with the highest calculated confidence coefficient as the license plate type of the target license plate; and outputting the license plate type and the license plate number as the recognition result of the target license plate.
According to a third aspect of the present application, there is provided an electronic device comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being caused by the machine-executable instructions to perform the method of the first aspect.
According to a fourth aspect of the present application, there is provided a machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to perform the method of the first aspect.
On one hand, after extracting the spatially correlated license plate features of the target license plate, the method further performs temporal correlation processing on those features to obtain a target license plate feature sequence that carries both spatial correlation and temporal relationships, and then performs whole character string recognition on the sequence through the attention model to obtain the license plate number. Because each license plate feature in the sequence used for character string recognition has both a spatial and a temporal relationship, recognizing the character string from this sequence alleviates problems such as character loss and character omission in the recognized license plate number.
On the other hand, the attention model adopted by the application is trained with the edit distance between the recognized license plate number and the calibrated whole license plate number string as the loss function, rather than with the Euclidean distance between a recognized single character and a calibrated single character. The license plate numbers recognized by the trained attention model therefore have better string integrity, which alleviates, to a certain extent, character loss and character omission in the recognized license plate number.
Drawings
Fig. 1 is a schematic diagram of a CNN network according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of an overall license plate recognition scheme according to an exemplary embodiment of the present application;
FIG. 3 is a flowchart illustrating a license plate recognition method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a bi-directional LSTM network shown in an exemplary embodiment of the present application;
FIG. 5 is a schematic view of an attention model calculation shown in an exemplary embodiment of the present application;
FIG. 6 is a flow chart illustrating another license plate recognition method according to an exemplary embodiment of the present application;
FIG. 7 is a diagram illustrating a license plate recognition method according to an exemplary embodiment of the present disclosure;
FIG. 8 is a diagram illustrating a hardware configuration of an electronic device according to an exemplary embodiment of the present application;
fig. 9 is a block diagram illustrating a license plate recognition apparatus according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
In existing license plate recognition technology, a CNN (Convolutional Neural Network) is usually adopted to extract features from the license plate image, and the extracted license plate features are then used for character segmentation and character recognition.
Referring to fig. 1, fig. 1 is a schematic diagram of a CNN network according to an exemplary embodiment of the present application;
A CNN network typically includes multiple network layers with full connectivity between layers, but the nodes within each layer are independent of one another. As a result, the license plate features extracted by the CNN usually carry only spatial correlation and no temporal correlation: the features are independent in time, that is, the current feature retains no memory of the previous or the next feature. Recognizing license plate numbers with a CNN alone, especially plates with adhered characters or with character spacing that is too large or too small, therefore easily leads to character loss and character omission in the recognized number.
In view of the above, the present application provides a license plate recognition scheme, which is generally described with reference to fig. 2.
As shown in fig. 2, the whole scheme of license plate recognition mainly includes the following five parts:
a first part: an image of the vehicle is acquired, as shown at 201 in FIG. 2.
A second part: the vehicle image is subjected to license plate detection, as shown at 202 in FIG. 2.
When performing license plate detection on a vehicle, the license plate region in the vehicle image is generally located first, and each license plate feature of the target license plate is then extracted from the license plate region. These license plate features are correlated with one another in spatial position.
And a third part: character recognition, as shown at 203 in fig. 2.
In the third part, temporal correlation processing can be performed on the license plate features to obtain a target license plate feature sequence in which adjacent license plate features are temporally correlated. The attention model then performs whole character string recognition on the target license plate feature sequence to obtain the license plate number. The attention model is trained on samples with the edit distance between the predicted license plate number and the calibrated license plate number as the loss function.
The fourth part: the license plate type is determined as shown at 204 in FIG. 2.
In the fourth section, the license plate features can be input into a classification model, so that the classification model determines the license plate type of the vehicle.
The fifth part is that: and outputting a license plate recognition result, as shown by 205 in fig. 2.
The license plate recognition result comprises: and identifying the license plate number and the license plate type.
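The fourth part's classification step — compute a confidence for the target license plate's features against each license plate type, then take the type with the highest confidence — can be sketched as follows. The linear-plus-softmax classifier and the type names are illustrative assumptions; the patent does not specify the classification model's internals.

```python
import numpy as np

def classify_plate_type(plate_features, W, plate_types):
    """Minimal sketch: a linear layer plus softmax stands in for the patent's
    (unspecified) classification model. Returns the most confident type and
    the confidence for every type."""
    logits = W @ plate_features
    e = np.exp(logits - logits.max())
    conf = e / e.sum()                       # confidence per license plate type
    return plate_types[int(conf.argmax())], conf

# Hypothetical type set and random stand-in features/weights.
plate_types = ["blue", "yellow", "green", "white"]
rng = np.random.default_rng(1)
features = rng.normal(size=8)
W = rng.normal(size=(4, 8))
best, conf = classify_plate_type(features, W, plate_types)
```

The recognition result of the fifth part would then pair `best` with the license plate number produced by the attention model.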
According to the license plate recognition technical scheme, on one hand, after the spatially correlated license plate features of the target license plate are extracted, temporal correlation processing is performed on those features to obtain a target license plate feature sequence that carries both spatial correlation and temporal relationships, and the attention model then performs whole character string recognition on the sequence to obtain the license plate number. Because each license plate feature in the sequence used for character string recognition has both a spatial and a temporal relationship, recognizing the character string from this sequence alleviates problems such as character loss and character omission in the recognized license plate number.
On the other hand, the attention model adopted by the application is trained with the edit distance between the recognized license plate number and the calibrated whole license plate number string as the loss function, rather than with the Euclidean distance between a recognized single character and a calibrated single character. The license plate numbers recognized by the trained attention model therefore have better string integrity, which alleviates, to a certain extent, character loss and character omission in the recognized license plate number.
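The edit distance used as the training loss is the standard Levenshtein distance and can be computed with dynamic programming. The sketch below only computes the metric between a predicted string and a calibrated string; how the non-differentiable distance is incorporated into gradient-based training is not detailed by the text and is not addressed here.

```python
def edit_distance(pred, target):
    """Levenshtein distance between the predicted license plate string and
    the calibrated (ground-truth) string: the minimum number of insertions,
    deletions, and substitutions turning one into the other."""
    m, n = len(pred), len(target)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of pred[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of target[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == target[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]
```

For example, a prediction "A1245" against the calibrated "A12345" scores 1 (one missing character), so whole-string errors such as dropped characters are penalized directly, unlike a per-character Euclidean loss.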
It should be noted that the method can be applied to image acquisition devices such as cameras at intersections, in intelligent monitoring, and in similar scenes, and can also be applied to the background server of a monitoring system.
The license plate recognition process proposed herein is explained in detail below.
Referring to fig. 3, fig. 3 is a flowchart illustrating a license plate recognition method according to an exemplary embodiment of the present disclosure;
step 301: and determining a target license plate characteristic sequence according to the license plate characteristics of the target license plate in the target license plate area.
Step 1: an image of the target vehicle is acquired.
The target vehicle image is an image obtained by shooting a target vehicle by the image acquisition equipment, and the target vehicle image comprises the target vehicle.
In an optional implementation manner, when the license plate recognition method is applied to an image acquisition device, the image acquisition device may acquire an image of a target vehicle. Alternatively, the image capturing device may receive the target vehicle image transmitted by other devices (such as other image capturing devices, servers, etc.).
In another optional implementation manner, when the license plate recognition method is applied to a background server of a monitoring system, the background server may receive the target vehicle image acquired by the image acquisition device.
The acquisition of the target vehicle image is only exemplified here, and is not particularly limited.
The image capturing apparatus may be a camera, a video camera, or other apparatuses having an image capturing function, and the image capturing apparatus is only exemplarily described herein, and is not particularly limited.
Step 2: locate the license plate region of the target vehicle in the target vehicle image.
When the method is implemented, the neural network can be adopted to position the license plate area and determine the parameters of the license plate area, such as coordinates, length, width and the like.
For example, a large number of candidate frames are regularly drawn in the whole image in advance, and all the candidate frames are subjected to feature extraction by using a preset neural network. For each candidate box, a confidence of the candidate box with respect to each object is calculated from the features extracted from the candidate box. And determining a candidate frame with the highest confidence relative to the license plate as a candidate frame containing the license plate, wherein the region framed by the candidate frame is the license plate region, the coordinates of the candidate frame are the coordinates of the license plate region, and the length and width of the candidate frame are the length and width of the license plate region.
Of course, the whole image can also be divided into sub-images by pixel points, or by small blocks of adjacent pixels. A preset neural network then extracts features from each sub-image, the confidence of each sub-image for each object class is calculated, and the sub-image with the highest confidence for the license plate is found; the region contained in that sub-image is the license plate region, its coordinates are the coordinates of the license plate region, and its length and width are the length and width of the license plate region.
Here, this is an exemplary explanation for locating the license plate region, and is not particularly limited.
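The highest-confidence selection described above can be sketched as follows. The scoring network itself is omitted: the candidate boxes and their license plate confidences are assumed to be already computed, and the coordinate values are hypothetical.

```python
def locate_plate(candidates):
    """Pick the candidate box with the highest license plate confidence.
    Each candidate: (confidence, (x, y, width, height)). The region framed
    by the winning box is taken as the license plate region."""
    best_conf, best_box = max(candidates, key=lambda c: c[0])
    x, y, w, h = best_box
    return {"coords": (x, y), "length": w, "width": h, "confidence": best_conf}

# Hypothetical candidate boxes with pre-computed license plate confidences.
candidates = [(0.12, (0, 0, 50, 20)),
              (0.93, (120, 340, 160, 48)),   # the plate region in this example
              (0.40, (300, 100, 80, 30))]
region = locate_plate(candidates)
```

The returned coordinates, length, and width are then used to crop the license plate region image from the target vehicle image, as described next.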
The neural network for locating the license plate region may be a Faster R-CNN (Faster Region-based Convolutional Neural Network) or a YOLO (You Only Look Once) network; this is only an exemplary description and is not specifically limited.
After the license plate area is determined and located, the license plate area image can be intercepted from the target vehicle image according to the determined parameters of the license plate area, such as coordinates, length, width and the like.
Step 3: extract each license plate feature for the target license plate in the license plate region; the license plate features have a spatial relationship with each other.
In order to reduce the calculation amount of subsequent feature extraction and subsequent character string recognition, the license plate region can be preprocessed, and then each license plate feature for the target license plate is extracted from the preprocessed license plate region.
For example, the license plate region may be normalized to a certain size. The preprocessing performed on the license plate region is only exemplary and is not particularly limited.
In the embodiment of the application, each license plate feature of the target license plate can be extracted from the preprocessed license plate region using a CNN network. The license plate features extracted by the CNN network have a spatial relationship with each other.
When the license plate feature extraction is realized, the preprocessed license plate region can be input into a CNN network, and the CNN network performs feature extraction on the license plate region and outputs the feature of each license plate.
For example, suppose the character string in the license plate region is A12345. After the region is input into the CNN network, and assuming the license plate features extracted by the CNN are character features, the character features in the resulting license plate features are, in left-to-right spatial order, character feature "A", character feature "1", character feature "2", character feature "3", character feature "4", and character feature "5".
Of course, this is merely an example to illustrate the license plate features, as well as the individual license plate features. In practical applications, the license plate features may further include a license plate border feature, a texture feature, an edge feature, and the like, and each formed license plate feature is more complex, and the license plate features are only exemplary and are not specifically limited herein.
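One plausible way to arrange the CNN output into spatially ordered license plate features — not fixed by the text, but common for feeding a recurrent network — is to slice the feature map into columns from left to right, one column vector per spatial position:

```python
import numpy as np

def to_feature_sequence(feature_map):
    """Slice a CNN feature map of shape (C, H, W) into W column vectors,
    ordered left to right by horizontal position, so each element of the
    sequence describes one vertical slice of the license plate."""
    C, H, W = feature_map.shape
    return [feature_map[:, :, t].reshape(C * H) for t in range(W)]

# Tiny deterministic feature map standing in for real CNN output: C=2, H=3, W=6.
fmap = np.arange(2 * 3 * 6, dtype=float).reshape(2, 3, 6)
seq = to_feature_sequence(fmap)
```

The resulting list of column vectors preserves the spatial order of the characters and is the kind of per-position feature set the next step turns into a temporally correlated sequence.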
Step 4: process the extracted license plate features to form a target license plate feature sequence; adjacent license plate features in the sequence are temporally correlated.
In the embodiment of the application, to alleviate character loss, character omission, and similar problems in the recognized license plate number, the application further processes the license plate features into a target license plate feature sequence, so that adjacent features in the sequence are correlated not only spatially but also in time, making the association between license plate features stronger.
When the method is implemented, the target license plate feature sequence can be obtained by processing each license plate feature with a bidirectional LSTM network.
Referring to fig. 4, fig. 4 is a schematic diagram of a bidirectional LSTM network shown in an exemplary embodiment of the present application.
A bidirectional LSTM network typically includes an input layer, a hidden layer, and an output layer.
The hidden layers of the bidirectional LSTM network comprise a Forward layer and a Backward layer.
In the bidirectional LSTM network, a Forward layer calculates according to the sequence from 1 moment to t moment to obtain and store the activity value of the Forward layer at each moment, and a Backward layer calculates according to the sequence from t moment to 1 moment to obtain and store the activity value of the Backward layer at each moment. And the output layer combines the activity value of the Forward layer and the activity value of the Backward layer at each moment to obtain a final output result.
Because of the Forward layer, features processed later by the bidirectional LSTM network retain memory of features processed earlier; because of the Backward layer, features processed earlier retain memory of features processed later. Adjacent license plate features in the target license plate feature sequence produced by the LSTM network are therefore also correlated in adjacent time order.
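The Forward/Backward computation described above can be sketched as follows. For brevity, a plain tanh recurrent cell stands in for the LSTM cell, and concatenation is assumed as the output layer's way of combining the two activity values at each moment; both are simplifying assumptions, not the patent's specification.

```python
import numpy as np

def bidirectional_pass(features, W_x, W_h):
    """features: (T, D) license plate features in spatial order.
    Returns (T, 2H): the combined Forward and Backward activity values."""
    T = len(features)
    H = W_h.shape[0]
    fwd = np.zeros((T, H))
    bwd = np.zeros((T, H))
    h = np.zeros(H)
    for t in range(T):                        # Forward layer: moment 1 -> t
        h = np.tanh(W_x @ features[t] + W_h @ h)
        fwd[t] = h
    h = np.zeros(H)
    for t in range(T - 1, -1, -1):            # Backward layer: moment t -> 1
        h = np.tanh(W_x @ features[t] + W_h @ h)
        bwd[t] = h
    # Output layer combines both activity values at each moment.
    return np.concatenate([fwd, bwd], axis=1)

rng = np.random.default_rng(2)
features = rng.normal(size=(6, 4))            # e.g. six license plate features
W_x = rng.normal(size=(5, 4)) * 0.1           # illustrative small weights
W_h = rng.normal(size=(5, 5)) * 0.1
seq = bidirectional_pass(features, W_x, W_h)
```

Each row of `seq` thus carries memory of the features on both sides of its position, which is exactly the adjacent-time correlation the target license plate feature sequence needs.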
In implementation, each license plate feature can be input into the bidirectional LSTM model; the bidirectional LSTM model processes the license plate features and outputs a target license plate feature sequence in which adjacent license plate features are associated in adjacent time sequence.
For example, take the license plate number A12345 again.
Assuming the license plate features are, in order, character feature "A", character feature "1", character feature "2", character feature "3", character feature "4", and character feature "5", these six features are input into the bidirectional LSTM model for processing to obtain the target license plate feature sequence, in which the license plate features are associated in time sequence.
Taking character feature "3" as an example: after bidirectional LSTM processing, it can be known that character features "A", "1", and "2" precede character feature "3", while character features "4" and "5" follow it. Of course, in practical applications the license plate features may also include license plate border features, texture features, edge features, and the like, in which case the target license plate feature sequence is more complex; the above is only an example.
It should be noted that, in the present application, each license plate feature may be processed by any neural network capable of modeling time-sequence association, such as a unidirectional LSTM network or an RNN whose activation function is the tanh algorithm, so that the license plate features in the output target license plate feature sequence are associated with each other in time sequence. However, a bidirectional LSTM network gives the current license plate feature an association with both the license plate features processed before it and those processed after it, which is a stronger association, so the bidirectional LSTM network is preferred. The processing of license plate features here is only an exemplary illustration and is not specifically limited.
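To make the Forward/Backward idea concrete, the following toy sketch (an editorial illustration in plain Python, not the application's actual network) replaces the gated LSTM cells with simple decaying running sums: a forward scan over times 1..t, a backward scan over times t..1, and an output that combines both states, so each position carries memory of both earlier and later features.

```python
def bidirectional_context(features):
    """Toy stand-in for the Forward/Backward layers: each output element
    pairs a forward-accumulated state (times 1..t) with a
    backward-accumulated state (times t..1). A real bidirectional LSTM
    replaces the running sums below with gated LSTM cells."""
    n = len(features)
    fwd, bwd = [0.0] * n, [0.0] * n
    state = 0.0
    for t in range(n):               # Forward layer: 1 -> t
        state = 0.5 * state + features[t]
        fwd[t] = state
    state = 0.0
    for t in reversed(range(n)):     # Backward layer: t -> 1
        state = 0.5 * state + features[t]
        bwd[t] = state
    # Output layer: combine both activity values at each time step
    return list(zip(fwd, bwd))

seq = bidirectional_context([1.0, 2.0, 3.0])
# each pair carries memory of both earlier and later features
```

Even the first output element depends on the last input, which is the property the application relies on for adjacent time-sequence association.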
Step 302: and inputting the target license plate feature sequence into an attention model, performing character string recognition on the target license plate feature sequence by the attention model according to model parameters trained by taking the edit distance as a loss function, and outputting the license plate number of the target license plate.
Step 303: and acquiring the license plate number of the target license plate output by the attention model.
Note that the attention model used in the present application is different from the existing attention model:
the existing attention model adopts the softmax algorithm as its loss function during training, where the loss value is calculated from the Euclidean distance between each character obtained by recognizing a sample and the corresponding calibrated character.
For example, assuming that the license plate number is A12845, the character string obtained by recognizing the sample is A12345, and the calibrated character string is A12845, the existing attention model calculates the loss by computing the Euclidean distance between each single recognized character and its calibrated character: the distance between recognized character A and calibrated character A, between recognized character 1 and calibrated character 1, between recognized character 2 and calibrated character 2, between recognized character 3 and calibrated character 8, between recognized character 4 and calibrated character 4, and between recognized character 5 and calibrated character 5.
Because the existing attention model is trained with a loss function based on the Euclidean distance between single recognized characters and single calibrated characters, the recognized license plate number has poor integrity; in particular, for license plates with adhered characters or overly large character spacing, the recognized license plate number is prone to missing characters.
The attention model of the present application instead adopts the edit distance over the whole character string as the loss function during training; that is, the loss is calculated as the edit distance between the entire recognized character string (including all characters) and the entire calibrated character string.
For example, assume the license plate number is A12845, the character string obtained by recognizing the sample is A12345, and the calibrated character string is A12845. The attention model of the present application calculates the loss value by computing the edit distance between the entire recognized character string "A12345" and the entire calibrated character string "A12845".
Because the attention model is trained with the edit distance between the recognized character string and the entire calibrated character string as the loss function, the recognized license plate number has better integrity; in particular, for license plates with adhered characters or overly large character spacing, problems such as missing or dropped characters in the recognized license plate number can be effectively alleviated.
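The whole-string edit distance above is the standard Levenshtein distance. A routine dynamic-programming implementation (an editorial sketch, not code from the application) reproduces the example: "A12345" and "A12845" differ by a single substitution.

```python
def edit_distance(s1: str, s2: str) -> int:
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, and substitutions turning s1 into s2."""
    m, n = len(s1), len(s2)
    # prev[j] holds the distance between the current prefix of s1 and s2[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete from s1
                          curr[j - 1] + 1,     # insert into s1
                          prev[j - 1] + cost)  # substitute (or match)
        prev = curr
    return prev[n]

# Whole-string comparison from the example: one substitution (3 -> 8)
print(edit_distance("A12345", "A12845"))  # 1
```

Unlike the per-character Euclidean comparison, this distance also penalizes dropped or inserted characters, which is why it suits whole-string supervision.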
Step 302 is described in detail below in terms of attention model training and license plate number recognition using an attention model.
1) Training of attention models
Firstly, license plate area samples can be positioned in a vehicle image sample, then, license plate feature samples are extracted from the license plate area samples, and the license plate feature samples are input into a bidirectional LSTM network to obtain target license plate feature sequence samples.
Then, inputting the target license plate feature sequence sample and the license plate number calibrated from the vehicle image sample into the attention model so as to train model parameters of the attention model.
In training, two stages of forward propagation and backward propagation can be generally included:
and (4) forward propagating, wherein the attention model sequentially backward propagates the target license plate feature sequence samples from the first layer of the attention model to the last layer so as to perform whole character string recognition on the target license plate feature sequence samples to obtain the target license plate number.
Back propagation: the attention model calculates the overall edit distance between the target license plate number and the calibrated license plate number as the loss value, and then propagates the loss backward from the last layer of the attention model to the first layer so as to adjust the model parameters, until the loss value converges.
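The loss computation in this stage can be sketched as the mean whole-string edit distance over a batch. Note this is an editorial sketch with made-up function names: edit distance itself is not differentiable, so a real training pipeline would back-propagate through a differentiable surrogate or a sequence-level training criterion rather than this value directly.

```python
from functools import lru_cache

def batch_edit_loss(predicted, calibrated):
    """Mean whole-string edit distance between each recognized plate
    number and its calibrated plate number (the loss value described
    in the back-propagation stage)."""
    def dist(a, b):
        @lru_cache(maxsize=None)
        def d(i, j):
            if i == 0:
                return j
            if j == 0:
                return i
            return min(d(i - 1, j) + 1,                       # deletion
                       d(i, j - 1) + 1,                       # insertion
                       d(i - 1, j - 1) + (a[i - 1] != b[j - 1]))  # substitution
        return d(len(a), len(b))
    return sum(dist(p, c) for p, c in zip(predicted, calibrated)) / len(predicted)

# One substitution in the first pair, one missing character in the second
loss = batch_edit_loss(["A12345", "B9999"], ["A12845", "B99999"])
print(loss)  # 1.0
```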
2) License plate number identification using attention model
The general attention model is built from a unidirectional LSTM network, which typically has an input layer, a hidden layer, and an output layer. The attention model therefore also has an input layer, a hidden layer and an output layer.
Referring to fig. 5, fig. 5 is a schematic view of an attention model calculation shown in an exemplary embodiment of the present application.
After the attention model is trained, the target license plate feature sequence can be input into the trained attention model, the attention model can calculate the activity value of the hidden layer of the attention model through the model parameters obtained by training by taking the edit distance as a loss function, and the license plate number of the target license plate is determined according to the calculated activity value.
When calculating the hidden layer activity values, the attention model may first calculate the weight factor corresponding to each license plate feature in the target license plate feature sequence.
For example, as shown in FIG. 5, the sequence formed by X1, X2, …, XM in FIG. 5 is the target license plate feature sequence; αt,1 is the weight factor of X1 at time t, αt,2 is the weight factor of X2 at time t, …, and αt,M is the weight factor of XM at time t.
Then, the attention model can calculate semantic codes of all the moments according to the weight factors of all the license plate features of all the moments and the target license plate feature sequence.
As shown in FIG. 5, the attention model may perform an operation based on license plate feature X1 and its weight factor αt,1, license plate feature X2 and its weight factor αt,2, …, and license plate feature XM and its weight factor αt,M, to obtain the semantic code Ct at time t.
Then, the attention model can calculate the activity value of the hidden layer of the attention model at each moment based on the target license plate feature sequence and the semantic code at each moment.
As shown in FIG. 5, the initial activity value of the hidden layer is a preset value S0. The attention model may obtain the hidden layer activity value S1 at time 1 based on the target license plate feature sequence X1, X2, …, XM, the calculated time-1 semantic code C1, and S0, and so on.
The attention model may obtain the hidden layer activity value St at time t based on the target license plate feature sequence X1, X2, …, XM, the calculated semantic code Ct at time t, and the hidden layer activity value St-1 at time t-1.
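As a numerical illustration of the semantic code computation (an editorial sketch with made-up numbers, not the application's actual model), the weight factors αt,1 … αt,M are applied to the feature sequence X1 … XM as a weighted sum:

```python
def semantic_code(alphas, features):
    """Semantic code Ct at one time step t: the weight factors
    alpha_{t,1..M} applied componentwise to the license plate feature
    sequence X_1..X_M (a weighted sum of feature vectors)."""
    dims = len(features[0])
    return [sum(a * x[k] for a, x in zip(alphas, features))
            for k in range(dims)]

# Hypothetical 3-step feature sequence with 2-dimensional features
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
alpha_t = [0.25, 0.25, 0.5]        # weight factors at one time step t
C_t = semantic_code(alpha_t, X)
print(C_t)  # [0.75, 0.75]
```

In the full model these weight factors are themselves computed from the previous hidden state, so the semantic code changes at every time step.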
And finally, the attention model can determine the license plate number of the target license plate according to the calculated activity value of the hidden layer at each moment.
For example, as shown in FIG. 5, the attention model may decode according to the hidden layer activity value at each time to obtain a decoding result; for instance, Yt is the decoding result at time t. The decoding result contains a confidence for each candidate character; the character with the highest confidence can be taken as the recognized character, and the recognized characters at each time are then combined in time order to generate the recognized license plate number.
It should be noted that, assuming the license plate A12345 contains 6 characters, the times described above are 6 time steps, and the decoding results output at each time are likewise 6 decoding results.
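The per-time decoding step (take the most confident candidate character at each time, then join the results in time order) can be sketched as follows; the alphabet and confidence values here are hypothetical.

```python
def decode_plate(step_confidences, alphabet):
    """At each time step the decoding result Y_t holds one confidence
    per candidate character; pick the most confident character and join
    the per-step results in time order."""
    chars = []
    for conf in step_confidences:
        best = max(range(len(alphabet)), key=lambda i: conf[i])
        chars.append(alphabet[best])
    return "".join(chars)

alphabet = "0123456789A"
# Hypothetical confidences for a 3-step decode of "A12"
steps = [
    [0.01] * 10 + [0.9],              # 'A' most confident
    [0.05, 0.8] + [0.05] * 9,         # '1' most confident
    [0.05, 0.05, 0.8] + [0.05] * 8,   # '2' most confident
]
print(decode_plate(steps, alphabet))  # A12
```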
In addition, in the embodiment of the application, the application can not only identify the license plate number of the target vehicle, but also identify the license plate type of the target vehicle.
Specifically, the convolutional neural network, the bidirectional LSTM network, and the attention model described above are cascaded in a target neural network model that also includes a classification model that is cascaded with the output of the convolutional neural network (i.e., the CNN network).
During implementation, after multiple license plate features of the target license plate are extracted from the license plate area using the CNN network, the license plate features can be input into the classification model, which calculates the confidence that the set of license plate features corresponds to each license plate type. The license plate type with the highest calculated confidence can then be determined as the license plate type of the target vehicle. The license plate number and the license plate type of the target vehicle can be output together as the recognition result of the target license plate.
The classification model may be built based on the softmax algorithm, or may be another classifier, such as an SSD (Single Shot MultiBox Detector) classifier; the classification model here is only exemplarily described and is not specifically limited.
The license plate types can include a license plate of a civil large vehicle, a license plate of a civil small vehicle, a license plate of a police vehicle and the like. Here, the type of the license plate is merely exemplary, and is not particularly limited.
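The classification step (softmax confidences over license plate types, highest confidence wins) can be sketched as follows; the raw scores and the feature-to-score step are illustrative assumptions, not the application's actual classifier.

```python
import math

def classify_plate(scores, plate_types):
    """Softmax over the classifier scores gives a confidence per license
    plate type; the type with the highest confidence is the result."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    confidences = [e / total for e in exps]
    best = max(range(len(plate_types)), key=lambda i: confidences[i])
    return plate_types[best], confidences[best]

types = ["civil large vehicle", "civil small vehicle", "police vehicle"]
# Hypothetical raw scores produced from the CNN license plate features
plate_type, conf = classify_plate([0.2, 2.1, -0.5], types)
print(plate_type)  # civil small vehicle
```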
The following describes the license plate recognition method provided by the present application in detail by using a specific embodiment and with reference to fig. 6 and 7.
Referring to fig. 6, fig. 6 is a flowchart illustrating another license plate recognition method according to an exemplary embodiment of the present application, which may include the following steps.
It should be noted that, as shown in fig. 6, the present application designs a target neural network model, which includes a convolutional neural network (CNN network), a bidirectional LSTM network, an attention model, and a classification model. Wherein, the output cascade of the bidirectional LSTM network and the convolutional neural network, the output cascade of the attention model and the bidirectional LSTM network, and the output cascade of the classification model and the convolutional neural network.
Step 601: an image of the target vehicle is acquired.
For specific implementation, refer to step 301, which is not described herein again.
Step 602: and positioning a license plate area of the target vehicle in the target vehicle image.
For specific implementation, refer to step 301, which is not described herein again.
The target vehicle image may be input into a YOLO model or FRCNN network, which may output a license plate region as shown in fig. 7.
Step 603: extracting each license plate feature in the license plate area by using the CNN network.
When the method is implemented, the license plate area image can be input into a CNN network, and the CNN network performs feature extraction on the license plate area image to extract a plurality of license plate features. The license plate features extracted by the CNN network have spatial correlation with each other.
Step 604: and determining the license plate type of the license plate of the target vehicle by using the softmax-classification model.
In implementation, the license plate features can be input into a softmax-classification model, and the softmax-classification model can calculate confidence degrees of the license plate features corresponding to different license plate types.
Then, the license plate type with the highest calculated confidence coefficient can be determined as the license plate type of the target vehicle.
Step 605: processing the license plate features by using a bidirectional LSTM network to obtain a target license plate feature sequence.
In implementation, each license plate feature can be input into the bidirectional LSTM network, which processes the features to obtain the target license plate feature sequence. Adjacent license plate features in the target license plate feature sequence are associated in adjacent time sequence.
Step 606: and (4) carrying out integral character string recognition on the target license plate characteristic sequence by using an Attention model to obtain the license plate number.
For specific implementation, refer to step 302 and step 303, which are not described herein again.
The loss function adopted by the Attention model during training is the edit distance between the recognized license plate number and the entire calibrated license plate number string. Because the loss is calculated over the whole character string, the recognized license plate number has better integrity; in particular, for license plates with adhered characters or overly large character spacing, problems such as missing or dropped characters in the recognized license plate number can be effectively alleviated.
During identification, the target license plate feature sequence can be input into an Attention model, and the Attention model can carry out whole character string identification on the target license plate feature sequence to obtain a license plate number.
Step 607: and outputting the identified license plate number and the license plate type.
As shown in fig. 7, the final output is the license plate number "lua 88888" and the license plate type "small vehicle".
According to the above license plate recognition technical scheme, on one hand, after the spatially associated license plate features of the target license plate are extracted, time-sequence association processing is performed on those features to obtain a target license plate feature sequence carrying both spatial and time-sequence relationships, and the attention model then performs whole-character-string recognition on the sequence to obtain the license plate number. Because each license plate feature in the target license plate feature sequence has both a spatial relationship and a time-sequence relationship, using this sequence for character string recognition alleviates problems such as missing or dropped characters in the recognized license plate number.
On the other hand, the attention model adopted by the application is trained with the edit distance between the recognized license plate number and the entire calibrated license plate number string as the loss function, rather than the Euclidean distance between single recognized characters and single calibrated characters. The license plate number recognized by the trained attention model therefore has better integrity, and problems such as missing or dropped characters in the recognized license plate number can be alleviated to a certain extent.
Referring to fig. 8, fig. 8 is a hardware structure diagram of an electronic device according to an exemplary embodiment of the present application.
The electronic device includes: a communication interface 801, a processor 802, a machine-readable storage medium 803, and a bus 804; wherein the communication interface 801, the processor 802 and the machine-readable storage medium 803 communicate with each other via a bus 804. The processor 802 may perform the license plate recognition methods described above by reading and executing machine-executable instructions in the machine-readable storage medium 803 corresponding to the license plate recognition control logic.
The machine-readable storage medium 803 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be volatile memory, non-volatile memory, or a similar storage medium. Specifically, the machine-readable storage medium 803 may be a RAM (Random Access Memory), a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., a compact disc or DVD), a similar storage medium, or a combination thereof.
Referring to fig. 9, fig. 9 is a block diagram illustrating a license plate recognition apparatus according to an exemplary embodiment of the present application, which may include the following elements.
A determining unit 901, configured to determine a target license plate feature sequence according to each license plate feature of a target license plate in a target license plate region, where the license plate feature is a license plate attribute feature extracted from an image including the target license plate by a convolutional neural network;
a recognition unit 902, configured to input the target license plate feature sequence into an attention model, perform character string recognition on the target license plate feature sequence according to a model parameter trained by the attention model with an edit distance as a loss function, and output a license plate number of the target license plate;
an obtaining unit 903, configured to obtain the license plate number of the target license plate output by the attention model.
Optionally, the determining unit is specifically configured to input the license plate features into a bidirectional LSTM network, so that the bidirectional LSTM network processes the license plate features and outputs a target license plate feature sequence, where adjacent license plate features in the target license plate feature sequence are related in adjacent time sequences; and acquiring the target license plate characteristic sequence output by the bidirectional LSTM network.
Optionally, the identifying unit 902 is specifically configured to calculate an activity value of a hidden layer in the attention model at each time according to a model parameter trained by using an edit distance between a calculated predicted character string and a calibrated character string as a loss function, and determine the license plate number of the target license plate according to the calculated activity value of the hidden layer at each time.
Optionally, the identifying unit 902 is specifically configured to calculate a weight factor of each license plate feature in the target license plate feature sequence at each time when calculating an activity value of a hidden layer in the attention model; calculating semantic codes of all the moments according to the weight factors of all the license plate features at all the moments and the target license plate feature sequence; and calculating the activity value of the hidden layer of the attention model at each moment based on the target license plate feature sequence and the semantic codes at each moment.
Optionally, the convolutional neural network, the bidirectional LSTM network, and the attention model are cascaded in a target neural network model, the target neural network model further includes a classification model, the classification model is cascaded with an output of the convolutional neural network, and the apparatus further includes:
a classification unit 904, configured to input the license plate features into a preset classification model, so that the classification model calculates confidence levels of the license plate features of the target license plate corresponding to different license plate types; determining the license plate type with the highest calculated confidence coefficient as the license plate type of the target license plate; and outputting the license plate type and the license plate number as the recognition result of the target license plate.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (12)

1. A license plate recognition method is characterized by comprising the following steps:
determining a target license plate feature sequence according to the license plate features of the target license plate in the target license plate region, wherein the license plate features are license plate attribute features extracted from an image containing the target license plate by a convolutional neural network;
inputting the target license plate feature sequence into an attention model, performing character string recognition on the target license plate feature sequence by the attention model according to model parameters trained by taking an edit distance as a loss function, and outputting the license plate number of the target license plate;
and acquiring the license plate number of the target license plate output by the attention model.
2. The method of claim 1, wherein determining a sequence of target license plate features from the license plate features in the target license plate region comprises:
inputting the license plate features into a bidirectional LSTM network, processing the license plate features by the bidirectional LSTM network and outputting a target license plate feature sequence, wherein adjacent license plate features in the target license plate feature sequence are related in adjacent time sequence;
and acquiring the target license plate characteristic sequence output by the bidirectional LSTM network.
3. The method of claim 1, wherein the attention model performs character string recognition on the target license plate feature sequence according to model parameters trained by using edit distance as a loss function, and outputs the license plate number in the target license plate region, comprising:
and calculating the activity value of the hidden layer in the attention model at each moment according to model parameters trained by taking the editing distance between the calculated and predicted character string and the calibrated character string as a loss function, and determining the license plate number of the target license plate according to the calculated activity value of the hidden layer at each moment.
4. The method of claim 3, wherein the calculating the activity value of the hidden layer in the attention model comprises:
calculating the weight factor of each license plate feature in the target license plate feature sequence at each moment;
calculating semantic codes of all the moments according to the weight factors of all the license plate features at all the moments and the target license plate feature sequence;
and calculating the activity value of the hidden layer of the attention model at each moment based on the target license plate feature sequence and the semantic codes at each moment.
5. The method of claim 2, wherein the convolutional neural network, the bi-directional LSTM network, and the attention model are cascaded in a target neural network model, the target neural network model further comprising a classification model, the classification model cascaded with an output of the convolutional neural network; the method further comprises the following steps:
inputting the license plate features into a preset classification model so that the classification model calculates confidence degrees of the license plate features of the target license plate corresponding to different license plate types;
determining the license plate type with the highest calculated confidence coefficient as the license plate type of the target license plate;
and outputting the license plate type and the license plate number as the recognition result of the target license plate.
6. A license plate recognition device is characterized by comprising:
the determining unit is used for determining a target license plate feature sequence according to each license plate feature of a target license plate in a target license plate area, wherein the license plate feature is a license plate attribute feature extracted from an image containing the target license plate by a convolutional neural network;
the recognition unit is used for inputting the target license plate feature sequence into an attention model, performing character string recognition on the target license plate feature sequence according to model parameters trained by the attention model by taking an edit distance as a loss function, and outputting the license plate number of the target license plate;
and the acquisition unit is used for acquiring the license plate number of the target license plate output by the attention model.
7. The apparatus of claim 6, wherein the determining unit is specifically configured to input the license plate features into a bidirectional LSTM network, so that the bidirectional LSTM network processes the license plate features to output a target license plate feature sequence, wherein adjacent license plate features in the target license plate feature sequence are associated in adjacent time sequences; and acquiring the target license plate characteristic sequence output by the bidirectional LSTM network.
8. The apparatus according to claim 6, wherein the recognition unit is specifically configured to calculate activity values of hidden layers in the attention model at each time according to model parameters trained by calculating an edit distance between the predicted character string and the calibrated character string as a loss function, and determine the license plate number of the target license plate according to the calculated activity values of the hidden layers at each time.
9. The apparatus according to claim 8, wherein the recognition unit, when calculating the activity value of the hidden layer in the attention model, is specifically configured to calculate a weight factor of each license plate feature in the target license plate feature sequence at each time; calculating semantic codes of all the moments according to the weight factors of all the license plate features at all the moments and the target license plate feature sequence; and calculating the activity value of the hidden layer of the attention model at each moment based on the target license plate feature sequence and the semantic codes at each moment.
10. The apparatus of claim 7, wherein the convolutional neural network, the bi-directional LSTM network, and the attention model are cascaded in a target neural network model, the target neural network model further comprising a classification model, the classification model cascaded with an output of the convolutional neural network;
the device further comprises: the classification unit is used for inputting the license plate characteristics into a preset classification model so as to enable the classification model to calculate the confidence degrees of the license plate characteristics of the target license plate corresponding to different license plate types; determining the license plate type with the highest calculated confidence coefficient as the license plate type of the target license plate; and outputting the license plate type and the license plate number as the recognition result of the target license plate.
11. An electronic device, comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, wherein the machine-executable instructions cause the processor to perform the method of any one of claims 1 to 5.
12. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of any of claims 1 to 5.
CN201811174638.1A 2018-10-09 2018-10-09 License plate recognition method and device and electronic equipment Active CN111027555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811174638.1A CN111027555B (en) 2018-10-09 2018-10-09 License plate recognition method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111027555A true CN111027555A (en) 2020-04-17
CN111027555B CN111027555B (en) 2023-09-26

Family

ID=70191036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811174638.1A Active CN111027555B (en) 2018-10-09 2018-10-09 License plate recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111027555B (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8331621B1 * 2001-10-17 2012-12-11 United Toll Systems, Inc. Vehicle image capture system
US20170177965A1 * 2015-12-17 2017-06-22 Xerox Corporation Coarse-to-fine cascade adaptations for license plate recognition with convolutional neural networks
CN106960206A * 2017-02-08 2017-07-18 Beijing Jietong Huasheng Technology Co Ltd Character recognition method and character recognition system
US20170249524A1 * 2016-02-25 2017-08-31 Xerox Corporation Method and system for detection-based segmentation-free license plate recognition
CN107239778A * 2017-06-09 2017-10-10 University of Science and Technology of China Efficient and accurate license plate recognition method
CN107368831A * 2017-07-19 2017-11-21 National University of Defense Technology Method for recognizing English words and digits in natural scene images
CN108009543A * 2017-11-29 2018-05-08 Shenzhen Harzone Technology Co Ltd License plate recognition method and device
CN108053653A * 2018-01-11 2018-05-18 Guangdong Weihai Shuwen Big Data Technology Co Ltd LSTM-based vehicle behavior prediction method and device
CN108122209A * 2017-12-14 2018-06-05 Zhejiang ICare Vision Technology Co Ltd License plate deblurring method based on a generative adversarial network
WO2018099194A1 * 2016-11-30 2018-06-07 Hangzhou Hikvision Digital Technology Co Ltd Character identification method and device
WO2018112900A1 * 2016-12-23 2018-06-28 Shenzhen Institutes of Advanced Technology License plate recognition method and apparatus, and user equipment
CN108229474A * 2017-12-29 2018-06-29 Beijing Megvii Technology Co Ltd License plate recognition method, device and electronic device
WO2018121006A1 * 2016-12-30 2018-07-05 Hangzhou Hikvision Digital Technology Co Ltd Method and device for license plate positioning
CN108388896A * 2018-02-09 2018-08-10 Hangzhou Xiongmai Integrated Circuit Technology Co Ltd License plate recognition method based on dynamic temporal convolutional neural networks
CN108615036A * 2018-05-09 2018-10-02 University of Science and Technology of China Natural scene text recognition method based on a convolutional attention network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hui Li, Chunhua Shen: "Reading Car License Plates Using Deep Convolutional Neural Networks and LSTMs" *
Cao Zhengfeng: "End-to-end license plate detection and recognition *** based on deep learning" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515985A * 2020-07-01 2021-10-19 Alibaba Group Holding Ltd Self-service weighing system, weighing detection method, device and storage medium
CN113515985B * 2020-07-01 2022-07-22 Alibaba Group Holding Ltd Self-service weighing system, weighing detection method, weighing detection device and storage medium
CN112132031A * 2020-09-23 2020-12-25 Ping An International Smart City Technology Co Ltd Vehicle model recognition method and device, electronic device and storage medium
CN112132031B * 2020-09-23 2024-04-16 Ping An International Smart City Technology Co Ltd Vehicle model recognition method and device, electronic device and storage medium
CN112508018A * 2020-12-14 2021-03-16 Beijing Pensees Technology Co Ltd License plate recognition method and device and storage medium
CN113486885A * 2021-06-17 2021-10-08 Hangzhou Hongquan Internet of Things Technology Co Ltd License plate recognition method and device, electronic device and storage medium

Also Published As

Publication number Publication date
CN111027555B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
US11200424B2 (en) Space-time memory network for locating target object in video content
CN111611847B (en) Video motion detection method based on scale attention hole convolution network
US11093789B2 (en) Method and apparatus for object re-identification
CN111027555B (en) License plate recognition method and device and electronic equipment
JP7097641B2 (en) Loop detection method based on convolution perception hash algorithm
US20170161591A1 (en) System and method for deep-learning based object tracking
US11640714B2 (en) Video panoptic segmentation
CN109086797B (en) Abnormal event detection method and system based on attention mechanism
CN112699786B (en) Video behavior identification method and system based on space enhancement module
EP3570220B1 (en) Information processing method, information processing device, and computer-readable storage medium
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
CN112818955B (en) Image segmentation method, device, computer equipment and storage medium
CN110188627B (en) Face image filtering method and device
CN111814690B (en) Target re-identification method, device and computer readable storage medium
CN116071709B (en) Crowd counting method, system and storage medium based on improved VGG16 network
CN112633205A (en) Pedestrian tracking method and device based on head and shoulder detection, electronic equipment and storage medium
CN114742112A (en) Object association method and device and electronic equipment
KR101936947B1 (en) Method for temporal information encoding of the video segment frame-wise features for video recognition
CN113112479A (en) Progressive target detection method and device based on key block extraction
CN110942463A (en) Video target segmentation method based on generation countermeasure network
CN114612979B (en) Living body detection method and device, electronic equipment and storage medium
Han et al. Multi-target tracking based on high-order appearance feature fusion
CN113762231B (en) End-to-end multi-pedestrian posture tracking method and device and electronic equipment
CN115359091A (en) Armor plate detection tracking method for mobile robot
CN115346143A (en) Behavior detection method, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant