CN110766009A - Tail plate identification method and device and computer readable storage medium


Info

Publication number
CN110766009A
CN110766009A (application CN201911055902.4A)
Authority
CN
China
Prior art keywords
tail
detection frame
vehicle
target image
license plate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911055902.4A
Other languages
Chinese (zh)
Inventor
唐健
张彦彬
王浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jieshun Science and Technology Industry Co Ltd
Original Assignee
Shenzhen Jieshun Science and Technology Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jieshun Science and Technology Industry Co Ltd filed Critical Shenzhen Jieshun Science and Technology Industry Co Ltd
Priority to CN201911055902.4A priority Critical patent/CN110766009A/en
Publication of CN110766009A publication Critical patent/CN110766009A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63Scene text, e.g. street names
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application disclose a tail plate identification method and device and a computer readable storage medium, which are used to identify tail (rear) license plates. The method of the embodiments comprises the following steps: acquiring a target image; framing the license plate in the target image with a first detection frame; expanding the first detection frame to obtain a second detection frame; adjusting the second detection frame with a regression model to obtain a third detection frame; extracting histogram of oriented gradients features from the image in the third detection frame to obtain a directional gradient histogram vector; extracting local binary pattern features from the image in the third detection frame to obtain a local binary pattern vector; combining the two vectors to obtain a composite vector; and classifying the composite vector with a KNN classifier to obtain a classification result, from which it is judged whether the license plate is a tail plate.

Description

Tail plate identification method and device and computer readable storage medium
Technical Field
The embodiments of the application relate to the field of image recognition, and in particular to a tail plate identification method and device and a computer-readable storage medium.
Background
With rapid economic and traffic development, the number of motor vehicles has increased dramatically, and vehicle monitoring and management has become a pressing problem. For example, in a parking lot where entering and exiting vehicles share one lane, the license plate recognizer at the lane captures both the plates of vehicles entering through the gate and the plates of vehicles exiting through it, but it cannot tell from the captured image alone whether a plate is a tail (rear) license plate; the tail plate of the vehicle in the image must be identified in order to determine whether the vehicle is entering or exiting.
Currently, one way to identify a tail plate is to distinguish vehicle-head from vehicle-tail images using Local Binary Pattern (LBP) or Histogram of Oriented Gradients (HOG) features and thereby decide whether the license plate in the image is a tail plate. For example, after a head or tail image is captured by a camera, LBP features are extracted from the image and the license plate appearing in it is classified using those features.
Because current tail plate identification relies on a single type of feature, its accuracy is limited.
Disclosure of Invention
The embodiments of the application provide a tail plate identification method and device and a computer readable storage medium, in particular a tail plate identification technique that integrates image gray-scale information and texture information. The method uses both kinds of image features at once, concatenates their feature vectors, and classifies vehicle-head and vehicle-tail images with a K-nearest-neighbor (KNN) classification algorithm to judge whether the license plate in the image is a tail plate, which improves identification accuracy and robustness.
The first aspect of the embodiments of the present application provides a method for identifying a tail board, including:
acquiring a target image;
selecting a license plate in the target image by using a first detection frame;
expanding the first detection frame to obtain a second detection frame;
adjusting the second detection frame by using a pre-trained regression model to obtain a third detection frame capable of framing the vehicle head or the vehicle tail in the target image, wherein the regression model is generated by training a plurality of target samples consisting of the vehicle head image and the vehicle tail image;
extracting the directional gradient histogram features of the image in the third detection frame to obtain a directional gradient histogram vector;
extracting local binary pattern features of the image in the third detection frame to obtain a local binary pattern vector;
combining the directional gradient histogram vector and the local binary pattern vector to obtain a composite vector;
classifying the composite vector by using a nearest neighbor classifier to obtain a classification result, wherein the classification result is that the target image comprises a vehicle head or the target image comprises a vehicle tail;
if the classification result is that the target image comprises a vehicle tail, judging that the license plate is a tail plate; and if the classification result is that the target image comprises a vehicle head, judging that the license plate is not a tail plate.
Optionally, expanding the first detection frame to obtain a second detection frame includes:
and expanding the width of the first detection frame by a first proportion, and expanding the height of the first detection frame by a second proportion to obtain a second detection frame.
Optionally, the classifying the composite vector by using a nearest neighbor classifier to obtain a classification result includes:
selecting, from a plurality of comparison samples and by using a nearest neighbor classifier, a preset number of comparison samples whose feature vectors are closest to the composite vector;
if, among the preset number of comparison samples, the number of samples containing a vehicle head is larger than the number of samples containing a vehicle tail, obtaining the classification result that the target image comprises a vehicle head;
and if, among the preset number of comparison samples, the number of samples containing a vehicle head is smaller than the number of samples containing a vehicle tail, obtaining the classification result that the target image comprises a vehicle tail.
Optionally, each target sample includes a prediction frame whose overlap rate with the vehicle head or the vehicle tail is greater than a preset value.
Optionally, the method further comprises:
acquiring the acquisition direction of an image acquisition device for acquiring the target image;
and judging the vehicle running direction in the target image according to the acquisition direction and the classification result of the target image.
The second aspect of the embodiments of the present application further provides a tail plate recognition apparatus, including:
an acquisition unit configured to acquire a target image;
the frame selection unit is used for selecting a license plate area in the target image by using a first detection frame;
the expanding unit is used for expanding the first detection frame to obtain a second detection frame;
the adjusting unit is used for adjusting the second detection frame by using a pre-trained regression model to obtain a third detection frame capable of framing the head or the tail of the vehicle in the target image, wherein the regression model is generated by training a plurality of target samples consisting of head images and tail images;
the first extraction unit is used for extracting the directional gradient histogram characteristics of the image in the third detection frame to obtain a directional gradient histogram vector;
the second extraction unit is used for extracting local binary pattern features of the image in the third detection frame to obtain a local binary pattern vector;
a combining unit, configured to combine the directional gradient histogram vector and the local binary pattern vector to obtain a composite vector;
a classification unit, configured to classify the composite vector by using a nearest neighbor classifier to obtain a classification result, where the classification result is that the target image includes a vehicle head or the target image includes a vehicle tail;
the judging unit is used for judging that the license plate is a tail plate if the classification result is that the target image comprises a vehicle tail, and for judging that the license plate is not a tail plate if the classification result is that the target image comprises a vehicle head.
Optionally, the expanding unit is specifically configured to: and expanding the width of the first detection frame by a first proportion, and expanding the height of the first detection frame by a second proportion to obtain a second detection frame.
Optionally, the classification unit is specifically configured to:
selecting, from the plurality of comparison samples and by using a nearest neighbor classifier, a preset number of samples whose feature vectors are closest to the composite vector;
if, among the preset number of samples, the number of samples containing a vehicle head is larger than the number of samples containing a vehicle tail, obtaining the classification result that the target image comprises a vehicle head;
and if, among the preset number of samples, the number of samples containing a vehicle head is smaller than the number of samples containing a vehicle tail, obtaining the classification result that the target image comprises a vehicle tail.
Optionally, each target sample includes a prediction frame whose overlap rate with the vehicle head or the vehicle tail is greater than a preset value.
Optionally, the acquiring unit is further configured to acquire an acquisition direction of an image acquisition device that acquires the target image; the judging unit is further used for judging the vehicle running direction in the target image according to the collecting direction and the classification result of the target image.
The third aspect of the embodiments of the present application further provides another tail plate recognition device, which includes a processor and a memory. The processor is connected to the memory through a bus, and the memory is used to store computer-executable instructions; when the tail plate recognition device runs, the processor reads the computer-executable instructions stored in the memory so that the device executes the tail plate identification method according to any one of claims 1 to 5.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the tail plate identification method according to any one of claims 1 to 5.
Drawings
FIG. 1 is a flow chart of a tail plate identification method according to an embodiment of the present application;
FIG. 2 is another flow chart of a tail plate identification method provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a tail plate recognition device provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of another tail plate recognition device provided in an embodiment of the present application.
Detailed Description
The embodiments of the application provide a tail plate identification method and device and a storage medium, which are used to identify the tail plate in an image.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present application provides a method for identifying a tail board, please refer to fig. 1, where an embodiment of the method includes:
101. acquiring a target image;
First, the target image to be detected is acquired; the target image is usually a frame captured from video shot by a camera. In some cases, the same image source as a license plate recognition system may be used.
102. Selecting a license plate by using the first detection frame;
The first detection frame is used to frame the region where the license plate is located in the target image as the license plate region. License plate detection and localization is a mature technology, and existing methods can be used directly, such as gray-image-based, wavelet-transform-based, or morphology-based license plate localization; the choice is not limited here.
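As an illustration only, the following is a minimal sketch of a morphology-based license plate localization step using OpenCV, one of the options named above. The function name, the Sobel/closing parameters, and the aspect-ratio filter are assumptions for illustration and are not part of the original method.

```python
import cv2

def locate_plate(image_bgr):
    """Return a rough first detection frame (x1, y1, x2, y2) for the license plate,
    using a simple morphology-based localization."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_16S, 1, 0))    # vertical edges
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 5))       # plate-shaped kernel
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0:                                # plate-like aspect ratio
            candidates.append((w * h, (x, y, x + w, y + h)))
    return max(candidates)[1] if candidates else None
```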
103. Expanding the first detection frame to obtain a second detection frame;
Although license plate detection can frame the license plate with the first detection frame, the image containing the license plate also contains the vehicle head or tail. The first detection frame is therefore expanded to obtain a larger second detection frame that can contain the head or tail image. In an optional embodiment, the upper, lower, left and right boundaries of the first detection frame are expanded by fixed ratios to obtain the second detection frame. It should be noted that, because images of different vehicle types are enlarged by the same ratios, the second detection frame may not match the contour of the vehicle head or tail exactly.
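A minimal sketch of this expansion step, assuming the first detection frame is given as (x1, y1, x2, y2) pixel coordinates; the expansion ratios and the clipping to image bounds are illustrative assumptions.

```python
def expand_box(box, image_shape, w_ratio=2.0, h_ratio=1.5):
    """Expand the first detection frame symmetrically by a width ratio and a height
    ratio to obtain the second detection frame, clipped to the image bounds."""
    x1, y1, x2, y2 = box
    h_img, w_img = image_shape[:2]
    w, h = x2 - x1, y2 - y1
    dx, dy = w * w_ratio / 2.0, h * h_ratio / 2.0      # amount added on each side
    return (max(0, int(x1 - dx)), max(0, int(y1 - dy)),
            min(w_img - 1, int(x2 + dx)), min(h_img - 1, int(y2 + dy)))
```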
104. Adjusting the second detection frame to obtain a third detection frame;
Because the second detection frame may not match the contour of the vehicle head or tail exactly, it needs to be adjusted to obtain a third detection frame that fits the head or tail contour better, which makes recognition easier. In one possible embodiment, the head or tail region is relocated precisely with a regression model trained in advance. The regression model is trained with a convolutional network built on the Caffe deep learning framework (Convolutional Architecture for Fast Feature Embedding); the training network is 3 layers deep and the network input is 32 × 16. First, the head or tail region of each image in the target sample set is labeled with a minimum enclosing rectangle, giving a labeling frame Gt(x1, y1, x2, y2), where (x1, y1) and (x2, y2) are the upper-left and lower-right corners of the rectangle, respectively. A rectangular frame Pt(x1, y1, x2, y2) whose overlap with the labeling frame Gt exceeds 60% is then randomly selected in the head or tail region, and the image cropped by this frame is used as a training sample. The offsets (Δx1, Δy1, Δx2, Δy2) between the upper-left and lower-right corners of the labeling frame Gt and the detection frame Pt are computed by the following formulas:
Δx1 = (Gx1 - Px1) / Pw
Δy1 = (Gy1 - Py1) / Ph
Δx2 = (Gx2 - Px2) / Pw
Δy2 = (Gy2 - Py2) / Ph
where Pw and Ph are the width and height of the rectangular frame Pt. The regression model thus learns the coordinate offsets (Δx1, Δy1, Δx2, Δy2) of the upper-left and lower-right corners of the head or tail relative to the corresponding corners of the prediction rectangle, i.e. the four output values of the network. At inference time, the image inside the expanded second detection frame Pb(x1, y1, x2, y2) is cropped, resized to 32 × 16, and fed through the network to obtain the offset output (Δx1, Δy1, Δx2, Δy2); the precise head or tail region, used as the third detection frame and denoted Gb(x1, y1, x2, y2), is then obtained by:
Gbx1 = Pbx1 + Pw × Δx1
Gby1 = Pby1 + Ph × Δy1
Gbx2 = Pbx2 + Pw × Δx2
Gby2 = Pby2 + Ph × Δy2
It can be understood that the model parameters given above for adjusting the second detection frame into the third detection frame are examples meant to aid understanding by those skilled in the art; other parameters may be chosen in practice as long as the vehicle head or tail can be framed accurately, which is not limited here.
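The offset encoding used for training and the decoding applied at inference can be written compactly as follows. This is only a sketch of the formulas above (training the 3-layer Caffe network itself is not shown), and the function and variable names are illustrative.

```python
def encode_offsets(gt_box, pred_box):
    """Training target: offsets (dx1, dy1, dx2, dy2) of the labeling frame Gt
    relative to the sampled frame Pt, normalized by Pt's width and height."""
    gx1, gy1, gx2, gy2 = gt_box
    px1, py1, px2, py2 = pred_box
    pw, ph = px2 - px1, py2 - py1
    return ((gx1 - px1) / pw, (gy1 - py1) / ph,
            (gx2 - px2) / pw, (gy2 - py2) / ph)

def decode_offsets(second_box, offsets):
    """Inference: apply the network's predicted offsets to the expanded second
    detection frame Pb to obtain the third detection frame Gb."""
    x1, y1, x2, y2 = second_box
    w, h = x2 - x1, y2 - y1
    dx1, dy1, dx2, dy2 = offsets
    return (x1 + w * dx1, y1 + h * dy1, x2 + w * dx2, y2 + h * dy2)
```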
105. Extracting the HOG characteristics of the image in the third detection frame to obtain an HOG vector;
The HOG features of an image characterize object shape: they are sensitive to edge and corner structure but insensitive to small changes in individual pixel values, so they adapt well to different environments. In one possible embodiment, the detection window is first divided into small 5 × 5 regions (cells), and the gradient direction range of 360 degrees is divided into 9 histogram channels. For each cell, the gradients of all pixels are accumulated into the channels to generate a gradient histogram, i.e. a 1 × 9 feature vector. Eight adjacent cells form a block, so each block has a 72-dimensional vector. Finally, the detection window is scanned block by block with a certain step, all block vectors are concatenated, and each block vector is normalized to unit length with the L2 norm to form the HOG feature. The images used for training are 40 × 20 and each image is divided into 9 blocks, so each image yields a 1 × 648 HOG feature vector. It is understood that HOG extraction is not limited to the above procedure, and the dimensionality of the resulting HOG vector varies with the extraction settings.
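A sketch of this step using scikit-image's HOG implementation. The parameters below approximate the setup described above (5 × 5 cells, 9 orientation bins, 8-cell blocks, L2 block normalization), but scikit-image's fixed block stride means the resulting dimensionality will generally differ from the 1 × 648 stated in the text.

```python
import cv2
from skimage.feature import hog

def hog_vector(roi_bgr):
    """HOG feature vector of the image inside the third detection frame."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (40, 20))            # training size: 40 wide, 20 high
    return hog(gray,
               orientations=9,                   # 9 orientation channels
               pixels_per_cell=(5, 5),           # 5 x 5 cells
               cells_per_block=(2, 4),           # 8 adjacent cells per block
               block_norm='L2',                  # L2 normalization per block
               feature_vector=True)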
106. Extracting LBP characteristics of the image in the third detection frame to obtain an LBP vector;
LBP is an operator that describes local texture features of an image; it has notable advantages such as rotation invariance and gray-scale invariance. In one possible embodiment, the detection window is first divided into 9 × 9 small regions (cells). For each pixel of a cell, an 8-bit binary LBP code is generated by comparing the center pixel with its 8 neighboring pixel values and is used as the feature value of that pixel. The frequency of each LBP value in every cell is then counted to obtain a feature histogram, which is normalized. Finally, the histograms of all cells are concatenated into a feature vector that represents the texture of the region. The images used for training are 40 × 20, and each image yields a 1 × 472 feature vector. As with HOG, the LBP extraction method is not limited to the above, and the resulting LBP vector may have other dimensions.
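A sketch of the LBP step using scikit-image. The 9 × 9 cell grid and 8-neighbor codes follow the description above, while the per-cell 256-bin histograms are an illustrative choice, so the resulting dimensionality will not match the 1 × 472 stated in the text exactly.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_vector(roi_bgr, grid=(9, 9)):
    """Concatenated, normalized per-cell LBP histograms of the image inside
    the third detection frame."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (40, 20))
    codes = local_binary_pattern(gray, P=8, R=1, method='default')   # 8-bit codes
    hists = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            h, _ = np.histogram(cell, bins=256, range=(0, 256))
            h = h.astype(np.float64)
            hists.append(h / (h.sum() + 1e-9))                        # normalize each cell
    return np.concatenate(hists)
```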
107. Combining the HOG vector and the LBP vector to obtain a composite vector;
Because image recognition with HOG features and with LBP features has complementary strengths, the two can be combined to make recognition more accurate. The HOG vector and the LBP vector are concatenated into a new vector, called the composite vector; its dimensionality is determined by the HOG and LBP vectors that form it. For example, concatenating the 1 × 648 HOG vector from step 105 with the 1 × 472 LBP vector from step 106 gives a 1 × 1120 composite vector, as sketched below.
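The combination step is a plain concatenation; a minimal sketch, with the vector names assumed from the sketches above:

```python
import numpy as np

def composite_vector(hog_vec, lbp_vec):
    """Concatenate the HOG and LBP vectors into one composite feature vector,
    e.g. 648 + 472 = 1120 dimensions in the setup described in the text."""
    return np.concatenate([np.ravel(hog_vec), np.ravel(lbp_vec)])
```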
108. Classifying the composite vector by using a KNN classifier to obtain a classification result;
The idea of KNN is simple and practical: given a training data set and a sample to classify, the class of the sample is decided by its K nearest training samples. In the method of this embodiment, the KNN classifier is trained with the composite vectors of a number of vehicle-head and vehicle-tail images, finds the 7 training samples closest to the composite vector obtained in step 107, and assigns the target image the class that occurs most among them. In this embodiment the classes to be recognized are vehicle head and vehicle tail. It is to be understood that setting K to 7 is only an example value given for illustration, and the method is not limited to the values in this embodiment.
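A sketch of the KNN step with scikit-learn; the label convention (0 = vehicle head, 1 = vehicle tail) and the helper names are assumptions, and K = 7 follows the example above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_knn(train_vectors, train_labels, k=7):
    """Fit a KNN classifier on composite vectors of labeled head/tail images
    (labels: 0 = vehicle head, 1 = vehicle tail)."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(np.asarray(train_vectors), np.asarray(train_labels))
    return knn

def is_tail_plate(knn, composite_vec):
    """True if the majority of the K nearest samples are vehicle-tail samples,
    i.e. the license plate in the target image is judged to be a tail plate."""
    return bool(knn.predict(np.asarray(composite_vec).reshape(1, -1))[0] == 1)
```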
109. Judging the type of the license plate;
Whether the license plate is a tail plate is judged from whether a vehicle head or a vehicle tail is present in the target image. If the target image contains a vehicle tail, the license plate in the image is judged to be a tail plate; if it contains a vehicle head, the license plate is judged not to be a tail plate.
It should be noted that, in the embodiment corresponding to fig. 1, steps 105 and 106 are not executed in a fixed order.
In another embodiment provided by the present application, the method can further determine the driving direction of the vehicle in the target image. Referring to fig. 2, the method includes:
201. Acquiring a target image;
202. selecting a license plate by using the first detection frame;
203. expanding the first detection frame to obtain a second detection frame;
204. adjusting the second detection frame to obtain a third detection frame;
205. extracting the HOG characteristics of the image in the third detection frame to obtain an HOG vector;
206. extracting LBP characteristics of the image in the third detection frame to obtain an LBP vector;
207. combining the HOG vector and the LBP vector to obtain a composite vector;
208. classifying the composite vector by using a KNN classifier to obtain a classification result;
209. judging the type of the license plate;
steps 201 to 209 are similar to the method of steps 101 to 109 in the embodiment of fig. 1, and are not repeated here.
210. Acquiring the acquisition direction of an image acquisition device for acquiring a target image;
The target image is usually shot by cameras at parking lot entrances and exits or on other main roads; the acquisition direction of the camera at the corresponding lane is obtained. It should be noted that the timing of this step is not limited; it only needs to be executed before step 211.
211. And judging the vehicle driving direction in the target image.
The driving direction of the vehicle in the target image is judged from the acquisition direction of the image acquisition device and the classification result of the target image. For example, when the acquisition device faces the entry direction of the parking lot and the classification result is vehicle head, the vehicle is judged to be entering the parking lot; if the classification result is vehicle tail, the vehicle is judged to be exiting. When the acquisition device faces the exit direction, the judgment is reversed, as in the sketch below.
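The direction rule just described can be expressed as a small lookup; the boolean parameter names are illustrative assumptions.

```python
def driving_direction(camera_faces_entry, image_has_tail):
    """Combine the camera's acquisition direction with the head/tail result:
    a camera facing the entry direction seeing a vehicle head means the vehicle
    is entering; the other three cases follow symmetrically."""
    if camera_faces_entry:
        return "exiting" if image_has_tail else "entering"
    return "entering" if image_has_tail else "exiting"
```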
The embodiment of the present application further provides a tail plate recognition device, including:
an acquisition unit 301 for acquiring a target image;
a framing unit 302, configured to frame a license plate in the target image by using a first detection frame;
an expanding unit 303, configured to expand the first detection frame to obtain a second detection frame;
an adjusting unit 304, configured to adjust the second detection frame by using a pre-trained regression model to obtain a third detection frame capable of framing the vehicle head or the vehicle tail in the target image, where the regression model is generated by training a plurality of target samples including the vehicle head image and the vehicle tail image;
a first extracting unit 305, configured to extract directional gradient histogram features of the image in the third detection frame, so as to obtain a directional gradient histogram vector;
a second extracting unit 306, configured to extract a local binary pattern feature of the image in the third detection frame, so as to obtain a local binary pattern vector;
a combining unit 307, configured to combine the histogram vector of directional gradients and the local binary pattern vector to obtain a composite vector;
a classifying unit 308, configured to classify the composite vector by using a nearest neighbor classifier to obtain a classification result, where the classification result is that the target image includes a vehicle head or the target image includes a vehicle tail;
the judging unit 309 is configured to judge that the license plate is a tail plate if the classification result indicates that the target image includes a vehicle tail, and to judge that the license plate is not a tail plate if the classification result indicates that the target image includes a vehicle head.
Further, the expanding unit 303 is specifically configured to: expand the width of the first detection frame by a first proportion and expand the height of the first detection frame by a second proportion to obtain the second detection frame.
further, the classification unit 308 is specifically configured to: selecting a preset number of target samples with the nearest distance between the feature vector and the composite vector from a plurality of comparison samples by using a nearest neighbor classifier; if the number of the comparison samples including the vehicle head is larger than that of the comparison samples including the vehicle tail in the comparison samples with the preset number, obtaining a classification result that the target image includes the vehicle head; and if the number of the comparison samples including the vehicle head is smaller than that of the comparison samples including the vehicle tail in the comparison samples with the preset number, obtaining a classification result that the target image includes the vehicle tail.
Further, the acquiring unit 301 in the tail board recognizing device is further configured to acquire a collecting direction of an image collecting device for collecting the target image, and the determining unit 309 is further configured to determine a vehicle driving direction in the target image according to the collecting direction and a classification result of the target image.
The tail plate recognition device of this embodiment is used to implement the tail plate identification method described above; for its specific implementation, refer to the corresponding method embodiments above, which are not repeated here.
The embodiment of the present application further provides another tail plate recognition device 40, which includes a processor 401 and a memory 402, and further includes a power supply 403. The processor is connected to the memory through a bus; the memory stores computer-executable instructions, and when the tail plate recognition device runs, the processor reads the computer-executable instructions stored in the memory so that the device executes the tail plate identification method shown in the embodiments of the present application.
The embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the tail plate identification method shown in the foregoing embodiments corresponding to fig. 1 and fig. 2 is performed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and various other media capable of storing program codes.

Claims (10)

1. A tail plate identification method, comprising:
acquiring a target image;
selecting a license plate in the target image by using a first detection frame;
expanding the first detection frame to obtain a second detection frame;
adjusting the second detection frame by using a pre-trained regression model to obtain a third detection frame capable of framing the vehicle head or the vehicle tail in the target image, wherein the regression model is generated by training a plurality of target samples consisting of the vehicle head image and the vehicle tail image;
extracting the directional gradient histogram features of the image in the third detection frame to obtain a directional gradient histogram vector;
extracting local binary pattern features of the image in the third detection frame to obtain a local binary pattern vector;
combining the directional gradient histogram vector and the local binary pattern vector to obtain a composite vector;
classifying the composite vector by using a nearest neighbor classifier to obtain a classification result, wherein the classification result is that the target image comprises a vehicle head or the target image comprises a vehicle tail;
if the classification result is that the target image comprises the tail of the vehicle, judging that the license plate is a tail license plate; and if the classification result is that the target image comprises a vehicle head, judging that the license plate is not a tail license plate.
2. The method of claim 1, wherein the expanding the first detection frame to obtain a second detection frame comprises:
and expanding the width of the first detection frame by a first proportion, and expanding the height of the first detection frame by a second proportion to obtain a second detection frame.
3. The tail plate identification method according to claim 1, wherein the classifying the composite vector by using a nearest neighbor classifier to obtain a classification result comprises:
selecting a preset number of comparison samples with the feature vectors closest to the composite vector distance from a plurality of comparison samples by using a nearest neighbor classifier;
if the number of the comparison samples including the vehicle head is larger than that of the comparison samples including the vehicle tail in the comparison samples with the preset number, obtaining a classification result that the target image includes the vehicle head;
and if the number of the comparison samples including the vehicle head is smaller than that of the comparison samples including the vehicle tail in the preset number of comparison samples, obtaining a classification result that the target image includes the vehicle tail.
4. The tail plate identification method of claim 1 or 2, wherein each target sample comprises a prediction frame whose overlap rate with the vehicle head or the vehicle tail is greater than a preset value.
5. The method of claim 3, further comprising:
acquiring the acquisition direction of an image acquisition device for acquiring the target image;
and judging the vehicle running direction in the target image according to the acquisition direction and the classification result of the target image.
6. A tail plate recognition device, comprising:
an acquisition unit configured to acquire a target image;
the frame selection unit is used for selecting a license plate in the target image by using a first detection frame;
the expanding unit is used for expanding the first detection frame to obtain a second detection frame;
the adjusting unit is used for adjusting the second detection frame by using a pre-trained regression model to obtain a third detection frame capable of framing the head or the tail of the vehicle in the target image, wherein the regression model is generated by training a plurality of target samples consisting of head images and tail images;
the first extraction unit is used for extracting the directional gradient histogram characteristics of the image in the third detection frame to obtain a directional gradient histogram vector;
the second extraction unit is used for extracting local binary pattern features of the image in the third detection frame to obtain a local binary pattern vector;
a combining unit, configured to combine the directional gradient histogram vector and the local binary pattern vector to obtain a composite vector;
a classification unit, configured to classify the composite vector by using a nearest neighbor classifier to obtain a classification result, wherein the classification result is that the target image comprises a vehicle head or the target image comprises a vehicle tail;
and a judging unit, configured to judge that the license plate is a tail plate if the classification result is that the target image comprises a vehicle tail, and to judge that the license plate is not a tail plate if the classification result is that the target image comprises a vehicle head.
7. The tail plate recognition device of claim 6, wherein the expanding unit is specifically configured to: expand the width of the first detection frame by a first proportion and expand the height of the first detection frame by a second proportion to obtain the second detection frame.
8. The tail plate recognition device of claim 6, wherein the classification unit is specifically configured to:
selecting a preset number of comparison samples with the feature vectors closest to the composite vector from a plurality of comparison samples by using a nearest neighbor classifier;
if the number of the comparison samples including the vehicle head is larger than that of the comparison samples including the vehicle tail in the comparison samples with the preset number, obtaining a classification result that the target image includes the vehicle head;
and if the number of the comparison samples including the vehicle head is smaller than that of the comparison samples including the vehicle tail in the comparison samples with the preset number, obtaining the classification result that the target image includes the vehicle tail.
9. A tail plate recognition device, comprising a processor and a memory, wherein the processor is connected with the memory through a bus, the memory is used for storing computer-executable instructions, and when the tail plate recognition device is operated, the processor reads the computer-executable instructions stored in the memory to enable the tail plate recognition device to execute the tail plate identification method according to any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, performs the tail plate identification method according to any one of claims 1 to 5.
CN201911055902.4A 2019-10-31 2019-10-31 Tail plate identification method and device and computer readable storage medium Pending CN110766009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911055902.4A CN110766009A (en) 2019-10-31 2019-10-31 Tail plate identification method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911055902.4A CN110766009A (en) 2019-10-31 2019-10-31 Tail plate identification method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110766009A true CN110766009A (en) 2020-02-07

Family

ID=69335477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911055902.4A Pending CN110766009A (en) 2019-10-31 2019-10-31 Tail plate identification method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110766009A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560856A (en) * 2020-12-18 2021-03-26 深圳赛安特技术服务有限公司 License plate detection and identification method, device, equipment and storage medium
CN113255632A (en) * 2021-07-16 2021-08-13 深圳市赛菲姆科技有限公司 Camera parameter adjusting method, device, equipment and medium based on license plate recognition
CN114170810A (en) * 2021-12-28 2022-03-11 深圳市捷顺科技实业股份有限公司 Vehicle traveling direction identification method, system and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388871A (en) * 2018-02-28 2018-08-10 中国计量大学 A kind of vehicle checking method returned based on vehicle body
CN109145928A (en) * 2017-06-16 2019-01-04 杭州海康威视数字技术股份有限公司 It is a kind of based on the headstock of image towards recognition methods and device
CN109214420A (en) * 2018-07-27 2019-01-15 北京工商大学 The high texture image classification method and system of view-based access control model conspicuousness detection
CN109558902A (en) * 2018-11-20 2019-04-02 成都通甲优博科技有限责任公司 A kind of fast target detection method
CN109948612A (en) * 2019-03-19 2019-06-28 苏州怡林城信息科技有限公司 Detection method of license plate, storage medium and detection device based on convolutional network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145928A (en) * 2017-06-16 2019-01-04 杭州海康威视数字技术股份有限公司 It is a kind of based on the headstock of image towards recognition methods and device
CN108388871A (en) * 2018-02-28 2018-08-10 中国计量大学 A kind of vehicle checking method returned based on vehicle body
CN109214420A (en) * 2018-07-27 2019-01-15 北京工商大学 The high texture image classification method and system of view-based access control model conspicuousness detection
CN109558902A (en) * 2018-11-20 2019-04-02 成都通甲优博科技有限责任公司 A kind of fast target detection method
CN109948612A (en) * 2019-03-19 2019-06-28 苏州怡林城信息科技有限公司 Detection method of license plate, storage medium and detection device based on convolutional network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王延江 (Wang Yanjiang) et al.: 《数字图像处理》 [Digital Image Processing], Petroleum University Press, 30 November 2016 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560856A (en) * 2020-12-18 2021-03-26 深圳赛安特技术服务有限公司 License plate detection and identification method, device, equipment and storage medium
CN112560856B (en) * 2020-12-18 2024-04-12 深圳赛安特技术服务有限公司 License plate detection and identification method, device, equipment and storage medium
CN113255632A (en) * 2021-07-16 2021-08-13 深圳市赛菲姆科技有限公司 Camera parameter adjusting method, device, equipment and medium based on license plate recognition
CN114170810A (en) * 2021-12-28 2022-03-11 深圳市捷顺科技实业股份有限公司 Vehicle traveling direction identification method, system and device

Similar Documents

Publication Publication Date Title
Panahi et al. Accurate detection and recognition of dirty vehicle plate numbers for high-speed applications
WO2020173022A1 (en) Vehicle violation identifying method, server and storage medium
Leibe et al. Pedestrian detection in crowded scenes
Wang et al. An effective method for plate number recognition
JP4479478B2 (en) Pattern recognition method and apparatus
Kaur et al. Number plate recognition using OCR technique
CN110766009A (en) Tail plate identification method and device and computer readable storage medium
CN104766042A (en) Method and apparatus for and recognizing traffic sign board
Prates et al. Brazilian license plate detection using histogram of oriented gradients and sliding windows
Donoser et al. Detecting, tracking and recognizing license plates
Shah et al. OCR-based chassis-number recognition using artificial neural networks
Rabiu Vehicle detection and classification for cluttered urban intersection
Rodríguez-Serrano et al. Data-driven vehicle identification by image matching
Chaturvedi et al. Automatic license plate recognition system using surf features and rbf neural network
Tamersoy et al. Robust vehicle detection for tracking in highway surveillance videos using unsupervised learning
Ilayarajaa et al. Text recognition in moving vehicles using deep learning neural networks
Nguyen et al. Robust car license plate localization using a novel texture descriptor
CN111178359A (en) License plate number recognition method, device and equipment and computer storage medium
CN113569934B (en) LOGO classification model construction method, LOGO classification model construction system, electronic equipment and storage medium
Sotheeswaran et al. A coarse-to-fine strategy for vehicle logo recognition from frontal-view car images
Kosala et al. Robust License Plate Detection in Complex Scene using MSER-Dominant Vertical Sobel.
CN113454649A (en) Target detection method, target detection device, electronic equipment and computer-readable storage medium
CN113449629A (en) Lane line false and true identification device, method, equipment and medium based on driving video
Xiong et al. High Speed Front-Vehicle Detection Based on Video Multi-feature Fusion
Khosravi A sliding and classifying approach towards real time Persian license plate recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200207