CN116453103A - Vehicle cross-mirror tracking license plate recognition method, system and electronic equipment - Google Patents

Vehicle cross-mirror tracking license plate recognition method, system and electronic equipment

Info

Publication number
CN116453103A
Authority
CN
China
Prior art keywords
target
license plate
tracking
unmatched
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310705974.9A
Other languages
Chinese (zh)
Other versions
CN116453103B (en)
Inventor
刘寒松
王永
王国强
刘瑞
董玉超
焦安健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonli Holdings Group Co Ltd
Original Assignee
Sonli Holdings Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonli Holdings Group Co Ltd filed Critical Sonli Holdings Group Co Ltd
Priority to CN202310705974.9A priority Critical patent/CN116453103B/en
Publication of CN116453103A publication Critical patent/CN116453103A/en
Application granted granted Critical
Publication of CN116453103B publication Critical patent/CN116453103B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of license plate recognition and relates to a vehicle cross-mirror tracking license plate recognition method, system and electronic device. The method first performs target detection and feature extraction on the input vehicle video frames, then carries out data association and tracker updating, and then updates the tracking target set or creates new trackers according to the unmatched observation set and unmatched target set obtained after data association; finally, license plate image recognition is performed. By combining deep learning with multi-target tracking, the method maintains high tracking stability in complex scenes, achieves multi-target tracking in real-time scenarios, adapts to vehicle motion at different speeds and in different directions, and is therefore of practical value for vehicle management scenarios that require real-time feedback.

Description

Vehicle cross-mirror tracking license plate recognition method, system and electronic equipment
Technical Field
The invention belongs to the technical field of license plate recognition and relates to a vehicle cross-mirror tracking license plate recognition method, system and electronic device, and in particular to an appearance-matching-adaptive vehicle cross-mirror tracking license plate recognition method, system and electronic device.
Background
Against the background of continuously increasing urban traffic pressure, high-mounted camera license plate recognition technology plays an important role in street parking management. In practical applications, however, this technology faces many challenges, such as vehicle occlusion, viewing angle and lighting interference, which easily lead to recognition errors. To address these problems, researchers have focused on improving license plate recognition accuracy in such complex environments, for example by means of light compensation, image preprocessing and deep learning algorithms.
Against this background, vehicle cross-mirror tracking solutions adopt a multi-camera linkage mode: the cameras of multiple adjacent areas are linked through edge devices to perform unified vehicle identification, and cross-mirror tracking of vehicles is achieved using the spatio-temporal data of the cameras in the area. Such a solution can effectively alleviate the occlusion, viewing-angle and lighting interference problems in the license plate recognition process, thereby improving the accuracy and stability of license plate recognition and providing a more effective technical means for urban parking management.
Disclosure of Invention
In order to solve the problem in the prior art that high-mounted cameras are disturbed by complex lighting environments, the invention provides an appearance-matching-adaptive vehicle cross-mirror tracking license plate recognition method, system and electronic device: a target tracking technique and an adaptive re-identification strategy based on object appearance information are used to cope with the challenges of complex scenes, and a lightweight optical character recognition (OCR) method is used to obtain high-quality license plate recognition results.
In a first aspect, the present invention provides a vehicle cross-mirror tracking license plate recognition method, including the following steps:
S1, target detection and feature extraction: performing target detection on an input vehicle video frame to obtain a bounding box for each target, and extracting a feature representation of the target for each detected target bounding box;
S2, data association and tracker updating: associating the vehicle detected in the current frame with the tracked vehicles, predicting the target state quantities by using a Kalman filtering algorithm, performing adaptive appearance information fusion according to the similarity and the dynamic weight coefficient, increasing the weight of the appearance information by adaptive weighting, matching the target state quantities predicted by the Kalman filter with the actual target observations by using the Hungarian matching algorithm to obtain a matched observation set, an unmatched observation set and an unmatched target set, and updating the tracker by the Kalman filtering algorithm;
S3, re-identification tracking: updating the tracking target set or creating new trackers according to the unmatched observation set and the unmatched target set;
S4, license plate image recognition: obtaining the vehicle target bounding box in each frame according to the re-identification tracking result, passing the target bounding box to a license plate detector or a license plate localization algorithm to extract the license plate region, and passing the license plate region to a license plate recognizer for recognition.
As a further technical scheme of the invention, the specific process of step S1 is as follows: first, a Faster R-CNN object detector is used to detect the target objects in the input video frame I_t, where t denotes the current time, obtaining a bounding box b_t^i for each target, i = 1, …, N, with N the number of targets detected in the current frame; for each detected target bounding box, a pre-trained ReID network is used to extract the feature representation f_t^i of the target.
As a further technical scheme of the invention, the process of predicting the target state quantity by using the Kalman filtering algorithm in the step S2 is as follows:
(S21) creating a corresponding target state quantity Tracks according to the first frame detection result, initializing a motion variable of Kalman filtering, and predicting a corresponding frame through the Kalman filtering;
(S22) for the moving targets in the video, all target state quantities at time t-1 are denoted x_{t-1}, and the time-t target state quantity prediction is completed according to the linear dynamics equation:
x̂_t = F x_{t-1},  P_t = F P_{t-1} F^T + Q,
wherein F is the state transition matrix, P is the covariance matrix of the target state quantity, and Q is the noise matrix of the covariance matrix.
As a further technical scheme of the invention, the process of performing self-adaptive appearance information fusion according to the similarity and the dynamic weight coefficient in the step S2 is as follows:
(S23) the cosine similarity s_t^i between the feature vector f_t^i of the i-th target in the current t-th frame and the feature vector f_{t-1}^i of the i-th target in the (t-1)-th frame is calculated to judge whether the appearance information of the current vehicle has changed;
(S24) according to the overlap (IoU) between the bounding box b_t^i of the i-th target in the current t-th frame and the bounding box b_{t-1}^i of the i-th target in the (t-1)-th frame, a dynamic weight coefficient λ_t^i is calculated to reflect the degree of change of the target's appearance;
(S25) adaptive appearance information fusion is performed according to the cosine similarity and the dynamic weight coefficient:
a. if the similarity s_t^i is greater than the threshold 0.5, the appearance information of the target has not changed significantly, and the current feature vector f_t^i is used directly for the subsequent data association;
b. if the similarity s_t^i is smaller than the threshold 0.5, the appearance information of the target has changed, and the historical feature vector f_{t-1}^i and the current feature vector f_t^i are fused, weighted by the dynamic weight coefficient λ_t^i, to generate the updated feature vector.
as a further technical solution of the present invention, the step S2 of increasing the weight of the appearance information by using adaptive weighting includes:
(S26) the observed quantity of the target obtained by the object detector at time t is recorded as z_t, and the process of converting the target state quantity into an observed quantity is expressed as z_t = H x_t, where H is the mapping matrix from the state space to the measurement space;
(S27) at time t, the predicted value x̂_t of the target state quantity and the target observed quantity z_t are matched using the IoU correlation matrix between the state quantities and the target detection results of the observed quantities, and the IoU cost matrix C_IoU = C(x̂_t, z_t) is then calculated from the matching result, where C is a cost metric function;
(S28) the appearance cost matrix C_app is first calculated according to f_t and z_t; then, on the basis of the existing cost matrix C_IoU, the appearance cost matrix weighted by d is added to obtain the cost matrix C_total, where d is defined as the difference between the highest value and the next-highest value in a row or column of the appearance cost matrix C_app.
As a further technical scheme of the invention, the process of updating the tracker by the Kalman filtering algorithm in step S2 is as follows: given the current-frame observation set Z_t = {z_t^1, …, z_t^N} and the tracked target set T = {T_1, …, T_M}, where M is the number of tracked targets, for each pair in the matched observation set the target observed quantity z_t^i is used to update the predicted value x̂_t^i of the target state quantity; for each observation in the unmatched observation set a new tracker is created; for each target in the unmatched target set it is determined whether to delete the tracker or update its state; the updated tracking target set is expressed as T' = {T_1, …, T_{M+K}}, where K is the number of newly created trackers.
As a further technical scheme of the invention, the specific process of the step S3 is as follows:
(S31) for each feature f_u in the unmatched observation set U = {f_1, …, f_P}, where P is the number of features in the unmatched observation set, its similarity sim(f_u, T_v) with each target T_v in the unmatched target set G = {T_1, …, T_Q} is calculated, where Q is the number of unmatched targets;
(S32) a similarity threshold τ is set; if sim(f_u, T_v) is greater than τ, the feature f_u is associated with the target T_v and the tracking target set T' is updated;
(S33) if a feature f_u is not associated with any target, a new tracker is created for f_u and added to the tracking target set T', obtaining the further updated tracking target set T''.
In a second aspect, the present invention provides a vehicle cross-mirror tracking license plate recognition system, comprising:
the target detection and feature extraction module is used for carrying out target detection on an input vehicle video frame, obtaining a boundary box of each target, and extracting feature representation of the target aiming at each detected target boundary box;
the data association and tracker updating module is used for associating the target detected in the current frame with the tracked targets, predicting the target state quantities by using a Kalman filtering algorithm, performing adaptive appearance information fusion according to the similarity and the dynamic weight coefficient, increasing the weight of the appearance information by adaptive weighting, matching the predicted target state quantities with the actual target observations by using the Hungarian matching algorithm to obtain a matched observation set, an unmatched observation set and an unmatched target set, and updating the tracker by the Kalman filtering algorithm;
the re-identification tracking module is used for updating the tracking target set or creating a new tracker according to the unmatched observation set and the unmatched target set;
and the license plate image recognition module is used for recognizing license plate numbers according to the re-recognition tracking result.
In a third aspect, the invention provides an electronic device comprising a memory and a processor and computer instructions stored on the memory and running on the processor, which when executed by the processor, perform the method of the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium storing computer instructions which, when executed by a processor, perform the method of the first aspect.
Compared with the prior art, the appearance-matching-adaptive vehicle cross-mirror tracking license plate recognition method, system and electronic device provided by the invention solve the problem of vehicle cross-mirror tracking and thereby improve the accuracy of license plate recognition, with the following advantages:
1. stronger occlusion handling capability: occlusion problems are handled by an adaptive re-recognition strategy. In a vehicle tracking scene, when a vehicle is blocked by other vehicles or objects, the method can more effectively re-identify and track the target, and the tracking accuracy is improved;
2. tracking stability: by combining deep learning and multi-target tracking technology, high tracking stability can be maintained in complex scenes, and the method is particularly important for application scenes needing long-time and stable tracking, such as urban parking management and the like;
3. real-time performance: the method can realize multi-target tracking in a real-time scene, is suitable for vehicle movement with different speeds and directions, and has important significance for vehicle management scenes needing real-time feedback.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the present disclosure and do not constitute a limitation on the invention.
Fig. 1 is a schematic flow chart of a vehicle cross-mirror tracking license plate recognition method provided by the invention.
Fig. 2 is a schematic diagram of a vehicle cross-mirror tracking license plate recognition system provided by the invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments according to the present invention. As used herein, unless the context clearly indicates otherwise, the singular forms are also intended to include the plural forms. Furthermore, it is to be understood that the terms "comprises" and "comprising" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that comprises a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product or device.
Embodiments of the invention and features of the embodiments may be combined with each other without conflict.
Example 1:
as shown in fig. 1, the present embodiment provides a vehicle cross-mirror tracking license plate recognition method, which includes the following steps:
s1, target detection and feature extraction: first using a Faster R-CNN object detector on an incoming video frameIn detection target object,/->For the current time, a bounding box for each object is obtained>N represents the number of targets detected for the current frame, for each detected target bounding box, extracting a feature representation of the target using a pre-trained feature extraction network +.>(one-to-one correspondence with the target bounding box), the feature extraction network in this embodiment adopts a ReID network;
s2, updating data association and tracker:
(2-1) predicting the motion variables at the next moment from the current series of motion variables using the Kalman filtering algorithm, specifically:
creating the corresponding target state quantities (Tracks) according to the first-frame detection result, initializing the motion variables of the Kalman filter, and predicting the corresponding frame through Kalman filtering;
for the moving targets in the video, all target state quantities at time t-1 are denoted x_{t-1}, and the time-t target state quantity prediction is completed according to the linear dynamics equation:
x̂_t = F x_{t-1},  P_t = F P_{t-1} F^T + Q,
wherein F is the state transition matrix, P is the covariance matrix of the target state quantity, and Q is the noise matrix of the covariance matrix;
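The following sketch shows this prediction step under an assumed constant-velocity model; the 8-dimensional state layout and the noise values are illustrative assumptions borrowed from SORT-style trackers, since the patent only requires a linear model with transition matrix F, covariance P and noise Q.

```python
# Sketch of the Kalman prediction step (2-1).  State layout
# [cx, cy, s, r, vcx, vcy, vs, vr] (box centre, scale, aspect ratio and
# their velocities) is an assumption, not specified by the patent.
import numpy as np

dim_x = 8
F = np.eye(dim_x)
F[:4, 4:] = np.eye(4)          # constant-velocity transition: position += velocity
Q = np.eye(dim_x) * 1e-2       # process-noise matrix (assumed value)

def kalman_predict(x, P):
    """x: (8,) state, P: (8,8) covariance -> predicted x̂_t, P_t."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```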
(2-2) carrying out adaptive appearance information fusion according to the similarity and the dynamic weight coefficient:
in order to fuse high-quality appearance information into the features produced for the target bounding boxes, adaptive rules are needed to remove the low-quality appearance information generated by occlusion and fast motion, specifically:
the cosine similarity s_t^i between the feature vector f_t^i of the i-th target in the current t-th frame and the feature vector f_{t-1}^i of the i-th target in the (t-1)-th frame is calculated to judge whether the appearance information of the current vehicle has changed;
according to the overlap (IoU) between the bounding box b_t^i of the i-th target in the current t-th frame and the bounding box b_{t-1}^i of the i-th target in the (t-1)-th frame, a dynamic weight coefficient λ_t^i is calculated to reflect the degree of change of the target's appearance;
adaptive appearance information fusion is then performed according to the cosine similarity and the dynamic weight coefficient, as sketched in the code after this list:
a. if the similarity s_t^i is greater than the threshold 0.5, the appearance information of the target has not changed significantly, and the current feature vector f_t^i is used directly for the subsequent data association;
b. if the similarity s_t^i is smaller than the threshold 0.5, the appearance information of the target has changed, and the historical feature vector f_{t-1}^i and the current feature vector f_t^i are fused, weighted by the dynamic weight coefficient λ_t^i, to generate an updated feature vector; for convenience of later description, the updated feature vector is still denoted f_t^i and is not distinguished further;
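A sketch of this fusion rule follows. The exact fusion formula is not reproduced in this text, so the convex combination below, weighted by an IoU-derived coefficient, is an illustrative assumption consistent with the description above.

```python
# Sketch of the adaptive appearance-fusion rule (2-2); the convex
# combination weighted by IoU is an assumption, not the patent's exact formula.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_appearance(f_prev, f_cur, box_prev, box_cur, sim_thresh=0.5):
    """Return the feature used for data association for one target."""
    cos_sim = float(np.dot(f_prev, f_cur) /
                    (np.linalg.norm(f_prev) * np.linalg.norm(f_cur) + 1e-9))
    if cos_sim > sim_thresh:          # appearance stable: keep the current feature
        return f_cur
    lam = iou(box_prev, box_cur)      # dynamic weight coefficient (assumed = IoU)
    fused = lam * f_prev + (1.0 - lam) * f_cur
    return fused / (np.linalg.norm(fused) + 1e-9)
```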
(2-3) increasing the weight of the appearance information using adaptive weighting, specifically:
the observed quantity of the target obtained by the object detector at time t is recorded as z_t, and the process of converting the target state quantity into an observed quantity is expressed as z_t = H x_t, where H is the mapping matrix from the state space to the measurement space;
at time t, the predicted value x̂_t of the target state quantity and the target observed quantity z_t are matched using the IoU correlation matrix between the state quantities and the target detection results of the observed quantities, and the IoU cost matrix C_IoU = C(x̂_t, z_t) is then calculated from the matching result, where C is a cost metric function;
the appearance cost matrix C_app is first calculated according to f_t and z_t and combined with the IoU cost matrix C_IoU; in this embodiment a weight is added on the basis of the difference d of the appearance cost matrix to improve C_IoU: denoting a track by T and a detection by D, when a track T has high similarity with only one box, a weight is added to the corresponding row of the appearance cost matrix C_app, and when a detection D is distinctively associated with only one track T, a weight is added to the corresponding column of C_app; the difference d is defined as the difference between the highest value and the next-highest value in a row or column of the appearance cost matrix C_app, and the final cost matrix C_total is obtained by adding the weighted appearance cost matrix to C_IoU, as sketched in the code below;
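The sketch below illustrates one way to realise this adaptive weighting under the stated definition of d; since the exact combination formula is not reproduced in this text, the row/column boost and the final sum are illustrative assumptions.

```python
# Sketch of step (2-3): combining the IoU cost with an adaptively weighted
# appearance cost.  Boosting rows/columns by the gap d between the best and
# second-best appearance similarity is an illustrative assumption.
import numpy as np

def combined_cost(iou_cost, app_sim):
    """iou_cost: (M, N) matrix of 1 - IoU; app_sim: (M, N) cosine similarities."""
    app_cost = 1.0 - app_sim
    w = np.ones_like(app_cost)                 # base weight of the appearance term
    if min(app_sim.shape) >= 2:
        # Row-wise gap d: track i clearly prefers one detection -> boost its row.
        row_top2 = -np.sort(-app_sim, axis=1)[:, :2]
        w += (row_top2[:, 0] - row_top2[:, 1])[:, None]
        # Column-wise gap d: detection j clearly prefers one track -> boost its column.
        col_top2 = -np.sort(-app_sim, axis=0)[:2, :]
        w += (col_top2[0] - col_top2[1])[None, :]
    return iou_cost + w * app_cost             # C_total = C_IoU + weighted C_app
```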
(2-4) the cost matrix C_total is input to the Hungarian algorithm to find the best matching relation, namely the matched observation set; the unmatched observation set and the unmatched target set are then determined;
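A sketch of this matching step, using SciPy's implementation of the Hungarian method; the gating threshold is an assumed value.

```python
# Sketch of step (2-4): Hungarian matching on the combined cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost, gate=1.5):
    """cost: (M tracks, N detections) -> matches, unmatched dets, unmatched tracks."""
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    matched_r = {r for r, _ in matches}
    matched_c = {c for _, c in matches}
    unmatched_tracks = [r for r in range(cost.shape[0]) if r not in matched_r]
    unmatched_dets = [c for c in range(cost.shape[1]) if c not in matched_c]
    return matches, unmatched_dets, unmatched_tracks
```

The returned index sets correspond to the matched observation set, the unmatched observation set and the unmatched target set used in the following steps.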
(2-5) updating the tracker by the Kalman filtering algorithm:
given the current-frame observation set Z_t = {z_t^1, …, z_t^N} and the tracked target set T = {T_1, …, T_M}, where M is the number of tracked targets, for each pair in the matching relation the target observed quantity z_t^i is used to update the predicted value x̂_t^i of the target state quantity; for each observation in the unmatched observation set a new tracker is created; for each target in the unmatched target set it is determined whether to delete the tracker or update its state; the updated tracking target set is expressed as T' = {T_1, …, T_{M+K}}, where K is the number of newly created trackers;
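For each matched pair, the standard Kalman update can be applied as sketched below; the observation matrix H and the measurement noise R follow the 8-dimensional state assumed earlier.

```python
# Sketch of the Kalman update used in step (2-5) for each matched pair.
import numpy as np

H = np.eye(4, 8)               # observation model z = H x (box part of the state)
R = np.eye(4) * 1e-1           # measurement-noise matrix (assumed value)

def kalman_update(x_pred, P_pred, z):
    """x_pred: (8,), P_pred: (8,8), z: (4,) matched observation."""
    y = z - H @ x_pred                         # innovation
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(8) - K @ H) @ P_pred
    return x_new, P_new
```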
s3, re-identification tracking: updating a tracking target set or creating a new tracker according to the unmatched observation set and the unmatched target set, and re-identifying a target in a subsequent frame to restore lost tracking when the target is not detected within a certain time, and recording the position, the characteristics and the identification information of each target in the tracking process of the whole video sequence so as to output a complete tracking result, wherein the method specifically comprises the following steps:
for non-matching observation sets(/>Representing the number of features in the unmatched observation set)>Calculate its and unmatched purposeLabel set->(/>Representing the number of unmatched objects) is +.>Similarity of->
Setting a similarity thresholdIf->Is greater than->Features->With the objectAssociating and updating tracking target set->
If the characteristics areNot associated with any target, then is +.>Creating a new tracker and adding it to a set of tracking targetsObtaining a further updated tracking target set +.>
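A sketch of this re-identification step follows; the similarity threshold and the use of each lost track's last stored (L2-normalised) feature are assumptions.

```python
# Sketch of step S3: re-identification of unmatched detections against
# unmatched (lost) tracks by appearance similarity.
import numpy as np

def reid_associate(unmatched_feats, lost_tracks, sim_thresh=0.6):
    """unmatched_feats: list of feature vectors; lost_tracks: list of objects
    with a .feature attribute.  Returns (associations, new_track_feats)."""
    associations, new_track_feats = [], []
    for u, f in enumerate(unmatched_feats):
        sims = [float(np.dot(f, trk.feature)) for trk in lost_tracks]  # cosine sim
        if sims and max(sims) > sim_thresh:
            associations.append((u, int(np.argmax(sims))))   # revive this track
        else:
            new_track_feats.append(f)                         # start a new tracker
    return associations, new_track_feats
```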
S4, license plate image recognition: according to the re-identification tracking result, the vehicle target bounding box in each frame is obtained; the target bounding box is passed to a license plate detector or a license plate localization algorithm to extract the license plate region within the target region, and the license plate region is then passed to a license plate recognizer for recognition. Even if a vehicle is occluded in some frames, it can still be located from the historical information and license plate recognition can proceed; the license plate recognizer in this embodiment uses a lightweight optical character recognition (OCR) model.
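A sketch of this step is given below; plate_detector and ocr_model are hypothetical callables standing in for the unnamed license plate detector and the lightweight OCR model.

```python
# Sketch of step S4: crop each tracked vehicle box, locate the plate inside
# it and run OCR.  `frame` is assumed to be an HxWx3 numpy array.
def recognize_plates(frame, tracked_boxes, plate_detector, ocr_model):
    """tracked_boxes: dict {track_id: (x1, y1, x2, y2)} -> {track_id: plate string}."""
    results = {}
    for track_id, (x1, y1, x2, y2) in tracked_boxes.items():
        vehicle = frame[int(y1):int(y2), int(x1):int(x2)]       # vehicle crop
        plate_box = plate_detector(vehicle)                     # plate region or None
        if plate_box is None:
            continue
        px1, py1, px2, py2 = map(int, plate_box)
        plate = vehicle[py1:py2, px1:px2]
        results[track_id] = ocr_model(plate)                    # recognized plate number
    return results
```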
The license plate recognition accuracy of this embodiment was compared with that of other existing methods; the results, shown in Table 1, indicate that the accuracy of license plate recognition is greatly improved.
Table 1: the accuracy of this example is compared with other methods
Example 2:
The embodiment provides a vehicle cross-mirror tracking license plate recognition system, which comprises:
the target detection and feature extraction module is used for carrying out target detection on an input video frame, obtaining a boundary box of each target, and extracting feature representation of the target aiming at each detected target boundary box;
the data association and tracker updating module is used for associating the target detected in the current frame with the tracked targets, predicting the target state quantities by using a Kalman filtering algorithm, performing adaptive appearance information fusion according to the similarity and the dynamic weight coefficient, increasing the weight of the appearance information by adaptive weighting, matching the predicted target state quantities with the actual target observations by using the Hungarian matching algorithm to obtain a matched observation set, an unmatched observation set and an unmatched target set, and updating the tracker by the Kalman filtering algorithm;
the re-identification tracking module is used for updating the tracking target set or creating a new tracker according to the unmatched observation set and the unmatched target set;
and the license plate image recognition module is used for recognizing license plate numbers according to the re-recognition tracking result.
Here, it should be noted that the above-mentioned modules correspond to steps S1 to S4 in embodiment 1, and the above-mentioned modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 1. It should be noted that the modules described above may be implemented as part of a system in a computer device (apparatus) such as a set of computer-executable instructions.
In further embodiments, there is also provided:
an electronic device comprising a memory and a processor and computer instructions stored on the memory and running on the processor, which when executed by the processor, perform the method described in embodiment 1. For brevity, the description is omitted here.
It should be understood that in this embodiment, the processor may be a central processing unit CPU, and the processor may also be other general purpose processors, digital signal processors DSP, application specific integrated circuits ASIC, off-the-shelf programmable gate array FPGA or other programmable logic device, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may include read only memory and random access memory and provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type.
A computer readable storage medium storing computer instructions which, when executed by a processor, perform the method described in embodiment 1.
The method in embodiment 1 may be directly embodied as a hardware processor executing or executed with a combination of hardware and software modules in the processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided herein.
Those of ordinary skill in the art will appreciate that the elements of the various examples described in connection with the present embodiments, i.e., the algorithm steps, can be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention. Algorithms and networks not specifically described in the present invention are well known in the art.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the invention, but rather, it is intended to cover all modifications or variations within the scope of the invention as defined by the claims of the present invention.

Claims (10)

1. A vehicle cross-mirror tracking license plate recognition method, characterized by comprising the following steps:
S1, target detection and feature extraction: performing target detection on an input vehicle video frame to obtain a bounding box for each target, and extracting a feature representation of the target for each detected target bounding box;
S2, data association and tracker updating: associating the vehicle detected in the current frame with the tracked vehicles, predicting the target state quantities by using a Kalman filtering algorithm, performing adaptive appearance information fusion according to the similarity and the dynamic weight coefficient, increasing the weight of the appearance information by adaptive weighting, matching the target state quantities predicted by the Kalman filter with the actual target observations by using the Hungarian matching algorithm to obtain a matched observation set, an unmatched observation set and an unmatched target set, and updating the tracker by the Kalman filtering algorithm;
S3, re-identification tracking: updating the tracking target set or creating new trackers according to the unmatched observation set and the unmatched target set;
S4, license plate image recognition: obtaining the vehicle target bounding box in each frame according to the re-identification tracking result, passing the target bounding box to a license plate detector or a license plate localization algorithm to extract the license plate region, and passing the license plate region to a license plate recognizer for recognition.
2. The vehicle cross-mirror tracking license plate recognition method according to claim 1, wherein the specific process of step S1 is as follows: first, a Faster R-CNN object detector is used to detect the target objects in the input video frame I_t, where t is the current time, obtaining a bounding box b_t^i for each target, i = 1, …, N, with N the number of targets detected in the current frame; for each detected target bounding box, a pre-trained ReID network is used to extract the feature representation f_t^i of the target.
3. The vehicle cross-mirror tracking license plate recognition method according to claim 2, wherein the process of predicting the target state quantity by using the Kalman filtering algorithm in step S2 is as follows:
(S21) creating a corresponding target state quantity Tracks according to the first frame detection result, initializing a motion variable of Kalman filtering, and predicting a corresponding frame through the Kalman filtering;
(S22) for the moving targets in the video, all target state quantities at time t-1 are denoted x_{t-1}, and the time-t target state quantity prediction is completed according to the linear dynamics equation:
x̂_t = F x_{t-1},  P_t = F P_{t-1} F^T + Q,
wherein F is the state transition matrix, P is the covariance matrix of the target state quantity, and Q is the noise matrix of the covariance matrix.
4. The vehicle cross-mirror tracking license plate recognition method according to claim 3, wherein the step S2 performs the process of adaptive appearance information fusion according to the similarity and the dynamic weight coefficient, which comprises the following steps:
(S23) the cosine similarity s_t^i between the feature vector f_t^i of the i-th target in the current t-th frame and the feature vector f_{t-1}^i of the i-th target in the (t-1)-th frame is calculated to judge whether the appearance information of the current vehicle has changed;
(S24) according to the overlap (IoU) between the bounding box b_t^i of the i-th target in the current t-th frame and the bounding box b_{t-1}^i of the i-th target in the (t-1)-th frame, a dynamic weight coefficient λ_t^i is calculated to reflect the degree of change of the target's appearance;
(S25) adaptive appearance information fusion is performed according to the cosine similarity and the dynamic weight coefficient:
a. if the similarity s_t^i is greater than the threshold 0.5, the appearance information of the target has not changed significantly, and the current feature vector f_t^i is used directly for the subsequent data association;
b. if the similarity s_t^i is smaller than the threshold 0.5, the appearance information of the target has changed, and the historical feature vector f_{t-1}^i and the current feature vector f_t^i are fused, weighted by the dynamic weight coefficient λ_t^i, to generate the updated feature vector.
5. the vehicle cross-mirror tracking license plate recognition method according to claim 4, wherein the step S2 of increasing the weight of the appearance information using adaptive weighting is:
(S26) the observed quantity of the target obtained by the object detector at time t is recorded as z_t, and the process of converting the target state quantity into an observed quantity is expressed as z_t = H x_t, where H is the mapping matrix from the state space to the measurement space;
(S27) at time t, the predicted value x̂_t of the target state quantity and the target observed quantity z_t are matched using the IoU correlation matrix between the state quantities and the target detection results of the observed quantities, and the IoU cost matrix C_IoU = C(x̂_t, z_t) is then calculated from the matching result, where C is a cost metric function;
(S28) the appearance cost matrix C_app is first calculated according to f_t and z_t; then, on the basis of the existing cost matrix C_IoU, the appearance cost matrix weighted by d is added to obtain the cost matrix C_total, where d is defined as the difference between the highest value and the next-highest value in a row or column of the appearance cost matrix C_app.
6. The vehicle cross-mirror tracking license plate recognition method according to claim 5, wherein the process of updating the tracker by the Kalman filtering algorithm in step S2 is as follows: given the current-frame observation set Z_t = {z_t^1, …, z_t^N} and the tracked target set T = {T_1, …, T_M}, where M is the number of tracked targets, for each pair in the matched observation set the target observed quantity z_t^i is used to update the predicted value x̂_t^i of the target state quantity; for each observation in the unmatched observation set a new tracker is created; for each target in the unmatched target set it is determined whether to delete the tracker or update its state; the updated tracking target set is expressed as T' = {T_1, …, T_{M+K}}, where K is the number of newly created trackers.
7. The vehicle cross-mirror tracking license plate recognition method according to claim 6, wherein the specific process of step S3 is as follows:
(S31) for each feature f_u in the unmatched observation set U = {f_1, …, f_P}, where P is the number of features in the unmatched observation set, its similarity sim(f_u, T_v) with each target T_v in the unmatched target set G = {T_1, …, T_Q} is calculated, where Q is the number of unmatched targets;
(S32) a similarity threshold τ is set; if sim(f_u, T_v) is greater than τ, the feature f_u is associated with the target T_v and the tracking target set T' is updated;
(S33) if a feature f_u is not associated with any target, a new tracker is created for f_u and added to the tracking target set T', obtaining the further updated tracking target set T''.
8. A vehicle cross-mirror tracking license plate recognition system capable of implementing the method of any one of claims 1-7, comprising:
the target detection and feature extraction module is used for carrying out target detection on an input vehicle video frame, obtaining a boundary box of each target, and extracting feature representation of the target aiming at each detected target boundary box;
the data association and tracker updating module is used for associating the target detected in the current frame with the tracked targets, predicting the target state quantities by using a Kalman filtering algorithm, performing adaptive appearance information fusion according to the similarity and the dynamic weight coefficient, increasing the weight of the appearance information by adaptive weighting, matching the predicted target state quantities with the actual target observations by using the Hungarian matching algorithm to obtain a matched observation set, an unmatched observation set and an unmatched target set, and updating the tracker by the Kalman filtering algorithm;
the re-identification tracking module is used for updating the tracking target set or creating a new tracker according to the unmatched observation set and the unmatched target set;
and the license plate image recognition module is used for recognizing license plate numbers according to the re-recognition tracking result.
9. An electronic device comprising a memory and a processor and computer instructions stored on the memory and running on the processor, which when executed by the processor, perform the method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions which, when executed by a processor, perform the method of any of claims 1-7.
CN202310705974.9A 2023-06-15 2023-06-15 Vehicle cross-mirror tracking license plate recognition method, system and electronic equipment Active CN116453103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310705974.9A CN116453103B (en) 2023-06-15 2023-06-15 Vehicle cross-mirror tracking license plate recognition method, system and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310705974.9A CN116453103B (en) 2023-06-15 2023-06-15 Vehicle cross-mirror tracking license plate recognition method, system and electronic equipment

Publications (2)

Publication Number Publication Date
CN116453103A true CN116453103A (en) 2023-07-18
CN116453103B CN116453103B (en) 2023-08-18

Family

ID=87130558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310705974.9A Active CN116453103B (en) 2023-06-15 2023-06-15 Vehicle cross-mirror tracking license plate recognition method, system and electronic equipment

Country Status (1)

Country Link
CN (1) CN116453103B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935388A (en) * 2024-01-29 2024-04-26 广州华南路桥实业有限公司 Expressway charging monitoring system and method based on networking

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017117359A1 (en) * 2015-12-30 2017-07-06 3M Innovative Properties Company Automatic learning for vehicle classification
CN110675430A (en) * 2019-09-24 2020-01-10 中国科学院大学 Unmanned aerial vehicle multi-target tracking method based on motion and appearance adaptation fusion
CN111914664A (en) * 2020-07-06 2020-11-10 同济大学 Vehicle multi-target detection and track tracking method based on re-identification
CN113034548A (en) * 2021-04-25 2021-06-25 安徽科大擎天科技有限公司 Multi-target tracking method and system suitable for embedded terminal
WO2021223367A1 (en) * 2020-05-06 2021-11-11 佳都新太科技股份有限公司 Single lens-based multi-pedestrian online tracking method and apparatus, device, and storage medium
CN113674328A (en) * 2021-07-14 2021-11-19 南京邮电大学 Multi-target vehicle tracking method
CN113989794A (en) * 2021-11-12 2022-01-28 珠海安联锐视科技股份有限公司 License plate detection and recognition method
CN114972418A (en) * 2022-03-30 2022-08-30 北京航空航天大学 Maneuvering multi-target tracking method based on combination of nuclear adaptive filtering and YOLOX detection
CN115063836A (en) * 2022-06-10 2022-09-16 烟台大学 Pedestrian tracking and re-identification method based on deep learning
CN116246232A (en) * 2023-03-16 2023-06-09 江苏华真信息技术有限公司 Cross-border head and local feature strategy optimized vehicle multi-target tracking method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017117359A1 (en) * 2015-12-30 2017-07-06 3M Innovative Properties Company Automatic learning for vehicle classification
CN110675430A (en) * 2019-09-24 2020-01-10 中国科学院大学 Unmanned aerial vehicle multi-target tracking method based on motion and appearance adaptation fusion
WO2021223367A1 (en) * 2020-05-06 2021-11-11 佳都新太科技股份有限公司 Single lens-based multi-pedestrian online tracking method and apparatus, device, and storage medium
CN111914664A (en) * 2020-07-06 2020-11-10 同济大学 Vehicle multi-target detection and track tracking method based on re-identification
CN113034548A (en) * 2021-04-25 2021-06-25 安徽科大擎天科技有限公司 Multi-target tracking method and system suitable for embedded terminal
CN113674328A (en) * 2021-07-14 2021-11-19 南京邮电大学 Multi-target vehicle tracking method
CN113989794A (en) * 2021-11-12 2022-01-28 珠海安联锐视科技股份有限公司 License plate detection and recognition method
CN114972418A (en) * 2022-03-30 2022-08-30 北京航空航天大学 Maneuvering multi-target tracking method based on combination of nuclear adaptive filtering and YOLOX detection
CN115063836A (en) * 2022-06-10 2022-09-16 烟台大学 Pedestrian tracking and re-identification method based on deep learning
CN116246232A (en) * 2023-03-16 2023-06-09 江苏华真信息技术有限公司 Cross-border head and local feature strategy optimized vehicle multi-target tracking method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李文姣; 秦勃: "Anti-occlusion vehicle tracking algorithm based on Kalman filter" (基于卡尔曼滤波器的抗遮挡车辆跟踪算法), Computer Applications, no. 2

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935388A (en) * 2024-01-29 2024-04-26 广州华南路桥实业有限公司 Expressway charging monitoring system and method based on networking
CN117935388B (en) * 2024-01-29 2024-06-18 广州华南路桥实业有限公司 Expressway charging monitoring system and method based on networking

Also Published As

Publication number Publication date
CN116453103B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
Wu et al. Track to detect and segment: An online multi-object tracker
Hassaballah et al. Vehicle detection and tracking in adverse weather using a deep learning framework
Sudha et al. An intelligent multiple vehicle detection and tracking using modified vibe algorithm and deep learning algorithm
Lu et al. Deep-sea organisms tracking using dehazing and deep learning
CN111932580A (en) Road 3D vehicle tracking method and system based on Kalman filtering and Hungary algorithm
CN107491731B (en) Ground moving target detection and identification method for accurate striking
CN107133970B (en) Online multi-target tracking method and device based on motion information
CN110288627B (en) Online multi-target tracking method based on deep learning and data association
CN109099929B (en) Intelligent vehicle positioning device and method based on scene fingerprints
CN116453103B (en) Vehicle cross-mirror tracking license plate recognition method, system and electronic equipment
CN114049382B (en) Target fusion tracking method, system and medium in intelligent network connection environment
CN117036397A (en) Multi-target tracking method based on fusion information association and camera motion compensation
Mahaur et al. An improved lightweight small object detection framework applied to real-time autonomous driving
CN110533692B (en) Automatic tracking method for moving target in aerial video of unmanned aerial vehicle
Chan et al. City tracker: Multiple object tracking in urban mixed traffic scenes
CN113379795B (en) Multi-target tracking and segmentation method based on conditional convolution and optical flow characteristics
Song et al. Multi-object tracking and segmentation with embedding mask-based affinity fusion in hierarchical data association
Cores et al. Spatiotemporal tubelet feature aggregation and object linking for small object detection in videos
Micheal et al. Deep learning-based multi-class multiple object tracking in UAV video
Ma An object tracking algorithm based on optical flow and temporal–spatial context
Liu et al. Find small objects in UAV images by feature mining and attention
Duan [Retracted] Deep Learning‐Based Multitarget Motion Shadow Rejection and Accurate Tracking for Sports Video
CN111161323B (en) Complex scene target tracking method and system based on correlation filtering
Razzok et al. Pedestrian detection under weather conditions using conditional generative adversarial network
Zheng et al. 6d camera relocalization in visually ambiguous extreme environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant