CN109543610B - Vehicle detection tracking method, device, equipment and storage medium - Google Patents


Info

Publication number
CN109543610B
CN109543610B (application number CN201811399560.3A)
Authority
CN
China
Prior art keywords
vehicle
image
region
detection
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811399560.3A
Other languages
Chinese (zh)
Other versions
CN109543610A (en)
Inventor
杨岳航
朱明
郝志成
高文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS filed Critical Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN201811399560.3A priority Critical patent/CN109543610B/en
Publication of CN109543610A publication Critical patent/CN109543610A/en
Application granted granted Critical
Publication of CN109543610B publication Critical patent/CN109543610B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a vehicle detection and tracking method, device, equipment and medium, wherein the method comprises the following steps: extracting a feature descriptor of a vehicle image to be tracked; detecting a target vehicle tracking object in an image of a region to be detected by using a trained vehicle detection model containing the component features of the normally visible region of the vehicle and the component features of the easily-sheltered region of the vehicle; extracting a feature descriptor of the image of the target vehicle tracking object; performing locality-sensitive hash matching on the feature descriptors and purifying the matched descriptors; determining the number of the purified feature descriptors and judging whether that number is greater than a preset threshold value; if so, tracking the target vehicle tracking object when the region tone value of the target vehicle tracking object is within the preset tone value range. That is, the invention performs image detection with a model containing both the normally visible region and the easily-sheltered region, and locates the vehicle to be tracked through locality-sensitive hash matching, thereby realizing vehicle tracking, effectively avoiding missed vehicle detections caused by occlusion, and improving tracking accuracy.

Description

Vehicle detection tracking method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for vehicle detection and tracking.
Background
Vehicle detection and tracking are key components of an intelligent traffic system, have wide application prospects in fields such as traffic dispersion, driver-assistance systems and road monitoring, and can provide important clues and evidence for public security cases and traffic accident investigation. However, due to the complex imaging conditions in real scenes, vehicle detection and tracking face many difficulties, among which the occlusion problem is particularly prominent. The presence of multiple targets in a complex road environment is a main cause of mutual occlusion among vehicles; target information lost to occlusion easily leads to missed targets and tracking loss.
Disclosure of Invention
In view of the above, the present invention provides a vehicle detection and tracking method, device, apparatus and storage medium, which can solve the missed-tracking problem caused by the loss of a vehicle's visual information. The specific scheme is as follows:
in a first aspect, the present invention discloses a vehicle detection and tracking method, including:
determining a vehicle image to be tracked, and extracting a first feature descriptor of the vehicle image to be tracked;
acquiring images of a region to be detected which are arranged according to a time sequence, and detecting all vehicle tracking objects in the image of the current region to be detected by using a trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle;
extracting a second feature descriptor of an image area corresponding to a target vehicle tracking object in all the vehicle tracking objects;
performing locality sensitive hash matching on the first feature descriptor and the second feature descriptor, and purifying the matched feature descriptors;
determining the number of the purified feature descriptors, and judging whether the number is greater than a preset threshold value;
if so, when the regional tone value in the image region corresponding to the target vehicle tracking object is within a preset tone value range, tracking by using the position, the direction and the scale of the target vehicle tracking object;
and if not, detecting all vehicle tracking objects in the next frame of image of the area to be detected.
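The branch logic of the last three steps can be sketched in Python; this is a minimal illustration, assuming the purified match count and the region tone (hue) value have already been computed, and all names and the example threshold are illustrative rather than taken from the patent.

```python
def track_decision(num_refined_matches, region_hue_deg, hue_range,
                   match_threshold=20):
    """Decide whether to track the candidate or move to the next frame.

    num_refined_matches: matched feature descriptors left after purification
    region_hue_deg: tone (hue) value of the candidate's image region, degrees
    hue_range: (low, high) preset tone value range for the tracked vehicle
    match_threshold: preset threshold on the number of purified matches
    """
    if num_refined_matches <= match_threshold:
        return "detect_next_frame"       # too few matches: go to next frame
    low, high = hue_range
    if low <= region_hue_deg <= high:
        return "track"                   # match count and tone both agree
    return "try_other_candidates"        # same frame, other tracking objects
```

The third outcome corresponds to matching the remaining tracking objects in the same frame, as described later in the detailed embodiment.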
Optionally, the determining an image of the vehicle to be tracked includes:
and selecting the image of the vehicle to be tracked from a database of vehicles to be tracked.
Optionally, the acquiring the images of the to-be-detected region arranged in time sequence includes:
and acquiring a video to be processed, and sampling according to a time sequence to acquire the image of the area to be detected.
Optionally, the method further includes:
acquiring a preset vehicle detection model;
and training the preset vehicle detection model by using the image training sample, and learning the scale and position relation among the vehicle components in the preset vehicle detection model to obtain the trained vehicle detection model containing the component characteristics of the normally visible region of the vehicle and the component characteristics of the region easily blocked by the vehicle.
Optionally, the obtaining of the preset vehicle detection model includes:
dividing a vehicle object into a vehicle normally visible area and a vehicle easily-sheltered area;
and constructing a preset vehicle detection model comprising a vehicle normally visible region component and a vehicle easily-sheltered region component by utilizing a mixed image template comprising different types of features.
Optionally, the training of the preset vehicle detection model by using the image training sample, and the learning of the scale and position relationship between vehicle components in the preset vehicle detection model include:
generating a characteristic response matrix by using the image training sample; wherein each row in the feature response matrix characterizes a feature response vector of one of the image training samples;
selecting the features of which the response values in all the image training samples in the feature response matrix are larger than a preset threshold value to obtain a large feature response area;
calculating the region scores of all the large feature response regions according to a preset score calculation formula; the formula is rendered only as an image in the original, and a plausible form reconstructed from the variable definitions is:

score(B_k) = Σ_{i ∈ rows(B_k)} Σ_{j ∈ cols(B_k)} (β_{k,j} · R_{i,j} − z_{k,j})

wherein B_k represents the k-th large feature response region, rows() represents the positive examples contained in the large feature response region, cols() represents the features contained in the large feature response region, β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the k-th large feature response region, R_{i,j} represents the feature response value in row i and column j, z_{k,j} represents the independent standard constant corresponding to β_{k,j}, and score(B_k) represents the score of the large feature response region;
determining a target response region with the score larger than a region score threshold value in all the large feature response regions according to the region score;
and learning the scale and position relation among the vehicle parts by using all the target response areas.
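The column-selection and scoring steps above can be sketched in plain Python; the region-score formula used here is a plausible reading of the variable definitions (the patent renders the formula itself only as an image), so treat it as an assumption.

```python
def large_response_columns(R, thresh=0.5):
    """Column indices j of the feature response matrix R (rows = training
    samples) whose response exceeds `thresh` in *all* samples."""
    n_cols = len(R[0])
    return [j for j in range(n_cols) if all(row[j] > thresh for row in R)]

def region_score(R, cols, beta, z):
    """Assumed region score: sum over samples i and region columns j of
    beta[j] * R[i][j] - z[j]."""
    return sum(beta[j] * row[j] - z[j] for row in R for j in cols)
```

Regions whose `region_score` exceeds the region score threshold become the target response regions used for learning the scale and position relations.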
Optionally, the detecting all vehicle tracking objects in the current frame to-be-detected region image by using the trained vehicle detection model includes:
S11: performing template matching on the current frame to-be-detected region image based on a mixed image template in the trained vehicle detection model to obtain all vehicle candidates in the current frame to-be-detected region image, and determining detection scores of all the vehicle candidates;
S12: determining the optimal vehicle candidate with the highest detection score in all the vehicle candidates, judging whether the detection score of the optimal vehicle candidate is larger than a detection score threshold value, and if so, entering S13; if not, the detection is finished;
S13: determining the optimal vehicle candidate as a vehicle tracking object, recording the position, the direction and the scale of the optimal vehicle candidate, removing the optimal vehicle candidate from the current frame to-be-detected region image, taking the image without the optimal vehicle candidate as the current frame to-be-detected region image, and entering S11.
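The S11 to S13 loop is a greedy select-and-remove procedure. A hedged sketch, operating on precomputed (score, pose) candidates instead of re-running template matching on the erased image:

```python
def detect_all_vehicles(candidates, score_threshold):
    """Greedy detection loop of S11-S13 over precomputed candidates.

    candidates: list of (score, pose) tuples from template matching, where
    pose stands for the recorded position, direction and scale. In the
    patent the accepted candidate is also erased from the image before
    re-matching; here we simply drop it from the list (illustrative only).
    """
    remaining = list(candidates)
    tracked = []
    while remaining:
        best = max(remaining, key=lambda c: c[0])   # S12: highest score
        if best[0] <= score_threshold:
            break                                   # S12: detection finished
        tracked.append(best[1])                     # S13: record the pose
        remaining.remove(best)                      # S13: remove, back to S11
    return tracked
```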
Optionally, the determining the detection scores of all the vehicle candidates includes:
filtering by changing the position, the direction and the scale of the mixed image template corresponding to a target vehicle component in the trained vehicle detection model, and obtaining all scores of the target vehicle component according to a component score calculation formula; the formula is rendered only as an image in the original, and a plausible form reconstructed from the variable definitions is:

SUM_LPAR_k(x, y, o, s) = Σ_j (β_{k,j} · MAX_RESPONSE(τ_{x,y,o,s}(x_j, y_j, o_j, s_j)) − Z_{k,j})

wherein (x_j, y_j, o_j, s_j) represents the template after moving the target vehicle component to position (x_j, y_j), changing its direction to o_j and transforming its scale to s_j; τ_{x,y,o,s}(x_j, y_j, o_j, s_j) represents the transformed coordinates of the corresponding feature in the mixed image template of the target vehicle component; MAX_RESPONSE() represents the local-area maximum feature response value vector; β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the k-th large feature response area; Z_{k,j} represents the independent standard constant corresponding to β_{k,j}; and SUM_LPAR_k() represents the score of the target vehicle component;
generating component score vectors by using all the scores, and constructing a region detection model by using all the component score vectors;
changing the position, the direction and the scale over the current frame to-be-detected region image through the region detection model, calculating the region detection score according to a region detection formula, and determining the detection score of the current vehicle candidate; the region detection formula and the symbol of the region detection model are rendered only as images in the original; wherein r_g represents the score vector of the g-th target vehicle component, and SUM_DETECT() represents the region detection score.
Optionally, after the detecting all vehicle tracking objects in the current frame image of the area to be detected by using the trained vehicle detection model, the method further includes:
and carrying out gray processing on the current frame image of the area to be detected, and carrying out rapid median filtering on the processed image.
Optionally, the performing the graying processing on the image of the current frame to be detected includes:
determining the format of the current frame to-be-detected region image;
when the current frame to-be-detected region image is in a YUV format, directly extracting a value of a Y component in the current frame to-be-detected region image, and taking the value of the Y component as a gray value;
when the image of the current frame to-be-detected area is in an RGB format, calculating the gray values of all pixels in the image of the current frame to-be-detected area by using a preset gray formula; wherein the preset graying formula is as follows:
GrayValue=(306×R+601×G+117×B)>>10;
wherein, R represents the pixel value of the R channel in the current frame to-be-detected region image, G represents the pixel value of the G channel in the current frame to-be-detected region image, B represents the pixel value of the B channel in the current frame to-be-detected region image, and GrayValue represents the gray value.
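The integer graying formula can be checked directly; 306/1024, 601/1024 and 117/1024 approximate the standard luma weights 0.299, 0.587 and 0.114, so the result needs no floating-point arithmetic.

```python
def gray_value(r, g, b):
    """Gray value per the formula above: (306*R + 601*G + 117*B) >> 10.

    306 + 601 + 117 = 1024, so the weights sum to exactly 1 after the
    10-bit right shift and white (255, 255, 255) maps to 255.
    """
    return (306 * r + 601 * g + 117 * b) >> 10
```

For YUV input, as the text notes, no computation is needed: the Y component is taken as the gray value directly.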
Optionally, the purifying the matched feature descriptors includes:
and purifying the matched feature descriptors by using consistency constraint of adjacent feature points.
Optionally, the method further includes:
determining the tone value of the normally visible area of the vehicle and the tone value of the easily sheltered area of the vehicle in the image of the vehicle to be tracked;
determining a preset hue value range corresponding to each area according to a range determination formula by using the hue value of the normally visible area of the vehicle and the hue value of the easily-sheltered area of the vehicle; wherein the range determination formula is:
D1∈[T1-10°,T1+10°],
D2∈[T2-10°,T2+10°];
wherein, T1 represents the tone value of the normally visible area of the vehicle, T2 represents the tone value of the easily-shielded area of the vehicle, D1 represents the preset tone value range of the normally visible area of the vehicle, and D2 represents the preset tone value range of the easily-shielded area of the vehicle.
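A small sketch of the range determination and membership test; the wrap-around handling at 0°/360° is an assumption, since the patent states only the plus/minus 10 degree interval.

```python
def hue_range(t_deg, margin=10):
    """Preset tone value range D = [T - 10 deg, T + 10 deg] around T."""
    return ((t_deg - margin) % 360, (t_deg + margin) % 360)

def hue_in_range(h_deg, rng):
    """Is tone value h inside the preset range? Handles intervals that
    wrap past 0 deg (an assumption not spelled out in the patent)."""
    low, high = rng
    h = h_deg % 360
    if low <= high:
        return low <= h <= high
    return h >= low or h <= high
```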
In a second aspect, the present invention discloses a vehicle detecting and tracking device, comprising:
the system comprises a first extraction module, a second extraction module and a third extraction module, wherein the first extraction module is used for determining a vehicle image to be tracked and extracting a first feature descriptor of the vehicle image to be tracked;
the first detection module is used for acquiring images of the area to be detected which are arranged according to a time sequence and detecting all vehicle tracking objects in the image of the current frame area to be detected by using the trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle;
the second extraction module is used for extracting a second feature descriptor of an image area corresponding to the target vehicle tracking object in all the vehicle tracking objects;
the feature matching module is used for performing locality sensitive hash matching on the first feature descriptor and the second feature descriptor and purifying the matched feature descriptors;
the number judgment module is used for determining the number of the purified feature descriptors and judging whether the number is greater than a preset threshold value;
the vehicle tracking module is used for tracking by using the position, the direction and the scale of the target vehicle tracking object when the regional hue value in the image region corresponding to the target vehicle tracking object is within a preset hue value range if the number is larger than a preset threshold value;
and the second detection module is used for detecting all vehicle tracking objects in the next frame of image of the area to be detected if the number is less than the preset threshold value.
In a third aspect, the present invention discloses a vehicle detection and tracking device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the vehicle detection and tracking method disclosed in the foregoing when executing the computer program.
In a fourth aspect, the present invention discloses a computer readable storage medium for storing a computer program, which when executed by a processor, performs the steps of the vehicle detection and tracking method disclosed above.
Therefore, the vehicle image to be tracked is determined, and the first feature descriptor of the vehicle image to be tracked is extracted; images of the region to be detected arranged in time sequence are acquired, and all vehicle tracking objects in the current frame to-be-detected region image are detected by using the trained vehicle detection model; the trained vehicle detection model is a model containing the component features of the normally visible region of the vehicle and the component features of the easily-sheltered region of the vehicle; a second feature descriptor of the image area corresponding to a target vehicle tracking object among all the vehicle tracking objects is extracted; locality-sensitive hash matching is performed on the first feature descriptor and the second feature descriptor, and the matched feature descriptors are purified; the number of the purified feature descriptors is determined, and whether the number is greater than a preset threshold value is judged; if so, when the region tone value in the image region corresponding to the target vehicle tracking object is within the preset tone value range, tracking is performed by using the position, the direction and the scale of the target vehicle tracking object; if not, all vehicle tracking objects in the next frame image of the area to be detected are detected. That is, the invention detects the image of the area to be detected by using the model containing both the normally visible region and the easily-sheltered region, and further locates the vehicle to be tracked in the area to be detected through locality-sensitive hash matching, thereby realizing vehicle tracking, effectively avoiding missed vehicle detections caused by occlusion, and improving the tracking accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of one embodiment of a vehicle detection and tracking method provided by the present invention;
FIGS. 2a to 2c are schematic diagrams illustrating the consistency constraint of included angles between neighboring feature points according to an embodiment of the vehicle detecting and tracking method provided by the present invention;
FIG. 3 is a flowchart of a vehicle detection model training process in an embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 4 is a flowchart of a model building process in one embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 5 is a schematic diagram of vehicle component features in one embodiment of a vehicle detection and tracking method provided by the present invention;
FIG. 6 is a flow chart of a pre-defined vehicle detection model training process in an embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 7 is a schematic diagram of a large signature response region in one embodiment of a vehicle detection and tracking method provided by the present invention;
FIG. 8 is a diagram of a topology structure of a component model in one embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 9 is a flowchart illustrating a method for detecting an image of an area to be detected according to an embodiment of the present invention;
FIG. 10 is a flowchart illustrating a graying process for an image according to an embodiment of the vehicle detecting and tracking method provided by the present invention;
FIG. 11 is a flowchart of determining a vehicle candidate score according to one embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 12 is a flowchart of a preset hue value range determination process in one embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 13 is a block diagram of a vehicle detection and tracking device provided by the present invention;
FIG. 14 is a flow chart of a vehicle detection and tracking process in the vehicle detection and tracking device provided by the present invention;
fig. 15 is a schematic diagram of a hardware structure of the vehicle detection and tracking device provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Due to the complex imaging conditions in real scenes, vehicle detection and tracking face many difficulties, among which the occlusion problem is particularly prominent. The presence of multiple targets in a complex road environment is a main cause of mutual occlusion among vehicles; target information lost to occlusion easily leads to missed targets and tracking loss.
The embodiment of the invention discloses a vehicle detection and tracking method, which is shown in figure 1 and comprises the following steps:
step S101: determining a vehicle image to be tracked, and extracting a first feature descriptor of the vehicle image to be tracked;
in this embodiment, an image of a vehicle to be tracked is first determined. Specifically, the embodiment selects the vehicle image to be tracked from the vehicle database to be tracked, and extracts the first feature descriptor of the vehicle image to be tracked. Preferably, the method for extracting the ORB feature descriptors of the image has a fast extraction speed, and further improves the detection rate.
Step S102: acquiring images of a region to be detected which are arranged according to a time sequence, and detecting all vehicle tracking objects in the image of the current region to be detected by using a trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle;
specifically, in this embodiment, a video to be processed is first acquired, and sampling is performed according to a time sequence to acquire the image of the region to be detected. For example, this embodiment samples the video to be processed every 5 seconds to obtain a sequence image arranged in time sequence.
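Sampling a video on a fixed time grid reduces to picking frame indices; a minimal sketch, where the frame rate and interval are parameters and the 5-second interval above is just the example value.

```python
def sample_frame_indices(fps, interval_s, total_frames):
    """Frame indices for sampling a video every `interval_s` seconds,
    producing the time-ordered sequence of to-be-detected region images."""
    step = max(1, int(round(fps * interval_s)))
    return list(range(0, total_frames, step))
```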
Further, this embodiment detects all vehicle tracking objects in the current frame to-be-detected region image by using the trained vehicle detection model. The trained vehicle model contains the component features of both the normally visible region and the easily-sheltered region of the vehicle, so detecting vehicle tracking objects with it can overcome the missed detections caused by mutual occlusion between vehicles.
Step S103: extracting a second feature descriptor of an image area corresponding to a target vehicle tracking object in all the vehicle tracking objects;
it is understood that the target vehicle tracking object is determined among all the vehicle tracking objects, and the second feature descriptor of the image area corresponding to the target vehicle tracking object is extracted.
Step S104: carrying out local sensitive Hash matching on the first feature descriptor and the second feature descriptor, and purifying the matched feature descriptors;
in this embodiment, the first feature descriptor and the second feature descriptor are subjected to local sensitive hash matching, and the matched feature descriptors are further purified by using consistency constraint of adjacent feature points, as shown in fig. 2a to fig. 2c, fig. 2a shows an included angle of adjacent feature points of a template image, fig. 2b shows an included angle of adjacent feature points of an image to be matched, fig. 2c shows a matched feature point-to-included angle, and a feature descriptor subjected to mismatching can be filtered out through purification processing.
Step S105: determining the number of the purified feature descriptors, and judging whether the number is greater than a preset threshold value;
step S106: if so, when the regional tone value in the image region corresponding to the target vehicle tracking object is within a preset tone value range, tracking by using the position, the direction and the scale of the target vehicle tracking object;
it should be noted that, after the matching and the purification, the number of the feature descriptors successfully matched is determined, and whether the number of the feature descriptors is greater than a preset threshold value is judged. And if the number of the successfully matched feature descriptors is larger than a preset threshold value, indicating that the currently matched target vehicle tracking object and the vehicle to be tracked are the same vehicle.
And further judging whether the hue value of the area in the image area corresponding to the target vehicle tracking object is within a preset hue value range, if so, indicating that the target vehicle tracking object and the vehicle to be tracked are the same vehicle, and tracking by using the position, the direction and the scale of the target vehicle tracking object. If not, the target vehicle tracking object is not the vehicle to be tracked, and the remaining tracking objects except the target vehicle tracking object in all the vehicle tracking objects in the current frame area image to be detected are further matched.
Step S107: and if not, detecting all vehicle tracking objects in the next frame of image of the area to be detected.
It can be understood that if the number of feature descriptors successfully matched is less than the preset threshold, all vehicle tracking objects in the area to be detected in the next frame are continuously detected, and the positions of the vehicles to be tracked are further determined in a matching manner, so that vehicle tracking is realized.
Therefore, the vehicle image to be tracked is determined, and the first feature descriptor of the vehicle image to be tracked is extracted; images of the region to be detected arranged in time sequence are acquired, and all vehicle tracking objects in the current frame to-be-detected region image are detected by using the trained vehicle detection model; the trained vehicle detection model is a model containing the component features of the normally visible region of the vehicle and the component features of the easily-sheltered region of the vehicle; a second feature descriptor of the image area corresponding to a target vehicle tracking object among all the vehicle tracking objects is extracted; locality-sensitive hash matching is performed on the first feature descriptor and the second feature descriptor, and the matched feature descriptors are purified; the number of the purified feature descriptors is determined, and whether the number is greater than a preset threshold value is judged; if so, when the region tone value in the image region corresponding to the target vehicle tracking object is within the preset tone value range, tracking is performed by using the position, the direction and the scale of the target vehicle tracking object; if not, all vehicle tracking objects in the next frame image of the area to be detected are detected. That is, the invention detects the image of the area to be detected by using the model containing both the normally visible region and the easily-sheltered region, and further locates the vehicle to be tracked in the area to be detected through locality-sensitive hash matching, thereby realizing vehicle tracking, effectively avoiding missed vehicle detections caused by occlusion, and improving the tracking accuracy.
In a specific embodiment of the vehicle detection and tracking method provided by the present invention, a training process for a vehicle detection model is further described, as shown in fig. 3, the training process specifically includes:
step S201: acquiring a preset vehicle detection model;
in this embodiment, a preset vehicle detection model is first obtained.
Further, the embodiment further explains a process of constructing the preset vehicle prediction model, and as shown in fig. 4, the process includes:
step S2011: dividing a vehicle object into a vehicle normally visible area and a vehicle easily-sheltered area;
it can be understood that the license plate and headlight areas of a vehicle usually carry rich visual information, but in a complex traffic environment these areas are often occluded, so they are divided into the easily-shielded region. In contrast to the license plate area, the roof and front windshield areas are generally visible: even under traffic congestion, with heavy occlusion between vehicles, this region can still be seen, so it is divided into the normally visible region.
Step S2012: and constructing a preset vehicle detection model comprising a vehicle normally visible region component and a vehicle easily-sheltered region component by utilizing a mixed image template comprising different types of features.
In this embodiment, after dividing the vehicle object into two regions, modeling is performed by using a mixed image template containing different types of features, where the different types of features may include: edge, texture, smoothness. A representation of the features of the modeled vehicle components is shown in fig. 5.
Step S202: and training the preset vehicle detection model by using the image training sample, and learning the scale and position relationship among the vehicle components in the preset vehicle detection model to obtain the trained vehicle detection model containing the component characteristics of the normally visible region of the vehicle and the component characteristics of the easily-shielded region of the vehicle.
Specifically, the training process of the preset vehicle detection model is further explained, and as shown in fig. 6, the process includes:
step S2021: generating a characteristic response matrix by using the image training sample; wherein each row in the feature response matrix characterizes a feature response vector of one of the image training samples;
specifically, the number of rows of the feature response matrix equals the number of image training samples, and each row of the matrix represents the feature response vector of one image training sample. Each value in a row is a feature response value computed from the Euclidean distance between an image block and a feature, normalized to between 0 and 1, and represents the likelihood that the feature's prototype appears in the image.
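As a minimal sketch of how such a matrix might be built, assuming a Gaussian mapping exp(-d²/2σ²) from the Euclidean distance d to a response in (0, 1] (the exact normalization is not specified in the source, and the names are hypothetical):

```python
import math

def feature_response(patch, prototype, sigma=1.0):
    """Map the Euclidean distance between an image patch and a feature
    prototype to a response in (0, 1]; distance 0 gives response 1."""
    d = math.sqrt(sum((p - q) ** 2 for p, q in zip(patch, prototype)))
    return math.exp(-(d * d) / (2.0 * sigma * sigma))

def response_matrix(samples, prototypes, sigma=1.0):
    """One row per training sample, one column per feature prototype."""
    return [[feature_response(s, p, sigma) for p in prototypes] for s in samples]
```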
Step S2022: selecting the features of which the response values in all the image training samples in the feature response matrix are larger than a preset threshold value to obtain a large feature response area;
in this embodiment, as shown in fig. 7, the feature regions with high response values shared across all samples are selected from the feature response matrix to form the large feature response regions.
Step S2023: calculating the region scores of all the large feature response regions according to a preset score calculation formula; wherein, the preset score calculation formula is as follows:
Score(B_k) = Σ_{i∈rows(B_k)} Σ_{j∈cols(B_k)} ( β_{k,j} · R_{i,j} − log z_{k,j} );

wherein B_k represents the kth large feature response region, rows() represents the positive samples contained in the large feature response region, cols() represents the features contained in the large feature response region, β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the kth large feature response region, R_{i,j} represents the feature response value in the ith row and jth column, z_{k,j} represents the independent normalization constant corresponding to β_{k,j}, and Score(B_k) represents the score of the large feature response region;
step S2024: according to the region score, determining a target response region of all the large feature response regions, wherein the score is larger than a region score threshold;
it will be appreciated that all the large feature response regions are ranked by their region scores, and regions whose scores fall below the region score threshold are discarded.
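The ranking-and-thresholding step can be sketched as follows (function and parameter names are hypothetical):

```python
def select_target_regions(regions, scores, score_threshold):
    """Rank the large feature response regions by score and keep those
    whose score exceeds the region score threshold."""
    ranked = sorted(zip(regions, scores), key=lambda rs: rs[1], reverse=True)
    return [r for r, s in ranked if s > score_threshold]
```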
Step S2025: and learning the scale and position relation among the vehicle parts by using all the target response areas.
Further, using the target response regions, the organizational structure of the model is reconstructed through graph compression and shared terminal nodes, and the geometric structure of the component model is obtained from the scale and position relationships between each image block and each component, together with rotation transformations. Referring to fig. 8, fig. 8 shows the topology of the model.
In a specific embodiment of the vehicle detection and tracking method provided by the present invention, a process for detecting all vehicle tracking objects in the current frame to-be-detected region image by using the trained vehicle detection model is further described, as shown in fig. 9, the process specifically includes:
s11: performing template matching on the current frame to-be-detected region image based on a mixed image template in the trained vehicle detection model to obtain all vehicle candidates in the current frame to-be-detected region image, and determining detection scores of all the vehicle candidates;
in this embodiment, before all vehicle tracking objects in the current frame region-to-be-detected image are detected by the trained vehicle detection model, the current frame region-to-be-detected image is converted to grayscale, and fast median filtering is applied to the processed image so as to preserve edges while filtering out noise interference.
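For illustration, a plain 3×3 median filter can be sketched as below; this is a simple reference version, not the histogram-based "fast" variant, and the list-of-rows image representation is an assumption:

```python
def median_filter3x3(img):
    """3x3 median filter; img is a list of rows of gray values.
    Border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 window values
    return out
```

Median filtering suppresses impulse noise while keeping edges sharper than an averaging filter would, which is why it is the stated preprocessing choice here.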
Specifically, referring to fig. 10, the process of performing graying processing on the image of the current frame to be detected includes:
step S111: determining the format of the current frame to-be-detected area image;
step S112: when the current frame to-be-detected region image is in a YUV format, directly extracting a value of a Y component in the current frame to-be-detected region image, and taking the value of the Y component as a gray value;
step S113: when the image of the current frame to-be-detected area is in an RGB format, calculating the gray values of all pixels in the image of the current frame to-be-detected area by using a preset gray formula; wherein the preset graying formula is as follows:
GrayValue=(306×R+601×G+117×B)>>10;
wherein, R represents the pixel value of the R channel in the current frame to-be-detected region image, G represents the pixel value of the G channel in the current frame to-be-detected region image, B represents the pixel value of the B channel in the current frame to-be-detected region image, and GrayValue represents the gray value.
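The graying formula above can be sketched in Python. The coefficients 306, 601 and 117 sum to 1024, so the right shift by 10 makes this an integer fixed-point approximation of the usual 0.299/0.587/0.114 luminance weights:

```python
def gray_value(r, g, b):
    """Integer grayscale per the formula GrayValue = (306*R + 601*G + 117*B) >> 10."""
    return (306 * r + 601 * g + 117 * b) >> 10

def to_gray(rgb_img):
    """Apply the formula to a list-of-rows RGB image; for a YUV image the
    Y component would instead be taken directly as the gray value."""
    return [[gray_value(*px) for px in row] for row in rgb_img]
```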
In this embodiment, a Gabor filter is first applied to the current frame region-to-be-detected image to obtain an edge image carrying target direction features. Then, based on the mixed image template in the trained vehicle detection model, filtering is performed while locally varying the position, direction and scale of the mixed image template corresponding to each component under the model, yielding a score for that component, and an optimal region detection model is constructed from the component scores. The region detection model is then transformed in position, direction and scale over the image to obtain the detection score of the current vehicle candidate.
S12: determining the optimal vehicle candidate with the highest detection score in all the vehicle candidates, judging whether the detection score of the optimal vehicle candidate is larger than a detection score threshold value, and if so, entering S13; if not, the detection is finished;
in this embodiment, the vehicle with the highest score among all the vehicle candidate scores is determined as the optimal vehicle candidate, and it is further determined whether the current detection score is greater than the detection score threshold. If yes, go to step S13; if not, the detection is finished.
S13: determining the optimal vehicle candidate as a vehicle tracking object, recording the position, the direction and the scale of the optimal vehicle candidate, removing the optimal vehicle candidate from the current frame to-be-detected region image, taking the image without the optimal vehicle candidate as the current frame to-be-detected region image, and entering S11.
It is understood that the optimal vehicle candidate is determined as the vehicle tracking object, and the position, direction and scale of the current candidate are recorded to enable tracking of the current optimal vehicle candidate using the position, direction and scale. And further removing the optimal vehicle candidate from the current frame to-be-detected region image, and iteratively detecting all vehicle tracking objects in the current frame to-be-detected region image.
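The greedy detect-and-remove loop of steps S11 to S13 can be sketched as follows, with `detect_best` and `remove` as hypothetical stand-ins for the template-matching model:

```python
def detect_all_vehicles(image, detect_best, remove, score_threshold):
    """Greedy detect-and-remove loop sketching steps S11-S13.
    detect_best(image) -> (candidate, score) for the best remaining
    candidate (or (None, 0.0) if none); remove(image, candidate) ->
    the image with that candidate erased."""
    tracked = []
    while True:
        candidate, score = detect_best(image)       # S11/S12: best candidate
        if candidate is None or score <= score_threshold:
            break                                   # S12: score too low, stop
        tracked.append(candidate)                   # S13: record the object
        image = remove(image, candidate)            # S13: erase and repeat
    return tracked
```

Erasing each accepted candidate before re-detecting is what lets the loop enumerate every vehicle in the frame instead of repeatedly returning the single highest-scoring one.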
In one embodiment of the vehicle detection and tracking method provided by the present invention, the process for determining the detection scores of all the vehicle candidates is further described, as shown in fig. 11, the process specifically includes:
step S401: filtering by changing the position, the direction and the scale of a mixed image template corresponding to a target vehicle component in the trained vehicle detection model, and obtaining all scores of the target vehicle component according to a component score calculation formula; wherein the component score calculation formula is:
SUM_LPAR_k() = Σ_{j∈cols(B_k)} ( β_{k,j} · MAX_RESPONSE(τ_{x,y,o,s}(x_j, y_j, o_j, s_j)) − log Z_{k,j} );

wherein (x_j, y_j, o_j, s_j) represents shifting the position (x_j, y_j), changing the direction o_j and transforming the scale s_j of the template of the target vehicle component, τ_{x,y,o,s}(x_j, y_j, o_j, s_j) represents the transformed (x_j, y_j, o_j, s_j) of the corresponding feature in the mixed image template of the target vehicle component, MAX_RESPONSE() represents the local-area maximum feature response value vector, β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the kth large feature response region, Z_{k,j} represents the independent normalization constant corresponding to β_{k,j}, and SUM_LPAR_k() represents the score of the target vehicle component;
step S402: generating component score vectors by using all the scores, and constructing a region detection model by using all the component score vectors;
specifically, all the scores are collected to generate the component score vectors, from which the region detection model is further inferred.
Step S403: changing the position, the direction and the scale on the image of the current frame to be detected region through the region detection model, calculating according to a region detection formula to obtain a region detection score, and determining the detection score of the current vehicle candidate; wherein the region detection formula is:
SUM_DETECT() = Σ_{g=1}^{G} ω_g · r_g ;

wherein ω = (ω_1, …, ω_G) represents the region detection model, r_g represents the score vector of the g-th target vehicle component, and SUM_DETECT() represents the region detection score.
In this embodiment, the position, the direction, and the scale of the region detection model are transformed on the current frame to-be-detected region image, so as to obtain a region detection score. And further calculating a global highest score according to the region detection score to obtain a detection score of the current vehicle candidate.
In an embodiment of the vehicle detection and tracking method provided by the present invention, a process for determining the preset hue value range is further described, and as shown in fig. 12, the process specifically includes:
step S501: determining the tone value of the normally visible area of the vehicle and the tone value of the easily sheltered area of the vehicle in the image of the vehicle to be tracked;
specifically, the present embodiment performs histogram statistics of hue values on the vehicle normally visible region and the vehicle easy-to-occlude region, and takes a peak value of the histogram as a hue value of the current region.
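The histogram-peak computation can be sketched as below; the 10° bin width is an assumption chosen to match the ±10° tolerance of the range formula, and the function name is hypothetical:

```python
def hue_peak(hues, bin_width=10):
    """Histogram the hue values (degrees, 0-359) into fixed-width bins and
    return the center of the tallest bin as the region's hue value."""
    bins = {}
    for h in hues:
        b = (int(h) % 360) // bin_width
        bins[b] = bins.get(b, 0) + 1
    peak_bin = max(bins, key=bins.get)
    return peak_bin * bin_width + bin_width // 2
```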
Step S502: determining a preset hue value range corresponding to each area according to a range determination formula by using the hue value of the normally visible area of the vehicle and the hue value of the easily-sheltered area of the vehicle; wherein the range determination formula is:
D1∈[T1-10°,T1+10°],
D2∈[T2-10°,T2+10°];
wherein, T1 represents the tone value of the normally visible area of the vehicle, T2 represents the tone value of the easily-shielded area of the vehicle, D1 represents the preset tone value range of the normally visible area of the vehicle, and D2 represents the preset tone value range of the easily-shielded area of the vehicle.
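A minimal sketch of the range formula and membership test follows. The wrap-around at 360° is an added assumption, since hue is circular and the source formula does not address values near 0°:

```python
def hue_range(t, tol=10):
    """Preset hue value range [T - 10deg, T + 10deg] per the formula above."""
    return (t - tol, t + tol)

def in_hue_range(h, t, tol=10):
    """Membership test with wrap-around at 360 degrees (circular hue)."""
    diff = abs((h - t + 180) % 360 - 180)  # shortest angular distance
    return diff <= tol
```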
In the following, the vehicle detecting and tracking device provided by the embodiment of the present invention is described, and the vehicle detecting and tracking device described below and the vehicle detecting and tracking method described above may be referred to correspondingly.
Fig. 13 is a block diagram of a vehicle detecting and tracking device according to an embodiment of the present invention, and referring to fig. 13, the vehicle detecting and tracking device may include:
the first extraction module 100 is configured to determine a vehicle image to be tracked, and extract a first feature descriptor of the vehicle image to be tracked;
the first detection module 200 is configured to acquire images of a to-be-detected region arranged in a time sequence, and detect all vehicle tracking objects in the current frame image of the to-be-detected region by using a trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-shielded region of the vehicle;
the second extraction module 300 is configured to extract a second feature descriptor of an image region corresponding to a target vehicle tracking object from among all the vehicle tracking objects;
a feature matching module 400, configured to perform locality sensitive hash matching on the first feature descriptor and the second feature descriptor, and purify the matched feature descriptors;
a number judgment module 500, configured to determine the number of the purified feature descriptors and judge whether the number is greater than a preset threshold;
the vehicle tracking module 600 is configured to, if the number is greater than a preset threshold, perform tracking by using a position, a direction, and a scale of the target vehicle tracking object when a hue value of an area in an image area corresponding to the target vehicle tracking object is within a preset hue value range;
the second detecting module 700 is configured to detect all vehicle tracking objects in the next frame of image of the area to be detected if the number is smaller than the preset threshold.
The vehicle detecting and tracking device of this embodiment is used to implement the vehicle detecting and tracking method, so the specific implementation of the vehicle detecting and tracking device can be found in the foregoing embodiment of the vehicle detecting and tracking method, and will not be described again here.
Further, the embodiment of the present invention also discloses a vehicle detection and tracking device, which includes a memory 11 and a processor 12, wherein the memory 11 is used for storing a computer program, and the processor 12 is used for implementing the steps of the vehicle detection and tracking method disclosed in the foregoing when executing the computer program.
The process by which the vehicle detection and tracking device of this embodiment detects and tracks a vehicle is shown in fig. 14. First, training of the component model is completed before detection, and the vehicle image to be tracked is acquired. The region-to-be-detected image is preprocessed and input into the component model for detection. ORB feature descriptors of the vehicle image to be tracked and of the region-to-be-detected image are extracted, locality-sensitive hash matching is performed, and the matched descriptors are purified. Whether the number of matched and purified feature descriptors reaches the threshold is then judged; if so, the region hue values of the vehicle image to be tracked and the region-to-be-detected image are compared, and if they also agree, tracking succeeds and detection continues with the next frame image.
Further, referring to fig. 15, the vehicle detecting and tracking device in the present embodiment may further include:
the input interface 13 is configured to acquire a computer program imported from the outside and store it in the memory 11, and is also configured to acquire various instructions and parameters transmitted by an external terminal device and transmit them to the processor 12, so that the processor 12 performs corresponding processing using the instructions and parameters. In this embodiment, the input interface 13 may specifically include, but is not limited to, a USB interface, a serial interface, a voice input interface, a fingerprint input interface, a hard disk reading interface, and the like.
And an output interface 14, configured to output various data generated by the processor 12 to a terminal device connected thereto, so that other terminal devices connected to the output interface 14 can acquire the various data generated by the processor 12. In this embodiment, the output interface 14 may specifically include, but is not limited to, a USB interface, a serial interface, and the like.
And a display unit 15 for displaying the data sent by the processor 12.
The communication unit 16 is configured to establish a remote communication connection with an external server, acquire data sent by an external terminal, and send the data to the processor 12 for processing and analysis; in addition, the processor 12 may also send the various results obtained after processing to preset data receiving ends through the communication unit 16. In this embodiment, the communication technology adopted by the communication unit 16 may be a wired or wireless communication technology, such as Universal Serial Bus (USB), wireless fidelity (WiFi), Bluetooth, or Bluetooth Low Energy (BLE). Additionally, the communication unit 16 may embody a cellular radio transceiver operating in accordance with wideband code division multiple access (W-CDMA), Long Term Evolution (LTE), and similar standards.
Further, the embodiment of the present invention also discloses a computer readable storage medium, which is used for storing a computer program, and the computer program is executed by a processor to execute the steps of the vehicle detection and tracking method disclosed in the foregoing.
According to the method, the model containing the normally visible region and the easily-shielded region is used to detect the region-to-be-detected image, and the vehicle to be tracked within the region to be detected is further identified through locality-sensitive hash matching, thereby realizing vehicle tracking, effectively avoiding missed vehicle detections caused by occlusion, and improving tracking accuracy.
In the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same or similar parts between the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The above detailed description is provided for the vehicle detecting and tracking method, apparatus, device and storage medium, and the specific examples are applied herein to explain the principles and embodiments of the present invention, and the descriptions of the above embodiments are only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (14)

1. A vehicle detection tracking method, comprising:
determining a vehicle image to be tracked, and extracting a first feature descriptor of the vehicle image to be tracked;
acquiring images of a to-be-detected area which are arranged in a time sequence, and detecting all vehicle tracking objects in the current frame of the to-be-detected area by using a trained vehicle detection model; the trained vehicle detection model is a model comprising vehicle visible region component characteristics and vehicle easily-sheltered region component characteristics;
extracting a second feature descriptor of an image area corresponding to a target vehicle tracking object in all the vehicle tracking objects;
performing locality sensitive hash matching on the first feature descriptor and the second feature descriptor, and purifying the matched feature descriptors;
determining the number of the purified feature descriptors, and judging whether the number is greater than a preset threshold value;
if so, when the regional tone value in the image region corresponding to the target vehicle tracking object is within a preset tone value range, tracking by using the position, the direction and the scale of the target vehicle tracking object;
if not, detecting all vehicle tracking objects in the next frame of image of the area to be detected;
the method for detecting all vehicle tracking objects in the current frame to-be-detected region image by using the trained vehicle detection model comprises the following steps:
s11: performing template matching on the current frame to-be-detected region image based on a mixed image template in the trained vehicle detection model to obtain all vehicle candidates in the current frame to-be-detected region image, and determining detection scores of all the vehicle candidates;
s12: determining the optimal vehicle candidate with the highest detection score in all the vehicle candidates, judging whether the detection score of the optimal vehicle candidate is larger than a detection score threshold value, and if so, entering S13; if not, the detection is finished;
s13: determining the optimal vehicle candidate as a vehicle tracking object, recording the position, the direction and the scale of the optimal vehicle candidate, removing the optimal vehicle candidate from the current frame to-be-detected region image, taking the image without the optimal vehicle candidate as the current frame to-be-detected region image, and entering S11.
2. The vehicle detection and tracking method according to claim 1, wherein the determining an image of the vehicle to be tracked comprises:
and selecting the image of the vehicle to be tracked from a database of vehicles to be tracked.
3. The vehicle detecting and tracking method according to claim 1, wherein the acquiring images of the region to be detected arranged in time sequence comprises:
and acquiring a video to be processed, and sampling according to a time sequence to acquire the image of the area to be detected.
4. The vehicle detection and tracking method according to claim 1, further comprising:
acquiring a preset vehicle detection model;
and training the preset vehicle detection model by using an image training sample, and learning the scale and position relationship among the vehicle components in the preset vehicle detection model to obtain a trained vehicle detection model containing the component characteristics of the vehicle visible region and the component characteristics of the vehicle easily-shielded region.
5. The vehicle detection and tracking method according to claim 4, wherein the obtaining of the preset vehicle detection model comprises:
dividing a vehicle object into a vehicle visible area and a vehicle easily-sheltered area;
and constructing a preset vehicle detection model comprising the vehicle visible area component and the vehicle easily-sheltered area component by utilizing a mixed image template comprising different types of features.
6. The vehicle detection and tracking method according to claim 4, wherein the training of the preset vehicle detection model by using the image training samples and the learning of the scale and position relationship between vehicle components in the preset vehicle detection model comprises:
generating a characteristic response matrix by using the image training sample; wherein each row in the feature response matrix represents a feature response vector of one of the image training samples;
selecting the features of which the response values in all the image training samples in the feature response matrix are larger than a preset threshold value to obtain a large feature response area;
calculating the region scores of all the large feature response regions according to a preset score calculation formula; wherein, the preset score calculation formula is as follows:
Score(B_k) = Σ_{i∈rows(B_k)} Σ_{j∈cols(B_k)} ( β_{k,j} · R_{i,j} − log z_{k,j} );

wherein B_k represents the kth large feature response region, rows() represents the positive samples contained in the large feature response region, cols() represents the features contained in the large feature response region, β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the kth large feature response region, R_{i,j} represents the feature response value in the ith row and jth column, z_{k,j} represents the independent normalization constant corresponding to β_{k,j}, and Score(B_k) represents the score of the large feature response region;
according to the region score, determining a target response region of all the large feature response regions, wherein the score is larger than a region score threshold;
and learning the scale and position relation among the vehicle parts by using all the target response areas.
7. The vehicle detection tracking method of claim 1, wherein the determining the detection scores of all vehicle candidates comprises:
filtering by changing the position, the direction and the scale of a mixed image template corresponding to a target vehicle component in the trained vehicle detection model, and obtaining all scores of the target vehicle component according to a component score calculation formula; wherein the component score calculation formula is:
SUM_LPAR_k() = Σ_{j∈cols(B_k)} ( β_{k,j} · MAX_RESPONSE(τ_{x,y,o,s}(x_j, y_j, o_j, s_j)) − log Z_{k,j} );

wherein (x_j, y_j, o_j, s_j) represents shifting the position (x_j, y_j), changing the direction o_j and transforming the scale s_j of the template of the target vehicle component, τ_{x,y,o,s}(x_j, y_j, o_j, s_j) represents the transformed (x_j, y_j, o_j, s_j) of the corresponding feature in the mixed image template of the target vehicle component, MAX_RESPONSE() represents the local-area maximum feature response value vector, β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the kth large feature response region, Z_{k,j} represents the independent normalization constant corresponding to β_{k,j}, and SUM_LPAR_k() represents the score of the target vehicle component;
generating component score vectors by using all the scores, and constructing a region detection model by using all the component score vectors;
changing the position, the direction and the scale on the current frame to-be-detected region image through the region detection model, calculating according to a region detection formula to obtain a region detection score, and determining the detection score of the current vehicle candidate; wherein the region detection formula is:
SUM_DETECT() = Σ_{g=1}^{G} ω_g · r_g ;

wherein ω = (ω_1, …, ω_G) represents the region detection model, r_g represents the score vector of the g-th target vehicle component, and SUM_DETECT() represents the region detection score.
8. The vehicle detecting and tracking method according to claim 1, wherein before detecting all vehicle tracking objects in the current frame image of the region to be detected by using the trained vehicle detection model, the method further comprises:
and carrying out gray processing on the image of the current frame to be detected region, and carrying out rapid median filtering on the processed image.
9. The vehicle detecting and tracking method according to claim 8, wherein the graying the image of the current frame to be detected comprises:
determining the format of the current frame to-be-detected region image;
when the current frame to-be-detected region image is in a YUV format, directly extracting a value of a Y component in the current frame to-be-detected region image, and taking the value of the Y component as a gray value;
when the image of the current frame to-be-detected area is in an RGB format, calculating the gray values of all pixels in the image of the current frame to-be-detected area by using a preset gray formula; wherein the preset graying formula is as follows:
GrayValue=(306×R+601×G+117×B)>>10;
wherein, R represents the pixel value of the R channel in the current frame to-be-detected region image, G represents the pixel value of the G channel in the current frame to-be-detected region image, B represents the pixel value of the B channel in the current frame to-be-detected region image, and GrayValue represents the gray value.
10. The vehicle detection and tracking method according to claim 1, wherein purifying the matched feature descriptors comprises:
purifying the matched feature descriptors by using a consistency constraint on adjacent feature points.
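Claim 10 does not detail the consistency constraint itself; one common reading is to keep a match only when its nearest matched neighbours exhibit a similar displacement between the two images. The sketch below assumes that reading; `k` and `tol` are illustrative parameters, not values from the patent.

```python
import math

def purify_matches(matches, k=3, tol=5.0):
    """Keep a match only if the displacement vectors of its k nearest
    matched neighbours agree with its own displacement within tol.
    Each match is ((x1, y1), (x2, y2)): a feature point in the vehicle
    image to be tracked and its matched point in the search image.
    NOTE: k and tol are illustrative, not taken from the patent."""
    def disp(m):
        (x1, y1), (x2, y2) = m
        return (x2 - x1, y2 - y1)

    kept = []
    for m in matches:
        (mx, my), _ = m
        dx, dy = disp(m)
        # k nearest neighbouring matches, by source-point distance
        neighbours = sorted(
            (o for o in matches if o is not m),
            key=lambda o: math.hypot(o[0][0] - mx, o[0][1] - my))[:k]
        if all(math.hypot(disp(o)[0] - dx, disp(o)[1] - dy) <= tol
               for o in neighbours):
            kept.append(m)
    return kept
```

A match whose displacement disagrees with its neighbours (an outlier correspondence) is discarded, while a cluster of mutually consistent matches survives.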
11. The vehicle detection and tracking method according to any one of claims 1 to 10, characterized by further comprising:
determining a hue value of the vehicle visible region and a hue value of the vehicle easily-occluded region in the vehicle image to be tracked;
determining the preset hue value range corresponding to each region from the hue value of the vehicle visible region and the hue value of the vehicle easily-occluded region according to a range determination formula; wherein the range determination formula is:
D1∈[T1-10°,T1+10°],
D2∈[T2-10°,T2+10°];
wherein T1 represents the hue value of the vehicle visible region, T2 represents the hue value of the vehicle easily-occluded region, D1 represents the preset hue value range of the vehicle visible region, and D2 represents the preset hue value range of the vehicle easily-occluded region.
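The range-determination formula in claim 11 maps each measured hue value T to a ±10° window. A direct transcription, ignoring wrap-around at the 0°/360° boundary (which the claim does not address):

```python
def hue_ranges(t1, t2, delta=10):
    """Build the preset hue ranges from claim 11:
    D1 = [T1 - 10°, T1 + 10°], D2 = [T2 - 10°, T2 + 10°].
    t1: hue of the vehicle visible region; t2: hue of the vehicle
    easily-occluded region (both in degrees)."""
    return (t1 - delta, t1 + delta), (t2 - delta, t2 + delta)

def hue_in_range(hue, rng):
    """True if an observed region hue falls inside a preset range.
    A production implementation on the circular hue axis would also
    handle ranges that straddle 0°/360°."""
    lo, hi = rng
    return lo <= hue <= hi
```

During tracking, the region hue of the target vehicle tracking object is checked against these windows before the tracker trusts the match (claim 12's vehicle tracking module).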
12. A vehicle detection and tracking device, comprising:
a first extraction module, which is used for determining a vehicle image to be tracked and extracting a first feature descriptor of the vehicle image to be tracked;
the first detection module is used for acquiring to-be-detected region images arranged in time sequence and detecting all vehicle tracking objects in the current frame to-be-detected region image by using the trained vehicle detection model; wherein the trained vehicle detection model is a model comprising vehicle visible region component features and vehicle easily-occluded region component features;
the second extraction module is used for extracting a second feature descriptor of an image area corresponding to the target vehicle tracking object in all the vehicle tracking objects;
the feature matching module is used for performing locality-sensitive hashing matching on the first feature descriptor and the second feature descriptor and purifying the matched feature descriptors;
the number judgment module is used for determining the number of the purified feature descriptors and judging whether the number is greater than a preset threshold value;
the vehicle tracking module is used for, if the number is greater than the preset threshold value, tracking by using the position, the direction and the scale of the target vehicle tracking object when the region hue value in the image region corresponding to the target vehicle tracking object is within the preset hue value range;
the second detection module is used for detecting all vehicle tracking objects in the next frame of image of the area to be detected if the number is smaller than a preset threshold value;
the first detection module is specifically used for:
S11: performing template matching on the current frame to-be-detected region image based on a mixed image template in the trained vehicle detection model to obtain all vehicle candidates in the current frame to-be-detected region image, and determining the detection scores of all the vehicle candidates;
S12: determining the optimal vehicle candidate with the highest detection score among all the vehicle candidates, and judging whether the detection score of the optimal vehicle candidate is greater than a detection score threshold; if so, entering S13; if not, ending the detection;
S13: determining the optimal vehicle candidate as a vehicle tracking object, recording the position, the direction and the scale of the optimal vehicle candidate, removing the optimal vehicle candidate from the current frame to-be-detected region image, taking the image with the optimal vehicle candidate removed as the current frame to-be-detected region image, and returning to S11.
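The S11–S13 loop above is a greedy detect-and-remove procedure: take the best-scoring candidate, stop when its score falls below the threshold, otherwise record its position, direction and scale and remove it before repeating. A minimal sketch, with the template-matching scores of S11 stubbed as a precomputed list of (score, position, direction, scale) tuples rather than computed from an image:

```python
def detect_vehicles(candidates, score_threshold):
    """Greedy sketch of S11-S13. Each candidate is a tuple
    (score, position, direction, scale); real template matching
    against the mixed image template is stubbed out, so 'removing
    the candidate from the region image' reduces to removing it
    from the candidate pool."""
    tracked = []
    pool = list(candidates)
    while pool:
        best = max(pool, key=lambda c: c[0])  # S12: highest detection score
        if best[0] <= score_threshold:        # S12: below threshold -> stop
            break
        tracked.append(best[1:])              # S13: record position/direction/scale
        pool.remove(best)                     # S13: remove, then loop back to S11
    return tracked
```

In the full method each removal would change the region image and hence the remaining scores; the stubbed pool only illustrates the control flow of the loop.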
13. A vehicle detection and tracking device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the vehicle detection and tracking method according to any one of claims 1 to 11 when executing the computer program.
14. A computer-readable storage medium for storing a computer program which, when executed by a processor, performs the steps of the vehicle detection and tracking method according to any one of claims 1 to 11.
CN201811399560.3A 2018-11-22 2018-11-22 Vehicle detection tracking method, device, equipment and storage medium Active CN109543610B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811399560.3A CN109543610B (en) 2018-11-22 2018-11-22 Vehicle detection tracking method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109543610A CN109543610A (en) 2019-03-29
CN109543610B true CN109543610B (en) 2022-11-11

Family

ID=65850184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811399560.3A Active CN109543610B (en) 2018-11-22 2018-11-22 Vehicle detection tracking method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109543610B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784794B (en) * 2021-01-29 2024-02-02 深圳市捷顺科技实业股份有限公司 Vehicle parking state detection method and device, electronic equipment and storage medium
CN116758494B (en) * 2023-08-23 2023-12-22 深圳市科灵通科技有限公司 Intelligent monitoring method and system for vehicle-mounted video of internet-connected vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159063A (en) * 2007-11-13 2008-04-09 上海龙东光电子有限公司 Hyper complex crosscorrelation and target centre distance weighting combined tracking algorithm
JP2010039617A (en) * 2008-08-01 2010-02-18 Toyota Central R&D Labs Inc Object tracking device and program
CN102867411A (en) * 2012-09-21 2013-01-09 博康智能网络科技股份有限公司 Taxi dispatching method and taxi dispatching system on basis of video monitoring system
CN104463238A (en) * 2014-12-19 2015-03-25 深圳市捷顺科技实业股份有限公司 License plate recognition method and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110026770A1 (en) * 2009-07-31 2011-02-03 Jonathan David Brookshire Person Following Using Histograms of Oriented Gradients
CN101800890B (en) * 2010-04-08 2013-04-24 北京航空航天大学 Multiple vehicle video tracking method in expressway monitoring scene
JP5780979B2 (en) * 2012-02-17 2015-09-16 株式会社東芝 Vehicle state detection device, vehicle behavior detection device, and vehicle state detection method
CN102867416B (en) * 2012-09-13 2014-08-06 中国科学院自动化研究所 Vehicle part feature-based vehicle detection and tracking method
CN103456030B (en) * 2013-09-08 2016-04-13 西安电子科技大学 Based on the method for tracking target of scattering descriptor
US9779331B2 (en) * 2014-04-24 2017-10-03 Conduent Business Services, Llc Method and system for partial occlusion handling in vehicle tracking using deformable parts model
CN105354857B (en) * 2015-12-07 2018-09-21 北京航空航天大学 A kind of track of vehicle matching process for thering is viaduct to block
CN105844669B (en) * 2016-03-28 2018-11-13 华中科技大学 A kind of video object method for real time tracking based on local Hash feature
CN107066968A (en) * 2017-04-12 2017-08-18 湖南源信光电科技股份有限公司 The vehicle-mounted pedestrian detection method of convergence strategy based on target recognition and tracking
CN108765452A (en) * 2018-05-11 2018-11-06 西安天和防务技术股份有限公司 A kind of detection of mobile target in complex background and tracking


Similar Documents

Publication Publication Date Title
CN112085952B (en) Method and device for monitoring vehicle data, computer equipment and storage medium
CN109033950B (en) Vehicle illegal parking detection method based on multi-feature fusion cascade depth model
CN108268867B (en) License plate positioning method and device
CN111860274B (en) Traffic police command gesture recognition method based on head orientation and upper half skeleton characteristics
CN102509291B (en) Pavement disease detecting and recognizing method based on wireless online video sensor
CN111461170A (en) Vehicle image detection method and device, computer equipment and storage medium
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN110502982A (en) The method, apparatus and computer equipment of barrier in a kind of detection highway
CN113569724B (en) Road extraction method and system based on attention mechanism and dilation convolution
CN110826484A (en) Vehicle weight recognition method and device, computer equipment and model training method
CN108710841B (en) Human face living body detection device and method based on MEMs infrared array sensor
CN112784724A (en) Vehicle lane change detection method, device, equipment and storage medium
CN109543610B (en) Vehicle detection tracking method, device, equipment and storage medium
WO2023071024A1 (en) Driving assistance mode switching method, apparatus, and device, and storage medium
CN111627057A (en) Distance measuring method and device and server
CN111753610A (en) Weather identification method and device
CN108304852B (en) Method and device for determining road section type, storage medium and electronic device
CN108509826B (en) Road identification method and system for remote sensing image
CN114005105A (en) Driving behavior detection method and device and electronic equipment
CN114155278A (en) Target tracking and related model training method, related device, equipment and medium
CN112257567B (en) Training of behavior recognition network, behavior recognition method and related equipment
CN115083008A (en) Moving object detection method, device, equipment and storage medium
CN110765940B (en) Target object statistical method and device
CN112784494A (en) Training method of false positive recognition model, target recognition method and device
CN112528994A (en) Free-angle license plate detection method, license plate identification method and identification system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant