CN115661720A - Target tracking and identifying method and system for shielded vehicle - Google Patents
- Publication number: CN115661720A (application CN202211407202.9A)
- Authority: CN (China)
- Prior art keywords: target vehicle, target, vehicle, frame, image data
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a target tracking and identification method and system for an occluded vehicle, belonging to the technical field of image data processing. The method comprises the following steps: building models for data analysis; capturing video data of a target vehicle in motion; dividing the video data into units of video frames; traversing the video data, reading the image data of each frame, analyzing the position of the vehicle in the image data, and acquiring the motion features and appearance visual features of the target vehicle; judging whether the target vehicle is detected in the current frame; if so, continuing to read the next frame of video data; if not, predicting the position of the target vehicle in the current frame based on the acquired motion features of the target vehicle; and summarizing the vehicle position in each frame of image data to obtain the whole-course driving track of the vehicle. By predicting the position of the target vehicle while it is occluded, the driving track the vehicle may follow after being occluded can be judged effectively, reducing loss of the target vehicle.
Description
Technical Field
The invention belongs to the technical field of image data processing, and particularly relates to a target tracking and identifying method and a target tracking and identifying system for an occluded vehicle.
Background
With the trend toward intelligent transportation, tracking a target vehicle in real time can provide effective driving information for traffic control, so vehicle tracking technology occupies a non-negligible position in traffic management. To meet practical vehicle tracking requirements, the prior art commonly analyzes target image data to classify and identify the target object in a picture.
However, in practical applications, complex driving environments often cause the tracked target vehicle to be partially or completely occluded, so that the tracked target is lost and the robustness of real-time tracking is reduced.
Disclosure of Invention
The purpose of the invention is as follows: a target tracking and identification method and system for an occluded vehicle are provided to solve the problems in the prior art. By predicting the occluded position of the target vehicle, the driving track the vehicle may follow after being occluded is judged effectively, so that the tracking loss occurring after the vehicle is occluded is reduced.
The technical scheme is as follows: in a first aspect, a method for tracking and identifying a target of an occluded vehicle is provided, which specifically includes the following steps:
step 1, constructing a target vehicle detection model and a track prediction model for data analysis;
To improve the performance of the target vehicle detection model and the trajectory prediction model, the constructed models first undergo performance training before data analysis is performed.
For the target vehicle detection model, before target vehicle detection is executed, the learning capability of the model is optimized with a classification loss function.
For the trajectory prediction model, before trajectory prediction of the target vehicle is executed, the error between the prediction box and the bounding box of the actual target is judged during training from the Mahalanobis distance between the two boxes and the cosine distance of the appearance features; the parameters of the Kalman filter are then updated and optimized based on the error values.
Step 2, capturing video data of a target vehicle in running through information acquisition equipment;
step 3, dividing the video data in a mode of taking a video frame as a unit;
step 4, the target vehicle detection model reads image data of each frame in the video data in a mode of traversing the video data, analyzes the position of the vehicle in the image data and obtains the motion characteristic and the appearance visual characteristic of the target vehicle;
Step 5, judging whether the target vehicle detection model detects the target vehicle in the current frame; if so, continuing to read the next frame of video data; if not, predicting the position of the target vehicle in the current frame by adopting the trajectory prediction model based on the obtained motion characteristics of the target vehicle;
and 6, summarizing the position of the vehicle in each frame of image data to obtain the whole-course driving track of the vehicle.
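Step 6 can be sketched as collecting the per-frame vehicle position (detected or predicted) into an ordered trajectory. The function and data names below are illustrative, not from the patent.

```python
# Sketch of step 6: merge per-frame vehicle positions into the whole-course
# driving track, ordered by frame index.

def build_trajectory(per_frame_positions):
    """per_frame_positions: dict {frame_index: (x, y)} produced by steps 4-5."""
    return [per_frame_positions[f] for f in sorted(per_frame_positions)]

track = build_trajectory({2: (12.0, 5.0), 0: (10.0, 5.0), 1: (11.0, 5.0)})
print(track)  # [(10.0, 5.0), (11.0, 5.0), (12.0, 5.0)]
```

Frames where the detector failed simply contribute the trajectory-prediction position instead of a detection, so the resulting track has no gaps.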
In some realizations of the first aspect, the process of performing target vehicle detection identification using the target vehicle detection model comprises the steps of:
step 4.1, the target vehicle detection model receives image data corresponding to the current frame;
In order to improve data analysis accuracy, after the image data corresponding to the current frame is obtained, image data captured in special environments is further preprocessed. When the actual driving environment of the target vehicle is a low-light environment, the feature information of the target vehicle is weakened; a contrast enhancement operation is therefore performed to raise the contrast of the image data and reduce the difficulty of feature extraction, specifically comprising the following steps:
step 3.1, receiving image data divided by frames;
step 3.2, judging the driving environment of the target vehicle; when the running environment of the target vehicle is a low-light environment, skipping to the step 3.3; otherwise, jumping to the step 4;
step 3.3, converting the RGB mode of the image data into an HIS mode;
step 3.4, constructing a brightness adjusting function based on the HIS mode;
step 3.5, brightness adjustment is carried out on the converted image data by utilizing a brightness adjustment function;
and 3.6, outputting the adjusted image data.
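The branch in steps 3.1 to 3.6 can be sketched as a simple mean-intensity test: only frames judged to be low-light go through mode conversion and brightness adjustment. The threshold value and helper names are assumptions for illustration; the patent does not specify how the driving environment is judged.

```python
# Minimal sketch of the preprocessing branch (steps 3.1-3.6).

LOW_LIGHT_THRESHOLD = 0.25  # assumed mean-intensity threshold in [0, 1]

def is_low_light(pixels):
    """pixels: iterable of (r, g, b) tuples with channels in [0, 1]."""
    intensities = [(r + g + b) / 3.0 for r, g, b in pixels]
    return sum(intensities) / len(intensities) < LOW_LIGHT_THRESHOLD

def preprocess(pixels, adjust):
    # Step 3.2: branch on the driving environment; steps 3.3-3.5 (mode
    # conversion and brightness adjustment) are delegated to `adjust`.
    return adjust(pixels) if is_low_light(pixels) else pixels

dark_frame = [(0.05, 0.05, 0.05)] * 4
print(is_low_light(dark_frame))  # True
```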
Step 4.2, dividing the received image data into a preset number of grid areas;
4.3, predicting N prediction boundary frames in the grid area according to the characteristic data corresponding to the image data; wherein N is a natural number;
4.4, judging whether the target vehicle exists in the prediction boundary box or not through the confidence coefficient value obtained through calculation;
step 4.5, outputting an analysis result;
wherein the expression for judging whether the target vehicle exists in the bounding box according to the confidence is:
C = Pr(object) × IoU(pred, truth)
in the formula, Pr(object) indicates whether a label of the target vehicle exists in the preset bounding box, taking the value 1 when the label exists and 0 otherwise; IoU(pred, truth) represents the intersection-over-union of the prediction bounding box and the real bounding box;
The prediction bounding box with the maximum confidence among the prediction bounding boxes is obtained by traversal and taken as the position of the target vehicle in the current frame.
When the target vehicle is shielded, the process of predicting the position of the target vehicle by using the track prediction model specifically comprises the following steps:
step 5.1, the track prediction model receives the motion characteristics and the appearance visual characteristics of the target vehicle extracted from the previous frame;
step 5.2, constructing a Kalman filter, correlating the extracted motion characteristics and appearance visual characteristics of the target vehicle, and predicting the current position of the target vehicle; the Kalman filter specifically comprises the following steps in the process of predicting the position of a target vehicle in a current frame:
step 5.2.1, taking the received characteristic information as an initial condition;
step 5.2.2, constructing a state transition matrix;
Step 5.2.3, estimating the motion state mean value and covariance of the target vehicle by using the state transition function;
X_t = F · x_(t-1)
P_t = F · P_(t-1) · F^T + Q
in the formula, X_t represents the predicted state (features and position) of the target vehicle; x_(t-1) represents the state mean value at time t-1; F represents the state transition matrix; Q represents the covariance of the Gaussian noise; P_t represents the covariance matrix corresponding to X_t;
and 5.2.4, obtaining the position of the detection frame where the predicted target vehicle is located according to the estimation value.
In some implementations of the first aspect, for the low contrast between the target vehicle and the surrounding environment in a low-light environment, performing a contrast enhancement operation on the acquired image data effectively improves the identification accuracy of the vehicle in low light. Meanwhile, because the luminance and chrominance of the HIS mode are separated in the color space, it offers a greater advantage than the RGB mode employed in the related art.
Wherein, the conversion expressions from the RGB mode to the HIS mode are:
I = (R + G + B) / 3
S = 1 − 3 · min(R, G, B) / (R + G + B)
H = θ if B ≤ G, otherwise 360° − θ, where θ = arccos( ((R − G) + (R − B)) / (2 · sqrt((R − G)² + (R − B)(G − B))) )
wherein R represents red in the RGB mode; G represents green in the RGB mode; B represents blue in the RGB mode; H represents the hue in the HIS mode; I represents the luminance in the HIS mode; S represents the degree to which the pure color in the HIS mode is diluted by white light;
the brightness adjustment function expression is as follows:
Y = α · I^γ
in the formula, Y represents the luminance of an output image; i represents the luminance of the input image; alpha represents a preset correction coefficient; γ represents a control coefficient.
In a second aspect, a target tracking and identifying system for an occluded vehicle is provided, which is used to implement a target tracking and identifying method for the occluded vehicle, and the system specifically includes the following modules:
a model construction module for constructing a data analysis model;
the data capturing module is used for capturing the driving video data of the target vehicle;
a partitioning module for partitioning the video data;
the target detection module is used for detecting and identifying the target vehicle and extracting the characteristics of the video data;
a trajectory prediction module for predicting a target vehicle travel trajectory;
the track integration module is used for integrating the position of the target vehicle to form a driving track;
and the track output module is used for outputting the running track.
In some implementations of the second aspect, to meet the tracking requirements of the target vehicle, a model building module is first used to build a target vehicle detection model and a trajectory prediction model, and used for subsequent data analysis. In the practical application process, the video data in the running process of the vehicle is captured through the information acquisition equipment in the data capture module, and the video data is divided according to the frame by the dividing module according to the analysis requirement.
And based on the divided video data, carrying out detection and identification of the target vehicle and feature extraction on the video data by using a target vehicle detection model in a target detection module, and using the extracted data as the basis of subsequent data analysis. Because the target vehicle is shielded in the actual target vehicle detection process, the position of the target vehicle in the shielded time is predicted by adopting the track prediction model in the track prediction module based on the characteristic data extracted by the target detection module.
Integrating by using a track integration module based on the detected target vehicle position and the predicted position so as to obtain the whole-course driving track of the vehicle; and finally, outputting an integration result of the track integration module by adopting a track output module.
In a third aspect, an apparatus for tracking and recognizing an object of an occluded vehicle is provided, the apparatus comprising: a processor and a memory storing computer program instructions.
The processor reads and executes the computer program instructions to implement the target tracking and identification method for the occluded vehicle.
In a fourth aspect, a computer-readable storage medium having computer program instructions stored thereon is presented. The computer program instructions, when executed by the processor, implement a method for target tracking identification of occluded vehicles.
Beneficial effects: the invention provides a target tracking and identification method and system for an occluded vehicle. In addition, for the case where the target vehicle may be occluded during tracking, the proposed trajectory prediction model further predicts the position the target vehicle would occupy in the video frame were it not occluded.
Drawings
FIG. 1 is a flow chart of data processing according to the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
The applicant believes that in vehicle tracking applications, practical environmental factors such as illumination, buildings, pedestrians, and trees often occlude the target object, causing the tracked target to be lost. For this tracking loss caused by occlusion of the target vehicle, a target tracking and identification method and system for an occluded vehicle are provided: by predicting the vehicle's driving path, the driving track the vehicle may follow after being occluded is judged effectively, so that the tracking loss occurring after the vehicle is occluded is reduced.
Example one
In one embodiment, aiming at the phenomenon that a vehicle is blocked, in the actual tracking application facing a target vehicle, a target tracking identification method of the blocked vehicle is provided for predicting the running track of the vehicle, so that the vehicle tracking in the blocking process is realized. As shown in fig. 1, the method specifically includes the following steps:
step 1, constructing a target vehicle detection model and a track prediction model for data analysis;
step 2, capturing video data of a target vehicle in running through information acquisition equipment;
step 3, dividing the video data in a mode of taking the video frame as a unit;
and 4, reading image data of each frame in the video data by the target vehicle detection model in a mode of traversing the video data, analyzing the position of the vehicle in the image data, and acquiring the motion characteristic and the appearance visual characteristic of the target vehicle.
Specifically, the process of executing the target vehicle detection and identification by the target vehicle detection model comprises the following steps: firstly, receiving image data corresponding to a current frame; secondly, dividing the received image data into a preset number of grid areas; thirdly, predicting N prediction bounding boxes in the grid region, wherein N is a natural number; calculating the confidence coefficient values of all the obtained prediction boundary frames, obtaining the prediction boundary frame with the maximum confidence coefficient in the prediction boundary frames in a traversal mode, and taking the prediction boundary frame with the maximum confidence coefficient as the position of the target vehicle in the current frame; and finally, outputting an analysis result.
Wherein the expression for judging whether the target vehicle exists in the bounding box according to the confidence is:
C = Pr(object) × IoU(pred, truth)
in the formula, Pr(object) indicates whether a label of the target vehicle exists in the preset bounding box, taking the value 1 when the label exists and 0 otherwise; IoU(pred, truth) represents the intersection-over-union of the prediction bounding box and the real bounding box.
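The confidence test above can be sketched directly: compute the intersection-over-union of the predicted and real boxes, then scale by the object indicator. Boxes are given as (x1, y1, x2, y2) corner coordinates; the function names are illustrative.

```python
# Sketch of confidence = Pr(object) * IoU between predicted and real boxes.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def confidence(pr_object, pred_box, true_box):
    # pr_object is 1 if a target-vehicle label exists in the box, else 0.
    return pr_object * iou(pred_box, true_box)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 0.14285714285714285 (i.e. 1/7)
```

Traversing all N predicted boxes and keeping `max(boxes, key=...)` over this confidence then yields the position used for the current frame.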
In a further embodiment, in order to improve the performance of the target vehicle detection model, a classification loss function is used to optimize the learning capability of the target vehicle detection model. The classification loss function expression used in the preferred embodiment is:
L = (1/N) · Σ_n ‖y_n − ŷ_n‖²₂
where N denotes the number of targets and n the current target index; the subscript 2 denotes the L2 norm (the square root of the sum of squared absolute values of the vector elements) and the superscript 2 its square; y_n represents the position parameter of the current image frame used as the classification sample in the deep convolutional network, and ŷ_n represents the corresponding position parameter of the target image frame for its partition class. In a further embodiment, building on the adopted classification loss function, a binary cross-entropy loss function with an added weighting factor is further provided, so that the attention of the target vehicle detection model is placed on hard, misclassified samples; the binary cross-entropy loss expression is:
CE(y′) = −log(y′) for positive samples and −log(1 − y′) for negative samples
in the formula, y′ represents the output after the activation function, with a value range of 0 to 1. With ordinary cross-entropy, the larger the output probability for a positive sample, the smaller the loss; for a negative sample, the smaller the output probability, the smaller the loss. The loss therefore decreases slowly over iterations dominated by large numbers of easy samples and may not be optimized to the optimum. To reduce the loss of easily classified samples and make the whole network pay more attention to hard, misclassified samples, the two factors α and γ are introduced, giving:
FL(y′) = −α · (1 − y′)^γ · log(y′) for positive samples and −(1 − α) · (y′)^γ · log(1 − y′) for negative samples
in the formula, α represents a balance factor weighting the importance of positive against negative samples, preferably 0.25, and γ represents a focusing factor that down-weights easy samples.
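The weighted loss above matches the focal-loss style of re-weighting binary cross-entropy. A minimal sketch follows; the default γ = 2 is a commonly used value and an assumption here, since the patent only states 0.25 (for α) explicitly.

```python
# Sketch of the alpha/gamma-weighted binary cross-entropy described above:
# easy, well-classified samples contribute little loss, hard ones dominate.
import math

def focal_loss(y_prime, y_true, alpha=0.25, gamma=2.0):
    """y_prime: activation output in (0, 1); y_true: 1 positive, 0 negative."""
    if y_true == 1:
        return -alpha * (1.0 - y_prime) ** gamma * math.log(y_prime)
    return -(1.0 - alpha) * y_prime ** gamma * math.log(1.0 - y_prime)

# A confident correct positive costs far less than a misclassified one.
print(focal_loss(0.9, 1) < focal_loss(0.1, 1))  # True
```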
Step 5, judging whether the target vehicle detection model detects the target vehicle in the current frame; if the target vehicle exists, continuing to read the next frame; if the target vehicle does not exist, predicting the position of the target vehicle in the current frame by adopting a track prediction model based on the obtained motion characteristics of the target vehicle;
specifically, when the target vehicle detection model does not detect the target vehicle, it indicates that there is no target vehicle in the current frame, that is, the target vehicle is occluded. In order to effectively obtain the running path of the vehicle, the position of the target vehicle in the current frame is predicted by adopting a track prediction model based on the obtained motion characteristics of the target vehicle, and the predicted position is taken as the position of the target vehicle in the current frame.
When the target vehicle is shielded, the process of predicting the position of the target vehicle by using the track prediction model specifically comprises the following steps:
firstly, a track prediction model receives the motion characteristics and the appearance visual characteristics of a target vehicle extracted from the previous frame; and secondly, constructing a Kalman filter, correlating the extracted motion characteristics and the appearance visual characteristics of the target vehicle, and predicting the current position of the target vehicle.
In the process of predicting the position of the target vehicle in the current frame by using the Kalman filter, the method specifically comprises the following steps:
step (1), taking the received characteristic information as an initial condition;
step (2), constructing a state transition matrix;
estimating the motion state mean value and covariance of the target vehicle by using the state transition function;
X_t = F · x_(t-1)
P_t = F · P_(t-1) · F^T + Q
in the formula, X_t represents the predicted state (features and position) of the target vehicle; x_(t-1) represents the state mean value at time t-1; F represents the state transition matrix; Q represents the covariance matrix of the Gaussian noise; P_t represents the covariance matrix corresponding to X_t. From the state x_(t-1) at time t-1, the state X_t at time t is effectively predicted; from the covariance matrix P_(t-1) at time t-1 and the system noise matrix Q, the covariance matrix P_t at time t is effectively obtained.
And (4) obtaining the position of a detection frame where the target vehicle is predicted according to the estimation value.
In a further embodiment, in order to improve the performance of the trajectory prediction model, model performance optimization training is further performed. In the training process, the error between the prediction frame and the boundary frame where the actual target is located is judged through the Mahalanobis distance between the prediction frame and the boundary frame where the actual target is located and the cosine distance of the apparent characteristic; subsequently, the parameters of the kalman filter are optimally updated based on the error values.
And 6, summarizing the position of the vehicle in each frame of image data to obtain the whole-course driving track of the vehicle.
In a further embodiment, the target vehicle detection model comprises a Darknet-53 network, a feature pyramid (FPN) network structure, and residual structures. When the target vehicle detection model locates the target vehicle in the current frame, a spatial pooling module is further provided for the image data input to the model, and a fixed pooling method resolves the problem of inconsistent image sizes.
Specifically, the spatial pooling module comprises an input layer, a pooling layer, and a connection layer, where the pooling layer is formed by convolution kernels of different scales in parallel. Received data passes through the input layer, enters the pooling layers formed by the different kernels in parallel, and the pooling-layer outputs are finally integrated by the connection layer.
The pooling operation effectively enlarges the receptive field, and the introduction of the spatial pooling module enables the target vehicle detection model to extract multi-scale depth features with different receptive fields.
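The fixed-pooling idea can be sketched as spatial-pyramid pooling: max-pool any H × W feature map into fixed output grids and concatenate, so differently sized inputs yield a fixed-length vector. This is a minimal illustration of the principle, not the patent's exact module (which uses parallel kernels of different scales); the grid sizes are assumed.

```python
# Sketch of fixed pooling: any H x W map -> fixed-length feature vector.

def fixed_max_pool(fmap, out_size):
    """Max-pool a 2-D list `fmap` into an out_size x out_size grid."""
    h, w = len(fmap), len(fmap[0])
    pooled = []
    for i in range(out_size):
        for j in range(out_size):
            r0, r1 = i * h // out_size, (i + 1) * h // out_size
            c0, c1 = j * w // out_size, (j + 1) * w // out_size
            pooled.append(max(fmap[r][c] for r in range(r0, r1)
                              for c in range(c0, c1)))
    return pooled

def spp(fmap, levels=(1, 2)):
    out = []
    for s in levels:  # parallel pooling branches, concatenated at the end
        out.extend(fixed_max_pool(fmap, s))
    return out  # length sum(s*s for s in levels), independent of input size

print(len(spp([[1, 2], [3, 4]])))             # 5
print(len(spp([[0] * 7 for _ in range(5)])))  # 5 - same length, larger input
```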
According to this embodiment, the constructed target vehicle detection model detects and identifies the target vehicle, and summarizing the vehicle positions at different times yields the driving track, realizing vehicle tracking. In addition, for the case where the target vehicle may be occluded during tracking, the proposed trajectory prediction model further predicts the position the target vehicle would occupy in the video frame were it not occluded.
Example two
In a further embodiment based on the first, low-light environments such as nighttime often affect actual vehicle tracking because the contrast of the vehicle information is not obvious. In low light, the color features and texture features of the vehicle are weakened, so contrast drops and feature extraction becomes more difficult. This embodiment performs contrast enhancement on the acquired pictures for low-light application environments, improving vehicle identification accuracy in low light.
Specifically, acquired image data is usually presented in RGB mode, but because the RGB model is limited in how well it presents color, this embodiment preferably converts the RGB mode into the more color-saturated HIS mode; the image background brightness is then adjusted based on the converted data to enhance the background and improve the contrast between the target and the surrounding environment.
The HIS mode separates color information from gray information: the hue component H expresses the attribute of the pure color, the saturation component S measures the degree to which the pure color is diluted by white light, and the intensity component I expresses the brightness of the color.
The expressions for the RGB to HIS mode conversion are:
I = (R + G + B) / 3
S = 1 − 3 · min(R, G, B) / (R + G + B)
H = θ if B ≤ G, otherwise 360° − θ, where θ = arccos( ((R − G) + (R − B)) / (2 · sqrt((R − G)² + (R − B)(G − B))) )
wherein R represents red in the RGB mode; G represents green in the RGB mode; B represents blue in the RGB mode; H represents the hue in the HIS mode; I represents the luminance in the HIS mode; S represents the degree to which the pure color in the HIS mode is diluted by white light.
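The RGB to HIS conversion described above can be sketched per pixel, with R, G, B normalized to [0, 1] and H in degrees. These are the standard geometric HSI formulas; the function name is illustrative.

```python
# Sketch of per-pixel RGB -> HIS conversion (standard geometric definition).
import math

def rgb_to_his(r, g, b):
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i   # 1 - 3*min/(R+G+B)
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = math.degrees(math.acos(num / den)) if den else 0.0
    h = theta if b <= g else 360.0 - theta           # hue in degrees
    return h, i, s

h, i, s = rgb_to_his(1.0, 0.0, 0.0)  # pure red
print(round(h), round(i, 3), round(s, 3))  # 0 0.333 1.0
```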
After the mode conversion is completed, brightness adjustment is carried out on the background in the image based on the converted data, and the corresponding adjustment expression is as follows:
Y = α · I^γ
in the formula, Y represents the luminance of an output image; i denotes the luminance of the input image; alpha represents a preset correction coefficient; γ represents a control coefficient.
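Applied to the I channel, the adjustment is a gamma correction; a γ below 1 lifts dark regions, which suits the low-light case. The α and γ values below are illustrative, not taken from the patent.

```python
# Sketch of the brightness adjustment Y = alpha * I^gamma on the I channel.

def adjust_brightness(i_channel, alpha=1.0, gamma=0.5):
    """gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    return [alpha * (i ** gamma) for i in i_channel]

dark_row = [0.04, 0.09, 0.25]
print([round(v, 3) for v in adjust_brightness(dark_row)])  # [0.2, 0.3, 0.5]
```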
For the low contrast between the target vehicle and the surrounding environment in a low-light environment, performing a contrast enhancement operation on the acquired image data effectively improves the identification accuracy of the vehicle in low light. Meanwhile, because the luminance and chrominance of the HIS mode are separated in the color space, it offers a greater advantage than the RGB mode employed in the related art.
EXAMPLE III
In one embodiment, a target tracking and identifying system of an occluded vehicle is provided for implementing a target tracking and identifying method of the occluded vehicle, and specifically includes the following modules: the device comprises a model building module, a data capturing module, a dividing module, a target detection module, a track prediction module, a track integration module and a track output module.
Specifically, the model construction module is used for constructing a target vehicle detection model and a track prediction model according to the image data analysis requirements; the data capturing module is used for capturing video data of the target vehicle in running; the dividing module is used for dividing the video data; the target detection module is used for reading the divided video data, detecting and identifying target vehicles in the video data and extracting corresponding vehicle characteristics; the track prediction module is used for realizing the prediction of the running track of the target vehicle by utilizing a track prediction model; the track integration module is used for integrating the identified positions of the target vehicle to obtain the whole-course running track of the vehicle; the track output module is used for outputting an integration result of the track integration module.
In a further embodiment, for the tracking requirement of the target vehicle, a model construction module is firstly adopted to construct a target vehicle detection model and a track prediction model according to the data analysis purpose. Aiming at the tracking analysis requirement of a target vehicle, capturing video data in the running process of the vehicle by adopting information acquisition equipment in a data capturing module, and dividing the video data by a dividing module according to the analysis requirement. And then, the target detection module adopts a target vehicle detection model to detect and identify the target vehicle and extract the characteristics of the video data, and the extracted data is used as the basis of subsequent data analysis. Because the target vehicle is shielded in the actual target vehicle detection process, the position of the target vehicle in the shielded time is predicted by adopting the track prediction model in the track prediction module based on the characteristic data extracted by the target detection module. Integrating by using a track integration module based on the detected target vehicle position and the predicted position so as to obtain the whole-course driving track of the vehicle; and finally, outputting an integration result of the track integration module by adopting a track output module.
Example four
In one embodiment, a target tracking identification device for an occluded vehicle is provided, the device comprising: a processor and a memory storing computer program instructions.
The processor reads and executes the computer program instructions to implement the target tracking and identification method for an occluded vehicle.
Example five
In one embodiment, a computer-readable storage medium having computer program instructions stored thereon is presented.
Wherein the computer program instructions, when executed by the processor, implement a target tracking identification method for occluded vehicles.
As noted above, while the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limited thereto. Various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A target tracking and identifying method for an occluded vehicle is characterized by comprising the following steps:
step 1, constructing a target vehicle detection model and a track prediction model for data analysis;
step 2, capturing video data of a target vehicle in running through information acquisition equipment;
step 3, dividing the video data frame by frame;
step 4, the target vehicle detection model reads the image data of each frame by traversing the video data, analyzes the position of the vehicle in the image data, and obtains the motion characteristics and appearance visual characteristics of the target vehicle;
step 5, judging whether the target vehicle detection model detects the target vehicle in the current frame; if yes, continuing to read the next frame of video data; if not, predicting the position of the target vehicle in the current frame with the track prediction model, based on the previously obtained motion characteristics of the target vehicle;
and step 6, summarizing the position of the vehicle in each frame of image data to obtain the whole-course driving track of the vehicle.
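As an illustrative sketch (not the patented implementation), steps 4-6 can be expressed as a per-frame loop in which the detector output is used when available and the track prediction model fills in the occluded frames; `detect` and `predict` below are hypothetical stand-ins for the two models:

```python
# Hypothetical sketch of steps 4-6: per-frame detection with a
# trajectory-prediction fallback during occlusion.
def track_vehicle(frames, detect, predict):
    track = []              # per-frame target positions
    last_motion = None      # motion features from the last detection
    for frame in frames:
        result = detect(frame)   # None when the target is occluded
        if result is not None:
            position, last_motion = result
        else:
            # target occluded: fall back to trajectory prediction
            position = predict(last_motion)
        track.append(position)
    return track            # whole-course driving track (step 6)
```

The essential design point mirrored from claim 1 is that prediction is only consulted when detection fails, so detected positions always take precedence.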
2. The method for tracking and identifying the target of the occluded vehicle according to claim 1, wherein the process of performing target vehicle detection and identification by using the target vehicle detection model comprises the following steps:
step 4.1, the target vehicle detection model receives image data corresponding to the current frame;
step 4.2, dividing the received image data into a preset number of grid areas;
step 4.3, predicting N bounding boxes in each grid area according to the feature data corresponding to the image data, wherein N is a natural number;
step 4.4, judging whether the target vehicle exists in each predicted bounding box according to the calculated confidence value;
step 4.5, outputting an analysis result;
wherein the expression for judging whether the target vehicle exists in the bounding box according to the confidence is as follows:
Confidence = Pr · IoU(pred, truth)
in the formula, Pr represents whether a label of the target vehicle exists in the preset bounding box, taking the value 1 when the label exists and 0 otherwise; IoU(pred, truth) represents the intersection-over-union ratio of the predicted bounding box and the real bounding box;
and traversing the predicted bounding boxes to obtain the one with the maximum confidence, and taking that bounding box as the position of the target vehicle in the current frame.
3. The method according to claim 1, wherein in order to improve the performance of the target vehicle detection model, before performing target vehicle detection, the learning capacity of the target vehicle detection model is optimized by using a classification loss function.
4. The method for tracking and identifying the target of the occluded vehicle according to claim 1, wherein when the target vehicle is occluded, the step of performing the target vehicle position prediction process by using the trajectory prediction model specifically comprises the following steps:
step 5.1, the track prediction model receives the motion characteristics and the appearance visual characteristics of the target vehicle extracted from the previous frame;
step 5.2, constructing a Kalman filter, correlating the extracted motion characteristics and appearance visual characteristics of the target vehicle, and predicting the current position of the target vehicle; the process by which the Kalman filter predicts the position of the target vehicle in the current frame specifically comprises the following steps:
step 5.2.1, taking the received characteristic information as an initial condition;
step 5.2.2, constructing a state transition matrix;
step 5.2.3, estimating the mean and covariance of the motion state of the target vehicle by using the state transition function:
x t = F x t-1
P t = F P t-1 F^T + Q
in the formula, x t represents the mean of the state (features and position) of the target vehicle at time t; x t-1 represents the mean at time t-1; F represents the state transition matrix; Q represents the covariance of the Gaussian noise; and P t represents the covariance matrix corresponding to x t;
and step 5.2.4, obtaining the predicted position of the detection box in which the target vehicle is located from the estimated values.
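A minimal numeric sketch of the prediction step in 5.2.3, using an assumed constant-velocity state [position, velocity]; the matrices F and Q below are illustrative, not the patent's parameters:

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Kalman prediction step: x_t = F x_{t-1}, P_t = F P_{t-1} F^T + Q."""
    x_new = F @ x
    P_new = F @ P @ F.T + Q
    return x_new, P_new

# Illustrative constant-velocity model: state = [position, velocity]
F = np.array([[1.0, 1.0],   # position gains one velocity step per frame
              [0.0, 1.0]])
Q = 0.01 * np.eye(2)        # Gaussian process-noise covariance
x0 = np.array([0.0, 2.0])   # at position 0, moving 2 units per frame
P0 = np.eye(2)
x1, P1 = kalman_predict(x0, P0, F, Q)   # predicted mean: [2.0, 2.0]
```

Repeating the predict step without measurement updates is exactly what happens during occlusion: the mean keeps advancing while the covariance grows, reflecting increasing uncertainty.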
5. The method for tracking and identifying the target of the occluded vehicle according to claim 4, wherein, in order to improve the performance of the trajectory prediction model, model performance optimization training is further performed;
In the training process, the error between the predicted box and the bounding box in which the actual target is located is measured by the Mahalanobis distance between the two boxes and the cosine distance between their appearance features; the parameters of the Kalman filter are then updated on the basis of the error values.
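The two error measures named in this claim can be sketched as follows; the box centres and feature vectors here are generic inputs, not the patent's exact parameterization:

```python
import numpy as np

def mahalanobis(pred, truth, cov):
    """Mahalanobis distance between a predicted and an actual box centre."""
    d = pred - truth
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity) of two appearance features."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The Mahalanobis term penalizes positional error scaled by the filter's own covariance, while the cosine term compares appearance regardless of position; combining both is what makes re-association after occlusion robust.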
6. The method for tracking and identifying the target of the occluded vehicle according to claim 1, wherein when the actual driving environment of the target vehicle is a low-light environment, the feature information of the target vehicle is weakened; to improve the contrast of the image data and reduce the difficulty of feature extraction, a contrast enhancement operation is performed, specifically comprising the following steps:
step 3.1, receiving image data divided by frames;
step 3.2, judging the driving environment of the target vehicle; when the driving environment of the target vehicle is a low-light environment, proceeding to step 3.3; otherwise, proceeding to step 4;
step 3.3, converting the RGB mode of the image data into an HIS mode;
step 3.4, constructing a brightness adjusting function based on the HIS mode;
step 3.5, brightness adjustment is carried out on the converted image data by utilizing a brightness adjustment function;
and step 3.6, outputting the adjusted image data.
7. The method for tracking and identifying the target of the occluded vehicle according to claim 6, wherein the conversion expression from the RGB mode to the HIS mode is as follows:
I = (R + G + B) / 3
S = 1 - 3 min(R, G, B) / (R + G + B)
H = arccos{[(R - G) + (R - B)] / (2 √((R - G)² + (R - B)(G - B)))}, with H replaced by 360° - H when B > G
wherein R represents red in the RGB mode; G represents green in the RGB mode; B represents blue in the RGB mode; H represents the hue in the HIS mode; I represents the luminance in the HIS mode; and S represents the degree to which the pure color in the HIS mode is diluted by white light;
the brightness adjustment function expression is as follows:
Y=αI γ
in the formula, Y represents the luminance of an output image; i represents the luminance of the input image; alpha represents a preset correction coefficient; γ represents a control coefficient.
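The luminance path of claims 6 and 7 can be sketched as follows; the intensity formula is the standard RGB-to-HIS mean, and the alpha and gamma defaults are illustrative assumptions rather than the patent's values:

```python
def rgb_to_intensity(r, g, b):
    """I component of the HIS model: mean of the RGB channels (values in [0, 1])."""
    return (r + g + b) / 3.0

def adjust_brightness(i, alpha=1.0, gamma=0.5):
    """Y = alpha * I ** gamma; gamma < 1 brightens dark (low-light) pixels."""
    return alpha * i ** gamma
```

With gamma = 0.5, a dark pixel with intensity 0.04 is lifted to 0.2 while a bright pixel with intensity 0.81 only rises to 0.9, which is why the gamma curve raises contrast in the shadows without blowing out highlights.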
8. An occluded vehicle target tracking and identification system for implementing the method for tracking and identifying the target of an occluded vehicle according to any one of claims 1-7, characterized by specifically comprising the following modules:
a model construction module configured to construct a target vehicle detection model and a trajectory prediction model according to the image data analysis requirements;
a data capture module configured to capture video data of a target vehicle in motion using an information collection device;
a dividing module configured to divide the video data frame by frame;
a target detection module configured to detect and identify the target vehicle in the video data and extract features by using the target vehicle detection model;
a trajectory prediction module configured to predict a travel trajectory of the target vehicle using the trajectory prediction model based on the features extracted by the target detection module;
a track integration module configured to integrate the identified positions of the target vehicle to obtain the whole-course running track of the vehicle; and
a track output module configured to output the integration result of the track integration module.
9. An occluded vehicle target tracking identification device, the device comprising:
a processor and a memory storing computer program instructions;
the processor reads and executes the computer program instructions to implement the method for object tracking identification of occluded vehicles according to any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon computer program instructions, which when executed by a processor, implement the method for target tracking identification of an occluded vehicle according to any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211407202.9A CN115661720B (en) | 2022-11-10 | 2022-11-10 | Target tracking and identifying method and system for shielded vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115661720A true CN115661720A (en) | 2023-01-31 |
CN115661720B CN115661720B (en) | 2024-07-02 |
Family
ID=85021498
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211407202.9A Active CN115661720B (en) | 2022-11-10 | 2022-11-10 | Target tracking and identifying method and system for shielded vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115661720B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115994929A (en) * | 2023-03-24 | 2023-04-21 | 中国兵器科学研究院 | Multi-target tracking method integrating space motion and apparent feature learning |
CN116012949A (en) * | 2023-02-06 | 2023-04-25 | 南京智蓝芯联信息科技有限公司 | People flow statistics and identification method and system under complex scene |
CN117152258A (en) * | 2023-11-01 | 2023-12-01 | 中国电建集团山东电力管道工程有限公司 | Product positioning method and system for intelligent workshop of pipeline production |
CN117876977A (en) * | 2024-01-11 | 2024-04-12 | 江苏昌兴阳智能家居有限公司 | Target identification method based on monitoring video |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919072A (en) * | 2019-02-28 | 2019-06-21 | 桂林电子科技大学 | Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking |
CN111667512A (en) * | 2020-05-28 | 2020-09-15 | 浙江树人学院(浙江树人大学) | Multi-target vehicle track prediction method based on improved Kalman filtering |
CN112287906A (en) * | 2020-12-18 | 2021-01-29 | 中汽创智科技有限公司 | Template matching tracking method and system based on depth feature fusion |
CN112884816A (en) * | 2021-03-23 | 2021-06-01 | 武汉理工大学 | Vehicle feature deep learning recognition track tracking method based on image system |
CN113674328A (en) * | 2021-07-14 | 2021-11-19 | 南京邮电大学 | Multi-target vehicle tracking method |
CN113983737A (en) * | 2021-10-18 | 2022-01-28 | 海信(山东)冰箱有限公司 | Refrigerator and food material positioning method thereof |
CN114663471A (en) * | 2022-02-17 | 2022-06-24 | 深圳大学 | Target tracking method and device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |