CN113205108A - YOLOv4-based multi-target vehicle detection and tracking method - Google Patents

YOLOv4-based multi-target vehicle detection and tracking method

Info

Publication number
CN113205108A
Authority
CN
China
Prior art keywords
boxes
detection
target
yolov4
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011206816.1A
Other languages
Chinese (zh)
Inventor
柳长源
张林林
何先平
张荟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN202011206816.1A priority Critical patent/CN113205108A/en
Publication of CN113205108A publication Critical patent/CN113205108A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-target vehicle detection and tracking method based on YOLOv4. First, anchor box prediction is optimized through a k-means clustering algorithm so that YOLOv4 better fits the characteristics of the vehicle data set; second, the YOLOv4 target detection network is improved, which effectively raises detection precision; finally, the data association problem between detection results and tracking results is solved with Kalman filtering and the Hungarian algorithm, and the ID switch phenomenon is effectively reduced by combining target motion information and apparent information into a total association cost. The method not only improves the accuracy of detecting multiple targets and small, weak targets in complex scenes, but also improves the robustness and adaptability of the target tracking algorithm.

Description

YOLOv4-based multi-target vehicle detection and tracking method
Technical Field
The invention relates to the technical field of target tracking, and in particular to a multi-target vehicle detection and tracking method based on YOLOv4.
Background
Target detection and tracking are hot problems in the field of computer vision and are of great significance for intelligent video surveillance, intelligent transportation, robot visual navigation, military guidance and the like. In recent years, with the continuous development of deep learning, convolutional neural networks have been widely applied to target detection and tracking, tracking algorithms built on deep learning networks have been derived, and great success has been achieved in this field.
Currently, detection-based target tracking algorithms are generally divided into four steps: first, target detection, in which a target detection network selects the target bounding boxes; second, feature extraction, in which an apparent feature extraction network is established to extract the apparent information and motion information of each target and the position of the target in the next frame is predicted; third, similarity computation, in which an association matrix is calculated from the apparent features and position features and the similarity between targets in the previous and current frames is computed; and fourth, target matching, in which the targets detected in the current frame are associated with the tracked targets, and a tracked target is assigned the same ID once the association succeeds.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a YOLOv4-based multi-target vehicle detection and tracking method, which solves the problem of false detections and missed detections of multiple targets and weak, small targets during target tracking and improves the robustness and adaptability of the target tracking algorithm.
In order to solve the above technical problems, the invention provides the following technical scheme:
A multi-target vehicle detection and tracking method based on YOLOv4 comprises the following steps:
S1, generating anchor boxes through a k-means clustering algorithm; anchor boxes are widely used in single-stage target detection algorithms to set the initial size of the bounding boxes, and k-means clustering is better suited to this task than other unsupervised learning algorithms;
S2, improving the target detection network of YOLOv4 by adding a further scale to the three-scale feature fusion of the original YOLOv4, so that feature maps are fused at four different scales;
S3, carrying out vehicle detection on the video frames through the improved YOLOv4 target detection network to obtain all detected target vehicle bounding boxes (Detection boxes);
S4, predicting the state of the vehicle in each target vehicle detection box through a Kalman filter to obtain the corresponding target tracking boxes (Track boxes);
S5, constructing a cost matrix between the Detection boxes and the Track boxes by utilizing the motion similarity and the apparent similarity between all the Detection boxes and the Track boxes;
and S6, performing association matching on the association costs in the cost matrix according to the Hungarian algorithm, calculating the matching degree between the previous and current frames, determining the tracking result, and assigning a target ID to each object, thereby realizing multi-target vehicle detection and tracking.
Further, the step S1 of generating anchor boxes through a k-means clustering algorithm specifically includes the following steps:
S1.1, acquiring the real (ground-truth) bounding boxes of the targets in the data set;
S1.2, the k-means algorithm randomly selects k bounding boxes as the initial cluster centroids, assigns each ground-truth box to the cluster of its nearest centroid, and iteratively updates the centroids until the change falls below a certain threshold (convergence), so that k anchor boxes are generated.
Further, the step S2 of improving the target detection network of YOLOv4 specifically includes the following steps:
S2.1, modifying the backbone network CSPDarknet-53 of YOLOv4 and adding a feature layer, so that CSPDarknet-53 outputs four feature layers;
S2.2, inputting the last feature layer into the SPP structure to carry out four maximum pooling operations, with pooling kernel sizes of 13×13, 9×9, 5×5 and 1×1 respectively;
S2.3, inputting the four feature layers into the PANet structure to realize top-down and bottom-up feature fusion;
and S2.4, finally, performing prediction on the obtained features with the YOLO Head.
Further, the step S5 of constructing a cost matrix between the Detection boxes and the Track boxes by using the motion similarity and the apparent similarity between all the Detection boxes and the Track boxes specifically includes the following steps:
S5.1, measuring the distance between the Track boxes and the Detection boxes with the squared Mahalanobis distance to calculate the motion similarity between them, according to the following formulas:
d^{(1)}(i,j) = (d_j - y_i)^T S_i^{-1} (d_j - y_i)   (1)
b_{i,j}^{(1)} = 1[d^{(1)}(i,j) ≤ t^{(1)}]   (2)
where d_j represents the j-th Detection box, y_i represents the i-th Track box, S_i represents the covariance matrix between d and y, and 1[·] denotes the indicator function;
equation (2) is an indicator that compares the Mahalanobis distance with a threshold taken from the chi-squared distribution, t^{(1)} = 9.4877; the matching degree between Detection boxes and Track boxes is measured through this threshold;
S5.2, measuring the distance between the apparent features with the cosine distance, calculated as follows:
d^{(2)}(i,j) = 1 - r_i^T r_j   (3)
b_{i,j}^{(2)} = 1[d^{(2)}(i,j) ≤ t^{(2)}]   (4)
where r_i^T r_j is the cosine similarity between the apparent feature of the i-th Track box and the corresponding apparent feature of the j-th Detection box, and the cosine distance equals 1 - cosine similarity; equation (4) is likewise an indicator with threshold t^{(2)};
S5.3, obtaining the association cost matrix by weighting the motion similarity and the apparent similarity, according to the following formula:
c_{i,j} = λ·d^{(1)}(i,j) + (1-λ)·d^{(2)}(i,j)   (5)
where λ is a hyper-parameter and defaults to 0.
Further, the step S6 of performing association matching on the association costs in the cost matrix according to the Hungarian algorithm, calculating the matching degree between the previous and current frames, determining the tracking result, and assigning a target ID to each object to realize multi-target vehicle detection specifically includes the following steps:
S6.1, setting a similarity threshold and comparing it with the cost matrix calculated in step S5;
and S6.2, assigning the same ID to the targets in the Detection boxes and Track boxes whose cost satisfies the similarity threshold, and taking them as a group of tracking results.
The invention has the following beneficial effects:
(1) the prediction of anchor boxes is optimized through a k-means clustering algorithm, so that YOLOv4 better fits the requirements of the vehicle data set and the precision of the Detection boxes is improved;
(2) the backbone network CSPDarknet-53 of YOLOv4 is modified and a feature layer is added, so that CSPDarknet-53 has four feature layers and the detection precision for small targets is improved;
(3) the data association problem between the prediction results and the tracking results is solved by using the Kalman filtering algorithm and the Hungarian algorithm, a cost matrix is generated from the motion similarity and the apparent similarity of the targets, and the ID switch phenomenon is effectively reduced.
Drawings
FIG. 1 is a flow chart of a tracking model of the present invention;
FIG. 2 is a diagram of an improved object detection network architecture of the present invention;
FIG. 3 is a flow chart of matching Detection boxes and Track boxes with the Kalman filter and the Hungarian algorithm.
Detailed Description
The following description of the embodiments of the present invention is provided in conjunction with the accompanying drawings to help those skilled in the art understand the invention. It should be understood that the embodiments described herein are intended to illustrate rather than limit the invention; various changes that remain within the spirit and scope of the invention as defined in the appended claims will be apparent to those skilled in the art, and all inventions utilizing the inventive concept are protected.
Referring to FIG. 1 to FIG. 3, a YOLOv4-based multi-target vehicle detection and tracking method comprises the following steps:
S1, generating anchor boxes through a k-means clustering algorithm; anchor boxes are widely used in single-stage target detection algorithms to set the initial size of the bounding boxes, and k-means clustering is better suited to this task than other unsupervised learning algorithms;
S2, improving the target detection network of YOLOv4 by adding a further scale to the three-scale feature fusion of the original YOLOv4, so that feature maps are fused at four different scales;
S3, carrying out vehicle detection on the video frames through the improved YOLOv4 target detection network to obtain all detected target vehicle bounding boxes (Detection boxes);
S4, predicting the state of the vehicle in each target vehicle detection box through a Kalman filter to obtain the corresponding target tracking boxes (Track boxes), as illustrated by the sketch following this list;
S5, constructing a cost matrix between the Detection boxes and the Track boxes by utilizing the motion similarity and the apparent similarity between all the Detection boxes and the Track boxes;
and S6, performing association matching on the association costs in the cost matrix according to the Hungarian algorithm, calculating the matching degree between the previous and current frames, determining the tracking result, and assigning a target ID to each object, thereby realizing multi-target vehicle detection and tracking.
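A minimal sketch of the Kalman prediction referred to in step S4 is given below, assuming a constant-velocity model over the box centre, size and their velocities; the state layout, noise values and the use of NumPy are illustrative assumptions rather than the patent's exact filter.

```python
import numpy as np

# Constant-velocity Kalman prediction for one track (illustrative sketch).
# State x = [cx, cy, w, h, vx, vy, vw, vh]: box centre, width/height and their velocities.
def kalman_predict(x, P, dt=1.0, q=1e-2):
    F = np.eye(8)
    F[:4, 4:] = dt * np.eye(4)      # positions/sizes advance by their velocities
    Q = q * np.eye(8)               # assumed process noise
    x_pred = F @ x                  # predicted state -> Track box for the current frame
    P_pred = F @ P @ F.T + Q        # predicted covariance (its measurement-space projection gives S_i in eq. (1))
    return x_pred, P_pred

# Example: a 100x60 box centred at (320, 240) moving 5 px/frame to the right.
x0 = np.array([320.0, 240.0, 100.0, 60.0, 5.0, 0.0, 0.0, 0.0])
x1, P1 = kalman_predict(x0, np.eye(8))
print(x1[:4])   # -> [325. 240. 100.  60.]
```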
The step S1 of generating anchor boxes through the k-means clustering algorithm specifically comprises the following steps:
S1.1, acquiring the real (ground-truth) bounding boxes of the targets in the data set;
S1.2, the k-means algorithm randomly selects k bounding boxes as the initial cluster centroids, assigns each ground-truth box to the cluster of its nearest centroid, and iteratively updates the centroids until the change falls below a certain threshold (convergence), so that k anchor boxes are generated.
The step S2 of improving the target detection network of YOLOv4 specifically comprises the following steps:
S2.1, modifying the backbone network CSPDarknet-53 of YOLOv4 and adding a feature layer, so that CSPDarknet-53 outputs four feature layers;
S2.2, inputting the last feature layer into the SPP structure to carry out four maximum pooling operations, with pooling kernel sizes of 13×13, 9×9, 5×5 and 1×1 respectively (a sketch of this pooling follows this list);
S2.3, inputting the four feature layers into the PANet structure to realize top-down and bottom-up feature fusion;
and S2.4, finally, performing prediction on the obtained features with the YOLO Head.
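To make the SPP operation of S2.2 concrete, the PyTorch-style sketch below pools the last feature layer with the four kernel sizes listed above (the 1×1 max pooling is simply the identity branch); the module layout and the use of PyTorch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial pyramid pooling over the last feature layer (step S2.2).

    Max pooling with kernels 13, 9, 5 and 1 (stride 1, 'same' padding) keeps the
    spatial size; the four results are concatenated along the channel axis.
    """
    def __init__(self, kernels=(13, 9, 5, 1)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernels]
        )

    def forward(self, x):
        return torch.cat([pool(x) for pool in self.pools], dim=1)

# Example: a 512-channel 13x13 feature map becomes 2048 channels after SPP.
feat = torch.randn(1, 512, 13, 13)
print(SPP()(feat).shape)   # torch.Size([1, 2048, 13, 13])
```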
The step S5 of constructing a cost matrix between the Detection boxes and the Track boxes by using the motion similarity and the apparent similarity between all the Detection boxes and the Track boxes specifically comprises the following steps:
S5.1, measuring the distance between the Track boxes and the Detection boxes with the squared Mahalanobis distance to calculate the motion similarity between them, according to the following formulas:
d^{(1)}(i,j) = (d_j - y_i)^T S_i^{-1} (d_j - y_i)   (1)
b_{i,j}^{(1)} = 1[d^{(1)}(i,j) ≤ t^{(1)}]   (2)
where d_j represents the j-th Detection box, y_i represents the i-th Track box, S_i represents the covariance matrix between d and y, and 1[·] denotes the indicator function;
equation (2) is an indicator that compares the Mahalanobis distance with a threshold taken from the chi-squared distribution, t^{(1)} = 9.4877; the matching degree between Detection boxes and Track boxes is measured through this threshold;
S5.2, measuring the distance between the apparent features with the cosine distance, calculated as follows:
d^{(2)}(i,j) = 1 - r_i^T r_j   (3)
b_{i,j}^{(2)} = 1[d^{(2)}(i,j) ≤ t^{(2)}]   (4)
where r_i^T r_j is the cosine similarity between the apparent feature of the i-th Track box and the corresponding apparent feature of the j-th Detection box, and the cosine distance equals 1 - cosine similarity; equation (4) is likewise an indicator with threshold t^{(2)};
S5.3, obtaining the association cost matrix by weighting the motion similarity and the apparent similarity, according to the following formula:
c_{i,j} = λ·d^{(1)}(i,j) + (1-λ)·d^{(2)}(i,j)   (5)
where λ is a hyper-parameter and defaults to 0.
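A minimal NumPy sketch of the cost construction in S5.1–S5.3 is shown below; the track/detection data layout, the gating by t^{(1)} and the function names are illustrative assumptions rather than the patent's exact implementation.

```python
import numpy as np

CHI2_GATE = 9.4877   # t^(1): threshold taken from the chi-squared distribution

def mahalanobis_sq(track_mean, track_cov, detections):
    """Squared Mahalanobis distance d^(1)(i, j) of each detection to one track."""
    diff = detections - track_mean                       # rows: d_j - y_i
    S_inv = np.linalg.inv(track_cov)                     # S_i^{-1}
    return np.einsum('mj,jk,mk->m', diff, S_inv, diff)

def cosine_dist(track_feat, det_feats):
    """Cosine distance d^(2)(i, j) = 1 - cosine similarity between apparent features."""
    t = track_feat / np.linalg.norm(track_feat)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - d @ t

def association_cost(tracks, det_boxes, det_feats, lam=0.0):
    """Cost matrix c_{i,j} = lam * d^(1) + (1 - lam) * d^(2), with chi-squared gating."""
    cost = np.zeros((len(tracks), len(det_boxes)))
    for i, trk in enumerate(tracks):                     # trk: dict with 'mean', 'cov', 'feature'
        d1 = mahalanobis_sq(trk['mean'], trk['cov'], det_boxes)
        d2 = cosine_dist(trk['feature'], det_feats)
        row = lam * d1 + (1.0 - lam) * d2                # equation (5)
        row[d1 > CHI2_GATE] = 1e5                        # indicator b^(1): reject gated pairs
        cost[i] = row
    return cost
```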
The step S6 described above, namely performing association matching on the association costs in the cost matrix according to the Hungarian algorithm, calculating the matching degree between the previous and current frames, determining the tracking result, and assigning a target ID to each object to realize multi-target vehicle detection, specifically comprises the following steps:
S6.1, setting a similarity threshold and comparing it with the cost matrix calculated in step S5;
and S6.2, assigning the same ID to the targets in the Detection boxes and Track boxes whose cost satisfies the similarity threshold, and taking them as a group of tracking results.
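The following sketch applies the Hungarian algorithm (scipy.optimize.linear_sum_assignment) to the cost matrix from step S5 and keeps only the pairs that satisfy the similarity threshold of S6.1, so that matched Detection and Track boxes receive the same ID; the threshold value of 0.7 is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(cost, max_cost=0.7):
    """Match Track rows to Detection columns of `cost`; drop pairs above `max_cost`.

    Returns (matches, unmatched_tracks, unmatched_detections), where matches is a
    list of (track_idx, det_idx) pairs that receive the same ID (step S6.2).
    """
    rows, cols = linear_sum_assignment(cost)              # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    matched_r = {r for r, _ in matches}
    matched_c = {c for _, c in matches}
    unmatched_tracks = [r for r in range(cost.shape[0]) if r not in matched_r]
    unmatched_dets = [c for c in range(cost.shape[1]) if c not in matched_c]
    return matches, unmatched_tracks, unmatched_dets

# Example with a 2-track x 3-detection cost matrix.
cost = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.95]])
print(hungarian_match(cost))   # ([(0, 0), (1, 1)], [], [2])
```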
The foregoing describes a YOLOv4-based multi-target vehicle detection and tracking method; the overall flowchart is shown in FIG. 1.
The invention has the beneficial effects that:
(1) the prediction of anchor boxes is optimized through a k-means clustering algorithm, so that YOLOv4 better fits the requirements of the vehicle data set and the precision of the Detection boxes is improved;
(2) the backbone network CSPDarknet-53 of YOLOv4 is modified and a feature layer is added, so that CSPDarknet-53 has four feature layers and the detection precision for small targets is improved;
(3) the data association problem between the prediction results and the tracking results is solved by using the Kalman filtering algorithm and the Hungarian algorithm, a cost matrix is generated from the motion similarity and the apparent similarity of the targets, and the ID switch phenomenon is effectively reduced.

Claims (4)

1. A multi-target vehicle detection and tracking method based on YOLOv4, characterized by comprising the following steps:
S1, generating anchor boxes through a k-means clustering algorithm; anchor boxes are widely used in single-stage target detection algorithms to set the initial size of the bounding boxes, and k-means clustering is better suited to this task than other unsupervised learning algorithms;
S2, improving the target detection network of YOLOv4 by adding a further scale to the three-scale feature fusion of the original YOLOv4, so that feature maps are fused at four different scales;
S3, carrying out vehicle detection on the video frames through the improved YOLOv4 target detection network to obtain all detected target vehicle bounding boxes (Detection boxes);
S4, predicting the state of the vehicle in each target vehicle detection box through a Kalman filter to obtain the corresponding target tracking boxes (Track boxes);
S5, constructing a cost matrix between the Detection boxes and the Track boxes by utilizing the motion similarity and the apparent similarity between all the Detection boxes and the Track boxes;
and S6, performing association matching on the association costs in the cost matrix according to the Hungarian algorithm, calculating the matching degree between the previous and current frames, determining the tracking result, and assigning a target ID to each object, thereby realizing multi-target vehicle detection and tracking.
2. The multi-target vehicle detection and tracking method based on YOLOv4 according to claim 1, wherein the step S1 of generating anchor boxes through a k-means clustering algorithm specifically comprises the following steps:
S1.1, acquiring the real (ground-truth) bounding boxes of the targets in the data set;
S1.2, the k-means algorithm randomly selects k bounding boxes as the initial cluster centroids, assigns each ground-truth box to the cluster of its nearest centroid, and iteratively updates the centroids until the change falls below a certain threshold (convergence), so that k anchor boxes are generated.
3. The multi-target vehicle detection and tracking method based on YOLOv4 according to claim 1, wherein the step S2 of improving the target detection network of YOLOv4 specifically comprises the following steps:
S2.1, modifying the backbone network CSPDarknet-53 of YOLOv4 and adding a feature layer, so that CSPDarknet-53 outputs four feature layers;
S2.2, inputting the last feature layer into the SPP structure to carry out four maximum pooling operations, with pooling kernel sizes of 13×13, 9×9, 5×5 and 1×1 respectively;
S2.3, inputting the four feature layers into the PANet structure to realize top-down and bottom-up feature fusion;
and S2.4, finally, performing prediction on the obtained features with the YOLO Head.
4. The YOLOv4-based multi-target vehicle detection and tracking method according to claim 1, wherein the step S5 of constructing the cost matrix between the Detection boxes and the Track boxes by using the motion similarity and the apparent similarity between all the Detection boxes and the Track boxes specifically comprises the following steps:
S5.1, measuring the distance between the Track boxes and the Detection boxes with the squared Mahalanobis distance to calculate the motion similarity between them, according to the following formulas:
d^{(1)}(i,j) = (d_j - y_i)^T S_i^{-1} (d_j - y_i)   (1)
b_{i,j}^{(1)} = 1[d^{(1)}(i,j) ≤ t^{(1)}]   (2)
where d_j represents the j-th Detection box, y_i represents the i-th Track box, S_i represents the covariance matrix between d and y, and 1[·] denotes the indicator function;
equation (2) is an indicator that compares the Mahalanobis distance with a threshold taken from the chi-squared distribution, t^{(1)} = 9.4877; the matching degree between Detection boxes and Track boxes is measured through this threshold;
S5.2, measuring the distance between the apparent features with the cosine distance, calculated as follows:
d^{(2)}(i,j) = 1 - r_i^T r_j   (3)
b_{i,j}^{(2)} = 1[d^{(2)}(i,j) ≤ t^{(2)}]   (4)
where r_i^T r_j is the cosine similarity between the apparent feature of the i-th Track box and the corresponding apparent feature of the j-th Detection box, and the cosine distance equals 1 - cosine similarity; equation (4) is likewise an indicator with threshold t^{(2)};
S5.3, obtaining the association cost matrix by weighting the motion similarity and the apparent similarity, according to the following formula:
c_{i,j} = λ·d^{(1)}(i,j) + (1-λ)·d^{(2)}(i,j)   (5)
where λ is a hyper-parameter and defaults to 0.
CN202011206816.1A 2020-11-02 2020-11-02 YOLOv 4-based multi-target vehicle detection and tracking method Pending CN113205108A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011206816.1A CN113205108A (en) 2020-11-02 2020-11-02 YOLOv 4-based multi-target vehicle detection and tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011206816.1A CN113205108A (en) 2020-11-02 2020-11-02 YOLOv 4-based multi-target vehicle detection and tracking method

Publications (1)

Publication Number Publication Date
CN113205108A true CN113205108A (en) 2021-08-03

Family

ID=77025068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011206816.1A Pending CN113205108A (en) 2020-11-02 2020-11-02 YOLOv 4-based multi-target vehicle detection and tracking method

Country Status (1)

Country Link
CN (1) CN113205108A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113702393A (en) * 2021-09-29 2021-11-26 安徽理工大学 Intrinsic safety type mining conveyor belt surface damage detection system and detection method
CN116993779A (en) * 2023-08-03 2023-11-03 重庆大学 Vehicle target tracking method suitable for monitoring video

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378331A (en) * 2019-06-10 2019-10-25 南京邮电大学 A kind of end-to-end Vehicle License Plate Recognition System and its method based on deep learning
CN111126152A (en) * 2019-11-25 2020-05-08 国网信通亿力科技有限责任公司 Video-based multi-target pedestrian detection and tracking method
CN111259819A (en) * 2020-01-16 2020-06-09 广东工业大学 Outdoor scene safety monitoring method based on visual correlation discrimination network
CN111476826A (en) * 2020-04-10 2020-07-31 电子科技大学 Multi-target vehicle tracking method based on SSD target detection
CN111488795A (en) * 2020-03-09 2020-08-04 天津大学 Real-time pedestrian tracking method applied to unmanned vehicle
CN111832513A (en) * 2020-07-21 2020-10-27 西安电子科技大学 Real-time football target detection method based on neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378331A (en) * 2019-06-10 2019-10-25 南京邮电大学 A kind of end-to-end Vehicle License Plate Recognition System and its method based on deep learning
CN111126152A (en) * 2019-11-25 2020-05-08 国网信通亿力科技有限责任公司 Video-based multi-target pedestrian detection and tracking method
CN111259819A (en) * 2020-01-16 2020-06-09 广东工业大学 Outdoor scene safety monitoring method based on visual correlation discrimination network
CN111488795A (en) * 2020-03-09 2020-08-04 天津大学 Real-time pedestrian tracking method applied to unmanned vehicle
CN111476826A (en) * 2020-04-10 2020-07-31 电子科技大学 Multi-target vehicle tracking method based on SSD target detection
CN111832513A (en) * 2020-07-21 2020-10-27 西安电子科技大学 Real-time football target detection method based on neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ALEXEY BOCHKOVSKIY: "YOLOv4: Optimal Speed and Accuracy of Object Detection", 《COMPUTER SCIENCE》 *
CSDN blog author "S.ZHENKAI": "Improving YOLOv3 with a four-layer feature detection layer", 《HTTPS://BLOG.CSDN.NET/WEIXIN_44076342/ARTICLE/DETAILS/106547312》 *
徐子睿: "Research on vehicle detection and traffic flow statistics based on YOLOv4", 《现代信息科技》 (Modern Information Technology) *
码农家园: "Obtaining anchor boxes by k-means clustering in YOLOv4", 《HTTPS://WWW.CODENONG.COM/CS109071574/》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113702393A (en) * 2021-09-29 2021-11-26 安徽理工大学 Intrinsic safety type mining conveyor belt surface damage detection system and detection method
CN113702393B (en) * 2021-09-29 2023-10-27 安徽理工大学 Intrinsic safety type mining conveyor belt surface damage detection system and detection method
CN116993779A (en) * 2023-08-03 2023-11-03 重庆大学 Vehicle target tracking method suitable for monitoring video
CN116993779B (en) * 2023-08-03 2024-05-14 重庆大学 Vehicle target tracking method suitable for monitoring video

Similar Documents

Publication Publication Date Title
CN111680542B (en) Steel coil point cloud identification and classification method based on multi-scale feature extraction and Pointnet neural network
CN112308881B (en) Ship multi-target tracking method based on remote sensing image
CN108573496B (en) Multi-target tracking method based on LSTM network and deep reinforcement learning
CN107424171B (en) Block-based anti-occlusion target tracking method
CN111476826A (en) Multi-target vehicle tracking method based on SSD target detection
CN110197502B (en) Multi-target tracking method and system based on identity re-identification
CN107633226B (en) Human body motion tracking feature processing method
CN111444767B (en) Pedestrian detection and tracking method based on laser radar
CN112052802B (en) Machine vision-based front vehicle behavior recognition method
CN111862155B (en) Unmanned aerial vehicle single vision target tracking method aiming at target shielding
CN110532921B (en) SSD-based generalized label detection multi-Bernoulli video multi-target tracking method
CN112288773A (en) Multi-scale human body tracking method and device based on Soft-NMS
CN111739053B (en) Online multi-pedestrian detection tracking method under complex scene
CN113327272B (en) Robustness long-time tracking method based on correlation filtering
CN110363165B (en) Multi-target tracking method and device based on TSK fuzzy system and storage medium
CN111582349B (en) Improved target tracking algorithm based on YOLOv3 and kernel correlation filtering
CN112016445A (en) Monitoring video-based remnant detection method
CN113205108A (en) YOLOv 4-based multi-target vehicle detection and tracking method
CN104615998B (en) A kind of vehicle retrieval method based on various visual angles
CN109146918B (en) Self-adaptive related target positioning method based on block
CN112946625B (en) B-spline shape-based multi-extended target track tracking and classifying method
CN110400347B (en) Target tracking method for judging occlusion and target relocation
Cao et al. Correlation-based tracking of multiple targets with hierarchical layered structure
KR101460313B1 (en) Apparatus and method for robot localization using visual feature and geometric constraints
CN116381672A (en) X-band multi-expansion target self-adaptive tracking method based on twin network radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210803)