CN116993776B - Personnel track tracking method - Google Patents

Personnel track tracking method

Info

Publication number
CN116993776B
CN116993776B
Authority
CN
China
Prior art keywords
frame
track
ratio
target
tracking method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310791566.XA
Other languages
Chinese (zh)
Other versions
CN116993776A (en)
Inventor
裴文良
张旭华
谢海峰
郭永涛
张越超
张连源
尚昱昊
李军伟
郑子东
马心刚
许鑫
孙海龙
王宇轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CITIC HIC Kaicheng Intelligence Equipment Co Ltd
Original Assignee
CITIC HIC Kaicheng Intelligence Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CITIC HIC Kaicheng Intelligence Equipment Co Ltd filed Critical CITIC HIC Kaicheng Intelligence Equipment Co Ltd
Priority to CN202310791566.XA
Publication of CN116993776A
Application granted
Publication of CN116993776B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a personnel track tracking method comprising the following steps: S1, performing histogram equalization on each video frame of the water pump house workers; S2, inputting the image processed in step S1 into a YOLOv5 model for detection to obtain target frames; S3, obtaining the movement track of each worker through a track recognition model; S4, judging whether the center point of the target frame in each frame falls within a preset key area of the water pump house, and recording the result. The invention adopts image preprocessing and a deep learning algorithm, achieves high accuracy across a wide range of scenes, and reduces the shortcomings of relying solely on hand-crafted features; the method is not disturbed by complex backgrounds, and has strong robustness and high portability.

Description

Personnel track tracking method
Technical Field
The invention relates to the technical field of indoor positioning, in particular to a personnel track tracking method based on DeepSORT.
Background
Computer vision technology obtains the needed information by processing video captured by cameras; within it, target tracking has developed rapidly and plays an important role in the field of video surveillance. In a coal mine underground water pump house, video monitoring is used to observe the movement tracks of staff and check whether they pass through key locations, which greatly helps production safety management. However, relying on manual inspection of worker trajectories is error-prone: if a worker fails to check a key location and the observer does not notice, a safety accident can easily result. Therefore, an accurate and effective personnel track tracking method is needed for the management of water pump houses.
Disclosure of Invention
In order to solve the above problems, the method combines the YOLOv5 target detection algorithm with the DeepSORT target tracking algorithm to capture the movement tracks of staff in the water pump house. The method can identify and track the staff within the monitored area, closing the gap in the monitoring management system: the workers' trajectories are detected automatically, it is judged whether each worker has gone to the key locations, and if a worker has not, the unvisited locations are automatically flagged for a reminder.
The method is applied to a coal mine water pump house monitoring system. Each frame is preprocessed with contrast-limited adaptive histogram equalization; the detection algorithm locates the personnel targets in each processed frame; incomplete personnel targets at the image edges are removed, yielding the center position and size of each personnel target; and the cascade matching algorithm and Hungarian assignment algorithm of the DeepSORT multi-target tracking system serve as the basic tracking architecture, with deep features and hand-crafted features introduced and a lightweight network used for feature extraction, enhancing the accuracy of target ID assignment.
Specifically, the personnel track tracking method is characterized by comprising the following steps:
S1, performing histogram equalization on each video frame of the water pump house workers;
S2, inputting the image processed in step S1 into a YOLOv5 model for detection;
S3, obtaining the movement track of each worker through a track recognition model;
S4, judging whether the personnel track passes through a preset key area, and recording.
Wherein the track recognition model is DeepSORT.
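Before the individual steps are detailed, the check in step S4 can be sketched directly, since it reduces to a point-in-rectangle test on the target frame center. In the Python sketch below the key-area rectangles are illustrative coordinates assumed for the example, as the patent leaves them to on-site configuration.

```python
# Hypothetical preset key areas of the pump house, as (x1, y1, x2, y2) pixels.
KEY_AREAS = [(100, 200, 300, 400), (500, 150, 700, 350)]

def visited_key_area(box):
    """Return the index of the key area containing the box center, else None."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # center point of the target frame
    for i, (ax1, ay1, ax2, ay2) in enumerate(KEY_AREAS):
        if ax1 <= cx <= ax2 and ay1 <= cy <= ay2:
            return i
    return None
```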
The histogram equalization in step S1 proceeds as follows: first set a clipping threshold; if a histogram bin of the picture exceeds the threshold, clip it and redistribute the excess evenly over all gray levels; then partition the image into blocks and compute the cumulative probability distribution of each block's histogram; for each pixel, compute the mappings given by the cumulative histogram distributions of the 4 neighboring windows, denoted $f_1(D)$, $f_2(D)$, $f_3(D)$, $f_4(D)$, and then compute the processed value of the pixel by bilinear interpolation;

wherein the bilinear interpolation formula is:

$f(D) = (1-\Delta y)\big((1-\Delta x)f_1(D) + \Delta x\,f_3(D)\big) + \Delta y\big((1-\Delta x)f_2(D) + \Delta x\,f_4(D)\big)$;

wherein $\Delta x$ is the increment of the pixel point in the x direction, and $\Delta y$ is the increment of the pixel point in the y direction.
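The procedure above is contrast-limited adaptive histogram equalization (CLAHE). Assuming OpenCV's implementation is an acceptable stand-in for step S1, a minimal sketch is:

```python
import cv2

# CLAHE clips histogram bins at a threshold, redistributes the excess over the
# gray levels, equalizes each block from its cumulative distribution, and
# blends neighboring blocks by bilinear interpolation, matching the procedure
# described above. clipLimit and tileGridSize are illustrative values, not
# parameters given in the patent.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def equalize_frame(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # CLAHE expects one channel
    return clahe.apply(gray)
```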
The method further comprises step S2-1: with the width and height of the image being W and H' respectively, set an edge width ratio $w_{ratio}$ and height ratio $h_{ratio}$ for removal at the image edges, thereby obtaining four regions, namely $[0, 0, w_{ratio} \cdot W, H']$, $[0, 0, W, h_{ratio} \cdot H']$, $[(1-w_{ratio}) \cdot W, 0, W, H']$ and $[0, (1-h_{ratio}) \cdot H', W, H']$; compute the center point bboxC of each target frame; if bboxC falls within any of these regions, discard the target frame, otherwise keep it, finally obtaining the optimized target frames.
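A minimal sketch of this edge filtering follows; the 5% edge ratios are assumed for illustration, since the patent does not fix their values.

```python
# Regions are (x1, y1, x2, y2) in pixels; a detection is dropped when its
# center bboxC falls inside any of the four edge strips.
def filter_edge_boxes(boxes, W, H, w_ratio=0.05, h_ratio=0.05):
    regions = [
        (0, 0, w_ratio * W, H),          # left strip
        (0, 0, W, h_ratio * H),          # top strip
        ((1 - w_ratio) * W, 0, W, H),    # right strip
        (0, (1 - h_ratio) * H, W, H),    # bottom strip
    ]
    kept = []
    for (x1, y1, x2, y2) in boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # center point bboxC
        if not any(rx1 <= cx <= rx2 and ry1 <= cy <= ry2
                   for (rx1, ry1, rx2, ry2) in regions):
            kept.append((x1, y1, x2, y2))
    return kept
```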
The state prediction equation and the error covariance prediction equation of the Kalman filter of the DeepSORT recognition model in step S3 are:

$\hat{x}_{t|t-1} = F\hat{x}_{t-1|t-1} + Bu_t$

$P_{t|t-1} = FP_{t-1|t-1}F^T + Q$

wherein $\hat{x}_{t|t-1}$ is the predicted value at time t, $\hat{x}_{t-1|t-1}$ is the optimal estimate at time t-1, $F$ is the state transition matrix, $F^T$ is the transpose of $F$, $B$ is the control transfer matrix of the system, $u_t$ is the control variable at time t, and $Q$ is the covariance of the noise.
Wherein the Kalman filter update formulas are:

$K_t = P_{t|t-1}H^T(HP_{t|t-1}H^T + R)^{-1}$

$P_{t|t} = (I - K_tH)P_{t|t-1}$

wherein $K_t$ is the Kalman gain, $P_{t|t-1}$ is the prior estimate covariance, and $P_{t|t}$ is the posterior estimate covariance.
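A minimal NumPy sketch of the predict/update cycle defined by these formulas follows; the state correction line $\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t(z_t - H\hat{x}_{t|t-1})$ is the standard Kalman update added for completeness, and the matrix shapes are left generic rather than tied to DeepSORT's particular state vector.

```python
import numpy as np

def kf_predict(x, P, F, Q, B=None, u=None):
    # State prediction: x_{t|t-1} = F x_{t-1|t-1} + B u_t
    x = F @ x + (B @ u if B is not None and u is not None else 0)
    # Error covariance prediction: P_{t|t-1} = F P F^T + Q
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    # Kalman gain: K_t = P H^T (H P H^T + R)^{-1}
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)                  # standard state correction
    P = (np.eye(P.shape[0]) - K @ H) @ P     # posterior covariance P_{t|t}
    return x, P
```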
The Hungarian algorithm uses the coefficient matrix of the constraint function to solve for the extremum of the objective function; its mathematical formulation is:

$\min \sum_i \sum_j c_{ij}x_{ij}$, subject to $x_{ij} = 0$ or $x_{ij} = 1$,

wherein $x_{ij}$ is a binary variable; for the target tracking task, $x_{ij} = 1$ indicates that the i-th target is matched with the j-th track, and $c_{ij}$ is the association metric between the i-th target and the j-th track.
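This assignment can be solved with SciPy's implementation of the Hungarian algorithm; the cost matrix below is an illustrative example of association metrics $c_{ij}$ between detections (rows) and tracks (columns), not data from the patent.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[0.2, 0.9, 0.7],    # c_ij: cost of matching target i to track j
                 [0.8, 0.1, 0.6]])
det_idx, trk_idx = linear_sum_assignment(cost)  # minimizes sum of c_ij * x_ij
matches = list(zip(det_idx, trk_idx))           # here: [(0, 0), (1, 1)]
```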
The beneficial effects of the invention are as follows:
1) Compared with traditional target tracking methods, the invention adopts image preprocessing and a deep learning algorithm, achieves high accuracy across a wide range of scenes, and reduces the shortcomings of relying solely on hand-crafted features;
2) The invention further improves the degree of automation of the water pump house monitoring system and greatly reduces the manpower required;
3) The invention is not disturbed by complex backgrounds, and has strong robustness and high portability.
Drawings
FIG. 1 is a flow chart of the overall algorithm of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention, but the invention may be practiced in ways other than those described herein, and persons skilled in the art will readily appreciate that the invention is not limited to the specific embodiments disclosed below.
As shown in fig. 1, the specific process of the DeepSORT-based water pump house staff track tracking method comprises the following steps:
step one: and inputting a video frame sequence to be tracked, and carrying out histogram equalization on each frame of video image. Firstly, setting a threshold value, if the gray level of the picture exceeds the threshold value, cutting the picture, averaging the parts exceeding the threshold value to each gray level, then partitioning the image, respectively calculating the cumulative probability distribution of the histogram of each block, and for each pixel point, calculating the mapping of the cumulative probability distribution of the histogram of the adjacent 4 windows to the pixel point, and marking as f 1 (D),f 2 (D),f 3 (D),f 4 (D) Then calculating the processing value of the pixel point by bilinear interpolation, wherein the bilinear interpolation formula is as follows:
f(D)=(1-Δy)((1-Δx)f 1 (D)+Δxf 3 (D))+Δy((1-Δx)f 2 (D)+Δxf 4 (D))。
wherein Δx is the increment of the pixel point in the x direction, and Δy is the increment of the pixel point in the y direction. Step two: and obtaining a staff target frame by using a Yolov5 detector. The Yolo detector divides the input image into an S x S grid. If the center of the object falls into a grid, the grid is designated for detection of the object. Each grid predicts not just one frame of the object but multiple frames of the object and predicts the confidence score corresponding to each frame. These scores reflect the confidence level of the model in the inclusion of items in the grid, as well as the accuracy of the model in the grid predictions.
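A minimal sketch of this detection step, assuming the public pretrained 'yolov5s' checkpoint loaded through torch.hub; the patent does not specify the model variant or its training data.

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # downloads on first use
model.classes = [0]  # keep only the COCO 'person' class

def detect_persons(frame):
    results = model(frame)
    # one row per detection: x1, y1, x2, y2, confidence, class
    return results.xyxy[0].cpu().numpy()
```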
Step three: assuming the width and height of the image are W and H' respectively, set an edge width ratio $w_{ratio}$ and height ratio $h_{ratio}$, thereby obtaining four regions, namely $[0, 0, w_{ratio} \cdot W, H']$, $[0, 0, W, h_{ratio} \cdot H']$, $[(1-w_{ratio}) \cdot W, 0, W, H']$ and $[0, (1-h_{ratio}) \cdot H', W, H']$; compute the center point bboxC of each target frame; if bboxC falls within any of these regions, discard the target frame, otherwise keep it, finally obtaining the optimized target frames.
Step four: input the video processed in the above steps into the DeepSORT model for track judgment, as follows.
Each track is assigned its corresponding ID in the following process:
(1) Predict, via Kalman filtering, the frame-t prediction frame from the target frame of frame t-1 associated with each 'confirmed' track; extract deep features of the regions corresponding to the frame-t target frames with a lightweight network (MobileNet); take the frame-t prediction frames and the frame-t target frames as input to cascade matching (Matching Cascade) to compute distances; and, combined with the Hungarian algorithm, finally obtain three matching results:
1. Matched (Matched tracks): a frame-t target frame is matched with a frame-t prediction frame, and the track is successfully matched;
2. Track unmatched (Unmatched tracks): a frame-t prediction frame has no matching frame-t target frame, and the track goes unmatched;
3. Detection unmatched (Unmatched detections): a new target frame appears in frame t and cannot be matched with any existing track.
(2) Update, via Kalman filtering, the track variables of the 'confirmed' tracks matched in frame t in step (1), and use them as the basis for predicting the frame-(t+1) prediction frames; compute the center point of each target frame successfully matched in frame t and judge whether it falls within a preset key area of the water pump house: if the center point falls within some key area, mark that area; if it does not, the area remains unmarked in storage, and the time period corresponding to the frame number is recorded in storage at the same time;
carry out IOU matching between the frame-t track-unmatched results, the frame-t detection-unmatched results, and the 'unconfirmed' tracks from before frame t, obtaining three results:
1. Matched: a frame-t track-unmatched result or a pre-frame-t 'unconfirmed' track is matched with a frame-t detection-unmatched result;
1.1 if the matched track was already successfully IOU-matched over multiple consecutive frames before the current match, convert it into a 'confirmed' track, where the consecutive multiple frames may be 2 frames;
1.2 if the matched track was not successfully IOU-matched over multiple consecutive frames before the current match, keep it in the 'unconfirmed' state, where the consecutive multiple frames may be 2 frames;
2. Track unmatched: tracks of frame t that are still unmatched;
3. Detection unmatched: target frames of frame t that are still unmatched.
(3) Update, via the Kalman filter, the track variables of the IDs matched in frame t in step (2) (including 'confirmed' and 'unconfirmed' tracks), and use them as the basis for predicting the frame-(t+1) prediction frames;
assign a new track ID to each frame-t detection-unmatched result from step (2), predict its next-frame track via Kalman filtering, and add it to the 'unconfirmed' tracks available before frame t+1 in step (5);
keep the track ID of each frame-t track-unmatched result from step (2), predict its next-frame track via Kalman filtering, and add it to the 'unconfirmed' tracks available before frame t+1 in step (5).
(4) Take the frame-(t+1) prediction frames corresponding to the frame-t 'confirmed' matched tracks together with the frame-(t+1) target frames as input to cascade matching to compute distances, and, combined with the Hungarian algorithm, finally obtain three matching results:
1. Matched: a frame-(t+1) target frame is matched with a frame-(t+1) prediction frame;
2. Track unmatched: a frame-(t+1) prediction frame has no matching frame-(t+1) target frame, and the track goes unmatched;
3. Detection unmatched: a new target frame appears in frame t+1 and cannot be matched with any existing track ID.
(5) Update, via Kalman filtering, the track variables of the tracks matched in frame t+1 in step (4), and use them as the basis for predicting the frame-(t+2) prediction frames; compute the center point of each target frame successfully matched in frame t+1 and judge whether it falls within a preset key area of the water pump house: if the center point falls within some key area, mark that area; if it does not, the area remains unmarked in storage, and the time period corresponding to the frame number is recorded in storage at the same time;
carry out IOU matching between the frame-(t+1) track-unmatched results, the frame-(t+1) detection-unmatched results, and the 'unconfirmed' tracks from before frame t+1,
obtaining three results:
1. Matched: a frame-(t+1) track-unmatched result or a pre-frame-(t+1) 'unconfirmed' track is matched with a frame-(t+1) detection-unmatched result;
1.1 if the matched track was already successfully IOU-matched over multiple consecutive frames before the current match, convert it into a 'confirmed' track, where the consecutive multiple frames may be 2 frames;
1.2 if the matched track was not successfully IOU-matched over multiple consecutive frames before the current match, keep it in the 'unconfirmed' state, where the consecutive multiple frames may be 2 frames;
2. Track unmatched: tracks of frame t+1 that are still unmatched;
3. Detection unmatched: target frames of frame t+1 that are still unmatched.
(6) Update, via Kalman filtering, the track variables of the IDs of the tracks matched in frame t+1 in step (5) (including 'confirmed' and 'unconfirmed' tracks), and use them as the basis for predicting the frame-(t+2) prediction frames;
assign a new track ID to each frame-(t+1) detection-unmatched result from step (5), predict its next-frame track via Kalman filtering, and add it to the 'unconfirmed' tracks available before frame t+2 in the subsequent step;
keep the track ID of each frame-(t+1) track-unmatched result from step (5), predict its next-frame track via Kalman filtering, and add it to the 'unconfirmed' tracks available before frame t+2 in the subsequent step.
(7) Repeat (1)-(6) until the video ends.
(8) Judge, from the marking states of the key areas in storage, whether personnel have checked all preset key areas of the water pump house within a given time period or over the whole video.
In the above process, if a track ID remains unmatched for more than a threshold number of cycles, for example if a track fails IOU matching for 30 consecutive frames, all information corresponding to that track is deleted.
The sets of detection-unmatched results in (1) and (4) form the 'set of all unmatched detection targets' in fig. 1, i.e. the target frames left unmatched after cascade matching of the current frame;
the sets of track-unmatched results in (1) and (4) form the 'set of all unmatched ID information in the tracker' in fig. 1, i.e. the tracks left unmatched after cascade matching of the current frame. The track life cycle implied by steps (1)-(7) can be sketched as follows.
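The sketch below uses the 2-frame confirmation and the 30-frame deletion threshold mentioned above; the bookkeeping is simplified relative to a full DeepSORT tracker and the class layout is an illustrative assumption.

```python
N_INIT, MAX_AGE = 2, 30  # confirmation hits and deletion threshold from the text

class Track:
    def __init__(self, track_id, box):
        self.id, self.box = track_id, box
        self.hits, self.misses = 1, 0
        self.confirmed = False           # every track starts 'unconfirmed'

    def mark_matched(self, box):
        self.box, self.misses = box, 0
        self.hits += 1
        if self.hits >= N_INIT:
            self.confirmed = True        # 'unconfirmed' -> 'confirmed'

    def mark_missed(self):
        self.misses += 1
        return self.misses > MAX_AGE     # True: delete all info for this track
```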
In the above process, the state prediction equation and the error covariance prediction equation of the Kalman filter are:

$\hat{x}_{t|t-1} = F\hat{x}_{t-1|t-1} + Bu_t$

$P_{t|t-1} = FP_{t-1|t-1}F^T + Q$

wherein $\hat{x}_{t|t-1}$ is the predicted value at time t, $\hat{x}_{t-1|t-1}$ is the optimal estimate at time t-1, $F$ is the state transition matrix, $F^T$ is the transpose of $F$, $B$ is the control transfer matrix of the system, $u_t$ is the control variable at time t, $Q$ is the covariance of the noise, and $P$ denotes the error covariance.
The Kalman filter update formulas are:

$K_t = P_{t|t-1}H^T(HP_{t|t-1}H^T + R)^{-1}$

$P_{t|t} = (I - K_tH)P_{t|t-1}$

wherein $K_t$ is the Kalman gain, $P_{t|t-1}$ is the prior estimate covariance, and $P_{t|t}$ is the posterior estimate covariance.
In this process, the Hungarian algorithm uses the coefficient matrix of the constraint function to solve for the extremum of the objective function. Its mathematical formulation is:

$\min \sum_i \sum_j c_{ij}x_{ij}$, subject to $x_{ij} = 0$ or $x_{ij} = 1$,

wherein $x_{ij}$ is a binary variable; for the target tracking task, $x_{ij} = 1$ indicates that the i-th target is matched with the j-th track, and $c_{ij}$ is the association metric between the i-th target and the j-th track.
It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.

Claims (4)

1. The personnel track tracking method is characterized by comprising the following steps:
S1, performing histogram equalization on each video frame of the water pump house workers;
the histogram equalization comprises: first setting a clipping threshold; if a histogram bin of the picture exceeds the threshold, clipping it and redistributing the excess evenly over all gray levels; then partitioning the image into blocks and computing the cumulative probability distribution of each block's histogram; for each pixel, computing the mappings given by the cumulative histogram distributions of the 4 neighboring windows, denoted $f_1(D)$, $f_2(D)$, $f_3(D)$, $f_4(D)$, and then computing the processed value of the pixel by bilinear interpolation;
wherein the bilinear interpolation formula is:
$f(D) = (1-\Delta y)\big((1-\Delta x)f_1(D) + \Delta x\,f_3(D)\big) + \Delta y\big((1-\Delta x)f_2(D) + \Delta x\,f_4(D)\big)$;
wherein $\Delta x$ is the increment of the pixel point in the x direction, and $\Delta y$ is the increment of the pixel point in the y direction;
S2, inputting the image processed in step S1 into a YOLOv5 model for detection;
S2-1, with the width and height of the image being W and H' respectively, setting an edge width ratio $w_{ratio}$ and height ratio $h_{ratio}$ for removal at the image edges, thereby obtaining four regions, namely $[0, 0, w_{ratio} \cdot W, H']$, $[0, 0, W, h_{ratio} \cdot H']$, $[(1-w_{ratio}) \cdot W, 0, W, H']$ and $[0, (1-h_{ratio}) \cdot H', W, H']$; computing the center point bboxC of each target frame; if bboxC falls within any of these regions, discarding the target frame, otherwise keeping it, finally obtaining the optimized target frames;
S3, obtaining the movement track of each worker through the DeepSORT track recognition model;
S4, judging whether the personnel track passes through a preset key area, and recording.
2. The personnel track tracking method according to claim 1, wherein the state prediction equation and the error covariance prediction equation of the Kalman filter of the DeepSORT recognition model in step S3 are:

$\hat{x}_{t|t-1} = F\hat{x}_{t-1|t-1} + Bu_t$

$P_{t|t-1} = FP_{t-1|t-1}F^T + Q$

wherein $\hat{x}_{t|t-1}$ is the predicted value at time t, $\hat{x}_{t-1|t-1}$ is the optimal estimate at time t-1, $F$ is the state transition matrix, $F^T$ is the transpose of $F$, $B$ is the control transfer matrix of the system, $u_t$ is the control variable at time t, and $Q$ is the covariance of the noise.
3. The personnel track tracking method according to claim 2, wherein the Kalman filter update formulas are:

$K_t = P_{t|t-1}H^T(HP_{t|t-1}H^T + R)^{-1}$

$P_{t|t} = (I - K_tH)P_{t|t-1}$

wherein $K_t$ is the Kalman gain, $P_{t|t-1}$ is the prior estimate covariance, and $P_{t|t}$ is the posterior estimate covariance.
4. The personnel track tracking method according to claim 3, wherein the Hungarian algorithm uses the coefficient matrix of the constraint function to solve for the extremum of the objective function, with the mathematical formulation:

$\min \sum_i \sum_j c_{ij}x_{ij}$, subject to $x_{ij} = 0$ or $x_{ij} = 1$,

wherein $x_{ij}$ is a binary variable; for the target tracking task, $x_{ij} = 1$ indicates that the i-th target is matched with the j-th track, and $c_{ij}$ is the association metric between the i-th target and the j-th track.
CN202310791566.XA 2023-06-30 2023-06-30 Personnel track tracking method Active CN116993776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310791566.XA CN116993776B (en) 2023-06-30 2023-06-30 Personnel track tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310791566.XA CN116993776B (en) 2023-06-30 2023-06-30 Personnel track tracking method

Publications (2)

Publication Number Publication Date
CN116993776A CN116993776A (en) 2023-11-03
CN116993776B (en) 2024-02-13

Family

ID=88531090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310791566.XA Active CN116993776B (en) 2023-06-30 2023-06-30 Personnel track tracking method

Country Status (1)

Country Link
CN (1) CN116993776B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634232A (en) * 2020-12-23 2021-04-09 精英数智科技股份有限公司 Detection method, device and system for violation border crossing of tunneller personnel
CN114038193A (en) * 2021-11-08 2022-02-11 华东师范大学 Intelligent traffic flow data statistical method and system based on unmanned aerial vehicle and multi-target tracking
CN114092517A (en) * 2021-11-15 2022-02-25 江苏航天大为科技股份有限公司 Multi-target tracking method based on traditional and deep learning algorithm
CN114140832A (en) * 2022-01-30 2022-03-04 西安华创马科智能控制***有限公司 Method and device for detecting pedestrian boundary crossing risk in well, electronic equipment and storage medium
CN114627360A (en) * 2020-12-14 2022-06-14 国电南瑞科技股份有限公司 Substation equipment defect identification method based on cascade detection model
CN115171022A (en) * 2022-07-19 2022-10-11 武汉理工大学 Method and system for detecting wearing of safety helmet in construction scene
CN115512315A (en) * 2022-11-01 2022-12-23 深圳市城市交通规划设计研究中心股份有限公司 Non-motor vehicle child riding detection method, electronic device and storage medium
CN115880620A (en) * 2022-09-13 2023-03-31 中信重工开诚智能装备有限公司 Personnel counting method applied to cart early warning system
CN115995063A (en) * 2021-10-18 2023-04-21 中车株洲电力机车研究所有限公司 Work vehicle detection and tracking method and system
CN116129312A (en) * 2023-01-13 2023-05-16 电子科技大学 Tank armor flow detection method based on target detection model and deep source

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110517292A (en) * 2019-08-29 2019-11-29 京东方科技集团股份有限公司 Method for tracking target, device, system and computer readable storage medium
CN113744310A (en) * 2021-08-24 2021-12-03 北京百度网讯科技有限公司 Target tracking method and device, electronic equipment and readable storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627360A (en) * 2020-12-14 2022-06-14 国电南瑞科技股份有限公司 Substation equipment defect identification method based on cascade detection model
CN112634232A (en) * 2020-12-23 2021-04-09 精英数智科技股份有限公司 Detection method, device and system for violation border crossing of tunneller personnel
CN115995063A (en) * 2021-10-18 2023-04-21 中车株洲电力机车研究所有限公司 Work vehicle detection and tracking method and system
CN114038193A (en) * 2021-11-08 2022-02-11 华东师范大学 Intelligent traffic flow data statistical method and system based on unmanned aerial vehicle and multi-target tracking
CN114092517A (en) * 2021-11-15 2022-02-25 江苏航天大为科技股份有限公司 Multi-target tracking method based on traditional and deep learning algorithm
CN114140832A (en) * 2022-01-30 2022-03-04 西安华创马科智能控制***有限公司 Method and device for detecting pedestrian boundary crossing risk in well, electronic equipment and storage medium
CN115171022A (en) * 2022-07-19 2022-10-11 武汉理工大学 Method and system for detecting wearing of safety helmet in construction scene
CN115880620A (en) * 2022-09-13 2023-03-31 中信重工开诚智能装备有限公司 Personnel counting method applied to cart early warning system
CN115512315A (en) * 2022-11-01 2022-12-23 深圳市城市交通规划设计研究中心股份有限公司 Non-motor vehicle child riding detection method, electronic device and storage medium
CN116129312A (en) * 2023-01-13 2023-05-16 电子科技大学 Tank armor flow detection method based on target detection model and deep source

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Edge-Computing-Facilitated Nighttime Vehicle Detection Investigations With CLAHE-Enhanced Images; Igor Lashkov et al.; IEEE Transactions on Intelligent Transportation Systems, Vol. 24, No. 11; full text *
Pedestrian Detection and Tracking System Based on Deep-SORT, YOLOv5, and New Data Association Metrics; Mohammed Razzok et al.; Information, Vol. 14, No. 4; full text *
Position-velocity constrained trajectory tracking control for unmanned underwater vehicle with model uncertainties; Changchun Hua et al.; Ocean Engineering, Vol. 266; full text *
Multi-target tracking algorithm based on YOLOv3 and Kalman filtering; Ren Jiamin, Gong Ningsheng, Han Zhenyang; Computer Applications and Software, No. 5; full text *
Coarse-to-fine pedestrian detection method based on region proposals; Zhou Shaokang et al.; Journal of Nanjing University of Science and Technology, Vol. 44, No. 3; full text *

Also Published As

Publication number Publication date
CN116993776A (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN110619657B (en) Multi-camera linkage multi-target tracking method and system for intelligent communities
Yang et al. Real-time multiple objects tracking with occlusion handling in dynamic scenes
Stauffer et al. Adaptive background mixture models for real-time tracking
Gao et al. Error analysis of background adaption
EP1844443B1 (en) Classifying an object in a video frame
US8374388B2 (en) Real-time tracking of non-rigid objects in image sequences for which the background may be changing
US7957560B2 (en) Unusual action detector and abnormal action detecting method
CN108222749B (en) Intelligent automatic door control method based on image analysis
US7606416B2 (en) Landmark detection apparatus and method for intelligent system
CN103971386A (en) Method for foreground detection in dynamic background scenario
Abdelkader et al. Integrated motion detection and tracking for visual surveillance
Neiswanger et al. The dependent Dirichlet process mixture of objects for detection-free tracking and object modeling
CN110633643A (en) Abnormal behavior detection method and system for smart community
CN103400120B (en) Video analysis-based bank self-service area push behavior detection method
Wang Joint random field model for all-weather moving vehicle detection
CN105354540A (en) Video analysis based method for implementing person fall-down behavior detection
CN116051970A (en) Identification method for overlapping fish targets based on improved yolov5 model
WO2008125854A1 (en) Method for tracking multiple objects with occlusions
CN116993776B (en) Personnel track tracking method
CN110349184B (en) Multi-pedestrian tracking method based on iterative filtering and observation discrimination
CN112307897A (en) Pet tracking method based on local feature recognition and adjacent frame matching in community monitoring scene
CN108241837B (en) Method and device for detecting remnants
CN114581847B (en) Community pedestrian abnormal behavior detection method and device based on GAM tracker
Kim et al. Unsupervised moving object segmentation and recognition using clustering and a neural network
Li et al. Real-time moving object detection for video monitoring systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant