CN109003290A - A video tracking method for a monitoring system - Google Patents
A video tracking method for a monitoring system
- Publication number
- CN109003290A (application CN201711305847.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- tracking
- detection
- module
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
A video tracking method for a monitoring system. The system comprises monitor cameras, an IP network, a switch, and a client group. A tracking target box is selected on the image, and the position and size of the target box are passed to a tracking module and a detection module respectively. The tracking module performs target tracking to obtain the new target-box position. The detection module performs target detection: each candidate window is passed sequentially through a variance classifier, an ensemble classifier, and a nearest-neighbor classifier to judge whether the window contains the required target. An integration module then obtains the final target position and size, producing the tracking result. If the target is lost, the detector re-initializes the correlation-filter tracking module, so that the temporarily lost target is recovered, a new round of tracking and detection is carried out, and the tracking task continues, achieving stable long-term tracking in surveillance scenes.
Description
Technical field
The present invention relates to the field of video surveillance and to image processing and analysis technology, in particular to an automatic target tracking method for video surveillance scenes.
Background art
With the development of the information age, people's security awareness has gradually increased, and security issues receive growing attention from all sectors of society. Video surveillance is the foundation of security, and intelligent tracking technology in surveillance scenes is increasingly favored. Traditional video surveillance requires manual observation, is prone to monitoring gaps, has fixed viewing angles, and is inefficient. The gun-ball linkage systems (a fixed camera linked with a PTZ dome) developed in recent years combine the advantages of seeing the whole and seeing the detail, and are more and more widely applied in video surveillance: the wide angle of the fixed camera monitors the global scene, while the rotation of the dome's pan-tilt head tracks the target in real time in high definition to capture local detail. However, target appearance deformation, illumination change, motion blur, background clutter, scale change, full occlusion of the target, and similar problems all affect tracking accuracy to some extent and pose great challenges to target tracking. Current target tracking techniques fall mainly into two categories: generative methods and discriminative methods. Generative methods mainly model the target region of the current frame and predict the target position in the next frame as the region most similar to the model, e.g., Kalman filtering and mean shift. Discriminative methods take the target region of the current frame as positive samples and the background region as negative samples, train a classifier by machine learning, and use the trained classifier in the next frame to find the optimal region most resembling the target, e.g., Struck. In recent years, correlation filtering and deep-learning-based tracking methods have gradually become mainstream. Because the correlation filtering method transforms the closed-form solution of ridge regression into the Fourier domain and exploits the properties of circulant matrices to simplify the sample set, it attains good tracking accuracy while maintaining a high tracking speed, and its superior overall performance has made it an important branch of target tracking methods.
Although target tracking technology is increasingly mature, in actual video surveillance scenes of current gun-ball linkage products the tracked target can be temporarily occluded in certain scenes, causing tracking failure. Therefore, even the better-performing correlation filtering methods and deep-learning-based trackers still exhibit temporary target loss in actual video surveillance scenes.
Summary of the invention
The object of the present invention is to provide a method that, in the case where the target is temporarily lost because it is fully occluded, can recover the lost target and continue the tracking task.
The principle of the present invention is to establish, on the basis of a correlation-filter tracker, an intelligent tracking system comprising a tracking module, a detection module, an integration module, and a learning module. While the tracking target is initialized on the current frame, the detector is initialized as well: it traverses the image and generates positive and negative sample sets for training the classifiers. In subsequent frames tracking and detection run simultaneously, and the integration module comprehensively analyzes the tracking and detection results to obtain the optimal target position. The learning module continually adjusts the positive and negative sample sets throughout the process and optimizes the detector. When the target is lost due to occlusion, the detector is reactivated by judging whether the tracking confidence is below a threshold, and the lost target is recovered.
The technical solution is as follows:
The system comprises monitor cameras, an IP network, a switch, and a client group.
Step 1: When the target tracking task starts, the system reads in a video frame and selects the tracking target box of the first frame image. The position and size of the target box are passed to the tracking module and the detection module respectively, and the two modules are initialized.
Step 2: Tracking module initialization. The tracking module is designed on the basis of the correlation filtering method. Correlation filtering is a discriminative tracking method; its essence is to transform the ridge-regression problem into the Fourier domain using a property of circulant matrices, while approximating dense sampling by cyclic shifts, thereby accelerating the ridge-regression solution. Initialization mainly extracts HOG features and Lab color features from the given tracking target box, obtains a target response label map through a Gaussian function, and solves for the weight coefficient matrix.
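The Fourier-domain ridge-regression solution described above can be sketched for a single feature channel as follows. This is a simplified illustration only: the actual method uses multi-channel HOG and Lab features, and the function names are our own.

```python
import numpy as np

def train_correlation_filter(patch, sigma=2.0, lam=1e-4):
    """Single-channel linear correlation-filter training (MOSSE/KCF-style sketch).

    patch : 2-D feature array cropped around the target box.
    Returns the filter weights in the Fourier domain and the Gaussian label map.
    """
    h, w = patch.shape
    # Gaussian response label map, peaked at the patch centre.
    ys, xs = np.mgrid[0:h, 0:w]
    y = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    X = np.fft.fft2(patch)
    Y = np.fft.fft2(y)
    # Closed-form ridge regression in the Fourier domain (circulant-matrix trick):
    # W = (X* . Y) / (X* . X + lambda), element-wise.
    W = np.conj(X) * Y / (np.conj(X) * X + lam)
    return W, y

def response_map(W, patch):
    """Correlate a new patch with the trained filter; the peak gives the new position."""
    return np.real(np.fft.ifft2(W * np.fft.fft2(patch)))
```

On the training patch itself the response reproduces the Gaussian label map, with its peak at the patch centre.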
Step 3: Detection module initialization. The detection module comprises a variance classifier, an ensemble classifier, and a nearest-neighbor classifier. Initialization mainly traverses the first frame image with target boxes of different scales to obtain a target sample data set, compares each traversed box with the given tracking target box by degree of overlap, and divides the target sample data set into positive and negative samples. The positive and negative sample sets are each split into a training set and a test set, used to train the ensemble classifier and the nearest-neighbor classifier and to adjust the classifier thresholds.
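The overlap comparison used to split the traversed boxes into positive and negative samples can be illustrated with a standard intersection-over-union measure. The two thresholds below are illustrative assumptions, since the patent does not specify their values:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def label_samples(windows, target_box, pos_thr=0.6, neg_thr=0.2):
    """Split scanning windows into positive/negative sets by overlap with the target."""
    pos = [w for w in windows if iou(w, target_box) >= pos_thr]
    neg = [w for w in windows if iou(w, target_box) <= neg_thr]
    return pos, neg
```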
Step 4: After the detection module and tracking module initialization is complete, the next frame of the video stream is read, and target tracking and target detection are carried out simultaneously on the new image.
Step 5: The tracking module performs target tracking: similarity matching is carried out near the target box using the obtained weight coefficient matrix to obtain the target response map; the position of maximum response is the new position of the target.
At the same time, to adapt to changes of target scale, the size of the target box is varied at the new position and the maximum of the matching response is sought for each resized box; if a resized box yields a larger maximum response than the original scale, the box of that scale is output as the new tracking result. The maximum response value is taken as the new tracking confidence, which serves as the evaluation criterion of current tracking accuracy.
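The scale adaptation step can be sketched as follows, assuming a hypothetical `score_fn` that returns the maximum matching response for a given box; both the scale set and the callable are illustrative assumptions:

```python
def best_scale(score_fn, box, scales=(0.95, 1.0, 1.05)):
    """Evaluate the matching response for rescaled copies of the target box and
    keep the scale whose peak response is largest.

    score_fn : hypothetical callable returning the maximum response for an
               (x, y, w, h) box.
    Returns (confidence, box); the confidence doubles as the tracking confidence.
    """
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2  # rescale about the box centre
    best = None
    for s in scales:
        nw, nh = w * s, h * s
        cand = (cx - nw / 2, cy - nh / 2, nw, nh)
        r = score_fn(cand)
        if best is None or r > best[0]:
            best = (r, cand)
    return best
```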
Step 6: The detection module performs target detection: each candidate window obtained by traversing the image passes sequentially through the variance classifier, the ensemble classifier, and the nearest-neighbor classifier.
First, the variance of each candidate window is obtained by computing the integral image; if the variance is greater than the set threshold, the window is considered to possibly contain the target and passes from the variance classifier to the ensemble classifier.
The ensemble classifier has N trees, each with M binary decision nodes forming an M-bit binary code, and each code corresponds to a posterior probability. Each candidate window entering the ensemble classifier produces a posterior probability; if the posterior probability is greater than the set threshold, the window passes from the ensemble classifier to the nearest-neighbor classifier.
For each window entering the nearest-neighbor classifier, its similarity to the online model is computed; if the similarity is greater than the set threshold, the window is judged to contain the required target.
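The integral-image variance test of the first cascade stage can be sketched as below; by also tabulating the squared image, the variance of any window is obtained in constant time. Function names are our own:

```python
import numpy as np

def integral_images(gray):
    """Summed-area tables of the image and of its square, zero-padded on top/left."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1))
    ii2 = np.zeros_like(ii)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    ii2[1:, 1:] = np.cumsum(np.cumsum(gray.astype(np.float64) ** 2, axis=0), axis=1)
    return ii, ii2

def window_variance(ii, ii2, x, y, w, h):
    """Variance of the pixels inside window (x, y, w, h) in O(1) using the tables."""
    n = w * h
    s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    s2 = ii2[y + h, x + w] - ii2[y, x + w] - ii2[y + h, x] + ii2[y, x]
    return s2 / n - (s / n) ** 2  # E[g^2] - E[g]^2
```

A window passes the variance classifier when `window_variance(...)` exceeds the stage threshold.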
Step 7: After target tracking and target detection are complete, the tracking result and the detection result are fed into the integration module for fusion.
Step 8: The integration module is responsible for fusing the tracking and detection results. The target box obtained by the tracking module comes from the tracking algorithm, while the box obtained by the detection module comes from the classifiers; the two pieces of information are independent. The integration module computes the similarity between the detection boxes and the tracking box by clustering, screens out the detection box with maximum similarity, and fuses it with the box obtained by the tracking module to obtain the final target position and size.
Step 9: Meanwhile, it is judged whether the tracking confidence of the tracking module is greater than the given threshold. If so, the target position is output and displayed; otherwise the tracking target is judged to be lost, the tracking module is re-initialized with the target box detected by the detection module, and the process continues from step 2.
Step 10: Next, it is judged whether the model update condition is met. If so, the learning module is enabled to perform model learning; otherwise the next frame is read directly and a new round of tracking and detection begins.
Step 11: The learning module re-learns the model. Using the currently computed target position information, the learning module traverses the image again and updates the positive and negative sample sets.
The learning module uses the P-N learning algorithm to re-correct erroneous classifications and improve the performance of the detection module: P-experts check the data that the detection module erroneously classified as negative samples (missed targets), and N-experts check the data erroneously classified as positive samples (false alarms).
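One P-N learning step can be sketched as below. This simplified illustration trusts the current tracker box as the frame's ground truth, which is an assumption of the sketch rather than a statement of the patented method; the `iou_fn` callable and the overlap threshold are likewise illustrative:

```python
def pn_update(detections, tracker_box, pos_set, neg_set, iou_fn, iou_thr=0.5):
    """Sketch of one P-N learning step over (window, detector-decision) pairs.

    P-expert: windows overlapping the tracked target that the detector rejected
    become new positive samples (missed detections).
    N-expert: detector acceptances far from the tracked target become new
    negative samples (false alarms).
    """
    for box, accepted in detections:
        overlap = iou_fn(box, tracker_box)
        if overlap >= iou_thr and not accepted:
            pos_set.append(box)   # P-expert corrects a missed detection
        elif overlap < iou_thr and accepted:
            neg_set.append(box)   # N-expert corrects a false alarm
    return pos_set, neg_set
```

The updated sample sets are then used to retrain the ensemble and nearest-neighbor classifiers.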
Step 12: Finally, it is judged whether to stop tracking. If so, the process ends and exits; otherwise the data set is updated and saved, the next frame is read, and a new round of tracking and detection begins.
The invention proposes a video tracking method integrating a tracking module, a detection module, an integration module, and a learning module. With this method, even if the target is fully occluded and temporarily lost, the detection module can detect the target when it reappears in subsequent video frames, recover the originally lost target, and continue the tracking task; by re-initializing the correlation-filter tracking module from the detector, stable long-term tracking under surveillance scenes is achieved.
Brief description of the drawings
Fig. 1 is a topology diagram of the connection relationships of the system of the present invention;
Fig. 2 is the information judgment flow diagram of the system of the present invention;
Fig. 3-1 shows normal pedestrian tracking under the original monitoring system;
Fig. 3-2 shows the pedestrian fully occluded under the original monitoring system;
Fig. 3-3 shows tracking failure caused by occlusion under the original monitoring system;
Fig. 4-1 shows normal pedestrian tracking under the monitoring system of the present invention;
Fig. 4-2 shows the pedestrian fully occluded under the monitoring system of the present invention;
Fig. 4-3 shows the pedestrian recovered and tracking resumed after occlusion under the monitoring system of the present invention;
Fig. 5-1 is the target response map when the target is correctly tracked;
Fig. 5-2 is the target response map when the target is lost.
Specific embodiment
Embodiment 1:
In a video tracking method that uses only correlation filtering, tracking fails when the tracked target is fully occluded by an obstacle. As Fig. 3-1 shows, the tracking module initially tracks the target. When the target reaches the position of Fig. 3-2, the tracking box has latched onto the occluder and no longer follows the target. When the target reaches the position of Fig. 3-3, the tracking box still hovers near the occluder and tracking has failed; even if the target reappears in subsequent video frames, tracking cannot continue.
Embodiment 2:
After adopting a video tracking method integrating a tracking module, a detection module, an integration module, and a learning module, even if the target is fully occluded and temporarily lost, the present invention can detect the target with the detection module when it reappears in subsequent frames, recover the originally lost target, and continue the tracking task.
Step 1: The system reads in a video frame, selects the tracking target box, and passes the position and size of the target box to the tracking module and the detection module respectively, as shown in Fig. 4-1;
Step 2: Tracking module initialization. The tracking confidence threshold is set to 0.3. Target features are extracted from the given tracking target box; the features may be HOG features (histogram of oriented gradients), Lab color features, or deep features obtained by deep learning. A target response label map is generated by a Gaussian function; the label map is fixed and serves as the template for correlation-filter template matching. The correlation-filter model weight coefficient matrix is then solved; applied to the target patch image, the weight coefficient matrix generates the target response map.
Step 3: Detection module initialization. The first frame image is traversed with target boxes of different scales to obtain a target sample data set; each traversed box is compared with the given tracking target box by degree of overlap, and the samples are divided by overlap into positive samples (high overlap) and negative samples (low overlap). The positive and negative sample sets are each split into a training set and a test set for training the ensemble classifier and the nearest-neighbor classifier;
Step 4: After the detection and tracking modules are initialized, the next frame of the video stream is read to obtain a new image, and target tracking and target detection are carried out;
Step 5: The tracking module performs target tracking: similarity matching is carried out on the target patch using the obtained weight coefficient matrix to obtain the target response map. The maximum of the response map is taken as the new tracking confidence, which serves as the evaluation criterion of current tracking accuracy. When the target is not occluded, the response map is unimodal; as shown in Fig. 5-1, the response values (white pixels) in the red circle are concentrated around a single maximum of 0.42, i.e., a tracking confidence of 0.42. When the target is fully occluded, the response map becomes multimodal; as shown in Fig. 5-2, the response values (white pixels) in the red circle are dispersed and the maximum is only 0.19, i.e., a tracking confidence of 0.19. Extensive experiments show that when the target is not occluded the response map is unimodal and its maximum stays stably above 0.4, whereas when the target is fully occluded the unimodality disappears, the map becomes multimodal, and the maximum drops suddenly below 0.2 until the target reappears and is correctly tracked again, at which point the map recovers its unimodal character and the maximum rises above 0.4. Occlusion of the tracking target can therefore be judged by setting a fixed threshold, which this patent sets to 0.3;
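The fixed-threshold loss test described above can be sketched directly; 0.3 is the patent's threshold, lying between the observed peak values of roughly 0.4 when tracked and roughly 0.2 when occluded:

```python
CONF_THRESHOLD = 0.3  # fixed threshold from the patent

def tracking_confidence(response_map):
    """Peak of the response map, used as the tracking confidence."""
    return float(response_map.max())

def target_lost(response_map, threshold=CONF_THRESHOLD):
    """Declare the target lost when the peak falls below the fixed threshold,
    which triggers re-initialization of the tracker from the detector."""
    return tracking_confidence(response_map) < threshold
```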
Step 6: The detection module performs target detection: each candidate window obtained by traversing the image passes sequentially through the variance classifier, the ensemble classifier, and the nearest-neighbor classifier.
The variance of each candidate window is obtained by computing the integral image; if it exceeds the variance classifier threshold, the window is considered to possibly contain the target and passes from the variance classifier to the ensemble classifier. For a grayscale image, the value at any point (x, y) of the integral image is the sum of the gray values of all points in the rectangle extending from the top-left corner of the image to that point. The variance classifier threshold is computed during initialization; in this example it is 1046.
The ensemble classifier has 10 trees, each with 16 binary decision nodes forming a 16-bit binary code, and each code corresponds to a posterior probability. Each candidate window entering the ensemble classifier produces a posterior probability; if it exceeds the ensemble classifier threshold, the window passes from the ensemble classifier to the nearest-neighbor classifier. The ensemble classifier threshold is obtained by computation; in this example it is 6.
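A TLD-style ensemble ("fern") classifier matching this description — 10 trees of 16 binary nodes, with a posterior probability per 16-bit code — can be sketched as below. The choice of random pixel-pair comparisons as the binary tests is an illustrative assumption, as is the class name:

```python
import numpy as np

class FernEnsemble:
    """Sketch of an ensemble classifier of random ferns over image patches."""

    def __init__(self, n_trees=10, n_nodes=16, patch_shape=(15, 15), seed=0):
        rng = np.random.default_rng(seed)
        self.n_trees, self.n_nodes = n_trees, n_nodes
        # Each binary node compares two random pixels of the patch.
        size = patch_shape[0] * patch_shape[1]
        self.pairs = rng.integers(0, size, size=(n_trees, n_nodes, 2))
        # Per-tree posterior counts indexed by the 16-bit code.
        self.pos = np.zeros((n_trees, 2 ** n_nodes))
        self.neg = np.zeros((n_trees, 2 ** n_nodes))

    def codes(self, patch):
        """One M-bit binary code per tree for the given patch."""
        flat = patch.ravel()
        bits = flat[self.pairs[..., 0]] > flat[self.pairs[..., 1]]
        weights = 1 << np.arange(self.n_nodes)
        return bits @ weights

    def update(self, patch, is_positive):
        c = self.codes(patch)
        table = self.pos if is_positive else self.neg
        table[np.arange(self.n_trees), c] += 1

    def posterior(self, patch):
        """Sum over trees of p/(p+n) for the patch's codes, compared against
        the ensemble threshold (e.g. 6 in this example)."""
        c = self.codes(patch)
        p = self.pos[np.arange(self.n_trees), c]
        n = self.neg[np.arange(self.n_trees), c]
        with np.errstate(divide="ignore", invalid="ignore"):
            post = np.where(p + n > 0, p / (p + n), 0.0)
        return float(post.sum())
```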
For each window entering the nearest-neighbor classifier, the similarity to the online model is computed; if the similarity exceeds the nearest-neighbor classifier threshold, the window is judged to contain the required target. The nearest-neighbor classifier threshold is obtained by computation; in this example it is 0.64.
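The nearest-neighbor similarity test can be sketched with a TLD-style relative similarity built on normalized cross-correlation; this particular similarity definition and the function names are assumptions of the sketch:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches, mapped to [0, 1]."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean() * 0.5 + 0.5)

def nn_similarity(window, pos_templates, neg_templates):
    """Relative similarity of a window to the online model:
    sim = d_pos / (d_pos + d_neg), where d is the best NCC to a template set."""
    d_pos = max(ncc(window, t) for t in pos_templates)
    d_neg = max(ncc(window, t) for t in neg_templates)
    return d_pos / (d_pos + d_neg + 1e-8)
```

A window is accepted when `nn_similarity(...)` exceeds the stage threshold.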
Step 7: After target tracking and target detection are complete, the tracking result and the detection result are fed into the integration module for fusion;
Step 8: The integration module fuses the tracking and detection results: it computes the similarity between the detection boxes and the tracking box by clustering, screens out the detection box with maximum similarity, and fuses it with the box obtained by the tracking module to obtain the final target position and size;
Step 9: Meanwhile, it is judged whether the tracking confidence of the tracking module is greater than the given threshold. As shown in Fig. 5-1, when the target is not occluded, the tracking confidence is 0.42, greater than the set threshold of 0.3, so the target position is output and displayed. As shown in Fig. 5-2, when the target is fully occluded, the tracking confidence is 0.19, less than the threshold of 0.3, so the target is judged lost; the tracking module is re-initialized with the target box detected by the detection module, and the new round of tracking and detection continues from step 2. As shown in Fig. 4-3, when the target reappears, the tracking module is re-initialized with the detector result and tracking of the target is re-established;
Step 10: It is judged whether the model update condition is met; if so, the learning module is enabled to perform model learning, otherwise the next frame is read directly and a new round of tracking and detection begins;
Step 11: The learning module re-learns the model: using the currently computed target position information, it traverses the image again and updates the positive and negative sample sets;
Step 12: It is judged whether to stop tracking; if so, the process ends and exits, otherwise the data set is updated and saved, the next frame is read, and a new round of tracking and detection begins.
Embodiment 3:
Step 1: The system reads in a video frame, selects the tracking target box, and passes the position and size of the target box to the tracking module and the detection module respectively;
Step 2: Tracking module initialization. The tracking confidence threshold is set to 0.3. Target features are extracted from the given tracking target box; the features may be HOG features (histogram of oriented gradients), Lab color features, or deep features obtained by deep learning. Target response label data are obtained by a Gaussian function and serve as the data template for correlation-filter template matching; the correlation-filter model weight coefficient matrix is solved, and applied to the target patch image it generates the target response data.
Step 3: Detection module initialization. The first frame image is traversed with target boxes of different scales to obtain a target sample data set; each traversed box is compared with the given tracking target box by degree of overlap, and the samples are divided by overlap into positive samples (high overlap) and negative samples (low overlap). The positive and negative sample sets are each split into a training set and a test set for training the ensemble classifier and the nearest-neighbor classifier;
Step 4: After the detection and tracking modules are initialized, the next frame of the video stream is read to obtain a new image, and target tracking and target detection are carried out;
Step 5: The tracking module performs target tracking: similarity matching is carried out on the target patch using the obtained weight coefficient matrix to obtain the target response data. Numerical analysis of the response data takes the maximum as the tracking confidence, the evaluation criterion of current tracking accuracy. When the target is not occluded, the response data fit a Gaussian distribution well, with a single peak of 0.56, i.e., a tracking confidence of 0.56. When the target is fully occluded, the response data no longer fit a Gaussian distribution and exhibit multimodality; the data have more than one peak, the highest being 0.15, i.e., a tracking confidence of 0.15. Extensive experiments show that when the target is not occluded the response data are unimodal and the maximum stays stably above 0.4, whereas when the target is fully occluded the unimodality disappears, the data become multimodal, and the maximum drops suddenly below 0.2 until the target reappears and is correctly tracked again, at which point the data recover their unimodal character and the maximum rises above 0.4. Occlusion of the tracking target can therefore be judged by setting a fixed threshold, which this patent sets to 0.3;
Step 6: The detection module performs target detection: each candidate window obtained by traversing the image passes sequentially through the variance classifier, the ensemble classifier, and the nearest-neighbor classifier.
The variance of each candidate window is obtained by computing the integral image; if it exceeds the variance classifier threshold, the window is considered to possibly contain the target and passes from the variance classifier to the ensemble classifier. For a grayscale image, the value at any point (x, y) of the integral image is the sum of the gray values of all points in the rectangle extending from the top-left corner of the image to that point. The variance classifier threshold is computed during initialization; in this example it is 874.
The ensemble classifier has 10 trees, each with 13 binary decision nodes forming a 13-bit binary code, and each code corresponds to a posterior probability. Each candidate window entering the ensemble classifier produces a posterior probability; if it exceeds the ensemble classifier threshold, the window passes from the ensemble classifier to the nearest-neighbor classifier. The ensemble classifier threshold is obtained by computation; in this example it is 8.
For each window entering the nearest-neighbor classifier, the similarity to the online model is computed; if the similarity exceeds the nearest-neighbor classifier threshold, the window is judged to contain the required target. The nearest-neighbor classifier threshold is obtained by computation; in this example it is 0.44.
Step 7: After target tracking and target detection are complete, the tracking result and the detection result are fed into the integration module for fusion;
Step 8: The integration module fuses the tracking and detection results: it computes the similarity between the detection boxes and the tracking box by clustering, screens out the detection box with maximum similarity, and fuses it with the box obtained by the tracking module to obtain the final target position and size;
Step 9: Meanwhile, it is judged whether the tracking confidence of the tracking module is greater than the given threshold. When the target is not occluded, the tracking confidence is 0.56, greater than the set threshold of 0.3, so the target position is output and displayed. When the target is fully occluded, the tracking confidence is 0.15, less than the threshold of 0.3, so the target is judged lost; the tracking module is re-initialized with the target box detected by the detection module, and the new round of tracking and detection continues from step 2. When the target reappears, the tracking module is re-initialized with the detector result and tracking of the target is re-established;
Step 10: It is judged whether the model update condition is met; if so, the learning module is enabled to perform model learning, otherwise the next frame is read directly and a new round of tracking and detection begins;
Step 11: The learning module re-learns the model: using the currently computed target position information, it traverses the image again and updates the positive and negative sample sets;
Step 12: It is judged whether to stop tracking; if so, the process ends and exits, otherwise the data set is updated and saved, the next frame is read, and a new round of tracking and detection begins.
Claims (4)
1. A video tracking method for a monitoring system, characterized in that:
Step 1: the system reads in the video, selects the tracking target box in the first frame, and passes the position and size of the target box to the tracking module and the detection module respectively;
Step 2: tracking module initialization: features are extracted from the given tracking target box, a target response label map is generated with a Gaussian function, and the weight coefficients are solved;
Step 3: detection module initialization: target boxes of different scales are constructed to traverse the first frame, producing a target sample data set; each traversed box is compared with the given tracking target box by degree of overlap, and the sample set is divided into positive and negative samples; the positive and negative sample sets are each split into a training set and a test set, which are used to train the ensemble classifier and the nearest-neighbor classifier and to tune the classifier thresholds;
Step 4: after the detection module and the tracking module have been initialized, the next frame of the video stream is read to obtain a new image, and target tracking and target detection are performed;
Step 5: the tracking module performs target tracking: similarity matching is carried out with the learned weight matrix to obtain the target response map; the maximum of the response map is computed and taken as the new tracking-confidence value, which serves as the criterion for evaluating the accuracy of the current tracking result;
Step 6: the detection module performs target detection: the windows to be detected, obtained by traversing the image, pass in turn through a variance classifier, an ensemble classifier and a nearest-neighbor classifier;
Specifically, the variance of each window to be detected is obtained by computing the integral image; if the variance exceeds the set threshold, the window is considered to contain the target and the variance classifier passes it on to the ensemble classifier;
The ensemble classifier has N trees, each with M binary decision nodes that together form an M-bit binary code, and each code corresponds to a posterior probability; every window entering the ensemble classifier thus yields a posterior probability, and if that probability exceeds the set threshold the window passes through the ensemble classifier to the nearest-neighbor classifier;
The similarity between each window entering the nearest-neighbor classifier and the online model is computed; if the similarity exceeds the set threshold, the window is judged to contain the required target;
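Purely as an illustration of the variance pre-filter above (not part of the claim): with two summed-area tables, one over the pixels and one over their squares, each window's variance is obtained in constant time as E[p²] − E[p]². The helper names are hypothetical:

```python
def integral(img):
    """Summed-area table with a zero top row and left column, so that
    any rectangle sum is four lookups."""
    h, w = len(img), len(img[0])
    S = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            S[i + 1][j + 1] = img[i][j] + S[i][j + 1] + S[i + 1][j] - S[i][j]
    return S

def window_variance(S, S2, x, y, w, h):
    """Variance over a w*h window at (x, y): Var = E[p^2] - E[p]^2,
    computed in O(1) from the two integral images."""
    n = w * h
    s = S[y + h][x + w] - S[y][x + w] - S[y + h][x] + S[y][x]
    s2 = S2[y + h][x + w] - S2[y][x + w] - S2[y + h][x] + S2[y][x]
    return s2 / n - (s / n) ** 2
```

Low-variance windows (flat background, sky, walls) are rejected before the more expensive ensemble and nearest-neighbor stages ever see them, which is what makes the cascade cheap.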
Step 7: after target tracking and target detection are complete, the tracking result and the detection result are sent to the integration module to be combined;
Step 8: the integration module combines the tracking and detection results. It clusters the detection boxes and computes each detection box's similarity to the tracking box, screens the detection boxes and outputs the detection box with the highest similarity, then merges that box with the target box produced by the tracking module to obtain the final target position and size;
Step 9: at the same time, check whether the tracking confidence of the tracking module exceeds the set threshold; if so, the target position is output and displayed; otherwise the tracking target is judged lost, the detection module detects a target box and reinitializes the tracking module, and the method returns to step 2 to continue a new round of tracking and detection;
Step 10: check whether the target satisfies the model-update condition; if so, the learning module is enabled to perform model learning, otherwise the next frame is read directly and a new round of tracking and detection begins;
Step 11: the learning module performs model learning again: using the newly computed target position, it traverses the image once more and updates the positive and negative sample sets;
Step 12: check whether tracking should stop; if so, terminate and exit, otherwise update and save the data sets, read the next frame, and begin a new round of tracking and detection.
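Purely as an illustration of the tracking-module mathematics in steps 2 and 5 (Gaussian response label, weight solving, response-map matching), a single-channel linear correlation filter trained by ridge regression in the Fourier domain. The patent does not specify this exact formulation; it is one standard way to realize those steps:

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Step 2: Gaussian regression target, rolled so its peak sits at (0, 0)."""
    ys = np.arange(h) - h // 2
    xs = np.arange(w) - w // 2
    Y, X = np.meshgrid(ys, xs, indexing="ij")
    g = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def train_filter(x, y, lam=1e-2):
    """Step 2: solve the weights by ridge regression in the Fourier
    domain: W = Y * conj(X) / (X * conj(X) + lambda)."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return Y * np.conj(X) / (X * np.conj(X) + lam)

def respond(w, z):
    """Step 5: correlation response map for a new patch z; the peak
    location gives the target shift and the peak value the confidence."""
    return np.real(np.fft.ifft2(w * np.fft.fft2(z)))
```

Running the trained filter on its own training patch should reproduce the Gaussian label almost exactly, with the peak at the origin; that peak value is what step 5 uses as the tracking confidence.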
2. The video tracking method for a monitoring system according to claim 1, characterized in that: the feature label is an HOG feature label.
3. The video tracking method for a monitoring system according to claim 1, characterized in that: the feature label is a Lab color feature label.
4. The video tracking method for a monitoring system according to claim 1, characterized in that: the scale of the target box is varied so as to compute the target-matching response as the box size changes; if the newly obtained response is larger than the response at the original scale, the target box at that scale is output as the new tracking result.
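Claim 4's scale adaptation can be sketched as a small search over candidate scales, keeping the box whose response is largest; `response_at` stands in for the response evaluation of claim 1 and is a hypothetical callback, not an interface from the patent:

```python
def best_scale(base_box, response_at, scales=(0.95, 1.0, 1.05)):
    """Claim 4: evaluate the matching response at several scaled copies
    of the target box and keep whichever scores highest; if no scaled
    box beats the original (scale 1.0), the original box wins."""
    x, y, w, h = base_box
    candidates = [(x, y, w * s, h * s) for s in scales]
    return max(candidates, key=response_at)
```

Because scale 1.0 is always among the candidates, a scaled box replaces the original only when its response is strictly larger, matching the condition stated in the claim.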
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711305847.0A CN109003290A (en) | 2017-12-11 | 2017-12-11 | A kind of video tracing method of monitoring system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109003290A true CN109003290A (en) | 2018-12-14 |
Family
ID=64574218
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711305847.0A Pending CN109003290A (en) | 2017-12-11 | 2017-12-11 | A kind of video tracing method of monitoring system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109003290A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102945375A (en) * | 2012-11-20 | 2013-02-27 | 天津理工大学 | Multi-view monitoring video behavior detection and recognition method under multiple constraints |
CN103116896A (en) * | 2013-03-07 | 2013-05-22 | 中国科学院光电技术研究所 | Automatic detection tracking method based on visual saliency model |
CN103679186A (en) * | 2012-09-10 | 2014-03-26 | 华为技术有限公司 | Target detecting and tracking method and device |
CN105469430A (en) * | 2015-12-10 | 2016-04-06 | 中国石油大学(华东) | Anti-shielding tracking method of small target in large-scale scene |
CN106204638A (en) * | 2016-06-29 | 2016-12-07 | 西安电子科技大学 | A kind of based on dimension self-adaption with the method for tracking target of taking photo by plane blocking process |
WO2017112982A1 (en) * | 2015-12-29 | 2017-07-06 | L'oreal | Cosmetic compositions with silica aerogel sun protection factor boosters |
CN107392210A (en) * | 2017-07-12 | 2017-11-24 | 中国科学院光电技术研究所 | Target detection tracking method based on TLD algorithm |
CN107452022A (en) * | 2017-07-20 | 2017-12-08 | 西安电子科技大学 | A kind of video target tracking method |
Non-Patent Citations (1)
Title |
---|
YU LEI: "TLD Target Tracking Algorithm Based on Kernelized Correlation Filter", Applied Science and Technology * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111199179A (en) * | 2018-11-20 | 2020-05-26 | 深圳市优必选科技有限公司 | Target object tracking method, terminal device and medium |
CN111199179B (en) * | 2018-11-20 | 2023-12-29 | 深圳市优必选科技有限公司 | Target object tracking method, terminal equipment and medium |
CN111428539A (en) * | 2019-01-09 | 2020-07-17 | 成都通甲优博科技有限责任公司 | Target tracking method and device |
CN110060276A (en) * | 2019-04-18 | 2019-07-26 | 腾讯科技(深圳)有限公司 | Object tracking method, tracking process method, corresponding device, electronic equipment |
US11967089B2 (en) * | 2019-04-18 | 2024-04-23 | Tencent Technology (Shenzhen) Company Limited | Object tracking method, tracking processing method, corresponding apparatus, and electronic device |
WO2020211624A1 (en) * | 2019-04-18 | 2020-10-22 | 腾讯科技(深圳)有限公司 | Object tracking method, tracking processing method, corresponding apparatus and electronic device |
US20210287381A1 (en) * | 2019-04-18 | 2021-09-16 | Tencent Technology (Shenzhen) Company Limited | Object tracking method, tracking processing method, corresponding apparatus, and electronic device |
CN110738149A (en) * | 2019-09-29 | 2020-01-31 | 深圳市优必选科技股份有限公司 | Target tracking method, terminal and storage medium |
CN111127505A (en) * | 2019-11-27 | 2020-05-08 | 天津津航技术物理研究所 | Online learning tracking and engineering realization method based on space planning |
CN111127505B (en) * | 2019-11-27 | 2024-03-26 | 天津津航技术物理研究所 | Online learning tracking and engineering realization method based on space planning |
CN111209869A (en) * | 2020-01-08 | 2020-05-29 | 重庆紫光华山智安科技有限公司 | Target following display method, system, equipment and medium based on video monitoring |
CN111242977B (en) * | 2020-01-09 | 2023-04-25 | 影石创新科技股份有限公司 | Target tracking method of panoramic video, readable storage medium and computer equipment |
CN111242977A (en) * | 2020-01-09 | 2020-06-05 | 影石创新科技股份有限公司 | Target tracking method of panoramic video, readable storage medium and computer equipment |
CN112862863B (en) * | 2021-03-04 | 2023-01-31 | 广东工业大学 | Target tracking and positioning method based on state machine |
CN112862863A (en) * | 2021-03-04 | 2021-05-28 | 广东工业大学 | Target tracking and positioning method based on state machine |
CN115797647A (en) * | 2022-11-14 | 2023-03-14 | 西安电子科技大学广州研究院 | Target stable tracking method under embedded open environment |
CN115797647B (en) * | 2022-11-14 | 2023-09-08 | 西安电子科技大学广州研究院 | Target stable tracking method under embedded open environment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109003290A (en) | A kind of video tracing method of monitoring system | |
CN107016357B (en) | Video pedestrian detection method based on time domain convolutional neural network | |
US9147259B2 (en) | Multi-mode video event indexing | |
CN101739551B (en) | Method and system for identifying moving objects | |
CN102201146B (en) | Active infrared video based fire smoke detection method in zero-illumination environment | |
CN107145851A (en) | Constructions work area dangerous matter sources intelligent identifying system | |
CN105243356B (en) | A kind of method and device that establishing pedestrian detection model and pedestrian detection method | |
CN109299735A (en) | Anti-shelter target tracking based on correlation filtering | |
CN109033934A (en) | A kind of floating on water surface object detecting method based on YOLOv2 network | |
CN113076809A (en) | High-altitude falling object detection method based on visual Transformer | |
CN104915655A (en) | Multi-path monitor video management method and device | |
CN108230254A (en) | A kind of full lane line automatic testing method of the high-speed transit of adaptive scene switching | |
CN106408591A (en) | Anti-blocking target tracking method | |
CN104992453A (en) | Target tracking method under complicated background based on extreme learning machine | |
CN107133569A (en) | The many granularity mask methods of monitor video based on extensive Multi-label learning | |
CN104378582A (en) | Intelligent video analysis system and method based on PTZ video camera cruising | |
CN107424171A (en) | A kind of anti-shelter target tracking based on piecemeal | |
CN103034870B (en) | The boats and ships method for quickly identifying of feature based | |
CN110119726A (en) | A kind of vehicle brand multi-angle recognition methods based on YOLOv3 model | |
CN111832400A (en) | Mask wearing condition monitoring system and method based on probabilistic neural network | |
CN107480607A (en) | A kind of method that standing Face datection positions in intelligent recording and broadcasting system | |
CN103593679A (en) | Visual human-hand tracking method based on online machine learning | |
CN109255326A (en) | A kind of traffic scene smog intelligent detecting method based on multidimensional information Fusion Features | |
CN109903311A (en) | It is a kind of improve TLD mine under video target tracking method | |
CN114202646A (en) | Infrared image smoking detection method and system based on deep learning |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 361008 Fujian province Xiamen Software Park Siming District Road No. 59 102 two expected. Applicant after: ROPT TECHNOLOGY GROUP Co.,Ltd. Address before: 361008 Fujian province Xiamen Software Park Siming District Road No. 59 102 two expected. Applicant before: Roput (Xiamen) Technology Group Co.,Ltd. |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181214 |