CN107146238A - The preferred motion target tracking method of feature based block - Google Patents
- Publication number
- CN107146238A (application CN201710269627.0A, CN201710269627A)
- Authority
- CN
- China
- Prior art keywords
- target area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a visual target tracking method based on preferred feature blocks, comprising: 1. reading the video to be tracked; 2. initialization; 3. partitioning the target into blocks and detecting feature points; 4. screening feature points; 5. screening sub-blocks; 6. computing the response map of the target area; 7. updating the target-area state; 8. updating the target-area classifier; 9. outputting the target-area information; 10. judging whether all frames of the video to be tracked have been processed; 11. ending the moving-target tracking. The present invention achieves good real-time performance and high precision during moving-target tracking, and can track the moving target stably even when the target is occluded or its scale changes drastically.
Description
Technical field
The invention belongs to the field of computer technology, and further relates to a feature-block-preferred moving-target tracking method in the field of computer vision tracking. The present invention can be used for real-time tracking of moving targets such as vehicles, tanks and pedestrians in a video scene.
Background art
Moving-target tracking methods are broadly divided into single-target tracking and multi-target tracking. Single-target tracking continuously tracks a single target of interest in a video scene and obtains the position of the target in each frame of the video; multi-target tracking tracks multiple targets in the video scene. In recent years, the tracking methods with the best performance have mainly been STRUCK, KCF and STC. STRUCK converts the tracking problem into a classification problem and trains samples online to update the object model, but the training samples are not necessarily consistent with the real target, which degrades the classification. The KCF tracker uses kernelized correlation filtering and completes sample training with circulant matrices and the Fourier transform, obtaining an efficient object classifier; however, this method cannot cope with target occlusion, and when the target is occluded, tracking easily fails. The STC tracking algorithm models the spatial relationship between the target and its context, and copes well with target occlusion. Visual target tracking commonly suffers from complex scene illumination, target occlusion, scale variation and target deformation, which keep the precision and robustness of tracking algorithms low.
" one kind is High-Speed Automatic more based on coring correlation filtering for the patent document that Zhejiang Shenghui Lighting Co., Ltd. applies at it
Disclose a kind of based on coring phase in target following " (number of patent application 201410418797.7, publication number CN104200237A)
Close the High-Speed Automatic multi-object tracking method of filtering.This method uses ridge regression training program, by accelerating Fourier transformation to train
Target sample obtains grader, completes the Fast Training to great amount of samples, obtains high performance object classifiers.But, the party
The weak point that method is still present is not account for the problem of target is blocked on successive frame, when target is blocked,
Tracking failure is easily caused, so that can not be long lasting for tracking target.
The patent document "A new video tracking method based on a partition strategy" applied for by Shandong University (application number 201510471102.6, publication number CN105139418A) discloses a new partition-strategy-based video tracking method. The method divides the target area into several small blocks and compares the color histogram of each block with those of the surrounding blocks to judge whether the target is occluded; when occlusion occurs, the weight of the occluded block is reduced in the subsequent particle-filter algorithm, reducing the influence of the occlusion on tracking. However, the remaining weakness of this method is that color histograms are not robust: when the illumination changes, the target is easily lost and tracking fails.
Summary of the invention
The object of the present invention is to overcome the above shortcomings of the prior art by proposing a moving-target tracking method based on preferred feature blocks.

The idea of the present invention is as follows. Feature points are tracked with optical flow to obtain feature-point pairs, which are screened with two criteria to obtain the valid blocks. Kernelized correlation filtering (KCF) yields a response map, the peak-to-sidelobe ratio (PSR) and an object classifier; the response map serves as the weight of the feature points, from which the translation change and the scale change are obtained. The PSR is used to judge the occlusion state of the target, and different strategies are applied according to that state, so that the target is tracked continuously and robustly. This improves the algorithm's ability to cope with occlusion and scale variation and strengthens its robustness and precision.
To achieve these goals, the specific steps of the present invention are as follows:
(1) Read the moving-target video stream to be tracked from the video camera.
(2) Initialize:
(2a) read the first image frame from the video stream to be tracked;
(2b) outline the moving target to be tracked with the mouse on the first frame;
(2c) partition the moving-target area to be tracked into blocks, obtaining the sub-block regions of the target area;
(2d) detect feature points in each sub-block of the target area of the first frame with the FAST feature detector, and take all detected feature points as the initial feature points;
(2e) train target-area samples with the kernelized correlation filtering (KCF) tracker to obtain the classifier of the target area on the first frame, and take it as the initial target-area classifier.
(3) Detect feature points:
(3a) read an image frame from the video stream to be tracked;
(3b) partition the target area on the read-in frame into blocks, obtaining the sub-block regions of the target area on that frame;
(3c) detect each sub-block region of the target area with the FAST feature detector, obtaining the feature points of each sub-block region.
(4) Screen feature points:
(4a) using forward optical-flow tracking, find, among all feature points on the currently read image frame, the feature points in one-to-one correspondence with those on the previously read image frame; each correspondence forms a feature-point pair, and all pairs form a feature-point set stored for the currently read frame;
(4b) compute the forward-backward error of each feature-point pair in the feature-point set according to the following formula:

FB(P_l^k) = || P_l^k(b) - P~_l^k(b) ||

where FB(P_l^k) is the forward-backward error of the k-th feature-point pair of sub-block l in the currently read image frame, P_l^k(c) is the k-th feature point of sub-block l in the currently read frame c, P_l^k(b) is the corresponding feature point of sub-block l in the previously read frame b, P~_l^k(b) is the point obtained by tracking P_l^k(c) backward from frame c to frame b, P_l^k(c) and P_l^k(b) form a feature-point pair, l = 1, 2, ..., M, k = 1, 2, ..., Q_l, M is the total number of sub-blocks into which the target area is divided, and Q_l is the number of feature points in sub-block l;
(4c) compute the normalized correlation of each feature-point pair in the feature-point set according to the following formula:

NCC(P_l^k) = Σ_x [I_b(x_b) - μ_b(l)] · [I_c(x_c) - μ_c(l)] / (n · δ_b(l) · δ_c(l))

where NCC(P_l^k) is the normalized correlation of the k-th feature-point pair of sub-block l on the currently read image frame, the sum runs over the n pixel positions of the image patches around the two points of the pair, Q_l is the number of feature points of sub-block l, μ_b(l) and δ_b(l) are the mean and standard deviation of the pixels of all feature points of sub-block l on the previously read image frame b, and μ_c(l) and δ_c(l) are the mean and standard deviation of the pixels of all feature points of sub-block l on the currently read image frame c;
(4d) compute the mean normalized correlation of all feature points according to the following formula:

μ = Σ_{l=1}^{M} Σ_{k=1}^{Q_l} NCC(P_l^k) / Σ_{l=1}^{M} Q_l

where μ is the mean normalized correlation of all feature points, M is the number of sub-block regions into which the target area is divided, and Q_l is the number of feature-point pairs in sub-block l;
(4e) select from the feature points all those meeting the screening condition, and put them into the reliable feature-point set.
(5) Obtain the preferred feature blocks:
(5a) select from the sub-block regions all sub-blocks meeting the precision condition, and put them into the preferred block set;
(5b) merge all feature points in the preferred block set to obtain the preferred feature-point set.
(6) Compute the response map and peak-to-sidelobe ratio of the target area:
(6a) detect the target area with the target-area classifier to obtain the response map of the target area, and take the response map as the weight of the feature points;
(6b) compute the peak-to-sidelobe ratio of the response map of the target area according to the following formula:

PSR(o) = [max(R(o)) - μ(R(o))] / σ(R(o))

where PSR(o) is the peak-to-sidelobe ratio of the response map of target area o, max(R(o)) is the maximum of the response map R(o), μ(R(o)) is the mean of R(o), and σ(R(o)) is the standard deviation of R(o);
(7) Update the target-area state:
(7a) compute the scale change of the target area according to the following formula:

S(I_c) = Σ_{i≠j} w_i · w_j · (|| p_c^i - p_c^j || / || p_b^i - p_b^j ||) / Σ_{i≠j} w_i · w_j

where S(I_c) is the scale change of the target area on the currently read image frame, p_c^i and p_c^j are feature points of the preferred feature-point set on the currently read frame, p_b^i and p_b^j are the corresponding feature points of the preferred feature-point set on the previously read frame, and w_i and w_j are the weights of the feature points of the preferred feature-point set;
(7b) compute the size of the target area according to the following formulas:

w_c = w_b * S(I_c)
h_c = h_b * S(I_c)

where w_c and h_c are the width and height of the target area on the currently read image frame, and w_b and h_b are the width and height of the target area on the previously read image frame;
(7c) compute the translation change of the target area according to the following formula:

D(I_c) = η · D(I_b) + (1 - η) · Σ_i w_i · (p_c^i - p_b^i) / Σ_i w_i

where D(I_c) is the translation change of the target area on the currently read image frame, η is a transformation factor, generally 0.35, D(I_b) is the translation change of the target area on the previously read image frame, w_i is the weight of the corresponding reliable feature point, p_c^i = (u, v) is the coordinate of the i-th reliable feature point on the currently read frame, u and v being the abscissa and ordinate of the feature point, and p_b^i is the coordinate of the i-th reliable feature point on the previously read frame;
(7d) compute the position of the target area according to the following formula:

C_c = C_b + D(I_c)

where C_c is the position of the target on the currently read image frame and C_b is the position of the target on the previously read image frame;
(7e) update the target-area state on the currently read image frame with the size and position of the target area.
(8) Update the target-area classifier:
(8a) train target-area samples with the kernelized correlation filtering method KCF to obtain the classifier of the target area on the currently read image frame;
(8b) judge whether the target-area classifier meets the update condition; if so, perform (8c), otherwise perform (8d);
(8c) the target area is not occluded: update the target-area classifier, replacing the classifier obtained on the previously read image frame with the classifier of the currently read frame obtained in step (8a);
(8d) the target area is occluded: do not update the object classifier, keeping the classifier obtained on the previously read image frame unchanged.
(9) Output the target-area state information, drawn as a rectangle on the read-in image frame.
(10) Judge whether all frames of the video to be tracked have been processed; if so, perform step (11), otherwise perform step (3).
(11) End the tracking of the moving target.
Compared with the prior art, the present invention has the following advantages.
First, because the present invention detects feature points with the FAST feature detector, it overcomes the weak descriptive power and slow detection of prior-art feature points, so the present invention has good real-time performance during moving-target tracking.
Second, because the present invention partitions the target area into blocks, it overcomes the inability of the prior art to track under target occlusion, so the present invention can track the moving target well when it is occluded.
Third, because the present invention obtains the feature-point weights by detecting the image with the target-area classifier, it overcomes the inaccurate feature-point weights of the prior art, so the present invention obtains a high-precision target size.
Fourth, because the present invention uses forward optical-flow tracking, it overcomes the low feature-point tracking precision of the prior art, so the present invention can obtain the moving-target displacement with high precision.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 compares the center-position error curves of tracking the moving target of the davidce3 video sequence in the simulation experiments of the present invention;
Fig. 3 compares the overlap-rate curves of tracking the moving target of the davidce3 video sequence in the simulation experiments of the present invention;
Fig. 4 compares the center-position error curves of tracking the moving target of the Jogging video sequence in the simulation experiments of the present invention;
Fig. 5 compares the overlap-rate curves of tracking the moving target of the Jogging video sequence in the simulation experiments of the present invention;
Fig. 6 compares the center-position error curves of tracking the moving target of the Car video sequence in the simulation experiments of the present invention;
Fig. 7 compares the overlap-rate curves of tracking the moving target of the Car video sequence in the simulation experiments of the present invention.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings.
With reference to Fig. 1, the specific steps of the present invention are further described.
Step 1: read the moving-target video stream to be tracked from the video camera.
Step 2: initialize.
Read the first image frame from the video stream to be tracked.
Outline the moving target to be tracked with the mouse on the first frame.
Partition the moving-target area to be tracked into blocks, obtaining the sub-block regions of the target area, as follows.
In the first step, following the rule that the target area must divide into an integer number of sub-block regions whose side length is at least 10 pixels, the four borders of the moving-target area to be tracked are expanded simultaneously to obtain the expanded target area.
In the second step, the expanded target area is partitioned into the sub-block regions of the target area; each sub-block region is a square whose side length is at least 10 pixels, and all sub-block regions have the same size.
Detect feature points in each sub-block of the target area of the first frame with the FAST feature detector, and take all detected feature points as the initial feature points.
Train target-area samples with the kernelized correlation filtering (KCF) tracker to obtain the classifier of the target area on the first frame, and take it as the initial target-area classifier.
Step 3: detect feature points.
Read an image frame from the video stream to be tracked.
Partition the target area on the read-in frame into blocks, obtaining the sub-block regions of the target area on that frame, as follows.
In the first step, following the rule that the target area must divide into an integer number of sub-block regions whose side length is at least 10 pixels, the four borders of the target area are expanded simultaneously to obtain the expanded target area.
In the second step, the expanded target area is partitioned into the sub-block regions of the target area; each sub-block region is a square whose side length is at least 10 pixels, and all sub-block regions have the same size.
Detect each sub-block region of the target area with the FAST feature detector, obtaining the feature points of each sub-block region.
Step 4: screen feature points.
Using forward optical-flow tracking, find, among all feature points on the currently read image frame, the feature points in one-to-one correspondence with those on the previously read image frame; each correspondence forms a feature-point pair, and all pairs form a feature-point set stored for the currently read frame.
Compute the forward-backward error of each feature-point pair in the feature-point set according to the following formula:

FB(P_l^k) = || P_l^k(b) - P~_l^k(b) ||

where FB(P_l^k) is the forward-backward error of the k-th feature-point pair of sub-block l in the currently read image frame, P_l^k(c) is the k-th feature point of sub-block l in the currently read frame c, P_l^k(b) is the corresponding feature point of sub-block l in the previously read frame b, P~_l^k(b) is the point obtained by tracking P_l^k(c) backward from frame c to frame b, P_l^k(c) and P_l^k(b) form a feature-point pair, l = 1, 2, ..., M, k = 1, 2, ..., Q_l, M is the total number of sub-blocks into which the target area is divided, and Q_l is the number of feature points in sub-block l.
Compute the normalized correlation of each feature-point pair in the feature-point set according to the following formula:

NCC(P_l^k) = Σ_x [I_b(x_b) - μ_b(l)] · [I_c(x_c) - μ_c(l)] / (n · δ_b(l) · δ_c(l))

where NCC(P_l^k) is the normalized correlation of the k-th feature-point pair of sub-block l on the currently read image frame, the sum runs over the n pixel positions of the image patches around the two points of the pair, Q_l is the number of feature points of sub-block l, μ_b(l) and δ_b(l) are the mean and standard deviation of the pixels of all feature points of sub-block l on the previously read image frame b, and μ_c(l) and δ_c(l) are the mean and standard deviation of the pixels of all feature points of sub-block l on the currently read image frame c.
Compute the mean normalized correlation of all feature points according to the following formula:

μ = Σ_{l=1}^{M} Σ_{k=1}^{Q_l} NCC(P_l^k) / Σ_{l=1}^{M} Q_l

where μ is the mean normalized correlation of all feature points, M is the number of sub-block regions into which the target area is divided, and Q_l is the number of feature-point pairs in sub-block l.
Select from the feature points all those meeting the screening condition, and put them into the reliable feature-point set. The screening condition is:

NCC(P_l^k) > μ and FB(P_l^k) < Q_FB

where Q_FB is the forward-backward error threshold of the feature points, chosen according to the target or camera speed as an integer in 8 to 12: the slower the motion, the smaller the value, and vice versa.
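The screening condition above, applied to per-pair NCC and FB values, is a one-liner in Python; this is a minimal sketch assuming the NCC and FB arrays have already been computed for every pair:

```python
import numpy as np

def screen_points(ncc, fb, q_fb=10.0):
    """Keep a feature-point pair only if its normalized correlation beats
    the mean over all pairs AND its forward-backward error is under Q_FB
    (the text suggests an integer in 8..12, smaller for slower motion)."""
    mu = ncc.mean()                   # mean normalized correlation
    return (ncc > mu) & (fb < q_fb)   # boolean mask of reliable pairs

ncc = np.array([0.9, 0.4, 0.8, 0.7])   # mean is 0.7
fb = np.array([2.0, 1.0, 15.0, 3.0])
mask = screen_points(ncc, fb)          # only the first pair passes both tests
```

Pair 2 has good correlation but an FB error above the threshold, and pair 3 only ties the mean, so both are rejected.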
Step 5: obtain the preferred feature blocks.
Select from the sub-block regions all sub-blocks meeting the precision condition, and put them into the preferred block set.
Merge all feature points in the preferred block set to obtain the preferred feature-point set.
Step 6: compute the response map and peak-to-sidelobe ratio of the target area.
Detect the target area with the target-area classifier to obtain the response map of the target area, and take the response map as the weight of the feature points.
Compute the peak-to-sidelobe ratio of the response map of the target area according to the following formula:

PSR(o) = [max(R(o)) - μ(R(o))] / σ(R(o))

where PSR(o) is the peak-to-sidelobe ratio of the response map of target area o, max(R(o)) is the maximum of the response map R(o), μ(R(o)) is the mean of R(o), and σ(R(o)) is the standard deviation of R(o).
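The PSR computation follows directly from the symbol definitions above; the whole-map form below (some PSR variants exclude a window around the peak, which the text does not mention) is a direct sketch:

```python
import numpy as np

def peak_to_sidelobe_ratio(response):
    """PSR of a correlation response map: (peak - mean) / std.
    A sharp, confident peak gives a high PSR; a drop signals occlusion."""
    r = np.asarray(response, dtype=float)
    return (r.max() - r.mean()) / r.std()

r = np.array([[0.0, 0.0, 0.0],
              [0.0, 9.0, 0.0],
              [0.0, 0.0, 0.0]])       # one clean peak on a flat background
psr = peak_to_sidelobe_ratio(r)
```

This same scalar is reused in Step 8 to gate the classifier update.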
Step 7: update the target-area state.
Compute the scale change of the target area according to the following formula:

S(I_c) = Σ_{i≠j} w_i · w_j · (|| p_c^i - p_c^j || / || p_b^i - p_b^j ||) / Σ_{i≠j} w_i · w_j

where S(I_c) is the scale change of the target area on the currently read image frame, p_c^i and p_c^j are feature points of the preferred feature-point set on the currently read frame, p_b^i and p_b^j are the corresponding feature points of the preferred feature-point set on the previously read frame, and w_i and w_j are the weights of the feature points of the preferred feature-point set.
Compute the size of the target area according to the following formulas:

w_c = w_b * S(I_c)
h_c = h_b * S(I_c)

where w_c and h_c are the width and height of the target area on the currently read image frame, and w_b and h_b are the width and height of the target area on the previously read image frame.
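The scale estimate can be sketched as a weighted mean of pairwise-distance ratios between frames. The pairwise-ratio form is an assumption consistent with the paired symbols (p^i, p^j and w_i, w_j) defined above, not a verbatim reproduction of the patent's formula:

```python
import numpy as np

def scale_change(pts_c, pts_b, w):
    # Weighted mean of pairwise-distance ratios between the current and
    # previous frame; w[i] * w[j] is the response-map weight of the pair.
    num = den = 0.0
    n = len(pts_c)
    for i in range(n):
        for j in range(i + 1, n):
            d_c = np.linalg.norm(pts_c[i] - pts_c[j])
            d_b = np.linalg.norm(pts_b[i] - pts_b[j])
            if d_b > 0:
                num += w[i] * w[j] * (d_c / d_b)
                den += w[i] * w[j]
    return num / den if den else 1.0

pts_b = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
pts_c = 2.0 * pts_b                   # every pairwise distance doubles
s = scale_change(pts_c, pts_b, [1.0, 1.0, 1.0])
new_w, new_h = 40 * s, 30 * s         # w_c = w_b * S(I_c), h_c = h_b * S(I_c)
```

Using distance ratios rather than absolute point positions makes the estimate insensitive to pure translation of the target.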
Compute the translation change of the target area according to the following formula:

D(I_c) = η · D(I_b) + (1 - η) · Σ_i w_i · (p_c^i - p_b^i) / Σ_i w_i

where D(I_c) is the translation change of the target area on the currently read image frame, η is a transformation factor, generally 0.35, D(I_b) is the translation change of the target area on the previously read image frame, w_i is the weight of the corresponding reliable feature point, p_c^i = (u, v) is the coordinate of the i-th reliable feature point on the currently read frame, u and v being the abscissa and ordinate of the feature point, and p_b^i is the coordinate of the i-th reliable feature point on the previously read frame.
Compute the position of the target area according to the following formula:

C_c = C_b + D(I_c)

where C_c is the position of the target on the currently read image frame and C_b is the position of the target on the previously read image frame.
Update the target-area state on the currently read image frame with the size and position of the target area.
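The translation and position update can be sketched as follows. The exact blending of the previous translation D(I_b) with the weighted mean point displacement is an assumption matching the symbols defined above (η, D(I_b), w_i, p^i); the function and argument names are illustrative:

```python
import numpy as np

def update_position(center_b, d_b, pts_c, pts_b, w, eta=0.35):
    """Blend the previous frame's translation d_b with the weighted mean
    displacement of the reliable feature points (smoothing factor eta,
    0.35 in the text), then shift the center: C_c = C_b + D(I_c)."""
    w = np.asarray(w, dtype=float)
    disp = np.asarray(pts_c) - np.asarray(pts_b)          # per-point shift
    mean_disp = (w[:, None] * disp).sum(axis=0) / w.sum() # weighted mean
    d_c = eta * np.asarray(d_b) + (1.0 - eta) * mean_disp
    return np.asarray(center_b) + d_c, d_c

center, d_c = update_position([0.0, 0.0], [0.0, 0.0],
                              np.array([[12.0, 10.0], [22.0, 5.0]]),
                              np.array([[10.0, 10.0], [20.0, 5.0]]),
                              [1.0, 1.0])
```

With every point shifted by (2, 0) and no prior translation, the smoothed update moves the center by (1 - 0.35) * 2 = 1.3 pixels.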
Step 8: update the target-area classifier.
Train target-area samples with the kernelized correlation filtering method KCF to obtain the classifier of the target area on the currently read image frame.
(8b) Judge whether the target-area classifier meets the update condition; if so, perform (8c), otherwise perform (8d). The update condition is:

PSR(o) ≥ T

where T is the update threshold, determined by the target size and its speed in the camera field of view, taken as an integer in 20 to 30: the smaller the target, the smaller the value; the larger the target, the larger the value; the faster the target moves, the larger the value; the slower it moves, the smaller the value.
(8c) The target area is not occluded: update the target-area classifier, replacing the classifier obtained on the previously read image frame with the classifier of the currently read frame.
(8d) The target area is occluded: do not update the object classifier, keeping the classifier obtained on the previously read image frame unchanged.
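The occlusion-aware update gate above reduces to a single comparison against the PSR; a minimal sketch, with illustrative names:

```python
def maybe_update_classifier(current_model, new_model, psr, threshold=25.0):
    """Retrain only when the response map's PSR clears the threshold T
    (the text suggests an integer in 20..30, larger for big or fast
    targets). Below T the target is taken as occluded and the old
    classifier is kept, so occluded frames do not pollute the model."""
    if psr >= threshold:
        return new_model      # target visible: accept the new classifier
    return current_model      # target occluded: keep the old classifier
```

This is what lets the tracker survive the occlusions in the Jogging and davidce3 experiments: during the occluded frames the model is frozen, and tracking resumes from the unpolluted classifier once the target reappears.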
Step 9: output the target-area state information.
Output the target-area state information as a rectangle drawn on the currently read image frame.
Step 10: judge whether all frames have been processed; if so, perform Step 11, otherwise perform Step 3.
Step 11: the moving-target tracking ends.
The effect of the present invention is further described below in conjunction with simulation experiments.
1. Simulation conditions:
The simulation experiments of the present invention were carried out with VS2010 and OpenCV 2.4.4 on a computer with an AMD A10-6700 processor and 4 GB of memory. The videos used in the experiments come from the standard test set Visual Tracker Benchmark Dataset 2013.
2. Simulation content:
The tracker of the present invention (OUR) is simulated against three prior-art target tracking algorithms, the KCF, STRUCK and STC methods, in three groups of experiments.
In the first group, the center-position error and overlap rate of the present invention and the three prior-art methods are simulated on the davidce3 video; the four center-position error curves are shown in Fig. 2, and the four target-area overlap-rate curves in Fig. 3.
In the second group, the center-position error and overlap rate of the present invention and the three prior-art methods are simulated on the Jogging video; the four center-position error curves are shown in Fig. 4, and the four target-area overlap-rate curves in Fig. 5.
In the third group, the center-position error and overlap rate of the present invention and the three prior-art methods are simulated on the CarScale video; the four center-position error curves are shown in Fig. 6, and the four target-area overlap-rate curves in Fig. 7.
3. Analysis of simulation results:
In Figs. 2 to 7, the four curves show the precision of the tracking results of the KCF method, the STRUCK method (marked "+++"), the STC method (marked "______") and the present invention, distinguished by the legend markers of each figure. The abscissas of Figs. 2, 4 and 6 represent the frame number and the ordinates the center-position error; the abscissas of Figs. 3, 5 and 7 represent the frame number and the ordinates the target-area overlap rate.
As can be seen from Fig. 2, the present invention and the KCF method always keep a low target center-position error, the error of the STRUCK method rises and then falls again, and the error of the STC method increases gradually. As can be seen from Fig. 3, the target-area overlap rates of the present invention, the STRUCK method and the KCF method all fall and then rise again, while the overlap rate of the STC method keeps decreasing. The moving target tracked by the present invention and the three existing techniques is the pedestrian in the video. Because the pedestrian is occluded by a utility pole from the 100th to the 160th frame, the STRUCK method fails outright but gradually recovers later, while the STC method fails without recovering. Overall, the tracking precision of the present invention is higher, and it copes better with target occlusion.
As can be seen from Fig. 4, the present invention and the KCF method keep tracking with a low center-position error, while the errors of the STC and STRUCK methods increase gradually. As can be seen from Fig. 5, the target-area overlap rates of the present invention and the KCF method remain high, although the KCF method goes through a dip and recovery, while the overlap rates of the STC and STRUCK methods decrease gradually, down to 0. The moving target tracked by the present invention and the three existing techniques is the person in a white T-shirt in the video. Because the pedestrian is completely occluded by a utility pole at the 48th frame, the STC and STRUCK methods quickly fail and never recover, and the tracking precision of the KCF method also drops temporarily, while the present invention keeps tracking with a low center-position error and a high overlap rate. Overall, the present invention resists target occlusion best and copes best with tracking under occlusion.
As can be seen from Fig. 6, the target center-position errors of the present invention and the three existing techniques all increase gradually, but that of the present invention is always the smallest. As can be seen from Fig. 7, the target-area overlap rates of the present invention and the three existing techniques all decrease gradually, but the overlap rate of the present invention decreases most slowly and stays the highest of the four curves. The moving target tracked by the present invention and the three existing techniques is a vehicle driving from far to near, so the scale variation is obvious: as the camera gets closer the target keeps growing, and after the 150th frame the center-position errors of all four methods increase while the overlap rates decrease. Although the tracking precision of all methods drops, the present invention still performs best of the four algorithms, and copes better with tracking under drastic target scale change.
Claims (5)
1. A moving target tracking method based on preferred feature blocks, comprising the following steps:
(1) read the video stream of the moving target to be tracked from a video camera;
(2) initialization:
(2a) read the first frame image from the video stream of the moving target to be tracked;
(2b) outline the moving target to be tracked with the mouse on the first frame image;
(2c) using the partitioning method, divide the moving target area to be tracked into blocks to obtain the sub-block regions of the moving target area;
(2d) using the feature detector FAST, detect feature points in each sub-block of the moving target area on the first frame image, and take all detected feature points as the initial feature points;
(2e) using the training-sample method of the kernelized correlation filter tracker KCF, train on target area samples to obtain the classifier of the moving target area on the first frame image, and take this target area classifier as the initial moving target area classifier;
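As an illustration of step (2e) (not part of the claim), a minimal single-channel sketch of KCF training follows. `kcf_train`, `sigma`, and `lam` are assumed names; a full tracker would additionally use HOG features, a cosine window, and multi-channel kernels.

```python
import numpy as np

def kcf_train(x, y, sigma=0.5, lam=1e-4):
    """Learn the dual coefficients of a kernelized ridge regression in the
    Fourier domain, as in KCF: alpha_hat = y_hat / (k_hat^{xx} + lambda)."""
    # Gaussian-kernel autocorrelation k^{xx}, computed with FFTs
    # (circular cross-correlation of the patch with itself).
    xf = np.fft.fft2(x)
    cross = np.real(np.fft.ifft2(xf * np.conj(xf)))
    d = 2.0 * np.sum(x ** 2) - 2.0 * cross
    kxx = np.exp(-np.maximum(d, 0) / (sigma ** 2 * x.size))
    return np.fft.fft2(y) / (np.fft.fft2(kxx) + lam)
```

The learned `alphaf`, together with the training patch, is what detection later correlates against the search window to produce the response map of step (6a).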
(3) detect feature points:
(3a) read an arbitrary frame image from the video stream of the moving target to be tracked;
(3b) using the partitioning method, divide the moving target area on the read-in frame image into blocks to obtain the sub-block regions of the moving target area on the read-in frame image;
(3c) using the feature detector FAST, detect each sub-block region of the moving target area to obtain the feature points of each sub-block region;
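The FAST detection of steps (2d) and (3c) rests on a segment test, which can be sketched as follows. This simplified single-pixel check (assumed names, no non-maximum suppression) illustrates the idea; a practical implementation would use a full FAST detector such as OpenCV's.

```python
import numpy as np

# The 16-pixel Bresenham circle of radius 3 used by FAST, as (dx, dy) offsets.
FAST_RING = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
             (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """FAST-9 segment test: (x, y) is a corner if at least n contiguous ring
    pixels are all brighter than center + t, or all darker than center - t."""
    c = float(img[y, x])
    ring = [float(img[y + dy, x + dx]) for dx, dy in FAST_RING]
    for flags in ([p > c + t for p in ring], [p < c - t for p in ring]):
        run, best = 0, 0
        for v in flags + flags:  # list doubled to handle wrap-around runs
            run = run + 1 if v else 0
            best = max(best, run)
        if best >= n:
            return True
    return False
```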
(4) screen feature points:
(4a) using the forward optical-flow tracking method, for every feature point on the currently read-in image frame, find the one-to-one corresponding feature point on the previously read-in image frame; each correspondence forms a feature point pair, and all feature point pairs form a feature point set, stored for the currently read-in image frame;
(4b) according to the following formula, calculate the forward-backward error of every feature point pair in the feature point set:
$$FB\left(P_l^k\right)=\left\|P_{c,l}^k-P_{b,l}^k\right\|$$
where FB(P_l^k) denotes the forward-backward error of the k-th feature point pair of the l-th sub-block region in the currently read-in image frame, P_{c,l}^k denotes the k-th feature point of the l-th sub-block region in the currently read-in image frame c, P_{b,l}^k denotes the corresponding k-th feature point of the l-th sub-block region in the previously read-in image frame b, P_{c,l}^k and P_{b,l}^k form a feature point pair, l = 1, 2, …, M, k = 1, 2, …, Q_l, c denotes the current frame, b denotes the previous frame, M denotes the total number of sub-blocks into which the moving target area is divided, and Q_l denotes the total number of feature points in the l-th sub-block region;
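For illustration (not part of the claim), the per-pair error of step (4b) reduces to a Euclidean norm over paired coordinates, assuming the pairs have already been produced by the forward optical-flow tracking of step (4a); `fb_error` is an illustrative name.

```python
import numpy as np

def fb_error(pts_c, pts_b):
    """Per-pair error FB(P_l^k) = ||P_{c,l}^k - P_{b,l}^k||.
    pts_c, pts_b: (N, 2) arrays of paired feature point coordinates."""
    return np.linalg.norm(np.asarray(pts_c, float) - np.asarray(pts_b, float),
                          axis=1)
```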
(4c) according to the following formula, calculate the normalized error of every feature point pair in the feature point set:
$$NCC\left(P_l^k\right)=\frac{1}{Q_l'-1}\sum_{k=1}^{Q_l'}\frac{\left(P_{b,l}^k-\mu_b(l)\right)\left(P_{c,l}^k-\mu_c(l)\right)}{\delta_b(l)\,\delta_c(l)}$$
where NCC(P_l^k) denotes the normalized error of the k-th feature point pair of the l-th sub-block region on the currently read-in image frame, Q_l' denotes the total number of feature points of the l-th sub-block region, μ_b(l) and δ_b(l) denote the mean and standard deviation of the pixels of all feature points in the l-th sub-block region on the previously read-in image frame b, and μ_c(l) and δ_c(l) denote the mean and standard deviation of the pixels of all feature points in the l-th sub-block region on the currently read-in image frame c;
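A minimal sketch of the normalized error of step (4c), assuming the pixel samples of the paired feature points are supplied as flat arrays; `ncc` is an illustrative name, and the 1/(Q'−1) normalization follows the claim's formula.

```python
import numpy as np

def ncc(p_b, p_c):
    """Normalized correlation of paired pixel samples from frames b and c:
    1/(Q'-1) * sum((p_b - mu_b)(p_c - mu_c)) / (delta_b * delta_c)."""
    p_b = np.asarray(p_b, float)
    p_c = np.asarray(p_c, float)
    q = p_b.size
    num = (p_b - p_b.mean()) * (p_c - p_c.mean())
    # Sample (ddof=1) standard deviations so perfect correlation yields 1.0.
    return num.sum() / ((q - 1) * p_b.std(ddof=1) * p_c.std(ddof=1))
```

Values near 1 indicate that the point's neighborhood looks the same in both frames; values near −1 or 0 flag unreliable pairs.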
(4d) according to the following formula, calculate the normalized error mean of all feature points:
$$\mu=\frac{1}{M\times Q_l'}\sum_{l=1,\,k=1}^{l=M,\,k=Q_l'}NCC\left(P_l^k\right)$$
where μ denotes the normalized error mean of all feature points, M denotes the number of sub-block regions into which the moving target area is divided, and Q_l' denotes the total number of tracked point pairs in the l-th sub-block;
(4e) select from the feature points all feature points that satisfy the screening condition, and place them into the reliable feature point set;
(5) obtain the preferred feature blocks:
(5a) select from the sub-block regions all sub-blocks that satisfy the precision condition, and place them into the preferred block set;
(5b) merge all feature points in the preferred block set to obtain the preferred feature point set;
(6) calculate the response map and peak-to-sidelobe ratio of the target area:
(6a) detect the target area with the target area classifier to obtain the response map of the target area, and use this response map as the weights of the feature points;
(6b) according to the following formula, calculate the peak-to-sidelobe ratio of the response map of the target area:
$$PSR(o)=\frac{\max\left(R(o)\right)-\mu\left(R(o)\right)}{\sigma\left(R(o)\right)}$$
where PSR(o) denotes the peak-to-sidelobe ratio of the response map of target area o, max(R(o)) denotes the maximum of the response map R(o), μ(R(o)) denotes the mean of the response map R(o), and σ(R(o)) denotes the standard deviation of the response map R(o);
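The peak-to-sidelobe ratio of step (6b) reduces to a one-line computation over the response map. This sketch follows the claim's formula, which uses the whole map, rather than PSR variants that mask out a window around the peak; `psr` is an illustrative name.

```python
import numpy as np

def psr(response):
    """Peak-to-sidelobe ratio of a response map: (max - mean) / std."""
    r = np.asarray(response, float)
    return (r.max() - r.mean()) / r.std()
```

A high PSR indicates a sharp, confident correlation peak; a low PSR suggests occlusion, which is what the update condition of claim 5 tests.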
(7) update the target area state:
(7a) according to the following formula, calculate the scale change of the target area:
$$S(I_c)=\frac{1}{\sum\limits_{i,j=1,\,i\neq j}^{Q_g}w_i\,w_j}\;\sum_{i,j=1,\,i\neq j}^{Q_g}w_i\,w_j\,\frac{\left\|A_c^i-A_c^j\right\|}{\left\|A_b^i-A_b^j\right\|}$$
where S(I_c) denotes the scale change of the target area on the currently read-in image frame, A_c^i and A_c^j denote feature points of the preferred feature point set on the currently read-in image frame, A_b^i and A_b^j denote feature points of the preferred feature point set on the previously read-in image frame, and w_i and w_j denote the weights of the corresponding feature points in the preferred feature point set;
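Step (7a) can be sketched as a weighted mean of pairwise inter-point distance ratios between the two frames; the quadratic loop is kept for clarity, and `scale_change` is an illustrative name. Values above 1 indicate the target has grown.

```python
import numpy as np

def scale_change(pts_c, pts_b, w):
    """Weighted mean ratio of pairwise distances: current frame over previous.
    pts_c, pts_b: (N, 2) matched point arrays; w: (N,) per-point weights."""
    pts_c = np.asarray(pts_c, float)
    pts_b = np.asarray(pts_b, float)
    w = np.asarray(w, float)
    num = den = 0.0
    n = len(w)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # the claim's sum excludes i == j
            wij = w[i] * w[j]
            ratio = (np.linalg.norm(pts_c[i] - pts_c[j]) /
                     np.linalg.norm(pts_b[i] - pts_b[j]))
            num += wij * ratio
            den += wij
    return num / den
```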
(7b) according to the following formulas, calculate the size of the target area:
wc=wb*S(Ic)
hc=hb*S(Ic)
where w_c denotes the width of the target area on the currently read-in image frame, h_c denotes the height of the target area on the currently read-in image frame, w_b denotes the width of the target area on the previously read-in image frame, and h_b denotes the height of the target area on the previously read-in image frame;
(7c) according to the following formula, calculate the translation change of the target area:
$$D(I_c)=(1-\eta)\,D(I_b)+\frac{\eta}{\sum\limits_{i=1}^{Q_g}w_i}\sum_{i=1}^{Q_g}w_i\left(Z_c^i-Z_b^i\right)$$
where D(I_c) denotes the translation change of the target area on the currently read-in image frame, η denotes the transformation factor, generally taken as 0.35, D(I_b) denotes the translation change of the target area on the previously read-in image frame, w_i denotes the weight of the corresponding reliable feature point, Z_c^i denotes the coordinates (u, v) of the i-th reliable feature point on the currently read-in image frame, u denotes the abscissa and v the ordinate of the feature point, and Z_b^i denotes the coordinates of the i-th reliable feature point on the previously read-in image frame;
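A sketch of the translation update of step (7c), with the smoothing factor η = 0.35 suggested in the claim; the function name is illustrative.

```python
import numpy as np

def translation_change(d_b, pts_c, pts_b, w, eta=0.35):
    """Exponentially smoothed, weighted mean displacement of the reliable
    feature points: (1 - eta) * D(I_b) + eta * weighted mean of (Z_c - Z_b)."""
    w = np.asarray(w, float)
    disp = np.asarray(pts_c, float) - np.asarray(pts_b, float)
    weighted_mean = (w[:, None] * disp).sum(axis=0) / w.sum()
    return (1.0 - eta) * np.asarray(d_b, float) + eta * weighted_mean
```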
(7d) according to the following formula, calculate the target area position:
Cc=Cb+D(Ic)
where C_c denotes the position of the target on the currently read-in image frame, and C_b denotes the position of the target on the previously read-in image frame;
(7e) using the size and position of the target area, update the target area state on the currently read-in image frame;
(8) update the target area classifier:
(8a) using the kernelized correlation filtering method KCF, train on target area samples to obtain the classifier of the moving target area on the currently read-in image frame;
(8b) judge whether the target area classifier satisfies the update condition; if so, perform (8c), otherwise, perform (8d);
(8c) the target area is not occluded: update the target area classifier, replacing the target area classifier obtained on the previously read-in image frame with the target area classifier of the currently read-in image frame;
(8d) the target area is occluded: do not update the target classifier, keeping the target area classifier obtained on the previously read-in image frame unchanged;
(9) output the target area state information, in the form of a rectangular box, onto the read-in frame image;
(10) judge whether all frames of the video to be tracked have been processed; if so, perform step (11), otherwise, perform step (3);
(11) end the tracking of the moving target.
2. The moving target tracking method based on preferred feature blocks according to claim 1, characterized in that the partitioning method described in step (2c) and step (3b) comprises the following steps:
first step: according to the criterion that the target area can be divided into an integer number of sub-block regions, each with a side length of at least 10 pixels, extend the four borders of the moving target area to be tracked simultaneously, obtaining the extended moving target area;
second step: partition the extended moving target area into blocks to obtain the sub-block regions of the moving target area; each sub-block region is a square area with a side length of at least 10 pixels, and all sub-block regions are of identical size.
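The partitioning method of claim 2 can be sketched as follows; `partition` is an illustrative name, and the border extension is simplified to rounding each dimension up to a multiple of the block side length.

```python
import math

def partition(width, height, side=10):
    """Extend the target area so both dimensions are integer multiples of
    `side` (>= 10 px per the claim), then split it into equal square blocks.
    Returns the extended (width, height) and a list of (x, y, w, h) blocks."""
    w = math.ceil(width / side) * side
    h = math.ceil(height / side) * side
    blocks = [(x, y, side, side)
              for y in range(0, h, side)
              for x in range(0, w, side)]
    return (w, h), blocks
```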
3. The moving target tracking method based on preferred feature blocks according to claim 1, characterized in that the screening condition described in step (4e) is as follows:
NCC(P_l^k) > μ and FB(P_l^k) < Q_FB
where Q_FB denotes the forward-backward error threshold of the feature points.
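The screening condition of claim 3 can be sketched directly, with μ computed as the mean of all NCC values per step (4d); `screen_points` and the threshold name are illustrative.

```python
def screen_points(ncc_vals, fb_vals, fb_threshold):
    """Keep a feature point pair when its NCC exceeds the mean mu of all NCC
    values AND its forward-backward error is below the threshold Q_FB."""
    mu = sum(ncc_vals) / len(ncc_vals)
    return [(n > mu) and (f < fb_threshold)
            for n, f in zip(ncc_vals, fb_vals)]
```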
4. The moving target tracking method based on preferred feature blocks according to claim 1, characterized in that the precision condition described in step (5a) is as follows:
$$Q_l'\geq\frac{1}{2}Q_l$$
where Q_l denotes the total number of feature points in the l-th sub-block region, and Q_l' denotes the total number of feature points in the l-th sub-block region after screening.
5. The moving target tracking method based on preferred feature blocks according to claim 1, characterized in that the update condition described in step (8b) is as follows:
PSR(o)≥T
where T denotes the update threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710269627.0A CN107146238B (en) | 2017-04-24 | 2017-04-24 | Based on the preferred motion target tracking method of characteristic block |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107146238A true CN107146238A (en) | 2017-09-08 |
CN107146238B CN107146238B (en) | 2019-10-11 |
Family
ID=59773981
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710269627.0A Active CN107146238B (en) | 2017-04-24 | 2017-04-24 | Based on the preferred motion target tracking method of characteristic block |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107146238B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107610177A (en) * | 2017-09-29 | 2018-01-19 | 联想(北京)有限公司 | A kind of method and apparatus that characteristic point is determined in synchronous superposition |
CN107920257A (en) * | 2017-12-01 | 2018-04-17 | 北京奇虎科技有限公司 | Video Key point real-time processing method, device and computing device |
CN107967693A (en) * | 2017-12-01 | 2018-04-27 | 北京奇虎科技有限公司 | Video Key point processing method, device, computing device and computer-readable storage medium |
CN109448021A (en) * | 2018-10-16 | 2019-03-08 | 北京理工大学 | A kind of motion target tracking method and system |
CN109584269A (en) * | 2018-10-17 | 2019-04-05 | 龙马智芯(珠海横琴)科技有限公司 | A kind of method for tracking target |
CN110246155A (en) * | 2019-05-17 | 2019-09-17 | 华中科技大学 | One kind being based on the alternate anti-shelter target tracking of model and system |
CN110276784A (en) * | 2019-06-03 | 2019-09-24 | 北京理工大学 | Correlation filtering motion target tracking method based on memory mechanism Yu convolution feature |
CN110619654A (en) * | 2019-08-02 | 2019-12-27 | 北京佳讯飞鸿电气股份有限公司 | Moving target detection and tracking method |
CN111712833A (en) * | 2018-06-13 | 2020-09-25 | 华为技术有限公司 | Method and device for screening local feature points |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104091348A (en) * | 2014-05-19 | 2014-10-08 | 南京工程学院 | Multi-target tracking method integrating obvious characteristics and block division templates |
US9582718B1 (en) * | 2015-06-30 | 2017-02-28 | Disney Enterprises, Inc. | Method and device for multi-target tracking by coupling multiple detection sources |
CN106485732A (en) * | 2016-09-09 | 2017-03-08 | 南京航空航天大学 | A kind of method for tracking target of video sequence |
Non-Patent Citations (3)
Title |
---|
JOÃO F. HENRIQUES et al.: "High-Speed Tracking with Kernelized Correlation Filters", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
YU Liyang et al.: "Improved kernelized correlation filter target tracking algorithm", Journal of Computer Applications * |
XING Yunlong et al.: "Moving target tracking algorithm based on improved kernel correlation filtering", Infrared and Laser Engineering * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107610177A (en) * | 2017-09-29 | 2018-01-19 | 联想(北京)有限公司 | A kind of method and apparatus that characteristic point is determined in synchronous superposition |
CN107920257B (en) * | 2017-12-01 | 2020-07-24 | 北京奇虎科技有限公司 | Video key point real-time processing method and device and computing equipment |
CN107920257A (en) * | 2017-12-01 | 2018-04-17 | 北京奇虎科技有限公司 | Video Key point real-time processing method, device and computing device |
CN107967693A (en) * | 2017-12-01 | 2018-04-27 | 北京奇虎科技有限公司 | Video Key point processing method, device, computing device and computer-readable storage medium |
CN107967693B (en) * | 2017-12-01 | 2021-07-09 | 北京奇虎科技有限公司 | Video key point processing method and device, computing equipment and computer storage medium |
CN111712833B (en) * | 2018-06-13 | 2023-10-27 | 华为技术有限公司 | Method and device for screening local feature points |
CN111712833A (en) * | 2018-06-13 | 2020-09-25 | 华为技术有限公司 | Method and device for screening local feature points |
CN109448021A (en) * | 2018-10-16 | 2019-03-08 | 北京理工大学 | A kind of motion target tracking method and system |
CN109584269A (en) * | 2018-10-17 | 2019-04-05 | 龙马智芯(珠海横琴)科技有限公司 | A kind of method for tracking target |
CN110246155B (en) * | 2019-05-17 | 2021-05-18 | 华中科技大学 | Anti-occlusion target tracking method and system based on model alternation |
CN110246155A (en) * | 2019-05-17 | 2019-09-17 | 华中科技大学 | One kind being based on the alternate anti-shelter target tracking of model and system |
CN110276784A (en) * | 2019-06-03 | 2019-09-24 | 北京理工大学 | Correlation filtering motion target tracking method based on memory mechanism Yu convolution feature |
CN110276784B (en) * | 2019-06-03 | 2021-04-06 | 北京理工大学 | Correlation filtering moving target tracking method based on memory mechanism and convolution characteristics |
CN110619654A (en) * | 2019-08-02 | 2019-12-27 | 北京佳讯飞鸿电气股份有限公司 | Moving target detection and tracking method |
CN110619654B (en) * | 2019-08-02 | 2022-05-13 | 北京佳讯飞鸿电气股份有限公司 | Moving target detection and tracking method |
Also Published As
Publication number | Publication date |
---|---|
CN107146238B (en) | 2019-10-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||