CN109035302A - Target tracking algorithm based on space-time perception correlation filtering


Info

Publication number: CN109035302A
Application number: CN201810831686.7A
Authority: CN (China)
Prior art keywords: target, image, scale, space, feature
Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN109035302B (en)
Inventors: 胡永江, 葛宝义, 李爱华, 褚丽娜, 李永科, 张玉华, 赵月飞
Current assignee: Army Engineering University of PLA
Original assignee: Army Engineering University of PLA
Application filed by Army Engineering University of PLA; priority to CN201810831686.7A
Publication of CN109035302A; application granted; publication of CN109035302B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a correlation-filtering target tracking method based on space-time perception, relating to the field of target tracking in image processing. The method tracks a target by reading a target image, extracting target features, training a filter template, extracting multi-scale features of the target, and determining the target's scale and position in each new image. It optimizes and improves both the structural framework of correlation-filtering target tracking and the target feature extraction scheme, and shows clear advantages over traditional algorithms in tracking robustness and precision. The method strengthens the robustness and precision of correlation-filtering target tracking, solves the model drift caused by linear template updating, improves long-term tracking performance, and preserves the real-time operation of the tracker, constituting an important improvement over the prior art.

Description

Target tracking algorithm based on space-time perception correlation filtering
Technical field
The present invention relates to the technical field of computer vision and image-processing target tracking, and in particular to a target tracking algorithm based on space-time perception correlation filtering.
Background art
With the development of computer vision, target tracking has become a research hotspot and is used in more and more applications. However, factors such as target deformation, occlusion, and background interference still make target tracking a difficult problem. In recent years, correlation-filtering target tracking has attracted wide attention for its high speed and good robustness.
Foreign scholars have studied this field in depth. Bolme et al. first applied correlation filtering to target tracking: they trained a target correlation filter template by minimizing the empirical risk, and completed tracking by judging the target position from the correlation between the target and the filter template. Their algorithm trains the filter on gray-level features and achieves high tracking speed with good robustness. Henriques et al. applied circulant-matrix properties to the filter template training process: the training sampling is equivalent to circularly shifting the target feature matrix, which realizes dense sampling for filter training; this optimizes the training sampling process and further improves tracking robustness. Danelljan, after locating the target with a position correlation filter, added an extra one-dimensional scale correlation filter and built a target scale pool to estimate the optimal target scale; the method is simple, efficient, and estimates the target's scale changes well. Matthias Mueller added background structure perception information to the filter template training, further strengthening the filter's robustness through context. Chao Ma exploited the rich deep semantic information and shallow detail information of convolutional neural networks, determining the target position coarse-to-fine from deep layers to shallow layers, and used the strong feature extraction ability of neural networks to further improve tracking precision and robustness.
The above algorithms innovate on and improve correlation-filtering target tracking, but the following problems remain: the precision and robustness of tracking still need to be improved further; and the filter template is updated linearly in real time, which causes template drift when the target is occluded, makes the tracker depend heavily on the most recent samples, and therefore increases instability in long-term tracking.
Summary of the invention
In view of this, the object of the present invention is to propose a target tracking algorithm based on space-time perception correlation filtering. The method improves the tracking precision and robustness of correlation-filtering trackers and effectively solves their model updating problem.
To achieve the above object, the present invention provides the following technical scheme:
A target tracking algorithm based on space-time perception correlation filtering: the method reads, one by one, multiple sequential images containing the same target, and performs the following steps on them:
Step 1: for the current image, obtain the position and scale of the target in that image;
Step 2: according to the current position and scale, extract the fused HOG and CN features of the target in the current image;
Step 3: train the space-time perception correlation filter template with the currently extracted fused target features;
Step 4: take the next image as the new current image; at the place in the new current image corresponding to the target position in the former current image, and on the basis of the currently obtained scale, extract the multi-scale fused HOG and CN features of the target in the new current image;
Step 5: determine the position and scale of the target in the new current image with the current space-time perception correlation filter template;
Step 6: repeat steps 2 to 5 until target tracking ends. (An illustrative sketch of this loop follows.)
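By way of illustration only (this sketch is not part of the patent disclosure), the six steps can be arranged in a Python tracking loop as follows; the helper names extract_fused_features, train_st_filter, and locate_target are hypothetical stand-ins for steps 2 to 5, stubbed out so the scaffold runs:

import numpy as np

def extract_fused_features(frame, pos, scale):
    # stub for step 2: the real method extracts 31-dim HOG plus 11-dim CN
    # (color) or 1-dim gray features and concatenates them channel-wise
    return np.zeros((scale[1], scale[0], 42))

def train_st_filter(feats, prev_template):
    # stub for step 3: the real method trains the space-time perception
    # correlation filter template from the fused features
    return np.fft.fft2(feats[:, :, 0])

def locate_target(frame, pos, scale, template):
    # stub for steps 4-5: multi-scale features at the previous position,
    # then position/scale from the response of the current template
    return pos, scale

def track(frames, init_pos, init_scale):
    pos, scale, template = init_pos, init_scale, None  # step 1 inputs
    for frame in frames:                               # one image at a time
        if template is not None:
            pos, scale = locate_target(frame, pos, scale, template)
        feats = extract_fused_features(frame, pos, scale)
        template = train_st_filter(feats, template)
        yield pos, scale                               # step 6: repeat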
Optionally, step 1 is carried out as follows:
(101) read the current image and judge whether it is a three-channel color image; if so, set Cl=1, otherwise set Cl=0;
(102) obtain the position L=[x, y] and scale Sz=[W, H] of the target in the current image, where W is the target width and H is the target height in the current image.
Optionally, step 2 is carried out as follows:
(201) at the target position in the current image, take an image patch I of 5 times the target scale;
(202) from I, extract the 31-dimensional HOG feature xh of the target with cell size 4;
(203) if Cl=1, extract the 11-dimensional CN (Color Names) feature xc of the target from I; otherwise extract the 1-dimensional gray feature xg from I;
(204) if Cl=1, obtain the 42-dimensional fused feature xt=cat(3,xh,xc) of the target from the extracted HOG and CN features; otherwise obtain the 32-dimensional fused feature xt=cat(3,xh,xg), where cat() denotes the array concatenation function and the number 3 indicates concatenation along the third dimension of the matrix.
Optionally, step 3 is carried out as follows:
(301) from the extracted fused target feature xt, set the space-time perception correlation filter template by a closed-form Fourier-domain expression (given as a formula image in the original), in which the hat symbol ^ denotes the matrix Fourier transform, T=WH the matrix dimension, μ=1 the Lagrangian regularization factor, λ=14 the regularization factor (both scalars), (·)^H the matrix conjugate transpose, and the desired target value of the filter response is indexed by u={1,2,...,W}, v={1,2,...,H};
(302) compute the intermediate quantity (formula image in the original), where F⁻¹(·) denotes the inverse Fourier transform and the penalty factor is updated as μ=min(βμ, θ) with θ=0.1 and β=10, min() being the minimum function;
(303) from the extracted fused target feature xt, set up the space-time perception correlation filter template again (formula image in the original; both quantities in its legend are scalars);
(304) update the intermediate quantity (formula image in the original);
(305) according to the required accuracy, repeat steps (303)-(304) zero or more times, finally obtaining the trained space-time perception correlation filter template.
Optionally, step 4 is carried out as follows:
(401) at the place in the new current image corresponding to the target position in the former current image, take image patches of n times the current target scale to obtain the target multi-scale image pool Is, where n=5a^m, the exponent vector m spanning N values symmetric about zero (its exact expression is given as a formula image in the original), N being an odd number set as the scale-pool size and a the scale step;
(402) from Is, extract for each scaled target the 31-dimensional HOG feature xh^s with cell size 4;
(403) if Cl=1, extract from Is the 11-dimensional CN feature xc^s of each scaled target; otherwise extract the 1-dimensional gray feature xg^s of each scaled target;
(404) if Cl=1, obtain from Is the 42-dimensional fused feature xt^s=cat(3,xh^s,xc^s) of each scaled target; otherwise obtain the 32-dimensional fused feature xt^s=cat(3,xh^s,xg^s).
Optionally, step 5 is carried out as follows:
(501) with the space-time perception correlation filter template trained in step 3, compute the target response r (formula image in the original), where ()* denotes the complex conjugate of a matrix;
(502) search the target response r for the position rmax of its maximum; the target scale corresponding to the response maximum is the new current target scale, m_N being the scale index corresponding to the response maximum;
(503) according to the response-maximum position rmax, the position of the response maximum is the new current target position L=[x, y].
Compared with the background art, the present invention has the following advantages:
1. The method focuses on strengthening the robustness and precision of correlation-filtering target tracking, solves the model drift caused by linear template updating, improves long-term tracking performance, and preserves the real-time operation of the tracker.
2. By fusing HOG and CN features, the method obtains a more comprehensive characterization of the target and improves tracking robustness under complex backgrounds.
3. By training the filter template with the space-time perception correlation-filtering tracking algorithm, the method avoids the model drift caused by linear template updating and further improves the robustness of the tracking algorithm.
Brief description of the drawings
To explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is the algorithm flow chart of an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
As shown in Fig. 1, a target tracking algorithm based on space-time perception correlation filtering reads the images of an image sequence one by one (the image sequence may be sequential photos captured by a camera, or sequential video frames extracted from a video), and performs the following steps on the images:
Step 1: obtain the position and scale of the target in the current image. Specifically:
(101) read the image and judge whether the current image is a three-channel color image; if so, Cl=1, otherwise Cl=0;
(102) from the image information, obtain the target position L=[x, y] and scale Sz=[W, H] in the current image, where W is the target width and H is the target height in the current image.
Step 2: extract the fused target features. According to the currently obtained target position and scale, extract the fused HOG and CN features of the target (a small example follows this step). Specifically:
(201) at the target position in the current image, take an image patch I of 5 times the target scale;
(202) from I, extract the 31-dimensional HOG feature xh of the target with cell size 4;
(203) if Cl=1, extract the 11-dimensional CN feature xc of the target from I; otherwise extract the 1-dimensional gray feature xg;
(204) if Cl=1, obtain the 42-dimensional fused feature xt=cat(3,xh,xc) from the extracted HOG and CN features; otherwise obtain the 32-dimensional fused feature xt=cat(3,xh,xg). Here cat() denotes the array concatenation function and the number 3 indicates concatenation along the third dimension of the matrix.
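As a minimal illustration of the concatenation in (204) (not from the patent; the array shapes are invented for the example), the MATLAB-style cat(3, ·, ·) corresponds to channel-wise concatenation in numpy:

import numpy as np

H, W = 64, 64                      # feature-map size (illustrative)
is_color = True                    # the Cl flag from step 1
xh = np.zeros((H, W, 31))          # 31-dim HOG feature, cell size 4

if is_color:
    xc = np.zeros((H, W, 11))      # 11-dim Color Names feature
    xt = np.concatenate([xh, xc], axis=2)   # cat(3, xh, xc) -> 42 channels
else:
    xg = np.zeros((H, W, 1))       # 1-dim gray feature
    xt = np.concatenate([xh, xg], axis=2)   # cat(3, xh, xg) -> 32 channels

assert xt.shape[2] == (42 if is_color else 32)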
Step 3: train the space-time perception correlation filter template with the fused target features extracted in step 2 (a structural sketch follows this step). Specifically:
(301) from the extracted fused target feature xt, set the space-time perception correlation filter template by a closed-form Fourier-domain expression (given as a formula image in the original), where the hat symbol ^ denotes the matrix Fourier transform, T=WH the matrix dimension, μ=1 the Lagrangian regularization factor, λ=14 the regularization factor (both scalars), (·)^H the matrix conjugate transpose, and the desired target value of the filter response is indexed by u={1,2,...,W}, v={1,2,...,H};
(302) compute the intermediate quantity (formula image in the original), where F⁻¹(·) denotes the inverse Fourier transform and μ=min(βμ, θ), θ=0.1, β=10, min() being the minimum function;
(303) from the extracted fused target feature xt, set up the space-time perception correlation filter template again (formula image in the original; both quantities in its legend are scalars);
(304) compute the intermediate quantity again, with μ=min(βμ, θ) as above;
(305) repeat steps (303)-(304) according to the required precision: the more repetitions, the higher the precision, finally obtaining the trained space-time perception correlation filter template. In this example, to balance precision and running speed, steps (303)-(304) are executed exactly once.
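The closed-form solutions in (301) and (303) appear only as formula images in the original, so the sketch below shows just the alternating structure of (301)-(305) with the stated constants; the standard single-channel ridge-regression correlation-filter solution is substituted for the missing expressions purely to make the loop runnable, and is an assumption, not the patent's formula:

import numpy as np

def train_template(x, y, n_iters=1, lam=14.0, mu0=1.0, beta=10.0, theta=0.1):
    # x: one training feature channel (H, W); y: desired response map
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    mu = mu0                                        # Lagrangian factor, mu = 1
    # (301): initial template; ridge-regression CF solution substituted here
    W_hat = np.conj(X) * Y / (np.conj(X) * X + lam)
    g = np.fft.ifft2(W_hat).real                    # (302): intermediate quantity
    mu = min(beta * mu, theta)                      # update exactly as the legend states
    for _ in range(n_iters):                        # (305): repeat (303)-(304)
        # (303): re-set the template using the intermediate quantity (substituted form)
        W_hat = (np.conj(X) * Y + mu * np.fft.fft2(g)) / (np.conj(X) * X + lam + mu)
        g = np.fft.ifft2(W_hat).real                # (304): update the intermediate quantity
        mu = min(beta * mu, theta)
    return W_hat                                    # trained filter template

The default n_iters=1 mirrors the example's choice of executing (303)-(304) exactly once to balance precision and speed.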
In traditional correlation-filtering tracking, a template update rate adjusts how fast the filter template is updated iteratively, typically by the linear rule h_t = (1-η)h_{t-1} + η h̃_t with update rate η. A large update rate copes with violent target changes but is less robust to occlusion and background interference, easily causing model drift; a small update rate is more robust to occlusion and recovers tracking better after it, but cannot keep up with drastic target deformation. The template update rate therefore severely restricts tracking performance. Moreover, a fixed-rate update gives the current sample a large weight while the weight of historical target samples shrinks ever smaller: the template information gradually replaces the historical information, the initial target information in particular is lost, and the robustness of tracking gradually decreases over long-term tracking. In addition, the linear update is only an approximate template update adopted to guarantee tracking speed, which further reduces template robustness in long-term tracking. The present method therefore better solves the template updating problem of correlation filtering and further improves tracking precision and robustness.
Step 4: extract the multi-scale fused target features. At the place in the new current image corresponding to the target position in the former current image, and on the basis of the current target scale, extract the multi-scale fused HOG and CN features of the target (see the sketch after this step). Specifically:
(401) at the place in the new current image corresponding to the target position in the former current image, take image patches of n times the current target scale to obtain the target multi-scale image pool Is, where n=5[a^m]; N is the scale-pool size, taken as an odd number (the more scales, the more accurate the scale estimation) and set to 7 in this example; a is the scale step, taken as 1.01 in this example;
"n=5[a^m]" means that the operation is carried out with each element of the vector m in turn, the results together forming the vector n;
"obtain the target multi-scale image pool Is with n times the current target scale" means that one image patch is taken per element of n, used as a multiple, and all the patches together form the image pool Is;
(402) from Is, extract the 31-dimensional HOG features xh^s of the target with cell size 4;
(403) if Cl=1, extract the 11-dimensional CN features xc^s of the target from Is; otherwise extract the 1-dimensional gray features xg^s;
(404) if Cl=1, obtain the 42-dimensional fused features xt^s=cat(3,xh^s,xc^s) from Is; otherwise obtain the 32-dimensional fused features xt^s=cat(3,xh^s,xg^s);
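A small numeric sketch of the scale pool in (401), with N = 7 and a = 1.01 as in this example; the symmetric range of the exponent vector m is an assumption, since its exact expression appears only as a formula image in the original:

import numpy as np

N, a = 7, 1.01                       # scale-pool size (odd) and scale step
m = np.arange(N) - (N - 1) // 2      # assumed exponents: [-3, -2, ..., 3]
n = 5 * a ** m                       # n = 5*a^m, elementwise over m
                                     # (factor 5 matches the 5x patch of step (201))
W, H = 40, 80                        # current target width and height (illustrative)
pool = [(ni * W, ni * H) for ni in n]   # one crop size per element of n
print([(round(w), round(h)) for w, h in pool])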
Step 5: determine the new target position and scale with the space-time perception correlation filter template trained in step 3 (a short sketch follows this step). Specifically:
(501) with the trained template, compute the target response r (formula image in the original), where ()* denotes the complex conjugate of a matrix;
(502) search the target response r for the position rmax of its maximum; the target scale corresponding to the response maximum is the new target scale, m_N being the scale index corresponding to the response maximum;
(503) the position rmax of the response maximum is the new target position L=[x, y];
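To make steps (501)-(503) concrete, a short sketch of the response computation and peak search follows; the patent's response formula appears only as an image, so the standard correlation-filter form r = F⁻¹(conj(Ŵ) · Ẑ) is assumed here:

import numpy as np

def respond(W_hat, z):
    # (501): response map of one scale's feature channel z under template W_hat
    return np.fft.ifft2(np.conj(W_hat) * np.fft.fft2(z)).real

def locate(responses):
    # (502)-(503): the scale whose response peak is globally largest wins,
    # and the peak's coordinates give the new target position
    best = max(range(len(responses)), key=lambda s: responses[s].max())
    y, x = np.unravel_index(np.argmax(responses[best]), responses[best].shape)
    return best, (x, y)

rng = np.random.default_rng(0)
W_hat = np.fft.fft2(rng.standard_normal((32, 32)))
maps = [respond(W_hat, rng.standard_normal((32, 32))) for _ in range(7)]
scale_idx, pos = locate(maps)        # index into the 7-scale pool, and L=[x, y]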
Step 6: repeat steps 2 to 5 until target tracking ends.
At this point, target tracking is complete.
The method tracks the target by reading the target image, extracting target features, training the filter template, extracting multi-scale target features, and determining the target's scale and position in each new image. It optimizes and improves the structural framework and the feature extraction scheme of correlation-filtering target tracking, so that the method has clear advantages over traditional algorithms in tracking robustness and precision. Specifically, the invention fuses HOG and CN features to obtain a more comprehensive characterization of the target, improving tracking robustness under complex backgrounds, and trains the filter template with the space-time perception correlation-filtering tracking algorithm, avoiding the model drift caused by linear template updating and improving the robustness of the tracking algorithm.
In short, the method strengthens the robustness and precision of correlation-filtering target tracking, solves the model drift caused by linear template updating, improves long-term tracking performance, and preserves the real-time operation of the tracker; it is an important improvement over the prior art.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is exemplary only and is not intended to imply that the scope of the disclosure (including the claims) is limited to these examples. Within the spirit and principles of the present invention, any omission, modification, equivalent replacement, or improvement made to the above embodiments shall be included in the protection scope of the present invention.

Claims (6)

1. A target tracking algorithm based on space-time perception correlation filtering, characterized in that multiple sequential images containing the same target are read one by one and the following steps are performed on the images:
Step 1: for the current image, obtain the position and scale of the target in that image;
Step 2: according to the current position and scale, extract the fused HOG and CN features of the target in the current image;
Step 3: train the space-time perception correlation filter template with the currently extracted fused target features;
Step 4: take the next image as the new current image; at the place in the new current image corresponding to the target position in the former current image, and on the basis of the currently obtained scale, extract the multi-scale fused HOG and CN features of the target in the new current image;
Step 5: determine the position and scale of the target in the new current image with the current space-time perception correlation filter template;
Step 6: repeat steps 2 to 5 until target tracking ends.
2. The target tracking algorithm based on space-time perception correlation filtering according to claim 1, characterized in that step 1 is carried out as follows:
(101) read the current image and judge whether it is a three-channel color image; if so, set Cl=1, otherwise set Cl=0;
(102) obtain the position L=[x, y] and scale Sz=[W, H] of the target in the current image, where W is the target width and H is the target height in the current image.
3. The target tracking algorithm based on space-time perception correlation filtering according to claim 2, characterized in that step 2 is carried out as follows:
(201) at the target position in the current image, take an image patch I of 5 times the target scale;
(202) from I, extract the 31-dimensional HOG feature xh of the target with cell size 4;
(203) if Cl=1, extract the 11-dimensional CN feature xc of the target from I; otherwise extract the 1-dimensional gray feature xg of the target from I;
(204) if Cl=1, obtain the 42-dimensional fused feature xt=cat(3,xh,xc) of the target from the extracted HOG and CN features; otherwise obtain the 32-dimensional fused feature xt=cat(3,xh,xg), where cat() denotes the array concatenation function and the number 3 indicates concatenation along the third dimension of the matrix.
4. The target tracking algorithm based on space-time perception correlation filtering according to claim 3, characterized in that step 3 is carried out as follows:
(301) from the extracted fused target feature xt, set the space-time perception correlation filter template by a closed-form Fourier-domain expression (given as a formula image in the original), in which the hat symbol ^ denotes the matrix Fourier transform, T=WH the matrix dimension, μ=1 the Lagrangian regularization factor, λ=14 the regularization factor (both scalars), (·)^H the matrix conjugate transpose, and the desired target value of the filter response is indexed by u={1,2,...,W}, v={1,2,...,H};
(302) compute the intermediate quantity (formula image in the original), where F⁻¹(·) denotes the inverse Fourier transform and μ=min(βμ, θ) with θ=0.1 and β=10, min() being the minimum function;
(303) from the extracted fused target feature xt, set up the space-time perception correlation filter template again (formula image in the original; both quantities in its legend are scalars);
(304) update the intermediate quantity (formula image in the original);
(305) according to the required accuracy, repeat steps (303)-(304) zero or more times, finally obtaining the trained space-time perception correlation filter template.
5. The target tracking algorithm based on space-time perception correlation filtering according to claim 4, characterized in that step 4 is carried out as follows:
(401) at the place in the new current image corresponding to the target position in the former current image, take image patches of n times the current target scale to obtain the target multi-scale image pool Is, where n=5a^m, the exponent vector m spanning N values symmetric about zero (its exact expression is given as a formula image in the original), N being an odd number set as the scale-pool size and a the scale step;
(402) from Is, extract for each scaled target the 31-dimensional HOG feature xh^s with cell size 4;
(403) if Cl=1, extract from Is the 11-dimensional CN feature xc^s of each scaled target; otherwise extract the 1-dimensional gray feature xg^s of each scaled target;
(404) if Cl=1, obtain from Is the 42-dimensional fused feature xt^s=cat(3,xh^s,xc^s) of each scaled target; otherwise obtain the 32-dimensional fused feature xt^s=cat(3,xh^s,xg^s).
6. The target tracking algorithm based on space-time perception correlation filtering according to claim 5, characterized in that step 5 is carried out as follows:
(501) with the space-time perception correlation filter template trained in step 3, compute the target response r (formula image in the original), where ()* denotes the complex conjugate of a matrix;
(502) search the target response r for the position rmax of its maximum; the target scale corresponding to the response maximum is the new current target scale, m_N being the scale index corresponding to the response maximum;
(503) according to the response-maximum position rmax, the position of the response maximum is the new current target position L=[x, y].
CN201810831686.7A 2018-07-26 2018-07-26 Target tracking algorithm based on space-time perception correlation filtering Active CN109035302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810831686.7A CN109035302B (en) 2018-07-26 2018-07-26 Target tracking algorithm based on space-time perception correlation filtering


Publications (2)

Publication Number Publication Date
CN109035302A (en) 2018-12-18
CN109035302B CN109035302B (en) 2021-07-06

Family

ID=64646389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810831686.7A Active CN109035302B (en) 2018-07-26 2018-07-26 Target tracking algorithm based on space-time perception correlation filtering

Country Status (1)

Country Link
CN (1) CN109035302B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1419680A (en) * 2001-01-26 2003-05-21 皇家菲利浦电子有限公司 Spatio-temporal filter unit and image display apparatus comprising such a spatio-temporal filter unit
CN106651913A (en) * 2016-11-29 2017-05-10 开易(北京)科技有限公司 Target tracking method based on correlation filtering and color histogram statistics and ADAS (Advanced Driving Assistance System)
CN107316316A (en) * 2017-05-19 2017-11-03 南京理工大学 The method for tracking target that filtering technique is closed with nuclear phase is adaptively merged based on multiple features
CN107452022A (en) * 2017-07-20 2017-12-08 西安电子科技大学 A kind of video target tracking method
CN107578423A (en) * 2017-09-15 2018-01-12 杭州电子科技大学 The correlation filtering robust tracking method of multiple features hierarchical fusion

Also Published As

Publication number Publication date
CN109035302B (en) 2021-07-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant