CN111583306A - Anti-occlusion visual target tracking method

Info

Publication number
CN111583306A
CN111583306A
Authority
CN
China
Prior art keywords
target
frame
tracking
filter
cnn
Prior art date
Legal status
Pending
Application number
CN202010398545.8A
Other languages
Chinese (zh)
Inventor
蒋畅江
陈思
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date: 2020-05-12
Filing date: 2020-05-12
Publication date: 2020-08-25
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202010398545.8A
Publication of CN111583306A
Legal status: Pending

Classifications

    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods
    • G06T2207/10016 - Video; Image sequence
    • G06T2207/10024 - Color image
    • G06T2207/20024 - Filtering details
    • G06T2207/20056 - Discrete and fast Fourier transform [DFT, FFT]
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]


Abstract

The invention relates to an anti-occlusion visual target tracking method, belonging to the field of pattern recognition and computer vision. Given an input video and the position parameters of the target to be tracked in the initial frame, the method locates the target region in each remaining image frame. The tracking quality is judged by an APSR strategy combined with the peak-to-sidelobe ratio to determine whether to update the correlation filter, which solves the problem of tracking failure when the tracked target in the video is occluded, deformed or otherwise degraded. By preventing invalid filter updates under target occlusion or deformation, the invention improves the tracking accuracy in actual tracking and is of significance for video target tracking algorithms and the subsequent behavior analysis of specific targets.

Description

Anti-occlusion visual target tracking method
Technical Field
The invention belongs to the field of pattern recognition and computer vision, and relates to an anti-occlusion visual target tracking method that can be applied to intelligent video surveillance, intelligent driving, unmanned aerial vehicle monitoring and similar fields.
Background
Tracking of moving targets in video is one of the popular technologies in the computer field: target behavior in a video can be accurately tracked and located, the theory of target tracking grows ever more complete as algorithms are continually updated, and applications cover intelligent video surveillance, unmanned aerial vehicle reconnaissance, intelligent driving and the like. In brief, target tracking means that the initial position of the target is given in the first frame, and a tracking algorithm then computes the position of the target in every subsequent frame. In theory target tracking can run in real time, but in practical applications the target is easily lost because of factors such as illumination, occlusion and scale change.
In general, target tracking algorithms can be divided into generative methods and discriminative methods according to how the target model is constructed. A generative method extracts features from the target and builds a model, then finds the region most similar to that model in the next frame, which becomes the predicted target region. A discriminative method casts tracking as a binary classification problem and mainly studies how to distinguish the target from the background.
Compared with the generative approach, the discriminative approach adapts better to complex situations such as background change. Discriminative methods have improved continuously in recent years and achieved great technical breakthroughs; researchers keep refining the algorithms in terms of features, scales and so on, making target tracking better suited to complex and changeable environments.
Because target features change continually as the target and the environment change during tracking, and because tracking imposes requirements on both speed and accuracy, target tracking faces the following main difficulties:
1) Appearance change. The appearance of the target changes due to object motion and non-rigid deformation (such as a person jumping or walking), or due to a change of the shooting angle.
2) Scale change. The size of the area the target occupies in the image changes with factors such as the shooting distance.
3) Environment change. Changes in the shooting environment (such as illumination and weather) alter the imaging characteristics of the target image.
4) Fast target motion. Rapid movement of the target causes abrupt changes of its coordinates in the image, which affects the speed and accuracy of the target search.
5) Target occlusion and leaving the field of view. Features are partially or completely lost when the target is occluded by other objects during shooting, and tracking fails and must be re-initiated when the target leaves the field of view.
6) Imaging quality. A low-resolution sensor such as an infrared camera yields a low-resolution target whose edges and features are indistinct; sometimes the contrast between target and environment is small, and focusing problems of the imaging equipment can make feature extraction during tracking difficult.
These factors strongly influence feature extraction and the target search strategy in target tracking; handling their effects accurately and promptly during the actual tracking process is what guarantees the accuracy and robustness of target tracking.
Disclosure of Invention
In view of the above, the present invention provides an anti-occlusion visual target tracking method that addresses the poor accuracy of target tracking in the prior art. The method judges the tracking quality with an APSR strategy, preventing invalid filter updates when the target is occluded or deformed and thereby improving the tracking accuracy in actual tracking.
In order to achieve the purpose, the invention provides the following technical scheme:
An anti-occlusion visual target tracking method, specifically comprising the following steps:
S1: reading the video sequence to be tracked, determining the position information of the target to be tracked from the given initial frame, extracting CN (Color Names), CNN and HOG features, and calculating the weight map $G_t$ of the t-th frame;
S2: training a correlation filter $f_t$ using the extracted CN, CNN and HOG features of the target;
S3: inputting the (t+1)-th frame image, searching for the target position in the (t+1)-th frame, extracting CN, CNN and HOG features, and using the filter $f_t$ obtained in the t-th frame to calculate the target response map $K_{t+1}$ of the (t+1)-th frame;
S4: judging the tracking quality on the target response map $K_{t+1}$ of the (t+1)-th frame with the APSR strategy combined with the peak-to-sidelobe ratio, to determine whether to update the correlation filter;
S5: combining the target position information of all frames to form the target position boxes, generating a video with the target region marked, and outputting it to facilitate subsequent analysis and research.
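By way of illustration only, the S1 to S5 loop can be condensed into the following minimal, runnable Python sketch. It substitutes a single grayscale patch for the CN, CNN and HOG features, omits the weight map $G_t$ and scale handling, and gates the model update with a plain peak-to-sidelobe ratio rather than the exact APSR of the filing; every function and parameter name here is illustrative and not taken from the patent.

```python
import numpy as np

def gaussian_label(shape, sigma=2.0):
    # Gaussian regression label y centred on the patch, used to train f_t
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, y, lam=1e-2):
    # Closed-form ridge-regression correlation filter in the Fourier domain
    X, Y = np.fft.fft2(patch), np.fft.fft2(y)
    return np.conj(X) * Y / (X * np.conj(X) + lam)

def respond(F, patch):
    # K_{t+1} = inverse FFT of (filter spectrum, elementwise, feature spectrum)
    return np.real(np.fft.ifft2(F * np.fft.fft2(patch)))

def psr(K, margin=5):
    # Peak-to-sidelobe ratio as a simple stand-in for the APSR quality score
    py, px = np.unravel_index(np.argmax(K), K.shape)
    mask = np.ones(K.shape, dtype=bool)
    mask[max(0, py - margin):py + margin + 1, max(0, px - margin):px + margin + 1] = False
    side = K[mask]
    return (K.max() - side.mean()) / (side.std() + 1e-8)

def track(frames, box, quality_threshold=6.0, lr=0.02):
    # frames: list of 2-D grayscale arrays; box: (top, left, height, width).
    # Assumes the target stays away from the image borders.
    t, l, h, w = box
    y = gaussian_label((h, w))
    F = train_filter(frames[0][t:t + h, l:l + w], y)     # S1 + S2
    boxes = [box]
    for frame in frames[1:]:
        K = respond(F, frame[t:t + h, l:l + w])          # S3: response map
        dy, dx = np.unravel_index(np.argmax(K), K.shape)
        t, l = t + dy - h // 2, l + dx - w // 2          # peak offset from centre
        boxes.append((t, l, h, w))
        if psr(K) > quality_threshold:                   # S4: gated model update
            F = (1 - lr) * F + lr * train_filter(frame[t:t + h, l:l + w], y)
    return boxes                                         # S5: one box per frame
```

The gated update in the loop is the anti-occlusion mechanism: when the response degrades (for example under occlusion), the filter is frozen instead of being polluted by background pixels.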
Further, in the step S1, the weight map $G_t$ of the t-th frame is formed by superposing a spatial perception weight map Q and a target similarity weight map R, i.e. $G_t = Q + R$;
calculating the spatial perception weight map Q specifically comprises: denoting the spatial perception weight of any pixel $\pi$ as $Q(\pi)$, and computing $Q(\pi)$ for all pixels of the whole target box to generate the final Q;
calculating the target similarity weight map R specifically comprises: taking the image of the t-th frame as the initial frame, with the target position and size given, constructing the required target and background color histograms $\rho_t^{fg}$ and $\rho_t^{bg}$, expressed as:

$$\rho_t^{fg} = (1-\gamma)\,\rho_{1:t-1}^{fg} + \gamma\, h_t^{fg}$$

$$\rho_t^{bg} = (1-\gamma)\,\rho_{1:t-1}^{bg} + \gamma\, h_t^{bg}$$

wherein $h_t^{fg}$ denotes the target color histogram of the t-th frame, $h_t^{bg}$ denotes the background color histogram of the t-th frame, $\gamma$ is a fixed update rate, and $\rho_{1:t-1}^{fg}$ and $\rho_{1:t-1}^{bg}$ are the target and background color histograms accumulated from frame 1 to frame t-1; the target similarity weight map R based on the color histograms is obtained according to the following formula:

$$R(\pi) = \frac{P_t^{fg}\,\rho_t^{fg}(b_\pi)}{P_t^{fg}\,\rho_t^{fg}(b_\pi) + P_t^{bg}\,\rho_t^{bg}(b_\pi)}$$

wherein $P_t^{fg}$ and $P_t^{bg}$ denote the proportions of the pixels of the target region and the background region of the t-th frame to the total pixels of the whole given search area, and $b_\pi$ is the color-histogram bin of pixel $\pi$.
Further, in step S2, the correlation filter $f_t$ is trained using the CN, CNN and HOG features of the target region in the t-th frame; the actual optimization function is:

$$\varepsilon(f) = \sum_{k}\left\| f \star x_k - y_k \right\|^{2} + \left\| \omega \odot f \right\|^{2}$$

wherein $x_k$ are the training samples and $y_k$ the labels satisfying a Gaussian distribution, and $\omega$ is the constraint on the filter; $f_t = f \odot G_t$, where $\odot$ denotes the element-wise (dot) product; when $\varepsilon(f)$ is minimal, the correlation filter $f_t$ needed for the t-th frame is obtained by training.
Further, the step S3 specifically comprises: reading the image of the (t+1)-th frame and finding the target position within a given search area of the (t+1)-th frame, the search area being obtained by center-cropping around the target position of the previous frame; extracting the CN, CNN and HOG features of the search area and denoting them $z_{t+1}$; and combining them with $f_t$ to calculate the final target response map $K_{t+1}$ of the (t+1)-th frame, expressed as:

$$K_{t+1} = \mathcal{F}^{-1}\left(\hat{f}_t \odot \hat{z}_{t+1}\right)$$

wherein $\hat{f}_t$ and $\hat{z}_{t+1}$ denote the Fourier transforms of $f_t$ and $z_{t+1}$, and $\mathcal{F}^{-1}$ denotes the inverse Fourier transform;
the actual target position of the (t+1)-th frame is obtained from the target response map $K_{t+1}$.
Further, the step S4 specifically comprises: judging the tracking quality on the target response map $K_{t+1}$ of the (t+1)-th frame according to the APSR strategy and the peak-to-sidelobe ratio; wherein APSR is defined as follows:

[Equation: APSR definition, reproduced as an image in the original filing]

wherein $K_{max}$ and $K_{min}$ are the maximum and minimum values of $K_{t+1}$; the region $\Phi$ around the peak has mean $\mu_1$ and standard deviation $\sigma_1$; the whole range of $K_{t+1}$ outside $\Phi$ is denoted $\Psi$; $\tau$ is a given parameter ($\tau \approx 1.0$); $w$ and $h$ are the horizontal and vertical coordinates of a pixel in $K_{t+1}$, $K_{w,h}$ is the value of $K_{t+1}$ at coordinate $(w, h)$, and mean is the averaging function;
the value of APSR is calculated to characterize the tracking quality and, combined with the peak-to-sidelobe ratio, determines whether the correlation filter of the t-th frame is updated.
The invention has the following beneficial effects:
(1) The invention can effectively track the target in complex scenes such as occlusion and deformation. For visual target tracking in real scenes, the method trains the correlation filter with a spatial penalty mechanism, obtaining a filter that effectively locks onto the correct target pixels while weakening the influence of background pixels to some extent. The learned filter therefore has a degree of memory: when the tracked target briefly disappears from view, the tracker can confirm that it is no longer present in the current search area and stop updating the training model (avoiding contamination by background pixels), preserving a tracker that carries the correct feature information of the target, so that the target can still be re-acquired and tracked when it reappears in a subsequent frame.
(2) The tracking algorithm of the invention is not time-consuming. Its high tracking speed benefits from the speed advantage of correlation filtering: because the optimization procedure is simple and direct, an ideal filter template can be trained quickly following the idea of loop iteration.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is an overall work flow diagram of the anti-occlusion target tracking method of the present invention;
FIG. 2 is a flowchart of the process of calculating the weight map for the t-th frame.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
Referring to FIG. 1 and FIG. 2, FIG. 1 is the overall workflow diagram of the anti-occlusion target tracking method provided by the present invention. As shown in FIG. 1, the method specifically includes the following steps:
Step 1: reading the video sequence to be tracked, determining the position information of the target to be tracked from the given initial frame, extracting CN, CNN and HOG features, and calculating the weight map of the t-th frame, as shown in FIG. 2, specifically comprising:
defining the weight map of the t-th frame as $G_t$, formed by superposing a spatial perception weight map Q and a target similarity weight map R;
the features to be extracted include the CN, CNN and HOG features; the spatial perception weight map Q is obtained such that the weight of each pixel in the target box decays with its distance from the target center; the spatial perception weight of any pixel $\pi$ is denoted $Q(\pi)$, and $Q(\pi)$ is computed for all pixels of the whole target box to generate the final Q;
calculating the target similarity weight map R: taking the image of the t-th frame as the initial frame, with the target position and size given, the required target and background color histograms $\rho_t^{fg}$ and $\rho_t^{bg}$ can be constructed as follows:

$$\rho_t^{fg} = (1-\gamma)\,\rho_{1:t-1}^{fg} + \gamma\, h_t^{fg}$$

$$\rho_t^{bg} = (1-\gamma)\,\rho_{1:t-1}^{bg} + \gamma\, h_t^{bg}$$

wherein $h_t^{fg}$ denotes the target color histogram of the t-th frame, $h_t^{bg}$ denotes the background color histogram of the t-th frame, $\gamma$ is a fixed update rate, and $\rho_{1:t-1}^{fg}$ and $\rho_{1:t-1}^{bg}$ are the target and background color histograms accumulated from frame 1 to frame t-1; the target similarity weight map R based on the color histograms may then be obtained according to the following formula:

$$R(\pi) = \frac{P_t^{fg}\,\rho_t^{fg}(b_\pi)}{P_t^{fg}\,\rho_t^{fg}(b_\pi) + P_t^{bg}\,\rho_t^{bg}(b_\pi)}$$

wherein $P_t^{fg}$ and $P_t^{bg}$ denote the proportions of the pixels of the target region and the background region of the t-th frame to the total pixels of the whole given search area, and $b_\pi$ is the color-histogram bin of pixel $\pi$;
the above steps yield the spatial perception weight map Q and the target similarity weight map R, and the final weight map $G_t$ of the t-th frame is then calculated by the following formula:

$$G_t = Q + R$$
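As one possible concrete reading of Step 1 under stated assumptions, the sketch below builds a spatial perception map Q that decays with distance from the box centre, a histogram-based similarity map R following the ratio formula above (with a single intensity channel standing in for the colour channels), and superposes them into $G_t$; the Gaussian decay profile of Q, the bin count, the search-area padding and the Laplace smoothing are all assumptions of this illustration, as the filing does not fix them.

```python
import numpy as np

def spatial_weight(h, w, sigma=0.5):
    # Q: spatial perception weight, decaying with distance from the box centre
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = ((ys - h / 2) / (sigma * h)) ** 2 + ((xs - w / 2) / (sigma * w)) ** 2
    return np.exp(-d2)

def similarity_weight(frame, box, bins=16, pad=16):
    # R(pi) = P_fg*rho_fg(b_pi) / (P_fg*rho_fg(b_pi) + P_bg*rho_bg(b_pi))
    t, l, h, w = box
    fg = frame[t:t + h, l:l + w]
    search = frame[max(0, t - pad):t + h + pad, max(0, l - pad):l + w + pad]
    edges = np.linspace(0.0, 256.0, bins + 1)
    c_all = np.histogram(search, bins=edges)[0].astype(float)
    c_fg = np.histogram(fg, bins=edges)[0].astype(float)
    c_bg = np.clip(c_all - c_fg, 0.0, None)       # background = search minus target
    rho_fg = (c_fg + 1.0) / (c_fg.sum() + bins)   # Laplace-smoothed histograms
    rho_bg = (c_bg + 1.0) / (c_bg.sum() + bins)
    p_fg = fg.size / search.size                  # area priors P_fg, P_bg
    p_bg = 1.0 - p_fg
    b = np.clip(np.digitize(fg, edges) - 1, 0, bins - 1)  # bin index per pixel
    num = p_fg * rho_fg[b]
    return num / (num + p_bg * rho_bg[b])

def weight_map(frame, box):
    # G_t = Q + R: superposition of the two weight maps over the target box
    _, _, h, w = box
    return spatial_weight(h, w) + similarity_weight(frame, box)
```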
Step 2: training the correlation filter $f_t$ using the extracted target CN, CNN and HOG features, specifically comprising: extracting the CN, CNN and HOG features of the target area of the t-th frame, with which the correlation filter $f_t$ can be trained; the actual optimization function is as follows:

$$\varepsilon(f) = \sum_{k}\left\| f \star x_k - y_k \right\|^{2} + \left\| \omega \odot f \right\|^{2}$$

in the above formula, $x_k$ are the training samples and $y_k$ the labels satisfying a Gaussian distribution, and $\omega$ is the constraint on the filter; $f_t = f \odot G_t$, wherein $\odot$ denotes the element-wise (dot) product; when $\varepsilon(f)$ is minimal, the correlation filter $f_t$ needed for the t-th frame is obtained by training.
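The solver for the objective $\varepsilon(f)$ is not spelled out in the filing; as a hedged single-channel illustration, the closed-form Fourier-domain ridge regression below plays the role of the minimisation, with the scalar regulariser `lam` standing in for the spatial constraint $\omega$ and the weight map $G_t$ applied to the trained filter as $f_t = f \odot G_t$.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    # y_k: Gaussian-distributed regression labels centred on the target
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h / 2) ** 2 + (xs - w / 2) ** 2) / (2 * sigma ** 2))

def train_masked_filter(x, G, lam=1e-2):
    # Ridge solution in the Fourier domain, then mask the spatial filter by G_t
    h, w = x.shape
    y_hat = np.fft.fft2(gaussian_label(h, w))
    x_hat = np.fft.fft2(x)
    f_hat = np.conj(x_hat) * y_hat / (x_hat * np.conj(x_hat) + lam)
    f = np.real(np.fft.ifft2(f_hat))      # filter back in the spatial domain
    return np.fft.fft2(f * G)             # f_t = f (elementwise) G_t, kept in Fourier form
```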
Step 3: inputting the (t+1)-th frame image, searching for the target position in the (t+1)-th frame, extracting CN, CNN and HOG features, and using the filter $f_t$ obtained in the t-th frame to calculate the response map $K_{t+1}$ of the (t+1)-th frame, specifically comprising:
reading the image of the (t+1)-th frame and finding the target position within a given search area of the (t+1)-th frame, the search area being obtained by center-cropping around the target position of the previous frame; extracting the CN, CNN and HOG features of the search area and denoting them $z_{t+1}$; and combining them with $f_t$ to calculate the final response map $K_{t+1}$ of the (t+1)-th frame:

$$K_{t+1} = \mathcal{F}^{-1}\left(\hat{f}_t \odot \hat{z}_{t+1}\right)$$

wherein $\hat{f}_t$ and $\hat{z}_{t+1}$ are the Fourier transforms of $f_t$ and $z_{t+1}$, $\mathcal{F}^{-1}$ is the inverse Fourier transform, and $K_{t+1}$ is the target response map of the (t+1)-th frame;
the actual target position of the (t+1)-th frame is obtained from the response map $K_{t+1}$.
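Transcribing the response formula directly, the following sketch correlates the stored filter with the new search-area features and reads the target displacement off the peak; here `z` is a single feature channel, an assumption of this illustration (the patent stacks CN, CNN and HOG channels).

```python
import numpy as np

def response_map(f_hat, z):
    # K_{t+1} = inverse FFT of (f_hat_t, elementwise, z_hat_{t+1})
    return np.real(np.fft.ifft2(f_hat * np.fft.fft2(z)))

def peak_displacement(K):
    # Offset of the response peak from the patch centre gives the target motion
    h, w = K.shape
    dy, dx = np.unravel_index(np.argmax(K), K.shape)
    return dy - h // 2, dx - w // 2
```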
Step 4: judging the tracking quality on the target response map $K_{t+1}$ of the (t+1)-th frame with the APSR strategy combined with the peak-to-sidelobe ratio, and determining whether to update the correlation filter, specifically comprising:
for the target response map $K_{t+1}$ of the (t+1)-th frame calculated in Step 3, the tracking quality can be known according to the APSR strategy and the peak-to-sidelobe ratio, where APSR is defined as follows:

[Equation: APSR definition, reproduced as an image in the original filing]

wherein the maximum value of $K_{t+1}$ is $K_{max}$ and its minimum value is $K_{min}$; the region $\Phi$ near the peak has mean $\mu_1$ and standard deviation $\sigma_1$; the whole range of $K_{t+1}$ outside $\Phi$ is denoted $\Psi$; $\tau$ is a given parameter ($\tau \approx 1.0$); $w$ and $h$ are the horizontal and vertical coordinates of a pixel in $K_{t+1}$, $K_{w,h}$ is the value of $K_{t+1}$ at coordinate $(w, h)$, and mean is the averaging function;
the calculated value of APSR can characterize the tracking quality and, combined with the peak-to-sidelobe ratio, can determine whether the correlation filter of the t-th frame is updated.
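Since the exact APSR formula survives only as an equation image in the original filing, the sketch below scores the response map with the classical peak-to-sidelobe ratio computed from the regions named in the text (a peak region $\Phi$ and the remainder $\Psi$) and gates the filter update on it; the window size, threshold and learning rate are assumptions of this illustration.

```python
import numpy as np

def peak_to_sidelobe_ratio(K, margin=5):
    # Split K into a peak region (Phi) and the remaining sidelobe (Psi),
    # then compare the peak with the sidelobe statistics
    py, px = np.unravel_index(np.argmax(K), K.shape)
    mask = np.ones(K.shape, dtype=bool)
    mask[max(0, py - margin):py + margin + 1,
         max(0, px - margin):px + margin + 1] = False
    sidelobe = K[mask]                       # Psi: K outside the peak region
    return (K.max() - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def gated_update(f_hat_old, f_hat_new, K, threshold=6.0, lr=0.02):
    # Update the filter only when tracking quality is high; under occlusion the
    # score drops and the old, uncontaminated filter is kept unchanged
    if peak_to_sidelobe_ratio(K) > threshold:
        return (1.0 - lr) * f_hat_old + lr * f_hat_new
    return f_hat_old
```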
Step 5: combining the target position information of all frames to form the target position boxes, generating a video with the target region marked, and outputting the video, which facilitates subsequent analysis and research.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it; although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solution without departing from its spirit and scope, all of which should be covered by the claims of the present invention.

Claims (5)

1. An anti-occlusion visual target tracking method, characterized by specifically comprising the following steps:
S1: reading the video sequence to be tracked, determining the position information of the target to be tracked from the given initial frame, extracting CN, CNN and HOG features, and calculating the weight map $G_t$ of the t-th frame;
S2: training a correlation filter $f_t$ using the extracted CN, CNN and HOG features of the target;
S3: inputting the (t+1)-th frame image, searching for the target position in the (t+1)-th frame, extracting CN, CNN and HOG features, and using the filter $f_t$ obtained in the t-th frame to calculate the target response map $K_{t+1}$ of the (t+1)-th frame;
S4: judging the tracking quality on the target response map $K_{t+1}$ of the (t+1)-th frame with the APSR strategy combined with the peak-to-sidelobe ratio, to determine whether to update the correlation filter;
S5: combining the target position information of all frames to form the target position boxes and generating a video with the target region marked.
2. The anti-occlusion visual target tracking method according to claim 1, wherein in the step S1, the weight map $G_t$ of the t-th frame is formed by superposing a spatial perception weight map Q and a target similarity weight map R, i.e. $G_t = Q + R$;
calculating the spatial perception weight map Q specifically comprises: denoting the spatial perception weight of any pixel $\pi$ as $Q(\pi)$, and computing $Q(\pi)$ for all pixels of the whole target box to generate the final Q;
calculating the target similarity weight map R specifically comprises: taking the image of the t-th frame as the initial frame, with the target position and size given, constructing the required target and background color histograms $\rho_t^{fg}$ and $\rho_t^{bg}$, expressed as:

$$\rho_t^{fg} = (1-\gamma)\,\rho_{1:t-1}^{fg} + \gamma\, h_t^{fg}$$

$$\rho_t^{bg} = (1-\gamma)\,\rho_{1:t-1}^{bg} + \gamma\, h_t^{bg}$$

wherein $h_t^{fg}$ denotes the target color histogram of the t-th frame, $h_t^{bg}$ denotes the background color histogram of the t-th frame, $\gamma$ is a fixed update rate, and $\rho_{1:t-1}^{fg}$ and $\rho_{1:t-1}^{bg}$ are the target and background color histograms from frame 1 to frame t-1; the target similarity weight map R based on the color histograms is obtained according to the following formula:

$$R(\pi) = \frac{P_t^{fg}\,\rho_t^{fg}(b_\pi)}{P_t^{fg}\,\rho_t^{fg}(b_\pi) + P_t^{bg}\,\rho_t^{bg}(b_\pi)}$$

wherein $P_t^{fg}$ and $P_t^{bg}$ denote the proportions of the pixels of the target region and the background region of the t-th frame to the total pixels of the whole given search area, and $b_\pi$ is the color-histogram bin of pixel $\pi$.
3. The anti-occlusion visual target tracking method according to claim 2, wherein in the step S2, the correlation filter $f_t$ is trained using the CN, CNN and HOG features of the target region of the t-th frame; the actual optimization function is:

$$\varepsilon(f) = \sum_{k}\left\| f \star x_k - y_k \right\|^{2} + \left\| \omega \odot f \right\|^{2}$$

wherein $x_k$ are the training samples and $y_k$ the labels satisfying a Gaussian distribution, and $\omega$ is the constraint on the filter; $f_t = f \odot G_t$, where $\odot$ denotes the element-wise (dot) product; when $\varepsilon(f)$ is minimal, the correlation filter $f_t$ needed for the t-th frame is obtained by training.
4. The anti-occlusion visual target tracking method according to claim 3, wherein the step S3 specifically comprises: reading the image of the (t+1)-th frame and finding the target position within a given search area of the (t+1)-th frame, the search area being obtained by center-cropping around the target position of the previous frame; extracting the CN, CNN and HOG features of the search area and denoting them $z_{t+1}$; and combining them with $f_t$ to calculate the final target response map $K_{t+1}$ of the (t+1)-th frame, expressed as:

$$K_{t+1} = \mathcal{F}^{-1}\left(\hat{f}_t \odot \hat{z}_{t+1}\right)$$

wherein $\hat{f}_t$ and $\hat{z}_{t+1}$ denote the Fourier transforms of $f_t$ and $z_{t+1}$, and $\mathcal{F}^{-1}$ denotes the inverse Fourier transform;
the actual target position of the (t+1)-th frame is obtained from the target response map $K_{t+1}$.
5. The anti-occlusion visual target tracking method according to claim 4, wherein the step S4 specifically comprises: judging the tracking quality on the target response map $K_{t+1}$ of the (t+1)-th frame according to the APSR strategy and the peak-to-sidelobe ratio; wherein APSR is defined as follows:

[Equation: APSR definition, reproduced as an image in the original filing]

wherein $K_{max}$ and $K_{min}$ are the maximum and minimum values of $K_{t+1}$; the region $\Phi$ around the peak has mean $\mu_1$ and standard deviation $\sigma_1$; the whole range of $K_{t+1}$ outside $\Phi$ is denoted $\Psi$; $\tau$ is a given parameter; $w$ and $h$ are the horizontal and vertical coordinates of a pixel in $K_{t+1}$, $K_{w,h}$ is the value of $K_{t+1}$ at coordinate $(w, h)$, and mean is the averaging function;
the value of APSR is calculated to characterize the tracking quality and, combined with the peak-to-sidelobe ratio, determines whether the correlation filter of the t-th frame is updated.
Priority Applications (1)

Application Number: CN202010398545.8A
Priority Date: 2020-05-12
Filing Date: 2020-05-12
Title: Anti-occlusion visual target tracking method

Publications (1)

Publication Number: CN111583306A
Publication Date: 2020-08-25

Family

ID=72112264

Patent Citations (5)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN109785366A * | 2019-01-21 | 2019-05-21 | 中国科学技术大学 | A correlation filtering target tracking method for occlusion
CN110458862A * | 2019-05-22 | 2019-11-15 | 西安邮电大学 | A moving target tracking method under occluded backgrounds
CN110211157A * | 2019-06-04 | 2019-09-06 | 重庆邮电大学 | A long-term target tracking method based on correlation filtering
CN110599519A * | 2019-08-27 | 2019-12-20 | 上海交通大学 | An anti-occlusion correlation filtering tracking method based on a domain search strategy
CN111091583A * | 2019-11-22 | 2020-05-01 | 中国科学技术大学 | A long-term target tracking method

Cited By (3)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN114359337A * | 2021-12-07 | 2022-04-15 | 中国人民解放军国防科技大学 | RGBT visual target tracking method and device, electronic equipment and storage medium
CN116563348A * | 2023-07-06 | 2023-08-08 | 中国科学院国家空间科学中心 | Infrared dim and small target multi-modal tracking method and system based on a dual-feature template
CN116563348B * | 2023-07-06 | 2023-11-14 | 中国科学院国家空间科学中心 | Infrared dim and small target multi-modal tracking method and system based on a dual-feature template


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2020-08-25)