CN112734806B - Visual target tracking method and device based on peak sharp guidance confidence - Google Patents

Visual target tracking method and device based on peak sharp guidance confidence

Info

Publication number
CN112734806B
CN112734806B
Authority
CN
China
Prior art keywords
target
max
res
peak
response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110046537.1A
Other languages
Chinese (zh)
Other versions
CN112734806A (en)
Inventor
王慧斌
卢颖
陈哲
沈洁
刘海韵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN202110046537.1A
Publication of CN112734806A
Application granted
Publication of CN112734806B
Legal status: Active

Classifications

    • G06T 7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10016: Image acquisition modality; video; image sequence
    • G06T 2207/20024: Special algorithmic details; filtering details
    • G06T 2207/20076: Special algorithmic details; probabilistic image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual target tracking method and device based on peak-sharpness-guided confidence. A multi-feature fusion representation of the target is introduced into the background-aware correlation filtering method BACF, and adaptive weights are designed from peak sharpness, which solves the difficulty of accurately representing targets in underwater images. A high-confidence model updating strategy guided by peak sharpness is further provided, which effectively improves the tracking robustness of the method when the target is occluded or partially leaves the field of view and copes with the variable appearance of underwater targets. While maintaining a good running speed, the improved method achieves markedly higher precision and success rate than the conventional BACF method, and is particularly suitable for underwater sequences with low resolution and illumination change, and for underwater targets that are partially out of view or occluded.

Description

Visual target tracking method and device based on peak sharp guidance confidence
Technical Field
The invention belongs to the technical field of target tracking, and relates to a visual target tracking method and device based on peak-sharpness-guided confidence.
Background
Underwater target tracking has important application value in fields such as marine resource surveying, underwater engineering operations, and naval battlefield monitoring. However, the complicated and changeable underwater imaging environment and the variable form of targets make underwater tracking research particularly challenging. Unlike the atmosphere or land environments, the water body strongly absorbs and scatters the light by which targets are imaged, causing intensity attenuation and color distortion of the imaging information; in addition, the veiling-light noise formed by scattered light makes it difficult to represent the target model accurately. Meanwhile, under the influence of the buoyancy of the water body, the motion of an underwater target has a high degree of freedom; for example, fish can move freely in the three-dimensional space of the water body, and changes of form and occlusion of the target region easily cause dynamic changes of the target model.
To address these problems, traditional technical strategies mainly adopt and improve methods such as Kalman filtering and particle filtering. However, such methods model only the target, ignore the large amount of background information in the image, and cannot cope well with underwater illumination change, field-of-view edge distortion caused by underwater shooting, complex color channels, and similar problems. In recent years, underwater target tracking based on correlation filtering has become a research hotspot in the field. Methods improved from KCF (Kernelized Correlation Filter) can effectively utilize the background information in the image and construct training samples with a circulant matrix and a cosine window, reducing computational complexity; however, the resulting boundary effect causes poor tracking when the underwater target moves fast or the underwater camera shakes rapidly.
The Background-Aware Correlation Filter (BACF) method effectively alleviates the boundary effect: it adopts the HOG (Histogram of Oriented Gradients) feature, enlarges the detected image block, and uses a filter of smaller size to increase the proportion of real samples; through ADMM (Alternating Direction Method of Multipliers) iterative optimization it achieves considerable precision and speed as well. However, it has two shortcomings: (1) when an underwater target with a high degree of freedom moves or rotates rapidly, the gradient-orientation density of the HOG feature becomes blurred and the target is difficult to represent; (2) when underwater target information is missing, a fixed model updating strategy easily mixes background information into the model, so that tracking drift occurs in subsequent frames.
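For context, BACF trains the filter as a ridge regression over all circular shifts of an enlarged search window, using a binary cropping operator so that most shifted samples are real background patches rather than wrap-around artifacts. A sketch of the objective in commonly used BACF notation (the symbols are illustrative and may differ from the original formulation):

```latex
\min_{h}\;\frac{1}{2}\sum_{j=1}^{T}\Big\|\,y(j)-\sum_{k=1}^{K}h_{k}^{\top}P\,x_{k}[\Delta\tau_{j}]\,\Big\|_{2}^{2}
\;+\;\frac{\lambda}{2}\sum_{k=1}^{K}\|h_{k}\|_{2}^{2}
```

Here x_k[Δτ_j] is the j-th circular shift of the k-th feature channel, y is the desired Gaussian-shaped response, λ is a regularization weight, and P is the D×T binary matrix that crops the D central (target-sized) elements of each shift; the problem is split with an auxiliary variable and solved iteratively by ADMM.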
Disclosure of Invention
Purpose of the invention: the invention aims to provide a visual target tracking method and device based on peak-sharpness-guided confidence, which overcome the shortcomings of the prior art in underwater environments, solve the difficulty of accurately representing targets in underwater images, and improve the robustness of underwater target tracking when the target partially leaves the field of view or is partially occluded.
Technical scheme: to achieve the above purpose, the invention adopts a visual target tracking method based on peak-sharpness-guided confidence. The method extracts the features of a tracked target in a video image, convolves the features with a correlation filter model to obtain a response map, takes the position of the peak of the response map as the predicted position, and updates the correlation filter model with adaptive weights during tracking, thereby completing target tracking over a video image sequence; wherein:
the target features extracted from each frame image are the gray-level feature, the HOG (Histogram of Oriented Gradients) feature, and the CN (Color Names) feature; adaptive weights are calculated from the peak sharpness of the response map corresponding to each feature, and the weighted response maps are summed to obtain a total response map;
in the model updating process, the correlation filter model is updated based on a DAPCE (Dynamic Average Peak-to-Correlation Energy) strategy, wherein the DAPCE strategy judges the confidence of the current tracking result from the peak sharpness of the response map of the current frame; if the confidence is relatively high, the correlation filter model is updated with the adaptive weight, otherwise it is not updated.
Further, the gray-level feature, HOG feature, and CN feature of the target sample block are extracted from each frame image, the response of convolving each extracted feature with the correlation filter is calculated, and an adaptive feature weight is designed for each feature according to the sharpness of its response peak; the specific calculation formulas are as follows:
diff_HOG = (Res_HOG)_max − (Res_HOG)_min
diff_CN = (Res_CN)_max − (Res_CN)_min
diff_GRAY = (Res_GRAY)_max − (Res_GRAY)_min
the assigned weights are:
ω_HOG = diff_HOG / (diff_HOG + diff_CN + diff_GRAY)
ω_CN = diff_CN / (diff_HOG + diff_CN + diff_GRAY)
ω_GRAY = diff_GRAY / (diff_HOG + diff_CN + diff_GRAY)
the final weighted total response is:
Res = ω_HOG · Res_HOG + ω_CN · Res_CN + ω_GRAY · Res_GRAY
wherein Res_HOG, Res_CN, and Res_GRAY are the result matrices of convolving the HOG feature, CN feature, and gray-level feature of the target sample block with the correlation filter, and (·)_max, (·)_min denote the maximum and minimum element values of a matrix, respectively.
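As an illustration, a minimal NumPy sketch of this peak-sharpness-weighted fusion, assuming the three per-feature response maps have already been computed and that each weight is the feature's diff value normalized by the sum of all three, as in the formulas above:

```python
import numpy as np

def fuse_responses(res_hog, res_cn, res_gray):
    """Peak-sharpness-weighted fusion of per-feature response maps.

    Each argument is a 2-D response map obtained by convolving that
    feature's representation with the correlation filter; the weight of
    a feature is its normalized peak-to-trough range (diff).
    """
    maps = (res_hog, res_cn, res_gray)
    diffs = np.array([m.max() - m.min() for m in maps])  # diff_HOG, diff_CN, diff_GRAY
    weights = diffs / diffs.sum()                        # omega_HOG, omega_CN, omega_GRAY
    total = sum(w * m for w, m in zip(weights, maps))    # weighted total response Res
    return total, weights
```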
Further, the method for updating the correlation filter model based on the DAPCE strategy is as follows: the maximum and minimum values of the total response map of the previous frame are denoted F_max and F_min respectively, and the response value at position (w, h) is denoted F_(w,h); APCE is then calculated as:
APCE = |F_max − F_min|² / mean_(w,h)[(F_(w,h) − F_min)²]
The historical means of APCE and F_max are recorded as APCE_mean_old and F_max_mean_old, respectively. When APCE > APCE_mean_old × β₁ and F_max > F_max_mean_old × β₂ are both satisfied (β₁ and β₂ are constants, β₁, β₂ ∈ [0,1]), the correlation filter model is updated as follows:
η_f = η / (1 + e^(−ε·APCE))
ĥ_model^(f) = (1 − η_f) · ĥ_model^(f−1) + η_f · ĥ^(f)
wherein f is the serial number of the current frame, ĥ^(f) and ĥ_model^(f−1) are the correlation filter of the current frame and the history correlation filter model of the previous frame respectively, ĥ_model^(f) is the updated history correlation filter model, η is the learning rate, η ∈ [0,1], and ε is a constant satisfying ε·APCE ∈ [−10,10]; the historical means of APCE and F_max are updated simultaneously:
APCE_mean_new = ((ψ − 1) · APCE_mean_old + APCE) / ψ
F_max_mean_new = ((ψ − 1) · F_max_mean_old + F_max) / ψ
where ψ is the number of history frames used to calculate the historical mean.
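A compact NumPy sketch of this high-confidence update rule follows. The values of β₁, β₂, η, and ψ are illustrative placeholders (the embodiment fixes only ε = 0.025), and the sigmoid modulation of the learning rate is an assumption chosen to be consistent with the stated constraint ε·APCE ∈ [−10,10]:

```python
import numpy as np

def apce(response):
    """Average Peak-to-Correlation Energy of a 2-D response map."""
    f_max, f_min = response.max(), response.min()
    return abs(f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

def dapce_update(h_new, h_model, response, apce_mean, fmax_mean,
                 beta1=0.8, beta2=0.8, eta=0.02, eps=0.025, psi=10):
    """DAPCE strategy: update the filter model only when both APCE and
    the response peak exceed fractions of their historical means."""
    a, f_max = apce(response), response.max()
    if a > apce_mean * beta1 and f_max > fmax_mean * beta2:
        eta_f = eta / (1.0 + np.exp(-eps * a))  # APCE-modulated learning rate (assumed form)
        h_model = (1.0 - eta_f) * h_model + eta_f * h_new
    # incremental running means over the last psi frames
    apce_mean = ((psi - 1) * apce_mean + a) / psi
    fmax_mean = ((psi - 1) * fmax_mean + f_max) / psi
    return h_model, apce_mean, fmax_mean
```

Whether the historical means are refreshed on every frame or only on confident frames is not fully specified in the text; the sketch refreshes them every frame.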
Based on the same inventive concept, the invention provides a visual target tracking device based on peak-sharpness-guided confidence, which comprises:
a target feature extraction module for extracting the gray-level feature, HOG feature, and CN feature of the tracked target in the video image, and convolving each feature with the correlation filter model to obtain a response map;
a target position prediction module for calculating adaptive weights according to the peak sharpness of the response map corresponding to each feature, weighting the response maps to obtain a total response map, and taking the position of the peak of the total response map as the predicted position;
a filter model updating module for updating the correlation filter model with the adaptive weight during tracking, specifically based on the DAPCE strategy, wherein the DAPCE strategy judges the confidence of the current tracking result from the peak sharpness of the response map of the current frame; if the confidence is relatively high, the correlation filter model is updated with the adaptive weight, otherwise it is not updated;
and a target tracking module for calling the target feature extraction module, the target position prediction module, and the filter model updating module to process each frame of the video image sequence, completing target tracking over the sequence.
Based on the same inventive concept, the invention further provides a visual target tracking device based on peak-sharpness-guided confidence, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the above visual target tracking method based on peak-sharpness-guided confidence.
Beneficial effects: aiming at the shortcomings of the BACF method in underwater environments, the invention makes the following improvements: an adaptive feature fusion technique guided by peak sharpness is introduced in the response stage, improving the ability of the tracking method to represent underwater target features; and a high-confidence model updating strategy guided by peak sharpness is designed, improving the robustness of the tracking method when the target is partially occluded.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention.
Fig. 2 shows tracking results on part of an underwater sequence, in which (a) is the tracking result of the BACF method on fish data and (b) is the tracking result of an embodiment of the present invention on the same data.
Fig. 3 is a comparison of the success-rate AUC on the underwater fish data set between the proposed Adaptive Updating Correlation Filter (AUCF) method based on peak-sharpness-guided confidence and existing methods such as BACF.
Fig. 4 is a robustness comparison graph of the AUCF method of the present invention and other comparison methods, where (a) is spatial robustness and (b) is temporal robustness.
Fig. 5A and 5B are graphs comparing the success-rate AUC of the AUCF method of the present invention and other comparison methods on each target attribute, where (a)-(k) respectively correspond to 11 target attributes: scale change, partial out-of-view, illumination change, motion blur, low resolution, in-plane rotation, background clutter, fast motion, deformation, occlusion, and out-of-plane rotation.
Detailed Description
To make the objects and advantages of the present invention clear, the present invention is further described below with reference to the accompanying drawings and embodiments.
In the visual target tracking method based on peak-sharpness-guided confidence of this embodiment, a multi-feature fusion representation of the target is introduced into the BACF method, and adaptive weights are designed from peak sharpness, which solves the difficulty of accurately representing targets in underwater images; a high-confidence model updating strategy guided by peak sharpness is further provided, which effectively improves tracking robustness when the target is occluded or partially leaves the field of view and copes with the variable appearance of underwater targets. Specifically, the target features extracted from each sample block are the gray-level, HOG, and CN features; adaptive weights are calculated from the peak sharpness of the response map corresponding to each feature, and the weighted response maps are summed into a total response map. In the model updating process, the correlation filter model is updated based on the DAPCE strategy, which judges the confidence of the current tracking result from the peak sharpness of the current frame's response map; the correlation filter model is updated with the adaptive weight only if the confidence is relatively high.
Based on this core idea, the implementation flow of the visual target tracking method based on peak-sharpness-guided confidence disclosed by the embodiment of the invention is shown in Fig. 1 and mainly comprises the following steps:
(1) read the first frame of the underwater video image, extract the features of the tracked target according to the ground-truth bounding box, and initialize the correlation filter; the invention adopts the same correlation filter framework as the BACF method;
(2) input the next frame image and extract a new target sample block Z_a, taking the target position in the previous frame as the translation center and the scale a relative to the target size as the scale center;
(3) extract the gray-level feature, CN feature, and HOG feature from the new target sample block Z_a;
(4) convolve each of the three extracted features with the correlation filter to obtain the response map corresponding to that feature, and assign an adaptive weight to each feature according to the maximum and minimum values of its response map;
(5) take the position corresponding to the peak of the final response map as the predicted position, and the corresponding scale as the predicted scale;
(6) judge whether the current response map satisfies the model updating condition of the DAPCE strategy; if so, update the correlation filter model, otherwise do not update;
(7) for each newly input frame, repeat steps (2) and (3) to extract the features of the tracked target, repeat steps (4) and (5) to obtain the predicted position and scale, and repeat step (6) to update the correlation filter model;
(8) continue until all frames have been processed, completing the underwater target tracking task over the whole video sequence.
In step (4), the gray-level feature, HOG feature, and CN feature of the target sample block are extracted, the response of convolving each extracted feature with the correlation filter is calculated, and an adaptive feature weight is designed for each feature according to the sharpness of its response peak. The result matrix of convolving feature A with the correlation filter is denoted Res_A, and the difference between the maximum and minimum elements of this matrix is denoted diff_A, namely:
diff_HOG = (Res_HOG)_max − (Res_HOG)_min
diff_CN = (Res_CN)_max − (Res_CN)_min
diff_GRAY = (Res_GRAY)_max − (Res_GRAY)_min
the assigned weights are:
ω_HOG = diff_HOG / (diff_HOG + diff_CN + diff_GRAY)
ω_CN = diff_CN / (diff_HOG + diff_CN + diff_GRAY)
ω_GRAY = diff_GRAY / (diff_HOG + diff_CN + diff_GRAY)
the final weighted total response is:
Res = ω_HOG · Res_HOG + ω_CN · Res_CN + ω_GRAY · Res_GRAY
In step (6), the correlation filter model is updated based on the DAPCE strategy. The maximum and minimum values of the total response map of the previous frame are denoted F_max and F_min respectively, and the response value at position (w, h) is denoted F_(w,h); APCE is then calculated as:
APCE = |F_max − F_min|² / mean_(w,h)[(F_(w,h) − F_min)²]
The historical means of APCE and F_max are recorded as APCE_mean_old and F_max_mean_old, respectively. When APCE > APCE_mean_old × β₁ and F_max > F_max_mean_old × β₂ are both satisfied (β₁ and β₂ are constants, β₁, β₂ ∈ [0,1]), the correlation filter model is updated as follows:
η_f = η / (1 + e^(−ε·APCE))
ĥ_model^(f) = (1 − η_f) · ĥ_model^(f−1) + η_f · ĥ^(f)
wherein f is the serial number of the current frame, ĥ^(f) and ĥ_model^(f−1) are the correlation filter of the current frame and the history correlation filter model of the previous frame respectively, ĥ_model^(f) is the updated history correlation filter model, η is the learning rate, η ∈ [0,1], and ε is a constant determined according to the value of APCE so as to limit ε·APCE to the range [−10,10]; in this embodiment, ε = 0.025. The historical means of APCE and F_max are updated simultaneously:
APCE_mean_new = ((ψ − 1) · APCE_mean_old + APCE) / ψ
F_max_mean_new = ((ψ − 1) · F_max_mean_old + F_max) / ψ
where ψ is the number of history frames used to calculate the historical mean. The improved tracking procedure of the present invention is as follows:
[Algorithm listing of the improved tracking procedure, reproduced as an image in the original publication.]
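Because the algorithm listing survives only as an image, here is a minimal Python skeleton of the loop of steps (1)-(8); fuse_responses and dapce_update refer to the sketches given earlier, and the routines bundled in `ops` (init, crop, features, correlate, train, peak_to_bbox) are hypothetical stand-ins for the corresponding BACF operations:

```python
def track_sequence(frames, init_bbox, ops):
    """Skeleton of the improved (AUCF) tracking loop, steps (1)-(8)."""
    h_model = ops.init(frames[0], init_bbox)              # step (1): init filter on frame 1
    apce_mean = fmax_mean = 1e-6                          # running DAPCE statistics
    bbox, boxes = init_bbox, [init_bbox]
    for frame in frames[1:]:
        patch = ops.crop(frame, bbox)                     # step (2): sample block Z_a
        feats = {n: ops.features(patch, n) for n in ("gray", "cn", "hog")}  # step (3)
        resp = {n: ops.correlate(h_model, f) for n, f in feats.items()}
        res, _ = fuse_responses(resp["hog"], resp["cn"], resp["gray"])      # step (4)
        bbox = ops.peak_to_bbox(res, bbox)                # step (5): peak -> position/scale
        h_new = ops.train(frame, bbox)                    # re-train filter at new position
        h_model, apce_mean, fmax_mean = dapce_update(     # step (6): DAPCE update
            h_new, h_model, res, apce_mean, fmax_mean)
        boxes.append(bbox)                                # steps (7)-(8): repeat to the end
    return boxes
```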
based on the same inventive concept, the embodiment of the invention discloses a visual target tracking device based on peak sharp guidance confidence, which comprises: the target feature extraction module is used for extracting gray features, HOG features and CN features of a tracked target in a video image, and performing convolution operation on each feature and a relevant filter model respectively to obtain a response map; the target position prediction module is used for calculating self-adaptive weight according to the peak sharpness degree of the response image corresponding to each characteristic, weighting each response image to obtain a total response image, and taking the position of the peak of the total response image as a prediction position; the filter model updating module is used for updating the relevant filter model according to self-adaptive weight in the target tracking process, specifically updating the relevant filter model based on a DAPCE strategy, wherein the DAPCE strategy judges the confidence coefficient of the current tracking result by utilizing the peak sharpness of the response image of the current frame, if the confidence coefficient is relatively high, the relevant filter model is updated by using the self-adaptive weight, otherwise, the relevant filter model is not updated; and the target tracking module is used for respectively calling the target feature extraction module, the target position prediction module and the filter model updating module, processing each frame of image in the video image sequence and completing target tracking in the video image sequence.
Based on the same inventive concept, the visual target tracking device based on peak-sharpness-guided confidence disclosed by the embodiment of the invention comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the visual target tracking method based on peak-sharpness-guided confidence of the above embodiment.
To verify the tracking effect of the invention in an underwater environment, an underwater target tracking reference database was constructed and simulation experiments were carried out on the MATLAB platform. Tracking each video from its first frame, with the correlation filter initialized there, is denoted one-pass evaluation (OPE). The performance of the method was evaluated using precision, success rate, and robustness. (1) Precision: the precision is the percentage of frames, out of the total number of frames, whose tracking result lies within a threshold distance (center location error) of the target ground truth. The threshold used in the method of the invention is 20 pixels. (2) Success rate: let the bounding box of the tracking result be Y_a and the ground-truth bounding box be Y_b; the overlap ratio S is calculated as:
S = |Y_a ∩ Y_b| / |Y_a ∪ Y_b|
The success rate is defined as the percentage of frames in which the overlap ratio S exceeds a given threshold t_0; the threshold t_0 used in the method of the invention is 50%. (3) Robustness: robustness is the ability of the target tracking algorithm to maintain its performance under certain disturbances. The method evaluates two robustness indexes, spatial robustness SRE (Spatial Robustness Evaluation) and temporal robustness TRE (Temporal Robustness Evaluation). SRE is evaluated by changing the initial bounding box of the tracked object; the method of the invention uses 8 spatial shifts (4 center shifts and 4 corner shifts) and 4 scale changes. The offset of each spatial shift is 10% of the target size, and the scale ratios range from 80% to 120% of the ground truth in increments of 10%. The SRE result is the average of these 12 evaluations. TRE is evaluated by changing the initial frame of the tracked object: each sequence is randomly divided into 20 segments, and the tracking algorithm starts from the initial frame of each segment and runs until the end of the sequence. The TRE result is the average of these 20 evaluations.
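As an illustration, the precision and success-rate metrics described above can be computed as in this sketch, with boxes given as (x, y, w, h) tuples:

```python
import numpy as np

def overlap_ratio(box_a, box_b):
    """Overlap ratio S = |A ∩ B| / |A ∪ B| of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(pred_boxes, gt_boxes, t0=0.5):
    """Fraction of frames whose overlap ratio exceeds threshold t0."""
    s = np.array([overlap_ratio(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean(s > t0))

def precision_at(pred_centers, gt_centers, thresh=20.0):
    """Fraction of frames whose center location error is within thresh pixels."""
    err = np.linalg.norm(np.asarray(pred_centers, float)
                         - np.asarray(gt_centers, float), axis=1)
    return float(np.mean(err <= thresh))
```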
To verify the effectiveness and reliability of the method in underwater target tracking, complete tracking experiments were carried out on an underwater target tracking data set, and the method was compared with other similar methods, including BACF (Background-Aware Correlation Filters), STAPLE (Sum of Template And Pixel-wise LEarners), SRDCF (Spatially Regularized Discriminative Correlation Filters), CSK (Circulant Structure of tracking-by-detection with Kernels), DSST (Discriminative Scale Space Tracker), SAMF (Scale Adaptive with Multiple Features), CN (Color Names), and KCF (Kernelized Correlation Filters). Fig. 2 shows the tracking results on part of an underwater sequence. Since evaluating tracking performance at a single threshold is neither fair nor representative, the Area Under the Curve (AUC) of the success-rate plot is used to rank the tracking methods. The OPE results show that the method of the invention achieves a success-rate AUC score of 63.7%, which is 2.7% higher than the second-ranked BACF method and 6% higher than the third-ranked DSST method, as shown in Fig. 3.
To verify the good robustness of the method, it was evaluated against the other comparison methods in terms of temporal and spatial robustness. For temporal robustness (TRE), each tracking method is evaluated 20 times, each time given a different initial frame, from which the tracker runs until the end of the sequence. For spatial robustness (SRE), each tracking method is evaluated 12 times, each time with a different spatial deformation of the ground-truth bounding box in the first frame. The experimental results show that the method of the invention achieves the highest AUC scores in both spatial and temporal robustness, 1% and 1.4% higher than the second-ranked BACF method and 4.6% and 5.3% higher than the third-ranked comparison method, respectively, as shown in Fig. 4.
To verify the target attributes and underwater scenes to which the method is suited, the underwater data set was classified by attribute, the method and the other comparison trackers were run on the corresponding sequences of each target attribute, and the trackers were ranked by success-rate AUC score. The experimental results show that the method of the invention performs well on all 11 target attributes. Compared to the BACF method, the AUC scores of the method of the invention are higher on 10 attributes, all except motion blur; compared with the other comparison methods, the method ranks first in AUC score on 8 attributes, as shown in Figs. 5A and 5B. Overall, the method obtains excellent tracking results in underwater environments and is particularly suitable for underwater sequences with low resolution and illumination change, and for underwater targets partially out of view or partially occluded.

Claims (4)

1. A visual target tracking method based on peak-sharpness-guided confidence, characterized in that the method extracts the features of a tracked target in a video image, convolves the features with a correlation filter model to obtain a response map, takes the position of the peak of the response map as the predicted position, and updates the correlation filter model with adaptive weights during tracking, thereby completing target tracking over a video image sequence; wherein:
the target features extracted from each frame image are the gray-level feature, the HOG feature, and the Color Names (CN) feature; adaptive weights are calculated from the peak sharpness of the response map corresponding to each feature, and the weighted response maps are summed to obtain a total response map;
in the model updating process, the correlation filter model is updated based on a DAPCE strategy, wherein the DAPCE strategy judges the confidence of the current tracking result from the peak sharpness of the response map of the current frame; if the confidence is relatively high, the correlation filter model is updated with the adaptive weight, otherwise it is not updated; the method for updating the correlation filter model based on the DAPCE strategy is as follows: the maximum and minimum values of the total response map of the previous frame are denoted F_max and F_min respectively, and the response value at position (w, h) is denoted F_(w,h); APCE is then calculated as:
APCE = |F_max − F_min|² / mean_(w,h)[(F_(w,h) − F_min)²]
the historical means of APCE and F_max are recorded as APCE_mean_old and F_max_mean_old respectively; when APCE > APCE_mean_old × β₁ and F_max > F_max_mean_old × β₂ are both satisfied, β₁ and β₂ being constants with β₁, β₂ ∈ [0,1], the correlation filter model is updated as follows:
η_f = η / (1 + e^(−ε·APCE))
ĥ_model^(f) = (1 − η_f) · ĥ_model^(f−1) + η_f · ĥ^(f)
wherein f is the serial number of the current frame, ĥ^(f) and ĥ_model^(f−1) are the correlation filter of the current frame and the history correlation filter model of the previous frame respectively, ĥ_model^(f) is the updated history correlation filter model, η is the learning rate, η ∈ [0,1], and ε is a constant satisfying ε·APCE ∈ [−10,10]; the historical means of APCE and F_max are updated simultaneously:
APCE_mean_new = ((ψ − 1) · APCE_mean_old + APCE) / ψ
F_max_mean_new = ((ψ − 1) · F_max_mean_old + F_max) / ψ
where ψ is the number of history frames used to calculate the historical mean.
2. The visual target tracking method based on peak-sharpness-guided confidence according to claim 1, characterized in that: the gray-level feature, HOG feature, and CN feature of the target sample block are extracted from each frame image, the response of convolving each extracted feature with the correlation filter is calculated, and an adaptive feature weight is designed for each feature according to the sharpness of its response peak; the specific calculation formulas are as follows:
diff_HOG = (Res_HOG)_max − (Res_HOG)_min
diff_CN = (Res_CN)_max − (Res_CN)_min
diff_GRAY = (Res_GRAY)_max − (Res_GRAY)_min
the assigned weights are:
ω_HOG = diff_HOG / (diff_HOG + diff_CN + diff_GRAY)
ω_CN = diff_CN / (diff_HOG + diff_CN + diff_GRAY)
ω_GRAY = diff_GRAY / (diff_HOG + diff_CN + diff_GRAY)
the final weighted total response is:
Res = ω_HOG · Res_HOG + ω_CN · Res_CN + ω_GRAY · Res_GRAY
wherein Res_HOG, Res_CN, and Res_GRAY are the result matrices of convolving the HOG feature, CN feature, and gray-level feature of the target sample block with the correlation filter, and (·)_max, (·)_min denote the maximum and minimum element values of a matrix, respectively.
3. A visual target tracking device based on peak-sharpness-guided confidence, characterized by comprising:
a target feature extraction module for extracting the gray-level feature, HOG feature, and Color Names (CN) feature of the tracked target in the video image, and convolving each feature with the correlation filter model to obtain a response map;
a target position prediction module for calculating adaptive weights according to the peak sharpness of the response map corresponding to each feature, weighting the response maps to obtain a total response map, and taking the position of the peak of the total response map as the predicted position;
a filter model updating module for updating the correlation filter model with the adaptive weight during tracking, specifically based on the DAPCE strategy, wherein the DAPCE strategy judges the confidence of the current tracking result from the peak sharpness of the response map of the current frame; if the confidence is relatively high, the correlation filter model is updated with the adaptive weight, otherwise it is not updated; the method for updating the correlation filter model based on the DAPCE strategy is as follows: the maximum and minimum values of the total response map of the previous frame are denoted F_max and F_min respectively, and the response value at position (w, h) is denoted F_(w,h); APCE is then calculated as:
APCE = |F_max − F_min|² / mean_(w,h)[(F_(w,h) − F_min)²]
the historical means of APCE and F_max are recorded as APCE_mean_old and F_max_mean_old respectively; when APCE > APCE_mean_old × β₁ and F_max > F_max_mean_old × β₂ are both satisfied, β₁ and β₂ being constants with β₁, β₂ ∈ [0,1], the correlation filter model is updated as follows:
η_f = η / (1 + e^(−ε·APCE))
ĥ_model^(f) = (1 − η_f) · ĥ_model^(f−1) + η_f · ĥ^(f)
wherein f is the serial number of the current frame, ĥ^(f) and ĥ_model^(f−1) are the correlation filter of the current frame and the history correlation filter model of the previous frame respectively, ĥ_model^(f) is the updated history correlation filter model, η is the learning rate, η ∈ [0,1], and ε is a constant satisfying ε·APCE ∈ [−10,10]; the historical means of APCE and F_max are updated simultaneously:
APCE_mean_new = ((ψ − 1) · APCE_mean_old + APCE) / ψ
F_max_mean_new = ((ψ − 1) · F_max_mean_old + F_max) / ψ
wherein ψ is the number of history frames used to calculate the historical mean;
and a target tracking module for calling the target feature extraction module, the target position prediction module, and the filter model updating module to process each frame of the video image sequence, completing target tracking over the sequence.
4. A visual target tracking device based on peak-sharpness-guided confidence, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the computer program, when loaded into the processor, implements the visual target tracking method based on peak-sharpness-guided confidence according to any one of claims 1-2.
CN202110046537.1A 2021-01-14 2021-01-14 Visual target tracking method and device based on peak sharp guidance confidence Active CN112734806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110046537.1A CN112734806B (en) 2021-01-14 2021-01-14 Visual target tracking method and device based on peak sharp guidance confidence

Publications (2)

Publication Number Publication Date
CN112734806A CN112734806A (en) 2021-04-30
CN112734806B true CN112734806B (en) 2022-09-02

Family

ID=75592907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110046537.1A Active CN112734806B (en) 2021-01-14 2021-01-14 Visual target tracking method and device based on peak sharp guidance confidence

Country Status (1)

Country Link
CN (1) CN112734806B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584271B (en) * 2018-11-15 2021-10-08 西北工业大学 High-speed correlation filtering tracking method based on high-confidence updating strategy
CN110414439B (en) * 2019-07-30 2022-03-15 武汉理工大学 Anti-blocking pedestrian tracking method based on multi-peak detection
CN111260689B (en) * 2020-01-16 2022-10-11 东华大学 Confidence enhancement-based correlation filtering visual tracking method
CN111862155B (en) * 2020-07-14 2022-11-22 中国电子科技集团公司第五十四研究所 Unmanned aerial vehicle single vision target tracking method aiming at target shielding

Also Published As

Publication number Publication date
CN112734806A (en) 2021-04-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant