CN111860189A - Target tracking method and device - Google Patents

Target tracking method and device

Info

Publication number: CN111860189A
Application number: CN202010588437.7A
Authority: CN (China)
Prior art keywords: target, region, block, candidate, frame image
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN111860189B
Inventors: 侯棋文, 张樯, 崔洪, 张蛟淏
Current Assignee: Beijing Institute of Environmental Features
Original Assignee: Beijing Institute of Environmental Features
Application filed by Beijing Institute of Environmental Features
Priority to CN202010588437.7A
Publication of CN111860189A
Application granted
Publication of CN111860189B
Current legal status: Active

Classifications

    • G06V20/49: Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/23: Clustering techniques
    • G06F18/24155: Bayesian classification
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention discloses a target tracking method and device, relating to the technical field of image processing. The method comprises the following steps: determining candidate regions for each block of the target in the current frame image; inputting image features obtained by compressively sampling the candidate regions of each block into a trained classifier to obtain category prediction scores for those candidate regions; screening out the region where each block is located according to the category prediction scores of its candidate regions; and, when an occluded block exists, calculating the position coordinates of the target in the current frame image from the position coordinates of the blocks other than the occluded block. These steps address the poor tracking effect and poor robustness of the existing compressive tracking algorithm under occlusion and scale change.

Description

Target tracking method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a target tracking method and device.
Background
In many photoelectric tracking and search systems, fast target motion and large changes in target scale place high demands on the real-time performance and accuracy of the tracking algorithm. Developing a fast and effective tracking algorithm is therefore a basic requirement of such systems.
The compressive tracking algorithm is a simple and fast tracking algorithm that attracted wide attention as soon as it was proposed. However, the algorithm has certain limitations. First, compressive tracking is not robust enough to occlusion: when the target is occluded in the image, the tracking effect is poor. In addition, the tracking window of the compressive tracking algorithm is fixed, so the algorithm is not robust to scale changes.
Therefore, in order to overcome the above disadvantages, a new scheme is needed to solve the problems of poor tracking effect and poor robustness caused by occlusion and scale change in the existing compressive tracking algorithm.
Disclosure of Invention
Technical problem to be solved
The invention aims to solve the technical problems of poor tracking effect and poor tracking algorithm robustness caused by occlusion and scale change in the conventional compression tracking algorithm.
(II) technical scheme
In order to solve the above technical problem, in one aspect, the present invention provides a target tracking method.
The target tracking method of the invention comprises the following steps: determining candidate areas of all blocks of a target in a current frame image; carrying out compression sampling on the candidate regions of the blocks, and inputting image features obtained by compression sampling into a trained classifier to obtain class prediction scores of the candidate regions of the blocks; screening out the region where the block is located from the candidate regions of the block according to the category prediction score of the candidate regions of the block; and under the condition that the blocked block area exists, calculating the position coordinate of the target in the current frame image according to the position coordinates of the areas of other blocks except the blocked block area.
Optionally, the method further comprises: determining a candidate region of the target in the current frame image according to the position coordinate of the target in the current frame image and the scale of the target in the previous frame image; carrying out compression sampling and normalization processing on the candidate region of the target, and inputting the image characteristics obtained by processing into a trained classifier to obtain the category prediction score of the candidate region of the target; and screening out the region where the target is located from the candidate region of the target according to the category prediction score of the candidate region of the target, and taking the scale of the region where the target is located as the scale of the target in the current frame image.
Optionally, the performing compression sampling on the candidate regions of the blocks and inputting image features obtained by compression sampling into a trained classifier to obtain the class prediction score of the candidate regions of the blocks includes: extracting a plurality of Haar features in each candidate region of the block, and taking the Haar features as image features obtained by compression sampling; and inputting the plurality of Haar features into a trained first Bayes classifier to obtain a category prediction score of the candidate region.
Optionally, the response of the trained Bayesian classifier satisfies:

H_i(y_i) = \sum_{j=1}^{L} w_{ij} \log \frac{p(y_{ij} \mid k=1)}{p(y_{ij} \mid k=0)}

wherein H_i(y_i) is the category prediction score of the candidate region of the i-th block, i = 1, 2, …, N, and N is the total number of blocks; p(y_ij | k=1) represents the predicted probability that feature y_ij is a target feature; p(y_ij | k=0) represents the predicted probability that feature y_ij is a background feature; y_ij is the j-th Haar feature in the candidate region of the i-th block; L is the number of Haar features in the candidate region of each block; w_ij is the weight of y_ij, determined from the distance between the center coordinates of the Haar feature and the center coordinates (x_c, y_c) of the whole target; and β is a constant related to the diagonal of the target region.
Optionally, the method further comprises: extracting some image feature sequences from a region which is close to the target in the previous frame of image as a positive sample, extracting some image feature sequences from a region which is far away from the target in the previous frame of image as a negative sample, and training a first Bayes classifier according to the positive sample and the negative sample to obtain the trained first Bayes classifier.
Optionally, whether there is a blocked block area is determined according to the following manner: constructing a block feature vector for clustering according to the offset of the position coordinate of the region where the block is located compared with the position coordinate of the region where the corresponding block is located in the previous frame of image and the category prediction score of the region where the block is located; and carrying out clustering processing on each block according to the block feature vector used for clustering, and judging whether a blocked block area exists according to a clustering processing result.
Optionally, the determining, according to the position coordinates of the target in the current frame image and the scale of the target in the previous frame image, the candidate region of the target in the current frame image includes: constructing a reference region by taking the position coordinates of the target in the current frame image as the center of the reference region and the scale of the target in the previous frame image as the scale of the reference region; and carrying out amplification and reduction processing on the reference region to obtain a candidate region of the target.
Optionally, the performing compression sampling and normalization processing on the candidate region of the target, and inputting the processed image features into a trained classifier to obtain the class prediction score of the candidate region of the target includes: and extracting a plurality of Haar features in each candidate region of the target, then carrying out normalization processing on the Haar features, and inputting the image features obtained by processing into a trained second Bayes classifier to obtain the category prediction score of the candidate region.
Optionally, the method further comprises: and under the condition that the blocked block areas do not exist, calculating the position coordinates of the target in the current frame image according to the position coordinates of the areas where the blocks of the target are located.
In order to solve the above technical problem, in another aspect, the present invention provides a target tracking apparatus.
The object tracking device of the present invention includes: the determining module is used for determining candidate areas of all blocks of the target in the current frame image; the screening module is used for carrying out compression sampling on the candidate regions of the blocks and inputting image characteristics obtained by compression sampling into a trained classifier so as to obtain class prediction scores of the candidate regions of the blocks; screening out the region where the block is located from the candidate regions of the block according to the category prediction score of the candidate regions of the block; and the position calculation module is used for determining the position coordinates of the target in the current frame image according to the position coordinates of the areas where other blocks except the blocked block area are located under the condition that the blocked block area is judged to exist.
Optionally, the apparatus further comprises: the scale calculation module is used for determining a candidate region of the target in the current frame image according to the position coordinate of the target in the current frame image and the scale of the target in the previous frame image; the scale calculation module is further configured to perform compression sampling and normalization processing on the candidate region of the target, and input the processed image features into a trained classifier to obtain a category prediction score of the candidate region of the target; the scale calculation module is further configured to screen out a region where the target is located from the candidate region of the target according to the category prediction score of the candidate region of the target, and use the scale of the region where the target is located as the scale of the target in the current frame image.
(III) advantageous effects
The technical scheme of the invention has the following advantages: candidate regions are determined for each block of the target in the current frame image; the candidate regions are compressively sampled, and the image features obtained by compressive sampling are input into a trained classifier to obtain category prediction scores for the candidate regions; the region where each block is located is screened out from its candidate regions according to these scores; and, when an occluded block region exists, the position coordinates of the target in the current frame image are calculated from the position coordinates of the regions where the blocks other than the occluded block are located. This solves the problems of poor tracking effect and poor robustness caused by occlusion and scale change in the existing compressive tracking algorithm, and improves the target tracking effect and the robustness of the tracking algorithm to occlusion and scale change.
Drawings
Fig. 1 is a schematic main flow chart of a target tracking method in a first embodiment of the present invention;
FIG. 2 is a schematic main flow chart of a target tracking method according to a second embodiment of the present invention;
FIG. 3 is a schematic illustration of Haar features in an embodiment of the invention;
FIG. 4 is a schematic diagram of the main modules of a target tracking device in the third embodiment of the present invention;
fig. 5 is a schematic block diagram of a target tracking apparatus according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Example one
Fig. 1 is a schematic main flow chart of a target tracking method in a first embodiment of the present invention; as shown in fig. 1, a target tracking method provided in an embodiment of the present invention includes:
step S101, determining candidate areas of each block of the target in the current frame image.
In an alternative example, candidate regions for respective blocks of the target in the current frame image may be determined based on the tracking result of the target in the previous frame image, such as the scale and position of the target in the previous frame image. For example, the current frame image may be divided into a plurality of blocks according to the size of each block in the previous frame image and the position coordinates of each block, and the search may be performed by moving the search box within each block and its neighborhood, where the search area selected by the search box each time may be regarded as a candidate area of the block.
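As a minimal illustration of this step (a sketch, not part of the claimed method; the search radius and step size are assumed values, not ones fixed above), the candidate regions of one block can be enumerated by sliding a fixed-size search box over a neighborhood of the block's position in the previous frame:

```python
def block_candidate_regions(prev_x, prev_y, w, h, radius=8, step=2):
    """Slide a w-by-h search box around the block's previous top-left
    corner (prev_x, prev_y); each placement is one candidate region.
    radius and step are illustrative choices."""
    candidates = []
    for dy in range(-radius, radius + 1, step):
        for dx in range(-radius, radius + 1, step):
            candidates.append((prev_x + dx, prev_y + dy, w, h))
    return candidates
```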
Step S102, carrying out compression sampling on the candidate regions of the blocks, and inputting image characteristics obtained by compression sampling into a trained classifier to obtain class prediction scores of the candidate regions of the blocks; and screening out the region of the block from the candidate regions of the block according to the category prediction score of the candidate regions of the block.
In this step, each candidate region of each block of the target may be compressively sampled separately. In one optional example, compressively sampling a candidate region includes: performing convolution operations on neighborhoods of different sizes at different positions of the candidate region, taking the convolution results as first image features (also called high-dimensional convolution features), and then applying a projection matrix to the first image features to obtain second image features, i.e., the image features obtained by compressive sampling. A second image feature is essentially a linear combination of the pixel-value sums of a small number (e.g., 2, 3, or another value) of rectangular regions. That is, the second image feature can be obtained by linearly combining the pixel-value sums of several rectangular regions, which can be expressed as:

y_i = \sum_{j=1}^{s} a_{ij} X_j    (1)

wherein X_j is the sum of pixel values in the j-th rectangular region (also called the high-dimensional convolution feature of the j-th rectangular local region), j = 1, 2, …, s, and s is the number of rectangular regions participating in the linear combination; y_i is the second image feature obtained through formula (1), which in a concrete implementation can be computed quickly using an integral image; and a_ij are weight coefficients.
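As a hedged illustration of formula (1), the sketch below computes a compressed feature as a weighted sum of rectangle pixel sums, with each rectangle sum read from an integral image in constant time. The rectangle list and weights would come from a fixed sparse random projection, which is an assumption borrowed from compressive tracking rather than something specified above:

```python
import numpy as np

def integral_image(img):
    # Cumulative sums over rows and columns; any rectangle pixel sum
    # then reduces to at most four array lookups.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # Pixel sum of the rectangle with top-left (x, y) and size w x h,
    # via the usual four-corner rule on the integral image ii.
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0.0
    c = ii[y + h - 1, x - 1] if x > 0 else 0.0
    d = ii[y - 1, x - 1] if x > 0 and y > 0 else 0.0
    return a - b - c + d

def compressed_feature(ii, rects, weights):
    # Formula (1): y_i = sum_j a_ij * X_j, X_j a rectangle pixel sum.
    return sum(a * rect_sum(ii, *r) for a, r in zip(weights, rects))
```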
The Haar feature is an image feature obtained based on formula (1), and four Haar features are commonly used and are shown in fig. 3. The Haar feature is the difference between the sum of pixels in the white area and the sum of pixels in the black area in fig. 3. In an alternative example, a plurality of Haar features may be extracted from each candidate region of the block, and the plurality of Haar features may be input into a trained classifier as image features obtained by compression sampling to obtain a class prediction score of the candidate region. The trained classifier can be a trained Bayesian classifier, a decision tree-based classifier, or a neural network-based classifier, and the trained classifier can predict whether the candidate region is a target or a background.
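For concreteness, one of the two-rectangle Haar patterns referred to above can be written as follows, reusing rect_sum from the previous sketch; the left/right split is just one of the four common patterns:

```python
def haar_two_rect_vertical(ii, x, y, w, h):
    # Difference between the pixel sum of the left (white) half and the
    # right (black) half of a w x h window; one of the four common Haar
    # patterns. rect_sum is defined in the previous sketch.
    half = w // 2
    white = rect_sum(ii, x, y, half, h)
    black = rect_sum(ii, x + half, y, w - half, h)
    return white - black
```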
In an alternative embodiment, the category prediction score of a candidate region is specifically the predicted score that the category of the candidate region is the target. In this alternative embodiment, the candidate region with the highest category prediction score may be selected from the candidate regions of a block and used as the region where the block is located.
In another alternative embodiment, the category prediction score of a candidate region is specifically the predicted score that the category of the candidate region is the background. In this alternative embodiment, the candidate region with the lowest category prediction score may be selected from the candidate regions of a block and used as the region where the block is located.
In the embodiment of the present invention, the area where each block of the target in the current frame image is located may be determined through step S101 and step S102. Further, the position coordinates of the region where each block of the target is located in the current frame image can be determined.
And step S103, under the condition that the blocked block area exists, calculating the position coordinate of the target in the current frame image according to the position coordinates of the areas where other blocks except the blocked block area are located.
For example, assuming the target is divided into 5 blocks A, B, C, D, and E: if block B is occluded, the position coordinates of the target in the current frame image are calculated from the position coordinates of the regions where blocks A, C, D, and E are located.
In the embodiment of the invention, steps S101 to S103 combine the compressive-sensing tracking algorithm well with block-based tracking. By judging whether an occluded block region exists and, when one exists, calculating the position coordinates of the target in the current frame image from the position coordinates of the regions where the other blocks are located (step S103), the real-time performance and accuracy of target tracking and the robustness of the tracking algorithm to occlusion and scale change can be improved, solving the problems of poor tracking effect and poor robustness caused by occlusion and scale change in the existing compressive tracking algorithm.
Example two
FIG. 2 is a schematic main flow chart of a target tracking method according to a second embodiment of the present invention; as shown in fig. 2, the target tracking method according to the embodiment of the present invention includes:
step S201, determining candidate regions of each block of the target in the current frame image.
In an alternative example, candidate regions for respective blocks of the target in the current frame image may be determined based on the tracking result of the target in the previous frame image, such as the scale and position of the target in the previous frame image. For example, the current frame image may be divided into a plurality of blocks according to the size of each block in the previous frame image and the position coordinates of each block, and the search may be performed by moving the search box within each block and its neighborhood, where the search area selected by the search box each time may be regarded as a candidate area of the block.
Step S202, carrying out compression sampling on the candidate regions of the blocks, and inputting image features obtained by compression sampling into a trained classifier to obtain class prediction scores of the candidate regions of the blocks; and screening the region of the block from the candidate region of the block according to the category prediction score of the candidate region of the block.
In this step, each candidate region of each block of the target may be compressively sampled separately. In one optional example, compressively sampling a candidate region includes: performing convolution operations on neighborhoods of different sizes at different positions of the candidate region, taking the convolution results as first image features (also called high-dimensional convolution features), and then applying a projection matrix to the first image features to obtain second image features, i.e., the image features obtained by compressive sampling. A second image feature is essentially a linear combination of the pixel-value sums of a small number (e.g., 2, 3, or another value) of rectangular regions; that is, it can be obtained by linearly combining the pixel-value sums of several rectangular regions, as expressed by formula (1) above, wherein X_j is the sum of pixel values in the j-th rectangular region (also called the high-dimensional convolution feature of the j-th rectangular local region), j = 1, 2, …, s, and s is the number of rectangular regions participating in the linear combination; y_i is the second image feature obtained through formula (1), which in a concrete implementation can be computed quickly using an integral image; and a_ij are preset coefficients.
The Haar feature is an image feature obtained based on formula (1), and four Haar features are commonly used and are shown in fig. 3. The Haar feature is the difference between the sum of pixels in the white area and the sum of pixels in the black area in fig. 3. In an alternative example, a plurality of Haar features may be extracted in each candidate region of the block and input into a trained classifier to obtain a class prediction score for the candidate region.
In an alternative example, the trained classifier is a Bayesian classifier (which may be referred to as a first Bayesian classifier). In this alternative example, the plurality of Haar features extracted in each candidate region of a block may be input into the trained first Bayesian classifier to obtain the category prediction score of the candidate region. Here, the category prediction score of a candidate region is specifically the predicted score that the category of the candidate region is the target. After the category prediction scores of the candidate regions of a block are obtained, the candidate region with the highest category prediction score may be selected from the candidate regions of the block and used as the region where the block is located.
In an alternative embodiment, the first Bayesian classifier may be a naive Bayes classifier. Bayes' theorem is the mathematical basis of the naive Bayes classifier, which assumes that each feature value in a feature vector is independent of the other feature values. A naive Bayes classifier can be constructed using the extracted features of the target, and its response can be expressed as formula (2):

H(y) = \sum_{j=1}^{L} \log \frac{p(y_j \mid k=1)}{p(y_j \mid k=0)}    (2)

where H(y) is the response of the naive Bayes classifier, y_j is the j-th compressively sampled feature, p(y_j | k=1) and p(y_j | k=0) are the predicted probabilities that y_j is a target feature or a background feature, respectively, and L is the number of features.
In another alternative embodiment, in order to further improve the target tracking effect, the inventors improved the response of the first Bayesian classifier. The improved response of the first Bayesian classifier satisfies formula (3):

H_i(y_i) = \sum_{j=1}^{L} w_{ij} \log \frac{p(y_{ij} \mid k=1)}{p(y_{ij} \mid k=0)}    (3)

wherein H_i(y_i) is the category prediction score of the candidate region of the i-th block, i = 1, 2, …, N, and N is the total number of blocks; p(y_ij | k=1) represents the predicted probability that feature y_ij is a target feature; p(y_ij | k=0) represents the predicted probability that feature y_ij is a background feature; y_ij is the j-th Haar feature in the candidate region of the i-th block; L is the number of Haar features in the candidate region of each block; w_ij is the weight of y_ij, determined from the distance between the center coordinates of the Haar feature and the center coordinates (x_c, y_c) of the whole target; and β is a constant related to the diagonal of the target region.
In the improved response formula (3), a weight w_ij is set for each Haar feature in a block candidate region. This takes into account that Haar features extracted at different positions contribute differently to the category prediction score, which helps to improve the accuracy of the category prediction score and, in turn, the accuracy of target tracking.
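The improved response only changes how the per-feature log-likelihood ratios are accumulated: each one is scaled by w_ij. The exact expression for w_ij appeared only as an image in the source; the distance-decay form below, built from the Haar feature center, the target center (x_c, y_c), and the diagonal-related constant β, is an assumed stand-in consistent with the description (gaussian_pdf is from the previous sketch):

```python
import numpy as np

def feature_weight(cx, cy, xc, yc, beta):
    # Assumed weight form: decays with the distance between the Haar
    # feature center (cx, cy) and the target center (xc, yc); beta is
    # the constant tied to the diagonal of the target region.
    return np.exp(-np.hypot(cx - xc, cy - yc) / beta)

def weighted_block_response(feats, centers, mu1, sig1, mu0, sig0,
                            xc, yc, beta, eps=1e-12):
    # Formula (3): H_i(y_i) = sum_j w_ij * log(p(y_ij|k=1)/p(y_ij|k=0)).
    h = 0.0
    for y, (cx, cy), m1, s1, m0, s0 in zip(feats, centers,
                                           mu1, sig1, mu0, sig0):
        llr = np.log((gaussian_pdf(y, m1, s1) + eps) /
                     (gaussian_pdf(y, m0, s0) + eps))
        h += feature_weight(cx, cy, xc, yc, beta) * llr
    return h
```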
Further, before step S202, the method of the embodiment of the present invention may further include the following steps: and training the first Bayes classifier to obtain a trained first Bayes classifier. For example, when the response formula of the improved first bayesian classifier shown in formula (3) is adopted, some image feature sequences may be extracted from a region of the previous frame of image that is close to the target as a positive sample, some image feature sequences may be extracted from a region of the previous frame of image that is far from the target as a negative sample, and the first bayesian classifier is trained according to the positive sample and the negative sample to obtain the trained first bayesian classifier.
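The sample collection can be sketched as follows; the sampling radii and sample counts are illustrative assumptions, not values given above:

```python
import numpy as np

def sample_training_regions(tx, ty, w, h, n_pos=45, n_neg=50,
                            pos_radius=4, neg_inner=8, neg_outer=30,
                            rng=None):
    # Positive samples: windows whose top-left corner lies close to the
    # target's (tx, ty). Negative samples: windows in a ring farther away.
    rng = rng or np.random.default_rng()
    pos, neg = [], []
    while len(pos) < n_pos:
        dx, dy = rng.integers(-pos_radius, pos_radius + 1, size=2)
        pos.append((tx + dx, ty + dy, w, h))
    while len(neg) < n_neg:
        dx, dy = rng.integers(-neg_outer, neg_outer + 1, size=2)
        if max(abs(dx), abs(dy)) >= neg_inner:
            neg.append((tx + dx, ty + dy, w, h))
    return pos, neg
```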
Step S203, judging whether the blocked block area exists.
In an alternative example, whether there is an occluded block area may be determined according to the following: constructing a block feature vector for clustering according to the offset of the position coordinate of the region where the block is located compared with the position coordinate of the region where the corresponding block is located in the previous frame of image and the category prediction score of the region where the block is located; and carrying out clustering processing on each block according to the block feature vector used for clustering, and judging whether a blocked block area exists according to a clustering processing result.
Further, in an optional implementation of the above example, for any block, the offset Δx_i of the abscissa of the center point of the region where the block is located relative to the abscissa of the center point of the region where the corresponding block is located in the previous frame image, the offset Δy_i of the ordinate of the center point of the region where the block is located relative to the ordinate of the center point of the region where the corresponding block is located in the previous frame image, and the value z_i obtained by normalizing the category prediction score of the region where the block is located, may be concatenated to obtain the clustering feature f_i of the block, i.e. the clustering feature of the block satisfies: f_i = (Δx_i, Δy_i, z_i). After the block feature vectors used for clustering are obtained, a preset clustering algorithm, such as the K-means clustering algorithm, may be used to cluster the blocks. The clustering objective used in the clustering process can be expressed as follows:

\arg\min_{S} \sum_{j=1}^{K} \sum_{f_i \in S_j} \| f_i - \mu_j \|^2

wherein f_i is the clustering feature vector of the i-th block; K is the number of clusters; S_j is the j-th cluster, with j ranging from 1 to K; and μ_j is the mean (cluster center) of the feature vectors of the blocks in the j-th cluster. The cluster assignment obtained when the objective takes its minimum value is used as the final clustering result, and whether an occluded block region exists is then judged according to this final clustering result.
If it is determined in step S203 that there is a blocked partitioned area, step S204 is executed; if it is determined in step S203 that there is no blocked blocking area, step S205 is executed.
And step S204, calculating the position coordinates of the target in the current frame image according to the position coordinates of the areas where the other blocks except the blocked block areas are located.
For example, assuming the target is divided into 5 blocks A, B, C, D, and E: if block B is occluded, the position coordinates of the target in the current frame image are calculated from the position coordinates of the regions where blocks A, C, D, and E are located.
And step S205, calculating the position coordinates of the target in the current frame image according to the position coordinates of the area where each block of the target is located.
For example, assuming the target is divided into 5 blocks A, B, C, D, and E: if no block is occluded, the position coordinates of the target in the current frame image are calculated from the position coordinates of the regions where blocks A, B, C, D, and E are located.
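The text does not fix how the surviving block coordinates are fused into one target coordinate. A minimal sketch, assuming each block remembers its fixed offset from the target center from the frame in which the blocks were created, is to average the target centers implied by the unoccluded blocks:

```python
import numpy as np

def target_position(block_centers, block_offsets, occluded_idx):
    # block_centers: (N, 2) tracked block centers in the current frame.
    # block_offsets: (N, 2) each block's stored offset from the target
    # center (an assumption about the bookkeeping, not stated above).
    keep = np.setdiff1d(np.arange(len(block_centers)), occluded_idx)
    implied_centers = block_centers[keep] - block_offsets[keep]
    return implied_centers.mean(axis=0)  # averaging is an assumed fusion rule
```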
Step S206, determining the candidate area of the target in the current frame image according to the position coordinate of the target in the current frame image and the scale of the target in the previous frame image.
For example, in this step, a reference region s_0 may be constructed by taking the position coordinates of the target in the current frame image as the center of the reference region and the scale of the target in the previous frame image as the scale of the reference region; the reference region s_0 is then enlarged or reduced to obtain the candidate region set of the target, S = {s_1, s_2, …, s_N}. Further, if the scale of the reference region s_0 is denoted by the side length L_0, the scale of any candidate region s_i can be expressed as L_i = a_i L_0, where a_i denotes the scale change coefficient corresponding to the i-th candidate region of the target, and i takes values from 1 to N, denoting any one of the candidate regions of the target.
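A sketch of the candidate-scale construction; the particular scale change coefficients a_i below are illustrative values, not ones given above:

```python
def scale_candidates(cx, cy, L0, coeffs=(0.95, 0.975, 1.0, 1.025, 1.05)):
    # Build S = {s_1, ..., s_N}: every candidate keeps the center (cx, cy)
    # found in the position step and has side length L_i = a_i * L0.
    return [(cx, cy, a * L0) for a in coeffs]
```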
Step S207, performing compression sampling and normalization processing on the candidate region of the target, and inputting the processed image features into a trained classifier to obtain a class prediction score of the candidate region of the target.
The trained classifier in step S207 may be a trained bayesian classifier, a decision tree-based classifier, or a neural network-based classifier, and the trained classifier can predict whether the candidate region is a target or a background.
In an alternative example, the trained classifier in step S207 is a bayesian classifier (which may be referred to as a second bayesian classifier). In this alternative example, a plurality of Haar features may be extracted from each candidate region of the target, then the plurality of Haar features may be normalized, and the image features obtained through the processing may be input into a trained second bayesian classifier to obtain a class prediction score of the candidate region.
Further, in the above alternative example, the normalization of the Haar features may use the following formula:

y'_i = y_i / a_i^2

wherein y_i denotes a Haar feature of the i-th candidate region of the target; a_i is the scale change coefficient of the i-th candidate region; and y'_i denotes the normalized Haar feature of the i-th candidate region of the target.
Further, in the above optional example, the second bayesian classifier may select a na iotave bayes classifier. Bayesian theorem is the mathematical basis for a naive bayesian classifier, which assumes that each feature value in a feature vector is independent of other feature values. A naive bayes classifier can be constructed using the extracted features of the target. Wherein the response of the naive bayes classifier can be shown as equation (2) above.
And S208, screening out a region where the target is located from the candidate region of the target according to the category prediction score of the candidate region of the target, and taking the scale of the region where the target is located as the scale of the target in the current frame image.
In an alternative embodiment, the category prediction score of a candidate region is specifically the predicted score that the category of the candidate region is the target. After the category prediction scores of the candidate regions of the target are obtained, the candidate region with the highest category prediction score may be selected from the candidate regions, and the scale of that candidate region is used as the scale of the target in the current frame image.
In another alternative embodiment, the category prediction score of a candidate region is specifically the predicted score that the category of the candidate region is the background. After the category prediction scores of the candidate regions of the target are obtained, the candidate region with the lowest category prediction score may be selected from the candidate regions, and the scale of that candidate region is used as the scale of the target in the current frame image.
Further, the method of the embodiment of the present invention may further include the steps of: and after the target tracking result of the current frame image is obtained, updating the classifier so as to adapt the parameters of the Bayesian classifier to the change of the environment and the target.
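The update rule itself is not spelled out above; the running-average update of the per-feature Gaussian parameters used by the original compressive tracking algorithm, with a learning rate λ, is one natural choice and is sketched here under that assumption:

```python
import numpy as np

def update_classifier_params(mu_old, sigma_old, feats, lam=0.85):
    # Blend the stored Gaussian parameters of one class with the
    # statistics of the features just extracted for that class.
    # lam is the learning rate (illustrative value).
    mu_new = feats.mean(axis=0)
    sigma_new = feats.std(axis=0)
    mu = lam * mu_old + (1.0 - lam) * mu_new
    sigma = np.sqrt(lam * sigma_old ** 2 + (1.0 - lam) * sigma_new ** 2
                    + lam * (1.0 - lam) * (mu_old - mu_new) ** 2)
    return mu, sigma
```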
In the embodiment of the invention, the position and the size of the target in the current frame image can be accurately determined in real time through the steps. The continuous tracking of the target can be realized by iteratively executing the steps aiming at each frame of image, the continuous tracking requirements of a photoelectric searching and tracking system and a video monitoring system on common targets (such as human targets, vehicle targets and the like) are met, and the continuous and stable tracking can be realized only by giving the position and the size of the first frame of target. Compared with the prior art, the method provided by the embodiment of the invention improves the real-time performance and accuracy of target tracking and the robustness of the tracking algorithm to the shielding and scale change, and solves the problems of poor tracking effect and poor robustness of the tracking algorithm caused by the shielding and scale change in the conventional compressed tracking algorithm.
EXAMPLE III
Fig. 4 is a schematic block diagram of a target tracking apparatus according to a third embodiment of the present invention. As shown in fig. 4, the target tracking apparatus 400 according to the embodiment of the present invention includes: a determination module 401, a screening module 402, a location calculation module 403.
A determining module 401, configured to determine candidate regions of each block of the target in the current frame image.
In an alternative example, the determining module 401 may determine candidate regions of respective blocks of the target in the current frame image based on the tracking result of the target in the previous frame image, such as the scale and position of the target in the previous frame image. For example, the determining module 401 may divide the current frame image into a plurality of blocks according to the size of each block in the previous frame image and the position coordinates of each block, and perform a search by moving the search box within each block and its neighborhood, wherein the determining module 401 may regard the search area selected by each search box as a candidate area of the block.
A screening module 402, configured to perform compression sampling on the candidate regions of the blocks, and input image features obtained by the compression sampling into a trained classifier to obtain category prediction scores of the candidate regions of the blocks; the screening module 402 is further configured to screen out a region where the block is located from the candidate regions of the block according to the category prediction scores of the candidate regions of the block.
The screening module 402 may compressively sample each candidate region of each block of the target. In one optional example, compressive sampling of a candidate region by the screening module 402 includes: performing convolution operations on neighborhoods of different sizes at different positions of the candidate region, taking the convolution results as first image features (also called high-dimensional convolution features), and then applying a projection matrix to the first image features to obtain second image features, i.e., the image features obtained by compressive sampling. A second image feature is essentially a linear combination of the pixel-value sums of a small number (e.g., 2, 3, or another value) of rectangular regions; that is, it can be obtained by linearly combining the pixel-value sums of several rectangular regions, as expressed by formula (1) above, wherein X_j is the sum of pixel values in the j-th rectangular region (also called the high-dimensional convolution feature of the j-th rectangular local region), j = 1, 2, …, s, and s is the number of rectangular regions participating in the linear combination; y_i is the second image feature obtained through formula (1), which in a concrete implementation can be computed quickly using an integral image; and a_ij are weight coefficients.
The Haar feature is an image feature obtained based on formula (1), and four Haar features are commonly used and are shown in fig. 3. The Haar feature is the difference between the sum of pixels in the white area and the sum of pixels in the black area in fig. 3. In an alternative example, the filtering module 402 may extract a plurality of Haar features in each candidate region of the block, and input the plurality of Haar features into a trained classifier to obtain a class prediction score for the candidate region. The trained classifier can be a trained Bayesian classifier, a decision tree-based classifier, or a neural network-based classifier, and the trained classifier can predict whether the candidate region is a target or a background.
In an alternative embodiment, the category prediction score of a candidate region is specifically the predicted score that the category of the candidate region is the target. In this alternative embodiment, the screening module 402 may select the candidate region with the highest category prediction score from the candidate regions of a block and use it as the region where the block is located.
In another alternative embodiment, the category prediction score of a candidate region is specifically the predicted score that the category of the candidate region is the background. In this alternative embodiment, the screening module 402 may select the candidate region with the lowest category prediction score from the candidate regions of a block and use it as the region where the block is located.
And a position calculating module 403, configured to, when it is determined that the blocked block area exists, calculate position coordinates of the target in the current frame image according to position coordinates of areas where other blocks except the blocked block area are located.
For example, assuming that the target is divided into 5 blocks A, B, C, D, E, if block B is occluded, the position calculation module 403 calculates the position coordinates of the target in the current frame image according to the position coordinates of the area where block a is located, the position coordinates of the area where block C is located, the position coordinates of the area where block D is located, and the position coordinates of the area where block E is located.
In the embodiment of the invention, the above device combines the compressive-sensing tracking algorithm well with block-based tracking. By judging whether an occluded block region exists and, when one exists, calculating the position coordinates of the target in the current frame image from the position coordinates of the regions where the blocks other than the occluded block region are located, the real-time performance and accuracy of target tracking and the robustness of the tracking algorithm to occlusion and scale change can be improved, solving the problems of poor tracking effect and poor robustness caused by occlusion and scale change in the existing compressive tracking algorithm.
Example four
Fig. 5 is a schematic block diagram of a target tracking apparatus according to a fourth embodiment of the present invention. As shown in fig. 5, the target tracking apparatus 500 according to the embodiment of the present invention includes: a determination module 501, a screening module 502, a position calculation module 503, and a scale calculation module 504.
A determining module 501, configured to determine candidate regions of each block of the target in the current frame image.
As to the determining module 501, how to determine the candidate regions of the respective blocks of the target in the current frame image can refer to the exemplary illustration of the embodiment shown in fig. 4.
A screening module 502, configured to perform compression sampling on the candidate regions of the blocks, and input image features obtained by the compression sampling into a trained classifier to obtain category prediction scores of the candidate regions of the blocks; the screening module 502 is further configured to screen out a region where the block is located from the candidate regions of the block according to the category prediction scores of the candidate regions of the block.
In an optional example, the screening module 502 performs compression sampling on the candidate regions of the blocks, and inputs image features obtained by the compression sampling into a trained classifier to obtain the class prediction score of the candidate regions of the blocks includes: the screening module 502 extracts a plurality of Haar features in each candidate region of the block, and takes the plurality of Haar features as image features obtained by compression sampling; the screening module 502 inputs the plurality of Haar features into a trained first bayesian classifier to obtain a category prediction score for the candidate region.
Further, in the above optional example, the category prediction score of a candidate region may specifically be the predicted score that the category of the candidate region is the target. After obtaining the category prediction scores of the candidate regions of a block, the screening module 502 may select the candidate region with the highest category prediction score from the candidate regions of the block and use it as the region where the block is located.
A position calculating module 503, configured to determine, when it is determined that an occluded block area exists, a position coordinate of the target in the current frame image according to position coordinates of areas where other blocks except the occluded block area are located; the position calculating module 503 is further configured to calculate, when it is determined that there is no blocked block area, a position coordinate of the target in the current frame image according to the position coordinates of the area where each block of the target is located.
For example, assuming that the target is divided into 5 blocks, which are respectively the block A, B, C, D, E, if the block B is occluded, the position coordinate of the target in the current frame image is calculated according to the position coordinate of the area where the block a is located, the position coordinate of the area where the block C is located, the position coordinate of the area where the block D is located, and the position coordinate of the area where the block E is located; and if the blocked block does not exist, calculating the position coordinate of the target in the current frame image according to the position coordinate of the area where the block A is located, the position coordinate of the area where the block B is located, the position coordinate of the area where the block C is located, the position coordinate of the area where the block D is located and the position coordinate of the area where the block E is located.
The scale calculation module 504 is configured to determine a candidate region of the target in the current frame image according to the position coordinates of the target in the current frame image and the scale of the target in the previous frame image.
Illustratively, the scale calculation module 504 determining the candidate region of the target in the current frame image according to the position coordinates of the target in the current frame image and the scale of the target in the previous frame image includes: the scale calculation module 504 may construct a reference region s_0 by taking the position coordinates of the target in the current frame image as the center of the reference region and the scale of the target in the previous frame image as the scale of the reference region; the scale calculation module 504 then enlarges or reduces the reference region s_0 to obtain the candidate region set of the target, S = {s_1, s_2, …, s_N}. Further, if the scale of the reference region s_0 is denoted by the side length L_0, the scale of any candidate region s_i can be expressed as L_i = a_i L_0, where a_i denotes the scale change coefficient.
The scale calculation module 504 is further configured to perform compression sampling and normalization processing on the candidate region of the target, and input the processed image features into a trained classifier to obtain a category prediction score of the candidate region of the target.
The trained classifier used by the scale calculation module 504 may be a trained bayesian classifier, a decision tree-based classifier, or a neural network-based classifier, which can predict whether the candidate region is a target or a background.
In an alternative example, the trained classifier used by the scale calculation module 504 is a bayesian classifier (which may be referred to as a second bayesian classifier). In this alternative example, the scale calculation module 504 may extract a plurality of Haar features in each candidate region of the target, then perform normalization processing on the Haar features, and input the processed image features into a trained second bayesian classifier to obtain a class prediction score of the candidate region.
Further, in the above alternative example, the normalization of the Haar features by the scale calculation module 504 may use the following formula:

y'_i = y_i / a_i^2

wherein y_i denotes a Haar feature of the i-th candidate region of the target; a_i is the scale change coefficient of the i-th candidate region; and y'_i denotes the normalized Haar feature of the i-th candidate region of the target.
Further, in the above optional example, the second bayesian classifier may select a na iotave bayes classifier. Bayesian theorem is the mathematical basis for a naive bayesian classifier, which assumes that each feature value in a feature vector is independent of other feature values. A naive bayes classifier can be constructed using the extracted features of the target. Wherein the response of the naive bayes classifier can be shown as equation (2) above.
The scale calculation module 504 is further configured to screen out a region where the target is located from the candidate region of the target according to the category prediction score of the candidate region of the target, and use the scale of the region where the target is located as the scale of the target in the current frame image.
In an alternative embodiment, the category prediction score of a candidate region is specifically the predicted score that the category of the candidate region is the target. After obtaining the category prediction scores of the candidate regions of the target, the scale calculation module 504 may select the candidate region with the highest category prediction score from the candidate regions and use the scale of that candidate region as the scale of the target in the current frame image.
In another alternative embodiment, the category prediction score of a candidate region is specifically the predicted score that the category of the candidate region is the background. After obtaining the category prediction scores of the candidate regions of the target, the scale calculation module 504 may select the candidate region with the lowest category prediction score from the candidate regions and use the scale of that candidate region as the scale of the target in the current frame image.
In the embodiment of the invention, the position and the size of the target in the current frame image can be accurately determined in real time through the device. Continuous tracking of the target can be realized by iteratively calling each module in the device aiming at each frame of image, continuous tracking requirements of a photoelectric search tracking system and a video monitoring system on common targets (such as targets of people, vehicles and the like) are met, and continuous and stable tracking can be realized only by giving the position and the size of the first frame of target. Compared with the prior art, the device provided by the embodiment of the invention improves the real-time performance and accuracy of target tracking and the robustness of the tracking algorithm to shielding and scale change, and solves the problems of poor tracking effect and poor robustness of the tracking algorithm caused by shielding and scale change in the conventional compression tracking algorithm.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. A method of target tracking, the method comprising:
determining candidate regions of each block of a target in a current frame image;
performing compression sampling on the candidate regions of each block, and inputting the image features obtained by compression sampling into a trained classifier to obtain category prediction scores of the candidate regions of the block; screening out the region where the block is located from the candidate regions of the block according to the category prediction scores; and
in the case that an occluded block region exists, calculating the position coordinates of the target in the current frame image according to the position coordinates of the regions where the blocks other than the occluded block region are located.
2. The method of claim 1, further comprising:
determining candidate regions of the target in the current frame image according to the position coordinates of the target in the current frame image and the scale of the target in the previous frame image; performing compression sampling and normalization on the candidate regions of the target, and inputting the resulting image features into a trained classifier to obtain category prediction scores of the candidate regions of the target; and screening out the region where the target is located from the candidate regions of the target according to the category prediction scores, and taking the scale of the region where the target is located as the scale of the target in the current frame image.
3. The method of claim 2, wherein performing compression sampling on the candidate regions of the blocks and inputting the image features obtained by compression sampling into the trained classifier to obtain the category prediction scores of the candidate regions of the blocks comprises:
extracting a plurality of Haar features from each candidate region of the block, and taking the Haar features as the image features obtained by compression sampling; and inputting the plurality of Haar features into a trained first Bayes classifier to obtain the category prediction score of the candidate region.
4. The method of claim 3, wherein the response of the trained first Bayes classifier satisfies:
$$H_i(y_i)=\sum_{j=1}^{L} w_{ij}\,\log\frac{p(y_{ij}\mid k=1)}{p(y_{ij}\mid k=0)},\qquad i=1,2,\ldots,N$$

$$w_{ij}=\exp\!\left(-\beta\sqrt{(x_{ij}-x_c)^2+(y_{ij}-y_c)^2}\right)$$

wherein $H_i(y_i)$ is the category prediction score of the candidate region of the $i$-th block, and $N$ is the total number of blocks; $p(y_{ij}\mid k=1)$ is the predicted probability that feature $y_{ij}$ is a target feature; $p(y_{ij}\mid k=0)$ is the predicted probability that feature $y_{ij}$ is a background feature; $y_{ij}$ is the $j$-th Haar feature in the candidate region of the $i$-th block; $L$ is the number of Haar features in the candidate region of each block; $w_{ij}$ is the weight of $y_{ij}$, determined by the center coordinates $(x_{ij}, y_{ij})$ of the Haar feature; $(x_c, y_c)$ is the center coordinate of the entire target; and $\beta$ is a constant related to the diagonal of the target region.
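For illustration, the classifier response above can be computed as follows, assuming (as is common in compressive tracking) that the class-conditional densities p(y_ij | k) are Gaussians whose parameters were estimated during training; the Gaussian model and all parameter names here are assumptions, not limitations of the claim.

```python
import numpy as np

def block_response(features, centers, target_center, beta,
                   mu_pos, sigma_pos, mu_neg, sigma_neg, eps=1e-9):
    """Weighted naive-Bayes response H_i(y_i) for one block.

    features      : (L,) Haar feature values y_ij of the block's candidate region
    centers       : (L, 2) center coordinates (x_ij, y_ij) of each Haar feature
    target_center : (2,) center (x_c, y_c) of the entire target
    beta          : constant related to the target-region diagonal
    mu_*, sigma_* : (L,) Gaussian parameters of the target (pos) and
                    background (neg) class-conditional densities
    """
    def gauss(x, mu, sigma):
        # Gaussian density; the 1/sqrt(2*pi) factor cancels in the ratio
        return np.exp(-0.5 * ((x - mu) / (sigma + eps)) ** 2) / (sigma + eps)

    # distance-based weight w_ij = exp(-beta * ||(x_ij, y_ij) - (x_c, y_c)||)
    w = np.exp(-beta * np.linalg.norm(centers - target_center, axis=1))
    log_ratio = (np.log(gauss(features, mu_pos, sigma_pos) + eps)
                 - np.log(gauss(features, mu_neg, sigma_neg) + eps))
    return float(np.sum(w * log_ratio))
```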
5. The method of claim 3, further comprising:
extracting image feature sequences from regions close to the target in the previous frame image as positive samples, extracting image feature sequences from regions far from the target in the previous frame image as negative samples, and training a first Bayes classifier on the positive and negative samples to obtain the trained first Bayes classifier.
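A minimal sketch of this training step, assuming the classifier models each compressed feature with one Gaussian per class and is updated with a running-average rule as in standard compressive tracking; the sampling radii and learning rate lam below are assumed values.

```python
import numpy as np

def sample_offsets(rng, n, r_min, r_max):
    """Draw n (dx, dy) sampling offsets with radius in [r_min, r_max)."""
    radius = rng.uniform(r_min, r_max, n)
    angle = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)

def update_gaussians(mu, sigma, samples, lam=0.85):
    """Running update of per-feature Gaussian parameters.

    samples: (n, L) feature vectors of one class; mu, sigma: (L,) parameters.
    """
    mu_new = samples.mean(axis=0)
    sigma_new = samples.std(axis=0) + 1e-9
    mu_upd = lam * mu + (1.0 - lam) * mu_new
    sigma_upd = np.sqrt(lam * sigma ** 2 + (1.0 - lam) * sigma_new ** 2
                        + lam * (1.0 - lam) * (mu - mu_new) ** 2)
    return mu_upd, sigma_upd

rng = np.random.default_rng(0)
pos_offsets = sample_offsets(rng, 45, 0.0, 4.0)   # near the target -> positive samples
neg_offsets = sample_offsets(rng, 50, 8.0, 30.0)  # far from the target -> negative samples
```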
6. The method of claim 1, wherein whether an occluded block region exists is determined as follows:
constructing, for each block, a feature vector for clustering from the offset of the position coordinates of the region where the block is located relative to the position coordinates of the region where the corresponding block is located in the previous frame image, together with the category prediction score of the region where the block is located; and clustering the blocks according to the feature vectors for clustering, and judging from the clustering result whether an occluded block region exists.
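One possible reading of this claim, sketched below: build a per-block vector [dx, dy, score], cluster the blocks with 2-means, and flag the lower-scoring cluster as occluded. The choice of two clusters and the lower-mean-score rule are assumptions for illustration, not details given in the claim.

```python
import numpy as np

def detect_occluded_blocks(offsets, scores, iters=20, seed=0):
    """Cluster per-block feature vectors [dx, dy, score] with 2-means
    and flag the lower-scoring cluster as occluded.

    offsets: (N, 2) shift of each block region vs. the corresponding
             block region in the previous frame
    scores : (N,) category prediction score of each block region
    Returns a boolean mask, True where a block is deemed occluded.
    """
    feats = np.column_stack([offsets, scores]).astype(float)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=2, replace=False)]
    for _ in range(iters):
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    occluded = int(np.argmin(centers[:, 2]))  # cluster with lower mean score
    return labels == occluded
```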
7. The method of claim 2, wherein determining the candidate regions of the target in the current frame image according to the position coordinates of the target in the current frame image and the scale of the target in the previous frame image comprises:
constructing a reference region with the position coordinates of the target in the current frame image as its center and the scale of the target in the previous frame image as its scale; and enlarging and reducing the reference region to obtain the candidate regions of the target.
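Illustrative only: the enlargement/reduction of claim 7 can be realized by applying a small set of scale factors to the reference region; the factor values below are assumed, not taken from the patent.

```python
def scale_candidates(center_x, center_y, width, height,
                     factors=(0.95, 1.0, 1.05)):
    """Build candidate regions of the target by shrinking/enlarging the
    reference region, which is centered on the target position in the
    current frame and sized by the target scale from the previous frame.
    Each factor yields one candidate box (x, y, w, h)."""
    candidates = []
    for f in factors:
        w, h = width * f, height * f
        candidates.append((center_x - w / 2.0, center_y - h / 2.0, w, h))
    return candidates
```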
8. The method of claim 7, wherein performing compression sampling and normalization on the candidate regions of the target and inputting the resulting image features into the trained classifier to obtain the category prediction scores of the candidate regions of the target comprises:
extracting a plurality of Haar features from each candidate region of the target, normalizing the Haar features, and inputting the resulting image features into a trained second Bayes classifier to obtain the category prediction score of the candidate region.
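The claim does not spell out the normalization; one plausible sketch divides each Haar response by the candidate-region area so that candidates of different scales yield comparable feature values. This particular rule is an assumption.

```python
import numpy as np

def normalized_haar_features(features, region_w, region_h):
    """Scale raw Haar responses by the candidate-region area so that
    features extracted from differently sized candidates are comparable."""
    area = float(region_w * region_h)
    return np.asarray(features, dtype=np.float64) / area
```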
9. The method of claim 1, further comprising:
in the case that no occluded block region exists, calculating the position coordinates of the target in the current frame image according to the position coordinates of the regions where all blocks of the target are located.
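Claims 1 and 9 together recover the target position from the block positions, excluding occluded blocks when they exist. A minimal sketch follows; averaging the block-implied centers is one assumed way to combine them.

```python
import numpy as np

def target_position(block_centers, block_anchor_offsets, occluded_mask=None):
    """Estimate the target center from the tracked block regions.

    block_centers       : (N, 2) tracked center of each block region
    block_anchor_offsets: (N, 2) offset of each block center from the
                          target center, fixed when the blocks were laid out
    occluded_mask       : optional (N,) boolean, True for occluded blocks
    """
    votes = np.asarray(block_centers) - np.asarray(block_anchor_offsets)
    if occluded_mask is not None:
        votes = votes[~np.asarray(occluded_mask)]  # drop occluded blocks
    return votes.mean(axis=0)  # estimated (x, y) of the target center
```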
10. A target tracking apparatus, characterized in that the apparatus comprises:
a determining module, configured to determine candidate regions of each block of a target in a current frame image;
a screening module, configured to perform compression sampling on the candidate regions of each block and input the image features obtained by compression sampling into a trained classifier to obtain category prediction scores of the candidate regions of the block, and to screen out the region where the block is located from the candidate regions of the block according to the category prediction scores; and
a position calculation module, configured to, in the case that an occluded block region is judged to exist, determine the position coordinates of the target in the current frame image according to the position coordinates of the regions where the blocks other than the occluded block region are located.
11. The apparatus of claim 10, further comprising:
a scale calculation module, configured to determine candidate regions of the target in the current frame image according to the position coordinates of the target in the current frame image and the scale of the target in the previous frame image; to perform compression sampling and normalization on the candidate regions of the target and input the resulting image features into a trained classifier to obtain category prediction scores of the candidate regions of the target; and to screen out the region where the target is located from the candidate regions of the target according to the category prediction scores, and take the scale of the region where the target is located as the scale of the target in the current frame image.
CN202010588437.7A 2020-06-24 2020-06-24 Target tracking method and device Active CN111860189B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010588437.7A CN111860189B (en) 2020-06-24 2020-06-24 Target tracking method and device


Publications (2)

Publication Number Publication Date
CN111860189A true CN111860189A (en) 2020-10-30
CN111860189B CN111860189B (en) 2024-01-19

Family

ID=72989813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010588437.7A Active CN111860189B (en) 2020-06-24 2020-06-24 Target tracking method and device

Country Status (1)

Country Link
CN (1) CN111860189B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130051624A1 (en) * 2011-03-22 2013-02-28 Panasonic Corporation Moving object detection apparatus and moving object detection method
US20130076913A1 (en) * 2011-09-28 2013-03-28 Xerox Corporation System and method for object identification and tracking
CN105427339A (en) * 2015-11-05 2016-03-23 天津工业大学 Characteristic screening and secondary positioning combined fast compression tracking method
CN105989611A (en) * 2015-02-05 2016-10-05 南京理工大学 Blocking perception Hash tracking method with shadow removing
CN106097393A (en) * 2016-06-17 2016-11-09 浙江工业大学 A kind of based on multiple dimensioned and adaptive updates method for tracking target
CN106651912A (en) * 2016-11-21 2017-05-10 广东工业大学 Compressed sensing-based robust target tracking method
CN106898015A (en) * 2017-01-17 2017-06-27 华中科技大学 A kind of multi thread visual tracking method based on the screening of self adaptation sub-block
CN108062557A (en) * 2017-11-21 2018-05-22 杭州电子科技大学 Dimension self-adaption method for tracking target based on Fast Compression track algorithm
CN109389043A (en) * 2018-09-10 2019-02-26 中国人民解放军陆军工程大学 A kind of crowd density estimation method of unmanned plane picture
US20190122378A1 (en) * 2017-04-17 2019-04-25 The United States Of America, As Represented By The Secretary Of The Navy Apparatuses and methods for machine vision systems including creation of a point cloud model and/or three dimensional model based on multiple images from different perspectives and combination of depth cues from camera motion and defocus with various applications including navigation systems, and pattern matching systems as well as estimating relative blur between images for use in depth from defocus or autofocusing applications
US20190244030A1 (en) * 2018-02-07 2019-08-08 Hitachi, Ltd. Object tracking in video using better object area
CN110717934A (en) * 2019-10-17 2020-01-21 湖南大学 Anti-occlusion target tracking method based on STRCF


Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
BIN LI et al.: "Event-based Robotic Grasping Detection with Neuromorphic Vision Sensor and Event-Stream Dataset", arXiv:2004.13652, pp. 1-14 *
F. LI et al.: "Adaptive and compressive target tracking based on feature point matching", 2016 23rd International Conference on Pattern Recognition, pp. 2734-2739 *
Z. WU et al.: "Robust compressive tracking under occlusion", 2015 IEEE 5th International Conference on Consumer Electronics, pp. 298-302 *
LI Huimin et al.: "Target Tracking for Active Protection System Detection Radar", Modern Defence Technology, vol. 42, no. 2, pp. 116-121 *
WANG Zhenying: "Research on Target Tracking Methods under Occlusion and Scale Variation", China Master's Theses Full-text Database (Information Science and Technology), no. 2019, pp. 138-825 *
CHENG Zhongjian; ZHOU Shuang'e; LI Kang: "Sparse Representation Target Tracking Algorithm Based on Multi-Scale Adaptive Weights", Computer Science, no. 1, pp. 181-186 *
JIANG Xiaoli: "Research on Improved Target Tracking Algorithms Based on Compressive Particle Filtering", China Master's Theses Full-text Database (Information Science and Technology), no. 2016, pp. 138-387 *
SHAO Chenyu: "Research on Robust Compressive Tracking Algorithms for Moving Targets in Video", China Master's Theses Full-text Database (Information Science and Technology), no. 2019, pp. 138-1573 *
YAN Xiaowen; XIE Jieteng: "Visual Tracking Based on Bayesian Methods", Internet of Things Technologies, no. 04, pp. 30-32 *


Similar Documents

Publication Publication Date Title
CN108090456B (en) Training method for recognizing lane line model, and lane line recognition method and device
Sathya et al. PSO-based Tsallis thresholding selection procedure for image segmentation
CN107633226B (en) Human body motion tracking feature processing method
CN113689428A (en) Mechanical part stress corrosion detection method and system based on image processing
CN109544592B (en) Moving object detection algorithm for camera movement
CN112257569B (en) Target detection and identification method based on real-time video stream
CN111931686B (en) Video satellite target tracking method based on background knowledge enhancement
CN112802005A (en) Automobile surface scratch detection method based on improved Mask RCNN
CN104978738A (en) Method of detection of points of interest in digital image
CN112989910A (en) Power target detection method and device, computer equipment and storage medium
CN112329784A (en) Correlation filtering tracking method based on space-time perception and multimodal response
CN112651381A (en) Method and device for identifying livestock in video image based on convolutional neural network
CN110717934A (en) Anti-occlusion target tracking method based on STRCF
CN109934129B (en) Face feature point positioning method, device, computer equipment and storage medium
CN116740652B (en) Method and system for monitoring rust area expansion based on neural network model
CN114943754A (en) Image registration method, system and storage medium based on SIFT
CN113469025B (en) Target detection method and device applied to vehicle-road cooperation, road side equipment and vehicle
CN113989721A (en) Target detection method and training method and device of target detection model
CN111860189A (en) Target tracking method and device
CN110992301A (en) Gas contour identification method
CN116188826A (en) Template matching method and device under complex illumination condition
CN110570450A (en) Target tracking method based on cascade context-aware framework
Wu et al. A closer look at segmentation uncertainty of scanned historical maps
CN115439926A (en) Small sample abnormal behavior identification method based on key region and scene depth
CN114495263A (en) Alarm pre-control device for preventing personal injury

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant