CN104200495A - Multi-target tracking method in video surveillance - Google Patents


Info

Publication number
CN104200495A
Authority
CN
China
Prior art keywords
asift
target
vector
feature vector
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410497957.1A
Other languages
Chinese (zh)
Other versions
CN104200495B (en)
Inventor
杨丰瑞
窦绍宾
吴翠先
刘欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHONGQING XINKE DESIGN Co Ltd
Original Assignee
CHONGQING XINKE DESIGN Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHONGQING XINKE DESIGN Co Ltd filed Critical CHONGQING XINKE DESIGN Co Ltd
Priority to CN201410497957.1A priority Critical patent/CN104200495B/en
Publication of CN104200495A publication Critical patent/CN104200495A/en
Application granted granted Critical
Publication of CN104200495B publication Critical patent/CN104200495B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a video target tracking method that fuses ASIFT features with particle filtering, belonging to the technical fields of video information processing and pattern recognition. The method comprises the following steps: moving targets in the video sequence are detected with the adjacent-frame difference method; a tracking-target model is established from the region of each complete acquired target; the ASIFT feature vectors of the target model are built; a particle filter predicts the candidate-region target; the ASIFT feature vectors of the candidate target model are built; the feature vectors of the tracking target are matched against those of the candidate target; the RANSAC algorithm rejects wrong matches; and the target model is updated, realizing target tracking. The method tracks targets accurately and quickly under brightness changes and occlusion, so the multi-target tracking method in video surveillance offers good real-time performance and robustness.

Description

Multi-target tracking method in video surveillance
Technical field
The invention belongs to the technical fields of video information processing and pattern recognition, and specifically relates to a multi-target tracking method in video surveillance.
Background
Target tracking has always been a foundation of machine vision, artificial intelligence, and pattern recognition, and is widely applied in industries such as navigation and positioning, military guidance, and security monitoring.
Target tracking uses known target position information and target features to follow a moving target of interest across a sequence of images. Several target tracking methods for video surveillance currently exist, such as tracking based on particle filters, tracking based on Mean Shift, and target tracking based on Kalman filtering. However, when the target is occluded or the surrounding environment interferes (noise, illumination), these traditional methods easily lose the target or let the tracking window drift, causing tracking failure.
Multi-target tracking of video images based on feature points has higher robustness. For example, target tracking based on SIFT (Scale-Invariant Feature Transform) features can still identify a target stably under rotation, scale change, and luminance change. But this technique lacks affine invariance: its matching precision is insufficient, and for strongly deformed targets it easily loses track. Moreover, the method also falls short in real-time performance.
Summary of the invention
In view of the above deficiencies of the prior art, the object of the present invention is to provide a multi-target tracking method in video surveillance that describes the target model with Affine-SIFT (ASIFT, affine scale-invariant feature transform) features, searches for the moving target with a particle filter method, matches features in the target region with an improved ASIFT matching algorithm, and updates the target model to realize target tracking. It improves tracking accuracy, robustness, and real-time performance under illumination changes and target occlusion.
The multi-target tracking method in video surveillance of the present invention detects moving targets with the adjacent-frame difference method, establishes a tracking-target model for each detected moving target, and builds the ASIFT feature vector of the target model; it then predicts the candidate-region target with a particle filter, builds the ASIFT feature vector of the candidate target model, matches the tracking-target feature vector against the candidate-region feature vector, rejects wrong matches with the RANSAC algorithm, and updates the target model to realize target tracking. It comprises the following steps:
Step A: Read the initial frame of the video and detect the moving targets in the sequence with the adjacent-frame difference method.
Read the initial frame and compute the difference of corresponding pixel values in adjacent frames:
D_k(x, y) = |f_k(x, y) − f_{k+1}(x, y)|
D_k(x, y) = 0 if D_k(x, y) < T_0; 1 if D_k(x, y) ≥ T_0
where f_k(x, y) is the current frame, f_{k+1}(x, y) is the next frame adjacent to it, D_k is the binarized absolute difference of the two frames, D_k = 1 marks the moving-target region, and T_0 = 0.7.
Step B: Build the Affine-SIFT (ASIFT) feature vector A of the tracking-target model. The concrete steps are:
Step B1: Sample the affine-transformation parameters of the moving-target region. The latitude angle θ is sampled as the geometric sequence 1, a, a^2, ..., a^n with a > 1 and n = 5; the longitude angle φ is sampled arithmetically as 0, b/t, ..., kb/t, where b = 72°, t = |1/cos θ|, and k is the last integer satisfying kb/t < 180°.
Step B2: Apply the affine transformation to the moving-target region with the sampled parameters:
I'(φ, t) = [cos φ, −sin φ; sin φ, cos φ] · [t, 0; 0, 1] · I
where I is the moving-target region and I' is the moving-target region after the affine transformation.
Step B3: Perform SIFT feature-point detection on the affinely simulated moving-target region.
Step B4: Describe the feature points of the moving-target region as vectors, building 128-dimensional ASIFT feature vectors.
Step B5: Apply principal component analysis (PCA) to the ASIFT feature vectors for dimensionality reduction, obtaining feature vector A.
Step C: Read the next frame.
Step D: Predict the candidate-region target in the frame read in step C with the particle filter method, and build the ASIFT feature vector B of the candidate target model. The concrete steps are:
D1: For the moving-target region, randomly select M particle samples at time t from the probability samples of the previous frame.
D2: Redistribute the probabilities of the M newly collected particles.
D3: Compute the weight of each of the M particles from its RGB histogram, then take the weighted average of the M particle positions to obtain the candidate region of the tracking target.
D4: Build the ASIFT feature vector of the candidate region, obtaining feature vector B.
Step E: Match the moving-target-region feature vector A against the candidate-region ASIFT feature vector B.
Step F: Reject wrong matches with the random sample consensus (RANSAC) method.
Step G: Update the target model and return to step C, realizing target tracking.
Further, in step B5, PCA is applied to the ASIFT feature vectors for dimensionality reduction with the following concrete steps:
B51: Describe each obtained ASIFT feature point as a 128-dimensional vector and treat the feature points as samples, giving the sample matrix [x_1, x_2, ..., x_n]^T, where n is the number of feature points and x_i is the 128-dimensional feature vector of the i-th point.
B52: Compute the mean feature vector of the n samples, x̄ = (1/n) Σ_{i=1}^{n} x_i.
B53: Compute the difference of every sample's feature vector from the mean feature vector, obtaining the difference vectors d_i = x_i − x̄, i = 1, 2, ..., n.
B54: Build the covariance matrix C = (1/n) Q Q^T, where Q = [d_1, d_2, ..., d_n].
B55: Compute the 128 eigenvalues λ_i and 128 eigenvectors e_i of the covariance matrix.
B56: Sort the 128 eigenvalues in descending order, λ_1 ≥ λ_2 ≥ ... ≥ λ_128, together with the corresponding eigenvectors (e_1, e_2, ..., e_128).
B57: Take the directions of the m largest eigenvectors as the principal components.
B58: Build a 128 × m matrix R whose columns are the m retained eigenvectors.
B59: Project every original 128-dimensional ASIFT descriptor by y_i = x_i · R, obtaining the 36-dimensional (m = 36) ASIFT descriptors y_1, y_2, ..., y_n, where x_i is the vector representation of an ASIFT feature point in the original target region and y_i its representation after dimensionality reduction.
Further, in step E, the approximate nearest-neighbor search method based on KD-Tree is adopted when matching the moving-target-region feature vector against the candidate-region ASIFT feature vector.
Beneficial effects of the present invention:
(1) Compared with SIFT and SURF feature matching, ASIFT feature matching detects more feature points under target occlusion and environmental influences, so tracking is more stable and the target is less easily lost.
(2) PCA dimensionality reduction represents the 128-dimensional ASIFT feature vectors as 36-dimensional vectors, cutting the amount of computation and better meeting the real-time requirement of target tracking.
(3) Approximate nearest-neighbor search based on KD-Tree replaces global nearest-neighbor search when matching the tracking-target feature vectors against the candidate-region feature vectors, improving the search efficiency of matching feature points and reducing computation time.
(4) The improved ASIFT feature matching is fused with the particle filter: the particle filter predicts the region where the target model appears in the next frame, avoiding ASIFT matching over the whole frame and improving accuracy.
Compared with existing schemes, the method of the present invention tracks targets quickly and accurately under brightness changes and occlusion, with good real-time performance and robustness.
Brief description of the drawings
Fig. 1 is the flow chart of the multi-target tracking method in video surveillance of the present invention.
Detailed description of the embodiments
With reference to Fig. 1, the multi-target tracking method in video surveillance detects moving targets with the adjacent-frame difference method, establishes a tracking-target model for each detected moving target, and builds the ASIFT feature vector of the target model; it predicts the candidate-region target with a particle filter, builds the ASIFT feature vector of the candidate target model, matches the tracking-target feature vector against the candidate-region feature vector, rejects wrong matches with the RANSAC algorithm, and updates the target model to realize target tracking. It comprises the following steps:
Step A: Read the initial frame of the video and detect the moving targets in the sequence with the adjacent-frame difference method. The video read here is the video collected by a surveillance camera.
Read the initial frame and compute the difference of corresponding pixel values in adjacent frames:
D_k(x, y) = |f_k(x, y) − f_{k+1}(x, y)|
D_k(x, y) = 0 if D_k(x, y) < T_0; 1 if D_k(x, y) ≥ T_0
where f_k(x, y) is the current frame, x and y are the horizontal and vertical coordinates of a pixel, f_{k+1}(x, y) is the next frame adjacent to it, and D_k is the binarized absolute difference of the two frames, representing the moving region; D_k = 1 marks the moving-target region. T_0 is the binarization threshold; the present invention uses T_0 = 0.7, but other values can be taken according to different requirements.
After the above calculation the image contains only the pixel values 0 and 1; the region whose value is 1 corresponds to the target, and in this way the moving-target regions in the video sequence are segmented out.
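Step A's adjacent-frame differencing can be sketched in Python as follows (a minimal illustration on plain 2-D lists; the threshold T_0 = 0.7 assumes pixel intensities normalized to [0, 1], which the patent does not state explicitly):

```python
def frame_difference(prev_frame, next_frame, t0=0.7):
    """Binarize the absolute difference of two adjacent frames.

    prev_frame, next_frame: 2-D lists of pixel intensities in [0, 1].
    Returns a 2-D mask where 1 marks the moving-target region.
    """
    return [
        [1 if abs(p - q) >= t0 else 0 for p, q in zip(row_p, row_q)]
        for row_p, row_q in zip(prev_frame, next_frame)
    ]

# A slowly varying background pixel (0.2 -> 0.25) stays 0, while a
# pixel crossed by a moving object (0.1 -> 0.9) exceeds T0 and becomes 1.
mask = frame_difference([[0.2, 0.1]], [[0.25, 0.9]])
```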
Step B: Build the Affine-SIFT (ASIFT) feature vector A of the tracking-target model. The concrete steps are:
Step B1: Sample the affine-transformation parameters of the moving-target region. The latitude angle θ is sampled as the geometric sequence 1, a, a^2, ..., a^n with a > 1 and n = 5; the longitude angle φ is sampled arithmetically as 0, b/t, ..., kb/t, where b = 72°, t = |1/cos θ|, and k is the last integer satisfying kb/t < 180°.
Here θ and φ are the latitude angle and the longitude angle of the camera optical axis. The target region generally undergoes a certain degree of affine deformation, caused mainly by changes of the camera optical-axis direction; since the optical-axis direction depends on θ and φ, these two parameters must be resampled before the affine simulation of the target region.
The values of θ obtained for the moving-target region are shown in Table 1.
Table 1
The sampling interval of the parameter φ for the moving-target region is set to b/t, and the sampling range is [0°, 180°). When t = 1, the concrete sampled values of φ are 0°, 72°, and 144°.
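The parameter sampling of step B1 can be sketched as follows. This is a minimal illustration: the patent only requires a > 1, so the value a = √2 used here is the one from the original ASIFT literature and is an assumption; the longitude grid follows the stated rule 0, b/t, ..., kb/t with b = 72° and kb/t < 180°.

```python
import math

def sample_asift_parameters(a=math.sqrt(2), n=5, b=72.0):
    """Return the (tilt, longitude-angle list) grid of step B1.

    a, n: tilt values form the geometric sequence 1, a, ..., a**n.
    b: base longitude step in degrees; the step at tilt t is b/t.
    """
    grid = []
    for i in range(n + 1):
        t = a ** i                      # tilt value t = a**i
        step = b / t                    # arithmetic step for phi
        phis, k = [], 0
        while k * step < 180.0:         # last k with k*b/t < 180 degrees
            phis.append(k * step)
            k += 1
        grid.append((t, phis))
    return grid

grid = sample_asift_parameters()
# At tilt t = 1 the sampled longitudes are 0, 72 and 144 degrees,
# matching the concrete values given in the text.
```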
Step B2: Apply the affine transformation to the moving-target region with the sampled parameters:
I'(φ, t) = [cos φ, −sin φ; sin φ, cos φ] · [t, 0; 0, 1] · I
where I is the moving-target region and I' is the moving-target region after the affine transformation.
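The 2×2 simulation matrix of step B2 (a rotation by φ composed with the tilt diag(t, 1), in the order given by the formula above) can be sketched as:

```python
import math

def affine_simulation_matrix(phi_deg, t):
    """2x2 matrix of step B2: [[cos,-sin],[sin,cos]] . [[t,0],[0,1]]."""
    phi = math.radians(phi_deg)
    c, s = math.cos(phi), math.sin(phi)
    return [[c * t, -s],
            [s * t,  c]]

def apply_to_point(m, x, y):
    """Apply a 2x2 matrix to a point (x, y)."""
    return (m[0][0] * x + m[0][1] * y,
            m[1][0] * x + m[1][1] * y)

m = affine_simulation_matrix(0.0, 2.0)   # no rotation, tilt t = 2
# With phi = 0 the x-coordinate is stretched by t while y is unchanged.
```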
Step B3: Perform SIFT feature-point detection on the affinely simulated moving-target region.
Step B4: Describe the feature points of the moving-target region as vectors, building 128-dimensional ASIFT feature vectors.
Step B5: Apply principal component analysis (PCA) to the ASIFT feature vectors for dimensionality reduction, obtaining feature vector A.
Step C: Read the next frame.
Step D: Predict the candidate-region target in the frame read in step C with the particle filter method, and build the ASIFT feature vector B of the candidate target model. The concrete steps are:
D1: For the moving-target region, randomly select M particle samples at time t from the probability samples of the previous frame.
D2: Redistribute the probabilities of the M newly collected particles.
Let the movement velocity of the tracking target at time t−1 be:
vec_x = Δx̄ / vecunitperpixel
vec_y = Δȳ / vecunitperpixel
where Δx̄ and Δȳ are the position offsets of the moving-target region at time t−1, and vecunitperpixel is the motion unit of each pixel.
The new position of each particle at time t is obtained from:
x_t^i = x_t^i + r_t^i × vec_x × vecunitperpixel + r_t^i × H_{t−1}^i
y_t^i = y_t^i + r_t^i × vec_y × vecunitperpixel + r_t^i × W_{t−1}^i
where r_t^i is a Gaussian random number, H_{t−1}^i is the particle height, and W_{t−1}^i is the particle width.
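The particle propagation of step D2 can be sketched as below. This is a minimal reading of the formulas above, under the assumption that one Gaussian number r is drawn per particle and that height/width scale the random spread; the patent does not fix the standard deviation, so sigma = 1.0 is an illustrative choice.

```python
import random

def propagate_particles(particles, vec_x, vec_y, unit, height, width, sigma=1.0):
    """Move each (x, y) particle by the estimated velocity plus Gaussian noise.

    vec_x, vec_y: per-pixel velocity estimates from time t-1.
    unit: motion unit per pixel (vecunitperpixel in the text).
    height, width: particle size terms H and W from the formulas.
    """
    moved = []
    for x, y in particles:
        r = random.gauss(0.0, sigma)    # Gaussian number r_t^i
        moved.append((x + r * vec_x * unit + r * height,
                      y + r * vec_y * unit + r * width))
    return moved

random.seed(0)  # fixed seed so the sketch is reproducible
new = propagate_particles([(10.0, 20.0)] * 3, vec_x=0.5, vec_y=0.2,
                          unit=1.0, height=4.0, width=2.0)
```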
D3: Compute the weight of each of the M particles from its RGB histogram, then take the weighted average of the M particle positions to obtain the candidate region of the tracking target.
The computing formulas are:
x̄_t = f Σ_{i=1}^{M} w_i × x_t^i
ȳ_t = f Σ_{i=1}^{M} w_i × y_t^i
where f is the normalization coefficient and w_i is the weight of each particle.
After the estimated position of the tracking target is computed, a 3×3-pixel rectangular range around the initial position at time t−1 is searched, forming 10 search positions; among them a new position is sought that minimizes the sum of squared gray-level differences (SSD) with the target region of the previous frame at time t−1, and this new position is taken as the new position of the moving target:
S(x, y) = ∬_w |J(X) − I(X)|
(x_t, y_t) = min( S(x, y), S(x̄, ȳ) ), with (x_m − 1, y_m − 1) ≤ (x, y) ≤ (x_m + 1, y_m + 1)
where S is the luminance difference between a position and the template, (x, y) denotes a new position centered on (x_m, y_m), and J and I are the luminance functions of the two images at times t−1 and t, respectively.
The variable M in steps D1–D3 is set to M = 150.
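The weighted-average estimate of step D3 can be sketched as follows (a minimal illustration; taking the normalization coefficient f as 1/Σw_i, which the patent implies but does not state explicitly):

```python
def weighted_mean_position(particles, weights):
    """Step D3: weighted average of particle positions.

    particles: list of (x, y) tuples.
    weights: RGB-histogram similarity weight of each particle.
    """
    f = 1.0 / sum(weights)              # normalization coefficient
    x = f * sum(w * px for (px, _), w in zip(particles, weights))
    y = f * sum(w * py for (_, py), w in zip(particles, weights))
    return x, y

# Two particles with the second twice as likely: the estimate lies
# two thirds of the way towards the heavier particle.
est = weighted_mean_position([(0.0, 0.0), (3.0, 3.0)], [1.0, 2.0])
```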
D4: Build the ASIFT feature vector of the candidate region, obtaining feature vector B.
By the method of step B, the ASIFT feature vector of the candidate target model is built in the same way, and principal component analysis (PCA) is applied for dimensionality reduction, so the ASIFT feature points of the candidate target region are likewise represented as 36-dimensional vectors.
Step E: Match the moving-target-region feature vector A against the candidate-region ASIFT feature vector B.
Step F: Reject wrong matches with the random sample consensus (RANSAC) method.
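Step F's RANSAC rejection can be sketched as below. The patent does not specify the motion model RANSAC fits, so this illustration assumes a pure translation between matched point pairs; a full implementation would typically fit an affine or homography model instead.

```python
import random

def ransac_translation(matches, iters=100, tol=1.0):
    """Keep the correspondences consistent with the best translation found.

    matches: list of ((x1, y1), (x2, y2)) point correspondences.
    A minimal sample for a translation model is a single pair.
    """
    best = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.choice(matches)
        dx, dy = x2 - x1, y2 - y1       # hypothesized translation
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best

random.seed(1)
good = [((i, i), (i + 5, i + 2)) for i in range(8)]   # consistent shift (5, 2)
bad = [((0, 0), (40, -30))]                            # gross mismatch
inliers = ransac_translation(good + bad)
# The 8 consistent pairs survive; the outlier is rejected.
```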
Step G: Update the target model and return to step C, realizing target tracking.
Further, in step B5, PCA is applied to the ASIFT feature vectors for dimensionality reduction with the following concrete steps:
B51: Describe each obtained ASIFT feature point as a 128-dimensional vector and treat the feature points as samples, giving the sample matrix [x_1, x_2, ..., x_n]^T, where n is the number of feature points and x_i is the 128-dimensional feature vector of the i-th point.
B52: Compute the mean feature vector of the n samples, x̄ = (1/n) Σ_{i=1}^{n} x_i.
B53: Compute the difference of every sample's feature vector from the mean feature vector, obtaining the difference vectors d_i = x_i − x̄, i = 1, 2, ..., n.
B54: Build the covariance matrix C = (1/n) Q Q^T, where Q = [d_1, d_2, ..., d_n].
B55: Compute the 128 eigenvalues λ_i and 128 eigenvectors e_i of the covariance matrix.
B56: Sort the 128 eigenvalues in descending order, λ_1 ≥ λ_2 ≥ ... ≥ λ_128, together with the corresponding eigenvectors (e_1, e_2, ..., e_128).
B57: Take the directions of the m largest eigenvectors as the principal components.
B58: Build a 128 × m matrix R whose columns are the m retained eigenvectors.
B59: Project every original 128-dimensional ASIFT descriptor by y_i = x_i · R, obtaining the 36-dimensional (m = 36) ASIFT descriptors y_1, y_2, ..., y_n, where x_i is the vector representation of an ASIFT feature point in the original target region and y_i its representation after dimensionality reduction.
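Steps B51–B59 can be sketched in pure Python as below. This is a toy illustration on 2-dimensional data rather than 128-dimensional descriptors; the patent does not specify an eigensolver, so the top eigenvectors are found here by power iteration with deflation, which is an assumption.

```python
def pca_project(samples, m):
    """Reduce `samples` (equal-length lists) to m dimensions (steps B51-B59)."""
    n, d = len(samples), len(samples[0])
    mean = [sum(col) / n for col in zip(*samples)]                  # B52
    diffs = [[x - mu for x, mu in zip(s, mean)] for s in samples]   # B53
    cov = [[sum(diffs[k][i] * diffs[k][j] for k in range(n)) / n
            for j in range(d)] for i in range(d)]                   # B54
    basis = []
    for _ in range(m):                                              # B55-B57
        v = [1.0] * d
        for _ in range(200):            # power iteration for top eigenvector
            w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
            norm = sum(x * x for x in w) ** 0.5 or 1.0
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(d))
                  for i in range(d))
        basis.append(v)
        # deflate: remove the found component before the next iteration
        cov = [[cov[i][j] - lam * v[i] * v[j] for j in range(d)]
               for i in range(d)]
    # B58-B59: project each centered sample on the m retained directions
    return [[sum(x * b for x, b in zip(s, vec)) for vec in basis]
            for s in diffs]

# Toy data varying only along the first axis: a single principal
# component captures all of the variation.
data = [[0.0, 1.0], [2.0, 1.0], [4.0, 1.0]]
proj = pca_project(data, 1)
```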
Further, in step E, the approximate nearest-neighbor search method based on KD-Tree is adopted when matching the moving-target-region feature vector against the candidate-region ASIFT feature vector.
The calculation procedure is:
(1) Build a KD-Tree from the ASIFT feature points. The concrete implementation steps are as follows:
a. Determine the value of the split field: compute the variance of the feature-point data in each dimension and take the dimension with the largest variance as the value of the split field.
b. Determine the Node-data field: sort the feature-point data in the split dimension and take the median data point as the Node-data field; this determines the splitting hyperplane of the node.
c. Determine the left and right subspaces: the splitting hyperplane divides the whole space into two parts; the points on the left of the hyperplane form the left subspace, and the points on the right form the right subspace.
d. Obtain the child nodes from the left and right subspaces, then further subdivide each subspace and its data set in the same way until every subspace contains only one data point.
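The construction steps a–d above can be sketched as a recursive Python function (a minimal illustration using dictionaries as nodes):

```python
def build_kdtree(points, depth=0):
    """Build a KD-Tree over descriptor points following steps a-d."""
    if not points:
        return None
    if len(points) == 1:                # leaf: a single data point remains
        return {"point": points[0], "left": None, "right": None, "dim": None}
    d = len(points[0])

    def variance(i):
        vals = [p[i] for p in points]
        mu = sum(vals) / len(vals)
        return sum((v - mu) ** 2 for v in vals)

    split = max(range(d), key=variance)          # a: max-variance dimension
    pts = sorted(points, key=lambda p: p[split]) # b: sort along split dim
    mid = len(pts) // 2                          # b: median as Node-data
    return {"point": pts[mid], "dim": split,
            "left": build_kdtree(pts[:mid]),     # c-d: recurse on subspaces
            "right": build_kdtree(pts[mid + 1:])}

tree = build_kdtree([(1.0, 9.0), (2.0, 3.0), (3.0, 6.0)])
# The y-coordinates have the larger spread, so the root splits on
# dimension 1 with the median point (3.0, 6.0).
```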
(2) Retrieve the approximate nearest neighbors of the query point in the KD-Tree by binary-tree search.
(3) By comparison with the other adjacent feature points, find the two feature points with the smallest Euclidean distance to the query point.
(4) Divide the nearest Euclidean distance by the second-nearest Euclidean distance. If this ratio is less than a proportional threshold γ, accept the pair of match points and the feature-point matching succeeds; otherwise the matching fails:
d_1 / d_2 < γ
where d_1 is the nearest Euclidean distance of the feature point to be matched and d_2 is its second-nearest Euclidean distance. The present invention sets the threshold γ = 0.8.
Judge whether the tracking-target feature vector matches the candidate-region feature vector successfully; if so, proceed to the next step; otherwise, return to step (3).
The embodiments of the present invention are to be understood as merely illustrating, not limiting, the scope of the invention. After reading what is recorded in the present invention, skilled persons can make various changes or modifications to it, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.

Claims (3)

1. A multi-target tracking method in video surveillance, which detects moving targets with the adjacent-frame difference method, establishes a tracking-target model for each detected moving target, and builds the ASIFT feature vector of the target model; predicts the candidate-region target with a particle filter, builds the ASIFT feature vector of the candidate target model, matches the tracking-target feature vector against the candidate-region feature vector, rejects wrong matches with the RANSAC algorithm, and updates the target model to realize target tracking, comprising the following steps:
Step A: Read the initial frame of the video and detect the moving targets in the sequence with the adjacent-frame difference method.
Read the initial frame and compute the difference of corresponding pixel values in adjacent frames:
D_k(x, y) = |f_k(x, y) − f_{k+1}(x, y)|
D_k(x, y) = 0 if D_k(x, y) < T_0; 1 if D_k(x, y) ≥ T_0
where f_k(x, y) is the current frame, f_{k+1}(x, y) is the next frame adjacent to it, D_k is the binarized absolute difference of the two frames, D_k = 1 marks the moving-target region, and T_0 = 0.7.
Step B: Build the Affine-SIFT (ASIFT) feature vector A of the tracking-target model. The concrete steps are:
Step B1: Sample the affine-transformation parameters of the moving-target region. The latitude angle θ is sampled as the geometric sequence 1, a, a^2, ..., a^n with a > 1 and n = 5; the longitude angle φ is sampled arithmetically as 0, b/t, ..., kb/t, where b = 72°, t = |1/cos θ|, and k is the last integer satisfying kb/t < 180°.
Step B2: Apply the affine transformation to the moving-target region with the sampled parameters:
I'(φ, t) = [cos φ, −sin φ; sin φ, cos φ] · [t, 0; 0, 1] · I
where I is the moving-target region and I' is the moving-target region after the affine transformation.
Step B3: Perform SIFT feature-point detection on the affinely simulated moving-target region.
Step B4: Describe the feature points of the moving-target region as vectors, building 128-dimensional ASIFT feature vectors.
Step B5: Apply principal component analysis (PCA) to the ASIFT feature vectors for dimensionality reduction, obtaining feature vector A.
Step C: Read the next frame.
Step D: Predict the candidate-region target in the frame read in step C with the particle filter method, and build the ASIFT feature vector B of the candidate target model. The concrete steps are:
D1: For the moving-target region, randomly select M particle samples at time t from the probability samples of the previous frame.
D2: Redistribute the probabilities of the M newly collected particles.
D3: Compute the weight of each of the M particles from its RGB histogram, then take the weighted average of the M particle positions to obtain the candidate region of the tracking target.
D4: Build the ASIFT feature vector of the candidate region, obtaining feature vector B.
Step E: Match the moving-target-region feature vector A against the candidate-region ASIFT feature vector B.
Step F: Reject wrong matches with the random sample consensus (RANSAC) method.
Step G: Update the target model and return to step C, realizing target tracking.
2. The multi-target tracking method in video surveillance according to claim 1, characterized in that:
in step B5, PCA is applied to the ASIFT feature vectors for dimensionality reduction with the following concrete steps:
B51: Describe each obtained ASIFT feature point as a 128-dimensional vector and treat the feature points as samples, giving the sample matrix [x_1, x_2, ..., x_n]^T, where n is the number of feature points and x_i is the 128-dimensional feature vector of the i-th point.
B52: Compute the mean feature vector of the n samples, x̄ = (1/n) Σ_{i=1}^{n} x_i.
B53: Compute the difference of every sample's feature vector from the mean feature vector, obtaining the difference vectors d_i = x_i − x̄, i = 1, 2, ..., n.
B54: Build the covariance matrix C = (1/n) Q Q^T, where Q = [d_1, d_2, ..., d_n].
B55: Compute the 128 eigenvalues λ_i and 128 eigenvectors e_i of the covariance matrix.
B56: Sort the 128 eigenvalues in descending order, λ_1 ≥ λ_2 ≥ ... ≥ λ_128, together with the corresponding eigenvectors (e_1, e_2, ..., e_128).
B57: Take the directions of the m largest eigenvectors as the principal components.
B58: Build a 128 × m matrix R whose columns are the m retained eigenvectors.
B59: Project every original 128-dimensional ASIFT descriptor by y_i = x_i · R, obtaining the 36-dimensional (m = 36) ASIFT descriptors y_1, y_2, ..., y_n, where x_i is the vector representation of an ASIFT feature point in the original target region and y_i its representation after dimensionality reduction.
3. The multi-target tracking method in video surveillance according to claim 1, characterized in that: in step E, the approximate nearest-neighbor search method based on KD-Tree is adopted when matching the moving-target-region feature vector against the candidate-region ASIFT feature vector.
CN201410497957.1A 2014-09-25 2014-09-25 Multi-target tracking method in video surveillance Active CN104200495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410497957.1A CN104200495B (en) 2014-09-25 2014-09-25 Multi-target tracking method in video surveillance


Publications (2)

Publication Number Publication Date
CN104200495A true CN104200495A (en) 2014-12-10
CN104200495B CN104200495B (en) 2017-03-29

Family

ID=52085781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410497957.1A Active CN104200495B (en) 2014-09-25 2014-09-25 Multi-target tracking method in video surveillance

Country Status (1)

Country Link
CN (1) CN104200495B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392469A (en) * 2014-12-15 2015-03-04 辽宁工程技术大学 Target tracking method based on soft characteristic theory
CN104751412A (en) * 2015-04-23 2015-07-01 重庆信科设计有限公司 Affine invariant feature-based image splicing method
CN105631895A (en) * 2015-12-18 2016-06-01 重庆大学 Temporal-spatial context video target tracking method combining particle filtering
CN105787963A (en) * 2016-02-26 2016-07-20 浪潮软件股份有限公司 Video target tracking method and device
CN106227216A (en) * 2016-08-31 2016-12-14 朱明� Home-services robot towards house old man
CN106296743A (en) * 2016-08-23 2017-01-04 常州轻工职业技术学院 A kind of adaptive motion method for tracking target and unmanned plane follow the tracks of system
CN106327528A (en) * 2016-08-23 2017-01-11 常州轻工职业技术学院 Moving object tracking method and operation method of unmanned aerial vehicle
CN107369164A (en) * 2017-06-20 2017-11-21 成都中昊英孚科技有限公司 A kind of tracking of infrared small object
CN107545583A (en) * 2017-08-21 2018-01-05 中国科学院计算技术研究所 A kind of target following acceleration method and system based on gauss hybrid models
CN107917646A (en) * 2017-01-10 2018-04-17 北京航空航天大学 A kind of anti-interference method of guidance of strong pulsed D based on the prediction of target terminal accessoble region
WO2018090912A1 (en) * 2016-11-15 2018-05-24 北京市商汤科技开发有限公司 Target object detection method, apparatus and system and neural network structure
CN108416258A (en) * 2018-01-23 2018-08-17 华侨大学 A kind of multi-human body tracking method based on human body model
CN108596949A (en) * 2018-03-23 2018-09-28 云南大学 Video frequency object tracking state analysis method, device and realization device
CN110111364A (en) * 2019-04-30 2019-08-09 腾讯科技(深圳)有限公司 Method for testing motion, device, electronic equipment and storage medium
CN110264501A (en) * 2019-05-05 2019-09-20 中国地质大学(武汉) A kind of adaptive particle filter video target tracking method and system based on CNN
CN110490902A (en) * 2019-08-02 2019-11-22 西安天和防务技术股份有限公司 Method for tracking target, device, computer equipment applied to smart city
CN110516528A (en) * 2019-07-08 2019-11-29 杭州电子科技大学 A kind of moving-target detection and tracking method based under movement background
CN110769214A (en) * 2018-08-20 2020-02-07 成都极米科技股份有限公司 Automatic tracking projection method and device based on frame difference
CN112559959A (en) * 2020-12-07 2021-03-26 中国西安卫星测控中心 Space-based imaging non-cooperative target rotation state calculation method based on feature vector
CN110110111B (en) * 2018-02-02 2021-12-31 兴业数字金融服务(上海)股份有限公司 Method and device for monitoring screen

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101026759A (en) * 2007-04-09 2007-08-29 华为技术有限公司 Visual tracking method and system based on particle filtering
CN102184551A (en) * 2011-05-10 2011-09-14 东北大学 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
CN102231191A (en) * 2011-07-17 2011-11-02 西安电子科技大学 Multimodal image feature extraction and matching method based on ASIFT (affine scale invariant feature transform)
US20120170659A1 (en) * 2009-09-04 2012-07-05 Stmicroelectronics Pvt. Ltd. Advance video coding with perceptual quality scalability for regions of interest
CN103440645A (en) * 2013-08-16 2013-12-11 东南大学 Target tracking algorithm based on self-adaptive particle filter and sparse representation

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392469B (en) * 2014-12-15 2017-05-31 辽宁工程技术大学 A target tracking method based on soft feature theory
CN104392469A (en) * 2014-12-15 2015-03-04 辽宁工程技术大学 Target tracking method based on soft feature theory
CN104751412A (en) * 2015-04-23 2015-07-01 重庆信科设计有限公司 Affine invariant feature-based image stitching method
CN104751412B (en) * 2015-04-23 2018-01-30 重庆信科设计有限公司 An image stitching method based on affine invariant features
CN105631895A (en) * 2015-12-18 2016-06-01 重庆大学 Temporal-spatial context video target tracking method combining particle filtering
CN105631895B (en) * 2015-12-18 2018-05-29 重庆大学 Spatio-temporal context video target tracking method combined with particle filtering
CN105787963A (en) * 2016-02-26 2016-07-20 浪潮软件股份有限公司 Video target tracking method and device
CN105787963B (en) * 2016-02-26 2019-04-16 浪潮软件股份有限公司 A video target tracking method and device
CN106327528A (en) * 2016-08-23 2017-01-11 常州轻工职业技术学院 Moving object tracking method and operation method of unmanned aerial vehicle
CN106296743A (en) * 2016-08-23 2017-01-04 常州轻工职业技术学院 An adaptive moving-target tracking method and unmanned aerial vehicle tracking system
CN106227216A (en) * 2016-08-31 2016-12-14 朱明 Home service robot for elderly people living at home
WO2018090912A1 (en) * 2016-11-15 2018-05-24 北京市商汤科技开发有限公司 Target object detection method, apparatus and system and neural network structure
CN108073864A (en) * 2016-11-15 2018-05-25 北京市商汤科技开发有限公司 Target object detection method, apparatus and system and neural network structure
CN107917646B (en) * 2017-01-10 2020-11-24 北京航空航天大学 Infrared air-to-air missile anti-interference guidance method based on target terminal reachable area prediction
CN107917646A (en) * 2017-01-10 2018-04-17 北京航空航天大学 An infrared air-to-air missile anti-interference guidance method based on target terminal reachable area prediction
CN107369164B (en) * 2017-06-20 2020-05-22 成都中昊英孚科技有限公司 Infrared weak and small target tracking method
CN107369164A (en) * 2017-06-20 2017-11-21 成都中昊英孚科技有限公司 An infrared small target tracking method
CN107545583A (en) * 2017-08-21 2018-01-05 中国科学院计算技术研究所 A target tracking acceleration method and system based on Gaussian mixture models
CN107545583B (en) * 2017-08-21 2020-06-26 中国科学院计算技术研究所 Target tracking acceleration method and system based on Gaussian mixture model
CN108416258A (en) * 2018-01-23 2018-08-17 华侨大学 A multi-human body tracking method based on a human body model
CN108416258B (en) * 2018-01-23 2020-05-08 华侨大学 Multi-human body tracking method based on human body part model
CN110110111B (en) * 2018-02-02 2021-12-31 兴业数字金融服务(上海)股份有限公司 Method and device for monitoring screen
CN108596949A (en) * 2018-03-23 2018-09-28 云南大学 Video object tracking state analysis method, device and implementation device
CN108596949B (en) * 2018-03-23 2020-06-12 云南大学 Video target tracking state analysis method and device and implementation device
CN110769214A (en) * 2018-08-20 2020-02-07 成都极米科技股份有限公司 Automatic tracking projection method and device based on frame difference
CN110111364A (en) * 2019-04-30 2019-08-09 腾讯科技(深圳)有限公司 Motion detection method and device, electronic equipment and storage medium
CN110111364B (en) * 2019-04-30 2022-12-27 腾讯科技(深圳)有限公司 Motion detection method and device, electronic equipment and storage medium
CN110264501A (en) * 2019-05-05 2019-09-20 中国地质大学(武汉) An adaptive particle filter video target tracking method and system based on CNN
CN110516528A (en) * 2019-07-08 2019-11-29 杭州电子科技大学 A moving-target detection and tracking method under a moving background
CN110490902A (en) * 2019-08-02 2019-11-22 西安天和防务技术股份有限公司 Target tracking method, device and computer equipment applied to smart city
CN110490902B (en) * 2019-08-02 2022-06-14 西安天和防务技术股份有限公司 Target tracking method and device applied to smart city and computer equipment
CN112559959A (en) * 2020-12-07 2021-03-26 中国西安卫星测控中心 Space-based imaging non-cooperative target rotation state calculation method based on feature vector
CN112559959B (en) * 2020-12-07 2023-11-07 中国西安卫星测控中心 Space-based imaging non-cooperative target rotation state resolving method based on feature vector

Also Published As

Publication number Publication date
CN104200495B (en) 2017-03-29

Similar Documents

Publication Publication Date Title
CN104200495A (en) Multi-target tracking method in video surveillance
CN110097093B (en) Method for accurately matching heterogeneous images
CN108470354B (en) Video target tracking method and device and implementation device
CN106780557B (en) Moving object tracking method based on optical flow method and key point features
CN102236794B (en) Recognition and pose determination of 3D objects in 3D scenes
Garg et al. Delta descriptors: Change-based place representation for robust visual localization
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
Lee et al. Place recognition using straight lines for vision-based SLAM
CN101924871A (en) Mean shift-based video target tracking method
CN104036523A (en) Improved mean shift target tracking method based on surf features
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN104574401A (en) Image registration method based on parallel line matching
Wu et al. FSANet: Feature-and-spatial-aligned network for tiny object detection in remote sensing images
CN113313701B (en) Electric vehicle charging port two-stage visual detection positioning method based on shape prior
CN112287906B (en) Template matching tracking method and system based on depth feature fusion
CN114708300A (en) Anti-blocking self-adaptive target tracking method and system
Zhang Sr et al. A ship target tracking algorithm based on deep learning and multiple features
Li et al. Adaptive and compressive target tracking based on feature point matching
Chen et al. Exploring depth information for head detection with depth images
Fritz et al. Urban object recognition from informative local features
CN115861352A (en) Monocular vision, IMU and laser radar data fusion and edge extraction method
Liu et al. [Retracted] Mean Shift Fusion Color Histogram Algorithm for Nonrigid Complex Target Tracking in Sports Video
Lu et al. A robust tracking architecture using tracking failure detection in Siamese trackers
Han et al. Adapting dynamic appearance for robust visual tracking
Zhang et al. Fish target detection and speed estimation method based on computer vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant