CN106296742A - An online target tracking method combining feature point matching - Google Patents

An online target tracking method combining feature point matching

Info

Publication number
CN106296742A
CN106296742A (application CN201610694150.6A)
Authority
CN
China
Prior art keywords
target
point
tracks
detector
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610694150.6A
Other languages
Chinese (zh)
Other versions
CN106296742B (en)
Inventor
戴声奎
刘兴云
高剑萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaqiao University
Original Assignee
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaqiao University filed Critical Huaqiao University
Priority to CN201610694150.6A priority Critical patent/CN106296742B/en
Publication of CN106296742A publication Critical patent/CN106296742A/en
Application granted granted Critical
Publication of CN106296742B publication Critical patent/CN106296742B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T2207/20004: Adaptive image processing
    • G06T2207/20021: Dividing image into blocks, subimages or windows
    • G06T2207/20024: Filtering details; G06T2207/20032: Median filtering
    (all under G Physics; G06 Computing, calculating or counting; G06T Image data processing or generation, in general; G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details)

Landscapes

  • Image Analysis (AREA)

Abstract

An online target tracking method combining feature point matching. First, the salient region of the tracked target is computed, yielding an optimal target tracking region. Then a tracker that combines median-flow tracking with feature point matching estimates the scale change of the target, and the final target position is obtained after rejecting noise points by hierarchical clustering. Finally, the scale information estimated by the tracker is fed back to the detector, which is accelerated by reducing the detection scale space. The invention builds on the TLD target tracking framework, exploits the saliency of the tracked target, and combines optical flow with feature point matching to strengthen the robustness of the tracking algorithm across different scenes; the method is simple and executes quickly.

Description

An online target tracking method combining feature point matching
Technical field
The present invention relates to the field of computer vision, and in particular to an online target tracking method combining feature point matching that can track targets in different scenes.
Background technology
Target tracking is one of the important research topics in computer vision. It is a multidisciplinary problem involving image analysis, pattern recognition, automatic control, and imaging science, and it is also the basis of subsequent operations such as target recognition and activity recognition. Algorithmic analysis of an image sequence marks out the spatial position parameters of a target of interest, such as its coordinate position, size, and angle, and associating the target's positions across video frames yields its motion trajectory. After decades of development, video target tracking has found wide application in intelligent video surveillance, human-computer interaction, visual navigation, the military field, image-guided surgery, motion analysis, games, and three-dimensional reconstruction.
At home and abroad, target tracking methods can be roughly divided into two classes. (1) Generative tracking methods represent the target with a learned model and search the image region for the area with the smallest reconstruction error, which is taken as the target region. Such methods drift easily when the target's appearance changes rapidly, and because they ignore the background around the target, their tracking quality degrades significantly in complex environments. (2) Discriminative tracking methods treat tracking as a binary target/background classification problem: by judging whether each search window is the target, tracking becomes the process of updating a classifier online. Learning-based tracking has become a focus of current research and includes offline and online learning. Offline tracking algorithms need a large number of training samples, demand considerable manpower and resources before tracking begins to reach a good tracking effect, cannot anticipate every possible target appearance during tracking, and are consequently specialised rather than general. Online tracking algorithms, by contrast, train the classifier from a small amount of prior information when the tracked object is initialised and update the target's appearance model with samples detected online; they adapt actively to target deformation, need no pre-training, and overcome the offline methods' poor adaptation to target deformation and environmental change. Online tracking has achieved considerable research results, but many problems still remain, so there is substantial room for improvement.
In summary, target tracking technology has broad application prospects, and tracking systems based on online learning have strong scene adaptability and application space. However, current online tracking techniques still need better tracking performance to meet practical requirements, leaving considerable room for development. Studying online target tracking is therefore of great practical significance.
Summary of the invention
The object of the present invention is to overcome the shortcomings of current target tracking algorithms by providing an online target tracking method combining feature point matching; the method is simple and efficient and adapts well to target tracking in a variety of scenes.
The present invention adopts the following technical scheme:
An online target tracking method combining feature point matching, characterised by comprising the following steps:
1) in the first frame, computing the salient region of the tracked target I to obtain the optimal target tracking region;
2) estimating the scale change of the optimal target tracking region with a tracker combining median-flow tracking and feature point matching, and obtaining the final target position after rejecting noise points by hierarchical clustering;
3) feeding the scale change estimated by the tracker back to the detector, accelerating the detector by reducing the detection scale space.
Preferably, step 1) includes:
1.1) computing salient image blocks: dividing I into non-overlapping 8 × 8 blocks, computing the saliency of each image block by a histogram-contrast method, and extracting the blocks with high saliency as the candidate tracking area;
1.2) obtaining the optimal tracking area: if fewer than Thr_patch blocks (the required number of salient blocks) are obtained in 1.1), tracking with the preset initial target frame; otherwise, starting from the centre of the initial target frame, expanding outward with a step of 8 while keeping the tracked target's aspect ratio, computing the ratio S_saliency of the salient-block area inside the current expanded region to the region's total area, and stopping the search when S_saliency falls below Thr_score, yielding the new tracking frame I_s.
Preferably, step 2) includes:
2.1) extracting FAST feature points and BRISK descriptors P = {(l1, f1), (l2, f2), ..., (ln, fn)} within a region twice the width of the target frame of I_s and storing them, where l1 to ln are the positions of the extracted feature points and f1 to fn their binary descriptors; the FAST feature points are divided into foreground points P_fg = {(l1, f1), ..., (l_n1, f_n1)} of the region to be tracked and background points P_bg = {(l1, f1), ..., (l_(n-n1), f_(n-n1))};
2.2) from the second frame onward, tracking the extracted feature points P_fg with forward-backward optical flow to obtain T_fg; matching the feature points within a region twice the width of the target frame traced in the previous frame, determining the validity of each match by the ratio θ between the nearest-neighbour and second-nearest-neighbour distances, thereby obtaining valid matches and rejecting points that match the background, which gives the target's matched feature points M_fg; in parallel with matching, the forward-backward optical flow tracking of the P_fg points keeps the trace points T_fg whose error is below Thr_FB; taking the union P_fuse of T_fg and M_fg, the target's scale s and angle φ are estimated with the median method;
2.3) applying the scale and angle transformation to the tracked original feature points P_fg to obtain new coordinates, rejecting outliers by hierarchical clustering, and estimating the target centre position center from the point set V_c remaining after outlier rejection; if the size of V_c is below 10% of the original number of feature points, tracking fails and the method enters step 2.4); otherwise the target position and size are obtained from center and the scale s, the method returns to step 2.2), and the next frame is tracked with the point set V_c;
2.4) if tracking fails, the tracker is reinitialised from the detector: feature points are extracted from the target found by the detector and matched against the feature point set P; if the number of matches exceeds 10% of the points to be tracked, initialisation succeeds and the method returns to step 2.2) for the next frame.
Preferably, an initial scale is determined in advance from the size of the tracked target frame, and in step 3) accelerating the detector by reducing the detection scale space specifically means: storing the target scale changes of the tracker's last ten frames, S = [s1, s2, s3, s4, s5, s6, s7, s8, s9, s10]; computing the median S_Med of the scale changes S and its relation to the initial scale; finding the corresponding grid scale s_grid through this mapping and having the detector search only the scales from s_grid-6 to s_grid+6; and, if tracking fails, performing full-scale detection on the current frame.
From the above description of the invention it can be seen that, compared with the prior art, the invention has the following advantages:
1. Extracting the salient region of the tracked target improves the algorithm's resistance to background clutter, simply and effectively.
2. Incorporating feature point matching improves the robustness of the tracking algorithm across different scenes and gives it generality.
3. Adaptive scale search in the detector improves detector speed, simply and effectively.
Brief description of the drawings
Fig. 1 is a block diagram of the TLD algorithm.
Fig. 2 is a flow chart of the method of the invention.
Detailed description of the invention
The invention is further described below through specific embodiments.
Investigation shows that the TLD tracking algorithm suffers from shortcomings such as tracking drift and susceptibility to interference from complex backgrounds. The present invention proposes an online target tracking method combining feature point matching. First, the salient region of the tracked target is computed, yielding an optimal target tracking region; then a tracker combining median-flow tracking with feature point matching estimates the scale change of the target, and the final target position is obtained after rejecting noise points by hierarchical clustering; finally, the scale information estimated by the tracker is fed back to the detector, which is accelerated by reducing the detection scale space. The method is robust to target deformation, in-plane rotation, and fast motion.
The concrete processing procedure is as follows.
To describe the algorithm, the following variables are first defined:
Current frame image H: picture width w × picture height h;
Tracked target I: picture width w1 × picture height h1;
Positive sample set Px and negative sample set Nx of the random fern classifier;
Positive sample set PEx and negative sample set NEx of the nearest-neighbour classifier;
Saliency value of an image block in the extended region: s_E(i), where i is the image block index;
Saliency value of an image block in the shrunken region: s_S(i), where i is the image block index;
Extracted salient target I_s: picture width w2 × picture height h2, located at image coordinates (x2, y2);
Initial feature point set P: FAST feature points described with BRISK binary descriptors.
The present invention is described in further detail below.
As shown in Fig. 2, the online target tracking method combining feature point matching of the present invention specifically includes the following steps:
Step 1: initial sample extraction
(1) In the first frame, the detector in Fig. 1 uses a multi-scale scanning factor of 1.2, with horizontal and vertical step sizes of 10% of the video frame width and height respectively, building a 21-level multi-scale scanning grid. The 10 image patches with the highest overlap with the target bounding box are extracted, and each patch undergoes a geometric affine transformation: specifically, each patch is given a translation within a 1% range, a scale change within a 1% range, and a rotation within a 10° range, plus additive Gaussian noise, with the translation, scale, and rotation all generated at random within those ranges. Through these transformations the 10 patches generate 200 patches, which are stored in the set Px as the detector's initial positive training samples. Negative samples are patches whose overlap with the target frame is below the 0.2 threshold; they are stored in the set Nx. The patch in Px with the highest overlap with the tracked target is normalised to 15 × 15 and stored in PEx, and 100 samples whose variance exceeds 50% of the tracked target's variance are randomly drawn from Nx and stored in NEx. The learning module in Fig. 1 includes the random fern classifier, the nearest-neighbour classifier, and the variance classifier. The online model in the figure learns from the samples collected by the tracker and detector, and the integration module combines the results each of them obtains and outputs the final target frame position. The TLD detection-learning module in Fig. 2 is the learning module of Fig. 1.
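The positive-sample augmentation described above can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: `augment_patch` and its parameter names are hypothetical, and nearest-neighbour warping stands in for whatever interpolation the original uses.

```python
import numpy as np

def augment_patch(patch, rng, shift=0.01, scale=0.01, angle_deg=10.0, noise_sigma=5.0):
    """Randomly translate (within 1%), scale (within 1%), rotate (within 10 deg)
    a patch and add Gaussian noise, as in the TLD-style augmentation above."""
    h, w = patch.shape
    ang = np.deg2rad(rng.uniform(-angle_deg, angle_deg))
    s = 1.0 + rng.uniform(-scale, scale)
    tx = rng.uniform(-shift, shift) * w
    ty = rng.uniform(-shift, shift) * h
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # inverse map: rotate/scale about the centre, then translate
    xr = (np.cos(ang) * (xs - cx) + np.sin(ang) * (ys - cy)) / s + cx - tx
    yr = (-np.sin(ang) * (xs - cx) + np.cos(ang) * (ys - cy)) / s + cy - ty
    xi = np.clip(np.round(xr).astype(int), 0, w - 1)
    yi = np.clip(np.round(yr).astype(int), 0, h - 1)
    out = patch[yi, xi].astype(float) + rng.normal(0.0, noise_sigma, patch.shape)
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
base_patches = [rng.integers(0, 256, (15, 15), dtype=np.uint8) for _ in range(10)]
# 10 patches x 20 random warps each -> 200 positive samples, as in step 1(1)
positives = [augment_patch(p, rng) for p in base_patches for _ in range(20)]
print(len(positives))
```

Twenty warps per patch is chosen only to reach the 200 samples the text specifies.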
(2) Feed Px and Nx from (1) into the random fern classifier for learning.
(3) Feed PEx and NEx from (1) into the nearest-neighbour classifier for learning.
(4) Initialise the detector with the random fern classifier and the nearest-neighbour classifier learned in (2) and (3).
Step 2: detector detection of the target
The detector scans all multi-scale windows of the current frame and passes each candidate patch through the variance classifier, the random fern classifier, and the nearest-neighbour classifier in turn. The variance classifier's threshold is half the variance of the target patch, so patches with less than half the target's variance are discarded. The remaining patches enter the random fern classifier, whose threshold is set to 0.67; patches whose voted score exceeds 0.67 are fed to the nearest-neighbour classifier. If more than 100 patches remain at this point, the 100 with the highest voting scores undergo nearest-neighbour evaluation, and the patch with the highest evaluation score is taken as the detected target position.
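The three-stage cascade above can be sketched as below. This is a hedged illustration: `fern_score` merely stands in for the random-fern posterior (the real classifier votes over learned binary pixel tests), and the nearest-neighbour confidence uses a plain normalised-patch distance in the relative-similarity form d_neg / (d_pos + d_neg).

```python
import numpy as np

def nn_confidence(patch, pos_set, neg_set):
    """Relative similarity of the nearest-neighbour stage:
    conf = d_neg / (d_pos + d_neg) over patch distances."""
    def dist(a, b):
        return float(np.linalg.norm(a.astype(float) - b.astype(float)))
    d_pos = min(dist(patch, p) for p in pos_set)
    d_neg = min(dist(patch, n) for n in neg_set)
    return d_neg / (d_pos + d_neg + 1e-12)

def cascade(windows, target_var, fern_score, pos_set, neg_set,
            fern_thr=0.67, keep=100):
    """Variance -> fern -> nearest-neighbour cascade from step 2.
    `fern_score` is any callable returning a score in [0, 1]."""
    stage1 = [w for w in windows if w.var() >= 0.5 * target_var]      # variance filter
    stage2 = [w for w in stage1 if fern_score(w) > fern_thr]          # fern voting
    stage2 = sorted(stage2, key=fern_score, reverse=True)[:keep]      # top 100
    if not stage2:
        return None
    return max(stage2, key=lambda w: nn_confidence(w, pos_set, neg_set))

rng = np.random.default_rng(1)
target = rng.integers(0, 256, (15, 15)).astype(float)   # hypothetical target patch
near = target + rng.normal(0, 2, target.shape)          # looks like the target
flat = np.full((15, 15), 128.0)                         # low variance: stage 1 drops it
noise = rng.integers(0, 256, (15, 15)).astype(float)    # random clutter
fern = lambda w: 1.0 / (1.0 + np.abs(w - target).mean() / 64.0)  # stand-in posterior
best = cascade([flat, noise, near], target.var(), fern, [target], [noise])
print(best is near)
```

The stand-in fern posterior is only there to make the demo self-contained; any learned classifier with the same interface would slot in.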
Step 3: computing the salient region of the tracked target, as shown in the target initialisation of Fig. 2:
(1) Computing salient image blocks: in the current tracking frame H, extract the tracked target patch I and divide it into non-overlapping 8 × 8 blocks. Define a shrunken region I_S and an extended region I_E, of sizes 0.6w1 × 0.6h1 and (w1+16) × (h1+16) respectively. For each image block i of I, compute its saliency with respect to the shrunken and the extended region as accumulated colour-histogram contrast:

s_S(i) = Σ_{j=1..K} d(c(i), c_S(j)),   s_E(i) = Σ_{j=1..K} d(c(i), c_E(j)),

where c(i) is the colour histogram of the i-th block of I, c_S(j) and c_E(j) are the colour histograms of the j-th blocks of I_S and I_E respectively, d(·, ·) is a histogram distance, and K = 4 means 4 neighbouring histograms are used. The image blocks with high saliency are chosen as the candidate tracking area.
(2) Obtaining the optimal tracking area: if fewer than Thr_patch blocks (the required number of salient blocks, set to 50) are obtained in (1), track with the preset initial target frame. Otherwise, starting from the centre of the initial target frame, expand outward with a step of 8 while keeping the tracked target's aspect ratio, compute the ratio S_saliency of the salient-block area inside the current expanded region to the region's total area, and stop the search when S_saliency drops below Thr_score = 0.7, yielding the new tracking frame I_s.
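The histogram-contrast block saliency of step 3(1) can be sketched as follows, on grey-level blocks rather than the colour histograms of the patent, with the K = 4 neighbours taken as the four adjacent blocks and L1 as the histogram distance; function names and these simplifications are illustrative.

```python
import numpy as np

def block_hist(block, bins=8):
    """8-bin grey-level histogram of one 8x8 block, L1-normalised."""
    h, _ = np.histogram(block, bins=bins, range=(0, 256))
    return h / h.sum()

def block_saliency(img, block=8):
    """Per-block saliency: summed L1 distance between a block's histogram
    and those of its K = 4 (left/right/up/down) neighbours."""
    gh, gw = img.shape[0] // block, img.shape[1] // block
    hists = [[block_hist(img[r*block:(r+1)*block, c*block:(c+1)*block])
              for c in range(gw)] for r in range(gh)]
    sal = np.zeros((gh, gw))
    for r in range(gh):
        for c in range(gw):
            for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < gh and 0 <= cc < gw:
                    sal[r, c] += np.abs(hists[r][c] - hists[rr][cc]).sum()
    return sal

# synthetic frame: a bright square on a dark background
img = np.zeros((64, 64))
img[16:48, 16:48] = 200.0
sal = block_saliency(img)
# blocks on the square's boundary contrast with their neighbours; uniform
# background blocks score zero
print(sal.max() > sal[0, 0])
```

Thresholding `sal` and keeping the high-scoring blocks would give the candidate tracking area of step 3(1).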
Step 4: the tracker combining optical flow and feature point matching; see the target tracking module in Fig. 2:
(1) Extract FAST feature points and BRISK descriptors P = {(l1, f1), (l2, f2), ..., (ln, fn)} within a region twice the width of the target frame of I_s and store them, where l1 to ln are the positions of the extracted feature points and f1 to fn their binary descriptors. Divide the feature points into foreground points P_fg = {(l1, f1), ..., (l_n1, f_n1)} of the region to be tracked and background points P_bg = {(l1, f1), ..., (l_(n-n1), f_(n-n1))};
(2) From the second frame onward, track the extracted feature points P_fg with forward-backward optical flow to obtain T_fg. Match the feature points within a region twice the width of the target frame traced in the previous frame, using the ratio θ between the nearest-neighbour and second-nearest-neighbour distances to decide whether a match is valid (θ is the feature-matching parameter: the further the nearest-neighbour distance falls below the second-nearest, the more likely the match is correct; here θ = 0.8), obtaining valid matches while rejecting points that match the background, which gives the target's matched feature points M_fg. In parallel with matching, the forward-backward optical flow tracking of the P_fg points keeps the trace points T_fg whose error is below Thr_FB, the forward-backward tracking error threshold, with 10 ≤ Thr_FB ≤ 20. Take the union P_fuse of T_fg and M_fg and estimate the target's scale s and angle φ with the median method;
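The ratio-test matching and the median scale/angle estimate of step 4(2) can be sketched without OpenCV as follows. The Hamming ratio test mirrors how binary BRISK descriptors are matched, and `median_scale_rotation` follows the median-flow idea of taking medians over pairwise point ratios; names and the synthetic data are illustrative.

```python
import numpy as np
from itertools import combinations

def ratio_match(desc_q, desc_t, theta=0.8):
    """Nearest/second-nearest Hamming ratio test on binary descriptors:
    a match is kept only when d1 < theta * d2 (theta = 0.8 in the text)."""
    matches = []
    for i, d in enumerate(desc_q):
        dists = np.count_nonzero(desc_t != d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < theta * dists[j2]:
            matches.append((i, j1))
    return matches

def median_scale_rotation(pts_prev, pts_cur):
    """Median-flow style estimate: scale s is the median ratio of pairwise
    distances, angle phi the median change in pairwise direction."""
    scales, angles = [], []
    for i, j in combinations(range(len(pts_prev)), 2):
        v0 = pts_prev[j] - pts_prev[i]
        v1 = pts_cur[j] - pts_cur[i]
        n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
        if n0 > 1e-6 and n1 > 1e-6:
            scales.append(n1 / n0)
            angles.append(np.arctan2(v1[1], v1[0]) - np.arctan2(v0[1], v0[0]))
    return np.median(scales), np.median(angles)

rng = np.random.default_rng(2)
desc_prev = rng.integers(0, 2, (20, 64), dtype=np.uint8)
desc_cur = desc_prev.copy()            # identical descriptors -> clean matches
matches = ratio_match(desc_prev, desc_cur)
pts_prev = rng.uniform(0, 100, (8, 2))
pts_cur = pts_prev * 1.5               # pure scaling by 1.5, no rotation
s, phi = median_scale_rotation(pts_prev, pts_cur)
print(len(matches), s)
```

On real frames the two descriptor sets would come from consecutive frames and the point pairs from the forward-backward optical flow survivors.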
(3) Apply the scale and angle transformation to the tracked original feature points P_fg to obtain new coordinates, reject outliers by hierarchical clustering, and estimate the target centre position center from the point set V_c remaining after outlier rejection. If the size of V_c falls below 10% of the original number of feature points, tracking fails; otherwise obtain the target position and size from center and the scale information, return to step (2), and track the next frame with the point set V_c.
(4) If tracking fails, reinitialise the tracker from the detector: extract feature points from the target found by the detector and match them against the feature point set P. If the number of matches exceeds 10% of the original feature points, initialisation succeeds; return to step (2) for the next frame.
Step 5: improving detector speed:
Store the target scale changes of the tracker's last ten frames, S = [s1, s2, s3, s4, s5, s6, s7, s8, s9, s10]; compute the median S_Med of the scale changes S and its relation to the initial scale; find the corresponding grid scale s_grid through this mapping and have the detector search only the scales from s_grid-6 to s_grid+6; if the tracker in Fig. 1 fails, perform full-scale detection on the current frame.
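Step 5's reduced scale search can be sketched as follows, assuming the 21-level, factor-1.2 scanning pyramid from step 1 and an index window of ±6; function and variable names are illustrative.

```python
import numpy as np

SCALES = 1.2 ** np.arange(-10, 11)   # the 21-level scanning pyramid (factor 1.2)

def detector_scale_range(recent_scales, window=6):
    """Map the median of the last ten tracked scale changes onto the scanning
    grid and return the reduced index range [s_grid-6, s_grid+6] that the
    detector should search, as in step 5."""
    s_med = float(np.median(recent_scales))
    g = int(np.argmin(np.abs(SCALES - s_med)))
    return max(0, g - window), min(len(SCALES) - 1, g + window)

# ten recent per-frame scales hovering around 1.2**2 ~ 1.44
recent = [1.40, 1.45, 1.44, 1.43, 1.46, 1.44, 1.42, 1.45, 1.44, 1.43]
lo, hi = detector_scale_range(recent)
print(lo, hi)  # -> 6 18: 13 of the 21 levels instead of all of them
```

A tracking failure would simply bypass this and scan the full index range 0 to 20.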
Step 6: obtaining the final result
After steps 1 to 5, if both the tracker and the detector locate the target, their positions are fused by weighted averaging: the tracker's target frame is weighted 0.6 and the detector's 0.4, giving the final result. Tables 1 and 2 below give, for different algorithms on a standard test set, the tracking success rate and the tracking distance precision respectively.
Table 1. Tracking success rate of different algorithms on a standard test set
Table 2. Tracking distance precision of different algorithms on a standard test set
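The weighted position fusion of step 6 can be sketched in a few lines; the (x, y, w, h) box format is an assumption.

```python
def fuse_boxes(track_box, detect_box, w_track=0.6, w_detect=0.4):
    """Step 6 fusion: weighted average of the tracker box (weight 0.6)
    and the detector box (weight 0.4), boxes given as (x, y, w, h)."""
    return tuple(w_track * t + w_detect * d
                 for t, d in zip(track_box, detect_box))

final = fuse_boxes((100.0, 50.0, 40.0, 30.0), (110.0, 60.0, 50.0, 40.0))
print(final)  # ~ (104.0, 54.0, 44.0, 34.0)
```

If only one of the two modules finds the target, its box would be used directly rather than fused.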
The above is only a specific embodiment of the present invention, but the design concept of the invention is not limited thereto; any insubstantial modification of the invention that makes use of this concept falls within the scope of infringement of the invention's protection.

Claims (4)

1. An online target tracking method combining feature point matching, characterised by comprising the following steps:
1) in the first frame, computing the salient region of the tracked target I to obtain the optimal target tracking region;
2) estimating the scale change of the optimal target tracking region with a tracker combining median-flow tracking and feature point matching, and obtaining the final target position after rejecting noise points by hierarchical clustering;
3) feeding the scale change estimated by the tracker back to the detector, accelerating the detector by reducing the detection scale space.
2. The online target tracking method combining feature point matching of claim 1, characterised in that step 1) includes:
1.1) computing salient image blocks: dividing I into non-overlapping 8 × 8 blocks, computing the saliency of each image block by a histogram-contrast method, and extracting the blocks with high saliency as the candidate tracking area;
1.2) obtaining the optimal tracking area: if fewer than Thr_patch blocks (the required number of salient blocks) are obtained in 1.1), tracking with the preset initial target frame; otherwise, starting from the centre of the initial target frame, expanding outward with a step of 8 while keeping the tracked target's aspect ratio, computing the ratio S_saliency of the salient-block area inside the current expanded region to the region's total area, and stopping the search when S_saliency falls below Thr_score, yielding the new tracking frame I_s.
3. The online target tracking method combining feature point matching of claim 2, characterised in that step 2) includes:
2.1) extracting FAST feature points and BRISK descriptors P = {(l1, f1), (l2, f2), ..., (ln, fn)} within a region twice the width of the target frame of I_s and storing them, where l1 to ln are the positions of the extracted feature points and f1 to fn their binary descriptors, and dividing the FAST feature points into foreground points P_fg = {(l1, f1), ..., (l_n1, f_n1)} of the region to be tracked and background points P_bg = {(l1, f1), ..., (l_(n-n1), f_(n-n1))};
2.2) from the second frame onward, tracking the extracted feature points P_fg with forward-backward optical flow to obtain T_fg; matching the feature points within a region twice the width of the target frame traced in the previous frame, determining the validity of each match by the ratio θ between the nearest-neighbour and second-nearest-neighbour distances, thereby obtaining valid matches and rejecting points that match the background, which gives the target's matched feature points M_fg; in parallel with matching, keeping from the forward-backward optical flow tracking of the P_fg points the trace points T_fg whose error is below Thr_FB; taking the union P_fuse of T_fg and M_fg and estimating the target's scale s and angle φ with the median method;
2.3) applying the scale and angle transformation to the tracked original feature points P_fg to obtain new coordinates, rejecting outliers by hierarchical clustering, and estimating the target centre position center from the point set V_c remaining after outlier rejection; if the size of V_c is below 10% of the original number of feature points, tracking fails and the method enters step 2.4); otherwise obtaining the target position and size from center and the scale s, returning to step 2.2), and tracking the next frame with the point set V_c;
2.4) if tracking fails, reinitialising the tracker from the detector: extracting feature points from the target found by the detector and matching them against the feature point set P; if the number of matches exceeds 10% of the points to be tracked, initialisation succeeds and the method returns to step 2.2) for the next frame.
4. The online target tracking method combining feature point matching of claim 3, characterised in that an initial scale is determined in advance from the size of the tracked target frame, and in step 3) accelerating the detector by reducing the detection scale space specifically means: storing the target scale changes of the tracker's last ten frames, S = [s1, s2, s3, s4, s5, s6, s7, s8, s9, s10]; computing the median S_Med of the scale changes S and its relation to the initial scale; finding the corresponding grid scale s_grid through this mapping and having the detector search only the scales from s_grid-6 to s_grid+6; and, if tracking fails, performing full-scale detection on the current frame.
CN201610694150.6A 2016-08-19 2016-08-19 An online target tracking method combining feature point matching Active CN106296742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610694150.6A CN106296742B (en) 2016-08-19 2016-08-19 An online target tracking method combining feature point matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610694150.6A CN106296742B (en) 2016-08-19 2016-08-19 An online target tracking method combining feature point matching

Publications (2)

Publication Number Publication Date
CN106296742A true CN106296742A (en) 2017-01-04
CN106296742B CN106296742B (en) 2019-01-29

Family

ID=57660693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610694150.6A Active CN106296742B (en) 2016-08-19 2016-08-19 An online target tracking method combining feature point matching

Country Status (1)

Country Link
CN (1) CN106296742B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831618A (en) * 2012-07-20 2012-12-19 西安电子科技大学 Hough forest-based video target tracking method
CN104021564A (en) * 2014-06-26 2014-09-03 广东工业大学 Adaptive mean shift algorithm based on local invariant feature detection
CN105184822A (en) * 2015-09-29 2015-12-23 中国兵器工业计算机应用技术研究所 Target tracking template updating method

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017161586A (en) * 2016-03-07 2017-09-14 キヤノン株式会社 Image shake correction device and method of controlling image shake correction device, imaging device, program, and storage medium
CN109791615B (en) * 2017-05-05 2023-07-28 京东方科技集团股份有限公司 Method for detecting and tracking a target object, target object tracking device and computer program product
CN109791615A (en) * 2017-05-05 2019-05-21 京东方科技集团股份有限公司 Method for detecting and tracking a target object, target object tracking device and computer program product
CN109215054A (en) * 2017-06-29 2019-01-15 沈阳新松机器人自动化股份有限公司 Face tracking method and system
CN109523570B (en) * 2017-09-20 2021-01-22 杭州海康威视数字技术股份有限公司 Motion parameter calculation method and device
CN109523570A (en) * 2017-09-20 2019-03-26 杭州海康威视数字技术股份有限公司 Motion parameter calculation method and device
CN108022254A (en) * 2017-11-09 2018-05-11 华南理工大学 A space-time context target tracking method based on feature point assistance
CN108022254B (en) * 2017-11-09 2022-02-15 华南理工大学 Feature point assistance-based space-time context target tracking method
CN108460786A (en) * 2018-01-30 2018-08-28 中国航天电子技术研究院 A high-speed tracking method for UAV light spots
CN108776974A (en) * 2018-05-24 2018-11-09 南京行者易智能交通科技有限公司 A real-time target tracking method suitable for public transport scenes
CN109636854A (en) * 2018-12-18 2019-04-16 重庆邮电大学 An augmented reality 3D tracking registration method based on LINE-MOD template matching
CN112734797A (en) * 2019-10-29 2021-04-30 浙江商汤科技开发有限公司 Image feature tracking method and device and electronic equipment
CN111862147A (en) * 2020-06-03 2020-10-30 江西江铃集团新能源汽车有限公司 Method for tracking multiple vehicles and multiple human targets in video
CN111862147B (en) * 2020-06-03 2024-01-23 江西江铃集团新能源汽车有限公司 Method for tracking multiple vehicle and human targets in video
CN111882583A (en) * 2020-07-29 2020-11-03 成都英飞睿技术有限公司 Moving target detection method, device, equipment and medium
CN111882583B (en) * 2020-07-29 2023-11-14 成都英飞睿技术有限公司 Moving object detection method, device, equipment and medium
CN112614153A (en) * 2020-11-26 2021-04-06 北京理工大学 Ground moving target tracking method based on differential forward and backward optical flows
CN112465876A (en) * 2020-12-11 2021-03-09 河南理工大学 Stereo matching method and equipment
CN112700657A (en) * 2020-12-21 2021-04-23 北京百度网讯科技有限公司 Method and device for generating detection information, road side equipment and cloud control platform
CN112700657B (en) * 2020-12-21 2023-04-28 阿波罗智联(北京)科技有限公司 Method and device for generating detection information, road side equipment and cloud control platform
CN117635613A (en) * 2024-01-25 2024-03-01 武汉大学人民医院(湖北省人民医院) Fundus focus monitoring device and method
CN117635613B (en) * 2024-01-25 2024-04-16 武汉大学人民医院(湖北省人民医院) Fundus focus monitoring device and method

Also Published As

Publication number Publication date
CN106296742B (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN106296742B (en) An online target tracking method combining feature point matching
US10990191B2 (en) Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data
CN109919974A (en) Online multi-object tracking method based on multi-candidate association within the R-FCN framework
CN110060277A (en) A visual SLAM method based on multi-feature fusion
CN110555412B (en) End-to-end human body gesture recognition method based on combination of RGB and point cloud
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN103426008B (en) Visual human hand tracking method and system based on online machine learning
CN103886619A (en) Multi-scale superpixel-fused target tracking method
CN107808376A (en) A hand-raising detection method based on deep learning
CN111476077A (en) Multi-view gait recognition method based on deep learning
CN110555867B (en) Multi-target object tracking method integrating object capturing and identifying technology
CN112396655B (en) Point cloud data-based ship target 6D pose estimation method
CN106327528A (en) Moving object tracking method and operation method of unmanned aerial vehicle
CN105741326B (en) A target tracking method for video sequences based on cluster fusion
CN107230219A (en) A target person detection and following method for a monocular robot
CN112614161A (en) Three-dimensional object tracking method based on edge confidence
CN108520529A (en) Visible light and infrared video target tracking method based on convolutional neural networks
CN108898621A (en) A correlation filter tracking method based on instance-aware object proposal windows
Songhui et al. Objects detection and location based on mask RCNN and stereo vision
Jean et al. Body tracking in human walk from monocular video sequences
CN104537690B (en) A moving point target detection method based on joint maximum time indexing
CN115855018A (en) Improved simultaneous localization and mapping method based on combined point-line features
CN110349184A (en) Multi-pedestrian tracking method based on iterative filtering and observation discrimination
CN110895684B (en) Gesture motion recognition method based on Kinect
CN111723737A (en) Target detection method based on multi-scale matching strategy deep feature learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant