CN102831618A - Hough forest-based video target tracking method - Google Patents


Info

Publication number
CN102831618A
Authority
CN
China
Prior art keywords: target, image, characteristic, image block, video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102532672A
Other languages
Chinese (zh)
Other versions
CN102831618B (en)
Inventor
田小林
焦李成
李敏敏
张小华
王桂婷
朱虎明
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201210253267.2A priority Critical patent/CN102831618B/en
Publication of CN102831618A publication Critical patent/CN102831618A/en
Application granted granted Critical
Publication of CN102831618B publication Critical patent/CN102831618B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a Hough forest-based video target tracking method, which mainly solves the prior-art problem that tracking easily fails when the target is partially occluded or undergoes non-rigid deformation. The method comprises the following steps: (1) inputting the first frame of a video and manually marking the target to be tracked; (2) extracting the features of the video image from the input first frame; (3) establishing and initializing a Hough forest detector; (4) detecting the target and performing Hough voting; (5) tracking the target with a Lucas-Kanade tracker to obtain the scale change s of the target frame size; (6) determining the target position from the vote peak of the Hough forest detection and the tracking result of the Lucas-Kanade tracker; (7) adjusting the width and height of the target frame according to the scale change s and displaying the result; (8) retraining the Hough forest detector; and (9) repeating steps (4)-(8) until the video ends. The method achieves accurate tracking of occluded targets and can be applied to the fields of human-machine interaction and traffic monitoring.

Description

Video target tracking method based on the Hough forest
Technical field
The invention belongs to the technical field of image processing and relates to a video target tracking method, which is applicable to fields such as human-machine interaction and target tracking.
Background technology
Target tracking means analyzing a captured image sequence to compute the position of a target in every frame and then derive the relevant parameters. Target tracking is an indispensable key technique in computer vision and is widely used in robot visual navigation, security monitoring, traffic control, video compression, meteorological analysis, and many other areas. In the military field it has been successfully applied to imaging guidance, military reconnaissance, and surveillance; in the civilian field it is widely used in visual monitoring and many aspects of daily life. Target tracking can also be applied to the security monitoring of communities and critical facilities, and in intelligent transportation systems to track vehicles in real time, thereby obtaining valuable traffic parameters such as traffic flow, vehicle count, vehicle speed, and vehicle density, while also detecting accidents, faults, and other emergencies.
The patent application "A video target tracking method based on the cumulative histogram particle filter" (application number 201110101737.9, publication number CN102184548A) filed by Zhejiang University of Technology discloses a video target tracking method based on a cumulative-histogram particle filter. The method combines the color cumulative histogram with a particle filter tracking algorithm: first, the color cumulative histogram of the target is computed from the detected target region; then the particle filter is initialized and tracking begins. To locate the target in a new frame, a temporary cumulative histogram is computed with the coordinates of each particle as the center point, the weight of each particle is computed and normalized, a new target cumulative histogram is obtained and used to update the model, and finally the particles are resampled with a replacement selection algorithm. Although this method can track correctly when the target and background colors are similar, it describes the target with a histogram, so it is not robust to non-rigid deformation of the target and easily loses track.
The patent application "Video target tracking method based on mean shift" (application number 201010110655.6, publication number CN101924871A) filed by Soochow University discloses a video target tracking method based on Mean Shift. The method first extracts the SIFT features of the tracked target and then matches the SIFT features of the target with the Mean Shift algorithm, thereby tracking the target. Although this method adopts SIFT features, which are invariant to rotation, scale, and brightness changes, its high computational complexity greatly reduces the real-time performance of target tracking.
Summary of the invention
In view of the deficiencies of the above prior art, the present invention proposes a Hough forest-based video target tracking method, to improve the robustness of target tracking against occlusion and non-rigid deformation as well as its real-time performance.
The technical idea of the present invention is as follows: a Hough transform is combined with a random forest classifier to form a detector that detects the target, while a Lucas-Kanade tracker tracks the target at the same time. Combining the Hough transform with the random forest classifier improves the performance of the random forest classifier and makes it more robust to target occlusion and non-rigid deformation of the target; the Lucas-Kanade tracker both adjusts the scale of the target region and further confirms the position of the target, so that the tracking adapts well to scale changes of the target. The concrete implementation steps are as follows:
(1) Input the first frame of a video and manually mark the target to be tracked;
(2) Extract the features of the video image from the input first frame; these features comprise: the Lab features, the histogram-of-oriented-gradients (HOG) features, and the first- and second-derivative features of the image in the x and y directions;
(3) Establish and initialize the Hough forest detector:
3a) Set the number of decision trees in the Hough forest detector to 20. For each decision tree, randomly generate 8 pairs of in-block position offsets (l, s) and (p, q), each in the range 0-12, and randomly pick one feature channel for each pair;
3b) Take the target region as the positive sample and the region outside the target region as the negative sample, and expand the target region outward by 20 pixels to form the sample update region. Take a 12 x 12 image block at every pixel of the sample update region. For each decision tree, determine the 8 feature-point pairs of the image block from the 8 pairs of in-block position offsets (l, s) and (p, q), extract the features of these feature points to train the decision tree, and produce the image block eigenvalue. If the image block is a positive sample, compute the position offset d between the block center and the target center, update the positive-negative sample ratio corresponding to the image block eigenvalue in this decision tree, and store the position offset d corresponding to the eigenvalue; if the image block is a negative sample, only update the positive-negative sample ratio corresponding to the image block eigenvalue in this tree;
3c) From the 20 decision trees thus built, select the 10 trees with the highest positive-negative sample ratio to form the Hough forest detector;
(4) Detect the target and perform Hough voting:
4a) Load a new frame of the video, extract its features by the method of step (2), and take the target region of the previous frame expanded to twice its size as the search region. In the search region, take a 12 x 12 image block centered at each pixel, classify each block with the decision trees of the random forest classifier in turn, and compute the image block eigenvalue. When a decision tree judges that an image block belongs to the target, compute the target center from the block center and the position offset d stored for the block eigenvalue, and cast a vote for it;
4b) Take the position of the vote peak as the position of the target;
(5) Track the target with the Lucas-Kanade tracker to obtain the scale change s of the target frame size;
(6) Determine the target position from the vote peak of the Hough forest detection and the tracking result of the Lucas-Kanade tracker, according to the following rules:
If the value of the vote peak of the Hough forest detection is greater than threshold 1, take the detected position as the position of the target in the current frame and go to step (8);
If the value of the vote peak is greater than threshold 2 but less than threshold 1, then when the detection result and the Lucas-Kanade tracking result differ by less than 5 pixels in both the x and y directions, take the average of the detection and tracking results as the target position and go to step (8); otherwise, take the detection result as the target position and go to step (7);
If the value of the vote peak is greater than threshold 3 but less than threshold 2, take the detection result as the target position and go to step (7);
If the value of the vote peak is less than threshold 3, take the tracking result of the Lucas-Kanade tracker as the target position and go to step (7);
(7) Adjust the width and height of the target frame according to the scale change s of the target frame size, and display the result;
(8) Retrain the Hough forest classifier:
8a) Take the target region as the positive sample and the region outside it as the negative sample, and expand the target region by 20 pixels to form the update region;
8b) In the update region, take a 12 x 12 image block centered at each pixel. For each decision tree, determine the 8 feature-point pairs of the block, extract their features to train the decision tree, and produce the image block eigenvalue. If the image block is a positive sample, compute the position offset d between the block center and the target center, update the positive-negative sample ratio corresponding to the block eigenvalue in this decision tree, and store the position offset d corresponding to the eigenvalue; if the block is a negative sample, only update the positive-negative sample ratio corresponding to the block eigenvalue in this decision tree;
(9) Repeat steps (4)-(8) until the video ends.
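The loop of steps (4)-(8) can be sketched as follows. This is a minimal sketch, not the patented implementation: `detect_hough`, `track_lk`, and `retrain` are hypothetical stubs standing in for the Hough forest detector, the Lucas-Kanade tracker, and the retraining of step (8), and the fusion rule of step (6) is reduced to a single threshold.

```python
# Sketch of the overall tracking loop (steps 4-8); the detector, tracker,
# and retraining routine are hypothetical stubs, not the patented components.

def detect_hough(frame, prev_box):
    # Stub: would run the Hough forest over the search region and
    # return (vote_peak_value, detected_center).
    return 0.9, (frame["cx"], frame["cy"])

def track_lk(frame, prev_box):
    # Stub: would run the Lucas-Kanade tracker and return
    # (tracked_center, scale_change_s).
    return (frame["cx"], frame["cy"]), frame.get("s", 1.0)

def retrain(frame, box):
    # Stub: would retrain the Hough forest on the new target region (step 8).
    pass

def track_video(frames, init_box):
    box = dict(init_box)          # {"cx", "cy", "w", "h"}
    trajectory = [dict(box)]
    for frame in frames[1:]:
        peak, det_center = detect_hough(frame, box)   # step 4
        lk_center, s = track_lk(frame, box)           # step 5
        # Step 6, simplified to one threshold: trust the detector when
        # its vote peak is high, otherwise fall back on the tracker.
        center = det_center if peak > 0.5 else lk_center
        box["cx"], box["cy"] = center
        box["w"] *= s                                 # step 7
        box["h"] *= s
        retrain(frame, box)                           # step 8
        trajectory.append(dict(box))
    return trajectory

frames = [{"cx": 10, "cy": 10},
          {"cx": 12, "cy": 11, "s": 1.1},
          {"cx": 14, "cy": 12, "s": 1.0}]
traj = track_video(frames, {"cx": 10, "cy": 10, "w": 20, "h": 20})
```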
Compared with the prior art, the present invention has the following advantages:
First, the present invention adopts the idea of Hough voting: it not only learns the features of the positive samples but also records, during learning, the position offset of each positive sample to the target center, so that even when the target is partially occluded or undergoes non-rigid deformation, the target position can still be obtained by Hough voting.
Second, the present invention uses Lucas-Kanade tracking to determine the scale change of the target, overcoming the prior-art shortcoming that the target frame size is fixed and does not change with the target size.
Description of drawings
Fig. 1 is a flowchart of the present invention;
Fig. 2 is the input first frame of the video;
Fig. 3 is a schematic diagram of the manually marked target to be tracked in Fig. 2;
Fig. 4 is a schematic diagram of the update region in Fig. 2;
Fig. 5 is a new input frame of the video;
Fig. 6 is a schematic diagram of the search region in Fig. 5;
Fig. 7 is a schematic diagram of the update region in Fig. 5.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings.
With reference to Fig. 1, the following embodiment is given for the concrete realization of the present invention:
Step 1: Input the first frame of a video and manually mark the target to be tracked.
The video input in this instance is shown in Fig. 2; it is the first frame of a face-occlusion video, and the human face in Fig. 2 is the target to be tracked. The face region in Fig. 2 is framed with the mouse as the target to be tracked; the result is shown in Fig. 3.
Step 2: Extract the features of the video image from the input first frame.
2.1) Convert Fig. 2 from the RGB color space to the Lab color space. In the RGB color space, R denotes the red channel, G the green channel, and B the blue channel; in the Lab color space, L denotes the lightness channel, a the red-green channel, and b the yellow-blue channel. Take the L, a, and b channels as features 1-3;
2.2) Convert Fig. 2 from an RGB image to a grayscale image and compute the gradient direction of each pixel of the grayscale image. Using 40° per bin, quantize the gradient direction of each pixel into 9 directions and count the number of pixels in each direction to obtain a 9-dimensional histogram of oriented gradients (HOG); take the 9-dimensional HOG vector as features 4-12;
2.3) Apply median filtering to the grayscale image to obtain a filtered image I. Take the first derivative of I in the x direction as feature 13, the second derivative in the x direction as feature 14, the first derivative in the y direction as feature 15, and the second derivative in the y direction as feature 16.
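The gradient-based channels of step 2 can be sketched with NumPy as follows. This is a simplified sketch: the 9-bin orientation histogram is computed over the whole image rather than per cell, and the median filtering of step 2.3 is omitted.

```python
import numpy as np

def channel_features(gray):
    # Features 4-12: 9-bin histogram of gradient orientations, one
    # 40-degree bin per direction (a simplified sketch of step 2.2).
    gy, gx = np.gradient(gray.astype(float))
    angles = np.degrees(np.arctan2(gy, gx)) % 360.0
    bins = (angles // 40).astype(int)              # 9 bins of 40 degrees
    hog = np.bincount(bins.ravel(), minlength=9)

    # Features 13-16: first and second derivatives along x and y
    # (step 2.3; median filtering is omitted in this sketch).
    dx = np.gradient(gray.astype(float), axis=1)
    dxx = np.gradient(dx, axis=1)
    dy = np.gradient(gray.astype(float), axis=0)
    dyy = np.gradient(dy, axis=0)
    return hog, dx, dxx, dy, dyy

# Toy image whose brightness increases linearly along x: all gradients
# point in the +x direction, so every pixel falls in orientation bin 0.
img = np.tile(np.arange(8, dtype=float), (8, 1))
hog, dx, dxx, dy, dyy = channel_features(img)
```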
Step 3: Establish and initialize the Hough forest detector.
3.1) Set the number of decision trees in the Hough forest detector to 20, and for each decision tree randomly generate 8 pairs of in-block position offsets in the range 0-12;
3.2) Take the region inside the solid box in Fig. 3 as the positive sample and the region outside the solid box as the negative sample. Expand the solid box in Fig. 3 outward by 20 pixels to obtain the dashed box in Fig. 4, and take the region inside the dashed box as the sample update region. Take a 12 x 12 image block at every pixel of the sample update region. For each decision tree, determine the 8 feature-point pairs of the image block from the 8 pairs of in-block position offsets (l, s) and (p, q), extract the features of these feature points to train the decision tree, and produce the image block eigenvalue. If the image block is a positive sample, compute the position offset d between the block center and the target center, update the positive-negative sample ratio corresponding to the block eigenvalue in this decision tree, and store the position offset d corresponding to the eigenvalue; if the image block is a negative sample, only update the positive-negative sample ratio corresponding to the block eigenvalue in this tree;
3.3) From the 20 decision trees thus built, select the 10 trees with the highest positive-negative sample ratio to form the Hough forest detector.
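The image block eigenvalue of step 3.2 (detailed in claim 3) is an 8-bit code built from pairwise pixel comparisons. The following is a minimal sketch assuming a single feature channel; `tests` plays the role of one tree's 8 random offset pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH = 12  # 12 x 12 image blocks

# 8 random pairs of in-block offsets (l, s) and (p, q), each in 0..11,
# as in step 3.1; a single feature channel is assumed in this sketch.
tests = [(tuple(rng.integers(0, PATCH, 2)), tuple(rng.integers(0, PATCH, 2)))
         for _ in range(8)]

def block_code(channel, x, y):
    # Image block eigenvalue: concatenate the 8 comparison bits m_1..m_8
    # (claim 3); bit j is 1 when feature point 1 exceeds feature point 2.
    code = 0
    for (l, s), (p, q) in tests:
        bit = int(channel[y + s, x + l] > channel[y + q, x + p])
        code = (code << 1) | bit
    return code  # an 8-bit integer in 0..255

channel = rng.random((40, 40))
c = block_code(channel, 5, 5)
```

In the full method each leaf of a tree stores, per code, the positive-negative sample ratio and the offsets d of the positive blocks that produced that code.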
Step 4: Detect the target and perform Hough voting.
4.1) Load the new frame shown in Fig. 5 and extract its features by the method of step 2. Expand the target region of Fig. 3 to twice its size to obtain the region inside the dashed box in Fig. 6 as the search region. In the search region, take a 12 x 12 image block centered at each pixel, classify each block with the decision trees of the random forest classifier in turn, and compute the image block eigenvalue. When a decision tree judges that an image block belongs to the target, compute the target center from the block center and the position offset d stored for the block eigenvalue, and cast a vote for it;
4.2) Take the position of the vote peak as the position of the target.
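The voting of step 4 can be sketched as an accumulator array: each block classified as target votes at its own center minus the stored offset d, and the vote peak is the argmax. A minimal sketch, assuming d was stored as block center minus target center:

```python
import numpy as np

def hough_vote(block_centers, offsets, shape):
    # Accumulate votes for the target center: each image block classified
    # as target votes at its center minus its stored offset d (step 4.1).
    votes = np.zeros(shape)
    for (x, y), (dx, dy) in zip(block_centers, offsets):
        cx, cy = x - dx, y - dy
        if 0 <= cx < shape[1] and 0 <= cy < shape[0]:
            votes[cy, cx] += 1
    return votes

# Three blocks whose stored offsets all point back to center (10, 10).
centers = [(12, 10), (10, 13), (8, 9)]
offs = [(2, 0), (0, 3), (-2, -1)]
votes = hough_vote(centers, offs, (20, 20))
peak = np.unravel_index(np.argmax(votes), votes.shape)  # (y, x) of vote peak
```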
Step 5: Track the target with the Lucas-Kanade method.
5.1) Uniformly generate N trace points p_1, p_2, ..., p_N in the target region of Fig. 3, track these points with the Lucas-Kanade tracker, and obtain their corresponding points q_1, q_2, ..., q_N in Fig. 5;
5.2) Track the points q_1, q_2, ..., q_N in Fig. 5 backward with the Lucas-Kanade tracker to obtain their corresponding points p'_1, p'_2, ..., p'_N in Fig. 3;
5.3) Compute the forward-backward error fb between the corresponding points of p_1, p_2, ..., p_N and p'_1, p'_2, ..., p'_N, and compute the normalized cross-correlation coefficient ncc between p_1, p_2, ..., p_N and q_1, q_2, ..., q_N. Take as credible trace points those points among q_1, q_2, ..., q_N that satisfy both of the following requirements: (i) the forward-backward error fb is less than the median forward-backward error fb_m; (ii) the normalized cross-correlation coefficient ncc exceeds the median cross-correlation coefficient ncc_m;
5.4) For the z credible trace points, compute the distance a_i between every two points in Fig. 3 and the distance b_i between every two points in Fig. 5, take the ratio s_i = b_i / a_i of the two distances, and take the average of the s_i as the scale change s of the target frame size, where i = 1, 2, ..., k and k = z! / (2!(z-2)!) is the number of point pairs (! denotes the factorial).
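Steps 5.3-5.4 can be sketched as follows. This sketch omits the Lucas-Kanade optical flow itself and the NCC test; it takes the forward points, the backward-tracked points, and the forward-tracked points as given, filters by forward-backward error, and averages the pairwise distance ratios over the k = z(z-1)/2 point pairs. The `<=` median test keeps all points in the perfect-tracking demo; the patent uses a strict comparison.

```python
import numpy as np
from itertools import combinations

def fb_filter_and_scale(p, p_back, q):
    # Forward-backward error: distance between each original point and the
    # point obtained by tracking forward then backward (step 5.3).
    fb = np.linalg.norm(p - p_back, axis=1)
    keep = fb <= np.median(fb)        # credible trace points (NCC omitted)
    pk, qk = p[keep], q[keep]
    # Step 5.4: scale s = mean over all k = z(z-1)/2 point pairs of the
    # ratio b_i / a_i of inter-point distances in the new and old frames.
    ratios = [np.linalg.norm(qk[i] - qk[j]) / np.linalg.norm(pk[i] - pk[j])
              for i, j in combinations(range(len(pk)), 2)]
    return float(np.mean(ratios))

p = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
q = p * 1.5 + 3.0                     # target grew by 1.5x and shifted
p_back = p.copy()                     # perfect forward-backward tracking
s = fb_filter_and_scale(p, p_back, q)
```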
Step 6: Determine the target position from the vote peak of the Hough forest detection and the tracking result of the Lucas-Kanade tracker, according to the following rules:
If the vote peak of the Hough forest detection is greater than threshold 1, take the detected position as the position of the target in the current frame and go to step 8;
If the vote peak is greater than threshold 2 but less than threshold 1, then when the detection result and the Lucas-Kanade tracking result differ by less than 5 pixels in both the x and y directions, take the average of the detection and tracking results as the target position and go to step 8; otherwise, take the detection result as the target position and go to step 7;
If the vote peak is greater than threshold 3 but less than threshold 2, take the detection result as the target position and go to step 7;
If the vote peak is less than threshold 3, take the tracking result of the Lucas-Kanade tracker as the target position and go to step 7;
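The decision rule of step 6 can be written out directly. A sketch, assuming hypothetical threshold values t1 > t2 > t3 (the patent does not fix their numeric values); the function returns the fused position and the step to execute next (7 or 8).

```python
def fuse_position(peak, det_pos, lk_pos, t1, t2, t3):
    # The decision rule of step 6; t1 > t2 > t3 are hypothetical thresholds.
    # Returns (target_position, next_step).
    if peak > t1:
        return det_pos, 8
    if t2 < peak <= t1:
        # Average detector and tracker when they agree within 5 pixels
        # in both x and y; otherwise trust the detector alone.
        if abs(det_pos[0] - lk_pos[0]) < 5 and abs(det_pos[1] - lk_pos[1]) < 5:
            avg = ((det_pos[0] + lk_pos[0]) / 2.0,
                   (det_pos[1] + lk_pos[1]) / 2.0)
            return avg, 8
        return det_pos, 7
    if t3 < peak <= t2:
        return det_pos, 7
    return lk_pos, 7

pos, nxt = fuse_position(0.6, (10, 10), (12, 11), t1=0.8, t2=0.5, t3=0.2)
```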
Step 7: Adjust the width and height of the target frame according to the scale change s of the target frame size by the following formulas, and display the result:
w_s = w × s
h_s = h × s
where w denotes the width of the target frame before adjustment, s the scale change of the target frame size, w_s the width of the adjusted target frame, h the height of the target frame before adjustment, and h_s the height of the adjusted target frame.
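The adjustment formulas of step 7 as a one-line helper:

```python
def adjust_box(w, h, s):
    # Step 7: w_s = w * s, h_s = h * s.
    return w * s, h * s

ws, hs = adjust_box(40.0, 60.0, 1.25)
```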
Step 8: Retrain the Hough forest classifier.
8.1) The tracking result for Fig. 5 is shown as the solid box in Fig. 7. Take the region inside the solid box in Fig. 7 as the positive sample and the region outside the solid box as the negative sample. Expand the solid box by 20 pixels to obtain the dashed box in Fig. 7, and take the region inside the dashed box as the update region;
8.2) In the update region, take a 12 x 12 image block centered at each pixel. For each decision tree, determine the 8 feature-point pairs of the block, extract their features to train the decision tree, and produce the image block eigenvalue. If the image block is a positive sample, compute the position offset d between the block center and the target center, update the positive-negative sample ratio corresponding to the block eigenvalue in this decision tree, and store the position offset d corresponding to the eigenvalue; if the block is a negative sample, only update the positive-negative sample ratio corresponding to the block eigenvalue in this decision tree;
Step 9: Repeat steps 4-8 until the video ends.
The above is one instance of the present invention and does not constitute any restriction on the invention. Simulation experiments show that the present invention not only realizes correct tracking of a partially occluded target, but also realizes effective tracking of a target with scale variation.

Claims (5)

1. A Hough forest-based video target tracking method, comprising the following steps:
(1) inputting the first frame of a video and manually marking the target to be tracked;
(2) extracting the features of the video image from the input first frame, these features comprising: the Lab features, the histogram-of-oriented-gradients (HOG) features, and the first- and second-derivative features of the image in the x and y directions;
(3) establishing and initializing the Hough forest detector:
3a) setting the number of decision trees in the Hough forest detector to 20, and for each decision tree randomly generating 8 pairs of in-block position offsets (l, s) and (p, q), each in the range 0-12, while randomly picking one feature channel for each pair;
3b) taking the target region as the positive sample and the region outside the target region as the negative sample, expanding the target region outward by 20 pixels to form the sample update region, and taking a 12 x 12 image block at every pixel of the sample update region; for each decision tree, determining the 8 feature-point pairs of the image block from the 8 pairs of in-block position offsets (l, s) and (p, q), extracting the features of these feature points to train the decision tree, and producing the image block eigenvalue; if the image block is a positive sample, computing the position offset d between the block center and the target center, updating the positive-negative sample ratio corresponding to the image block eigenvalue in this decision tree, and storing the position offset d corresponding to the eigenvalue; if the image block is a negative sample, only updating the positive-negative sample ratio corresponding to the image block eigenvalue in this tree;
3c) from the 20 decision trees thus built, selecting the 10 trees with the highest positive-negative sample ratio to form the Hough forest detector;
(4) detecting the target and performing Hough voting:
4a) loading a new frame of the video, extracting its features by the method of step (2), and taking the target region of the previous frame expanded to twice its size as the search region; in the search region, taking a 12 x 12 image block centered at each pixel, classifying each block with the decision trees of the random forest classifier in turn, and computing the image block eigenvalue; when a decision tree judges that an image block belongs to the target, computing the target center from the block center and the position offset d stored for the block eigenvalue, and casting a vote for it;
4b) taking the position of the vote peak as the position of the target;
(5) tracking the target with the Lucas-Kanade tracker to obtain the scale change s of the target frame size;
(6) determining the target position from the vote peak of the Hough forest detection and the tracking result of the Lucas-Kanade tracker, according to the following rules:
if the vote peak of the Hough forest detection is greater than threshold 1, taking the detected position as the position of the target in the current frame and going to step (8);
if the vote peak is greater than threshold 2 but less than threshold 1, then when the detection result and the Lucas-Kanade tracking result differ by less than 5 pixels in both the x and y directions, taking the average of the detection and tracking results as the target position and going to step (8); otherwise, taking the detection result as the target position and going to step (7);
if the vote peak is greater than threshold 3 but less than threshold 2, taking the detection result as the target position and going to step (7);
if the vote peak is less than threshold 3, taking the tracking result of the Lucas-Kanade tracker as the target position and going to step (7);
(7) adjusting the width and height of the target frame according to the scale change s of the target frame size, and displaying the result;
(8) retraining the Hough forest classifier:
8a) taking the target region as the positive sample and the region outside it as the negative sample, and expanding the target region by 20 pixels to form the update region;
8b) in the update region, taking a 12 x 12 image block centered at each pixel; for each decision tree, determining the 8 feature-point pairs of the block, extracting their features to train the decision tree, and producing the image block eigenvalue; if the image block is a positive sample, computing the position offset d between the block center and the target center, updating the positive-negative sample ratio corresponding to the block eigenvalue in this decision tree, and storing the position offset d corresponding to the eigenvalue; if the block is a negative sample, only updating the positive-negative sample ratio corresponding to the block eigenvalue in this decision tree;
(9) repeating steps (4)-(8) until the video ends.
2. The Hough forest-based video target tracking method according to claim 1, wherein the extraction of the features of the video image from the input first frame in step (2) is carried out as follows:
(2a) converting the video image from the RGB color space to the Lab color space, wherein in the RGB color space R denotes the red channel, G the green channel, and B the blue channel, and in the Lab color space L denotes the lightness channel, a the red-green channel, and b the yellow-blue channel, and taking the L, a, and b channels as features 1-3;
(2b) converting the video image from an RGB image to a grayscale image, computing the gradient direction of each pixel of the grayscale image, quantizing the gradient direction of each pixel into 9 directions at 40° per bin, and counting the number of pixels in each direction to obtain a 9-dimensional histogram of oriented gradients (HOG), taking the 9-dimensional HOG vector as features 4-12;
(2c) applying median filtering to the grayscale image to obtain a filtered image I, and taking the first derivative of I in the x direction as feature 13, the second derivative in the x direction as feature 14, the first derivative in the y direction as feature 15, and the second derivative in the y direction as feature 16.
3. The Hough forest-based video target tracking method according to claim 1, wherein the determination in step 3b) of the 8 feature-point pairs of an image block from the 8 pairs of in-block position offsets (l_j, s_j) and (p_j, q_j), the extraction of the features of these feature-point pairs to train the decision tree, and the production of the image block eigenvalue are carried out as follows:
(3a) assuming the center of the image block is (x, y), and each feature-point pair comprises feature point 1 and feature point 2, adding (x, y) to the j-th pair of in-block position offsets (l_j, s_j) and (p_j, q_j) to obtain the coordinates (x+l_j, y+s_j) and (x+p_j, y+q_j), and taking the pixel at (x+l_j, y+s_j) as feature point 1 of the j-th feature-point pair and the pixel at (x+p_j, y+q_j) as feature point 2 of the j-th pair;
(3b) comparing the feature values of feature point 1 and feature point 2 of the j-th pair to obtain the flag m_j of the j-th pair: if the feature value of feature point 1 is greater than that of feature point 2, then m_j = 1; if the feature value of feature point 1 is less than that of feature point 2, then m_j = 0;
(3c) arranging the 8 flags m_1 to m_8 in order to obtain the image block eigenvalue m_1m_2m_3m_4m_5m_6m_7m_8.
4. the video target tracking method based on the Hough forest according to claim 1, wherein step (5) is described with Lucas-Kanade tracker tracking target, obtains the dimensional variation s of target frame size, carries out as follows:
(4a) in the target area of t-1 moment video image I1, evenly produce a series of N trace point p 1, p 2... p N, follow the tracks of these points with the Lucas-Kanade tracker, obtain p 1, p 2P NCorresponding point q in t moment video image I2 1, q 2... q N
(4b) to the some q among the t moment video image I2 1, q 2... q N, obtain q with the tracking of Lucas-Kanade tracker 1, q 2... q NCorresponding point p' in t-1 moment video image I1 1, p' 2... P ' N
(4c) calculate trace point p among the t-1 moment video image I1 1, p 2... p NAnd p' 1, p' 2... p' NForward direction between the corresponding point-backward error fb calculates p 1, p 2... p NAnd q 1, q 2... q NStandard cross-correlation coefficient ncc, get q 1, q 2... q NIn satisfy following two requirements point as credible trace point: 1. forward direction-backward error fb is less than forward direction-backward error intermediate value fb_m; 2. standard cross-correlation coefficient ncc overgauge cross-correlation coefficient intermediate value ncc_m;
(4d) Compute the distance a_i between every two of the z reliable tracking points in the video image I1 at time t-1 and the distance b_i between every two of the corresponding reliable tracking points in the video image I2 at time t, and take the average of the distance ratios b_i/a_i as the scale change s of the target frame size, where i = 1, 2, ..., k, k = z!/(2!(z-2)!), and ! denotes the factorial.
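Steps (4c)-(4d) are the filtering-and-scale stage of a median-flow-style tracker. A minimal Python sketch under stated assumptions follows: the forward points, backward-tracked points, forward-backward errors, and NCC values are taken as already computed by a Lucas-Kanade pass (which is not reimplemented here), and the function name and argument layout are illustrative:

```python
import math
from itertools import combinations
from statistics import mean, median

def scale_change(pts_prev, pts_curr, fb_err, ncc):
    """Steps (4c)-(4d): keep the points whose forward-backward error is
    below the median fb_m and whose NCC is above the median ncc_m, then
    estimate the scale change s of the target frame as the average ratio
    of pairwise distances in the current frame to those in the previous
    frame (k = z!/(2!(z-2)!) point pairs for z reliable points).
    """
    fb_m, ncc_m = median(fb_err), median(ncc)
    kept = [(p, q) for p, q, e, c in zip(pts_prev, pts_curr, fb_err, ncc)
            if e < fb_m and c > ncc_m]            # reliable tracking points
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    ratios = [dist(kept[i][1], kept[j][1]) / dist(kept[i][0], kept[j][0])
              for i, j in combinations(range(len(kept)), 2)]
    return mean(ratios)                            # the scale change s
```

If the target doubles in size between frames, every surviving pairwise distance doubles and s comes out as 2. Using medians as thresholds makes the filter self-calibrating: roughly the better-tracked half of the points survives regardless of the absolute error magnitudes.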
5. The Hough forest-based video target tracking method according to claim 1, wherein adjusting the width and height of the target frame according to the scale change s of the target frame size in step (7) is realized through the following two formulas:
w_s = w × s
h_s = h × s
where w denotes the width of the target frame before adjustment, s denotes the scale change of the target frame size, w_s denotes the width of the adjusted target frame, h denotes the height of the target frame before adjustment, and h_s denotes the height of the adjusted target frame.
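The two formulas of claim 5 translate directly into code. A one-function sketch (the function name is illustrative):

```python
def resize_frame(w, h, s):
    """Claim 5: scale the target frame's width and height by the scale
    change s, giving w_s = w * s and h_s = h * s."""
    return w * s, h * s
```

For example, a 100×50 frame with s = 1.2 becomes a 120×60 frame, so the frame grows and shrinks with the target while keeping its aspect ratio.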
CN201210253267.2A 2012-07-20 2012-07-20 Hough forest-based video target tracking method Expired - Fee Related CN102831618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210253267.2A CN102831618B (en) 2012-07-20 2012-07-20 Hough forest-based video target tracking method

Publications (2)

Publication Number Publication Date
CN102831618A true CN102831618A (en) 2012-12-19
CN102831618B CN102831618B (en) 2014-11-12

Family

ID=47334733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210253267.2A Expired - Fee Related CN102831618B (en) 2012-07-20 2012-07-20 Hough forest-based video target tracking method

Country Status (1)

Country Link
CN (1) CN102831618B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1780672A1 (en) * 2005-10-25 2007-05-02 Bracco Imaging, S.P.A. Method of registering images, algorithm for carrying out the method of registering images, a program for registering images using the said algorithm and a method of treating biomedical images to reduce imaging artefacts caused by object movement
CN101908153A (en) * 2010-08-21 2010-12-08 上海交通大学 Method for estimating head postures in low-resolution image treatment
CN102496005A (en) * 2011-12-03 2012-06-13 辽宁科锐科技有限公司 Eye characteristic-based trial auxiliary study and judging analysis system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHYAM SUNDER KUMAR, ET AL.: "Mobile Object Detection through Client-Server based Vote Transfer", 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136530A (en) * 2013-02-04 2013-06-05 国核自仪***工程有限公司 Method for automatically recognizing target images in video images under complex industrial environment
CN104281852A (en) * 2013-07-11 2015-01-14 上海瀛联体感智能科技有限公司 Target tracking algorithm based on fusion 2D detection
CN104700391B (en) * 2013-12-06 2018-09-11 由田新技股份有限公司 Object tracking method and electronic device
CN104700391A (en) * 2013-12-06 2015-06-10 由田新技股份有限公司 Object tracking method and electronic device
CN103699908B (en) * 2014-01-14 2016-10-05 上海交通大学 Video multi-target tracking based on associating reasoning
CN104299243B (en) * 2014-09-28 2017-02-08 南京邮电大学 Target tracking method based on Hough forests
CN104299243A (en) * 2014-09-28 2015-01-21 南京邮电大学 Target tracking method based on Hough forests
CN104778470A (en) * 2015-03-12 2015-07-15 浙江大学 Character detection and recognition method based on component tree and Hough forest
CN104778699A (en) * 2015-04-15 2015-07-15 西南交通大学 Adaptive object feature tracking method
CN104778699B (en) * 2015-04-15 2017-06-16 西南交通大学 A kind of tracking of self adaptation characteristics of objects
CN104809455A (en) * 2015-05-19 2015-07-29 吉林大学 Action recognition method based on distinguishable binary tree voting
CN104809455B (en) * 2015-05-19 2017-12-19 吉林大学 Action identification method based on the ballot of discriminability binary tree
CN105046382A (en) * 2015-09-16 2015-11-11 浪潮(北京)电子信息产业有限公司 Heterogeneous system parallel random forest optimization method and system
CN105404894A (en) * 2015-11-03 2016-03-16 湖南优象科技有限公司 Target tracking method used for unmanned aerial vehicle and device thereof
CN105404894B (en) * 2015-11-03 2018-10-23 湖南优象科技有限公司 Unmanned plane target tracking method and its device
CN105741316B (en) * 2016-01-20 2018-10-16 西北工业大学 Robust method for tracking target based on deep learning and multiple dimensioned correlation filtering
CN105741316A (en) * 2016-01-20 2016-07-06 西北工业大学 Robust target tracking method based on deep learning and multi-scale correlation filtering
CN106169188B (en) * 2016-07-11 2019-01-15 西南交通大学 A kind of method for tracing object based on the search of Monte Carlo tree
CN106169188A (en) * 2016-07-11 2016-11-30 西南交通大学 A kind of method for tracing object based on the search of Monte Carlo tree
CN108475424A (en) * 2016-07-12 2018-08-31 微软技术许可有限责任公司 Methods, devices and systems for 3D feature trackings
CN108475424B (en) * 2016-07-12 2023-08-29 微软技术许可有限责任公司 Method, apparatus and system for 3D face tracking
CN106296742A (en) * 2016-08-19 2017-01-04 华侨大学 A kind of online method for tracking target of combination Feature Points Matching
CN106296742B (en) * 2016-08-19 2019-01-29 华侨大学 A kind of matched online method for tracking target of binding characteristic point
CN106846361A (en) * 2016-12-16 2017-06-13 深圳大学 Method for tracking target and device based on intuitionistic fuzzy random forest
CN106952284A (en) * 2017-03-28 2017-07-14 歌尔科技有限公司 A kind of feature extracting method and its device based on compression track algorithm
CN107527356A (en) * 2017-07-21 2017-12-29 华南农业大学 A kind of video tracing method based on lazy interactive mode
CN107527356B (en) * 2017-07-21 2020-12-11 华南农业大学 Video tracking method based on lazy interaction mode
CN107508603A (en) * 2017-09-29 2017-12-22 南京大学 A kind of implementation method of forest condensing encoder
CN108171146A (en) * 2017-12-25 2018-06-15 河南工程学院 A kind of method for detecting human face based on Hough forest integrated study
CN108062861A (en) * 2017-12-29 2018-05-22 潘彦伶 A kind of intelligent traffic monitoring system
CN108062861B (en) * 2017-12-29 2021-01-15 北京安自达科技有限公司 Intelligent traffic monitoring system
CN108196680A (en) * 2018-01-25 2018-06-22 盛视科技股份有限公司 A kind of robot vision follower method extracted based on characteristics of human body with retrieving
CN108196680B (en) * 2018-01-25 2021-10-08 盛视科技股份有限公司 Robot vision following method based on human body feature extraction and retrieval
CN108596188A (en) * 2018-04-04 2018-09-28 西安电子科技大学 Video object detection method based on HOG feature operators
CN108647698A (en) * 2018-05-21 2018-10-12 西安电子科技大学 Feature extraction and description method
CN108776822A (en) * 2018-06-22 2018-11-09 腾讯科技(深圳)有限公司 Target area detection method, device, terminal and storage medium
CN109544601A (en) * 2018-11-27 2019-03-29 天津工业大学 A kind of object detecting and tracking method based on on-line study
CN111368653A (en) * 2020-02-19 2020-07-03 杭州电子科技大学 Low-altitude small target detection method based on R-D (R-D) graph and deep neural network
CN111368653B (en) * 2020-02-19 2023-09-08 杭州电子科技大学 Low-altitude small target detection method based on R-D graph and deep neural network
CN112633168A (en) * 2020-12-23 2021-04-09 长沙中联重科环境产业有限公司 Garbage truck and method and device for identifying barrel turning action of garbage truck
CN112633168B (en) * 2020-12-23 2023-10-31 长沙中联重科环境产业有限公司 Garbage truck and method and device for identifying garbage can overturning action of garbage truck

Also Published As

Publication number Publication date
CN102831618B (en) 2014-11-12

Similar Documents

Publication Publication Date Title
CN102831618B (en) Hough forest-based video target tracking method
CN103268616B (en) The moveable robot movement human body tracing method of multi-feature multi-sensor
CN104134209B (en) A kind of feature extracting and matching method and system in vision guided navigation
US8750567B2 (en) Road structure detection and tracking
Kong et al. Detecting abandoned objects with a moving camera
CN102999920B (en) Target tracking method based on nearest neighbor classifier and mean shift
CN104598883B (en) Target knows method for distinguishing again in a kind of multiple-camera monitoring network
CN101609507B (en) Gait recognition method
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN103279791B (en) Based on pedestrian's computing method of multiple features
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN104240266A (en) Target object tracking method based on color-structure features
CN103281477A (en) Multi-level characteristic data association-based multi-target visual tracking method
CN103077539A (en) Moving object tracking method under complicated background and sheltering condition
CN102521565A (en) Garment identification method and system for low-resolution video
CN103824070A (en) Rapid pedestrian detection method based on computer vision
CN102447835A (en) Non-blind-area multi-target cooperative tracking method and system
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
CN105741324A (en) Moving object detection identification and tracking method on moving platform
CN103164693B (en) A kind of monitor video pedestrian detection matching process
CN104077605A (en) Pedestrian search and recognition method based on color topological structure
CN102622769A (en) Multi-target tracking method by taking depth as leading clue under dynamic scene
CN104517095A (en) Head division method based on depth image
CN106296743A (en) A kind of adaptive motion method for tracking target and unmanned plane follow the tracks of system
WO2024037408A1 (en) Underground coal mine pedestrian detection method based on image fusion and feature enhancement

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141112

Termination date: 20190720