CN104200492A - Automatic detecting and tracking method for aerial video target based on trajectory constraint - Google Patents

Automatic detecting and tracking method for aerial video target based on trajectory constraint

Info

Publication number
CN104200492A
Authority
CN
China
Prior art keywords
image
frame
pixel
motion
epsilon
Prior art date
Legal status
Granted
Application number
CN201410421654.1A
Other languages
Chinese (zh)
Other versions
CN104200492B (en)
Inventor
张艳宁
杨涛
陈挺
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201410421654.1A priority Critical patent/CN104200492B/en
Publication of CN104200492A publication Critical patent/CN104200492A/en
Application granted Critical
Publication of CN104200492B publication Critical patent/CN104200492B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an automatic detecting and tracking method for an aerial video target based on trajectory constraint. The method first completes moving-object detection in the aerial image automatically by constructing a forward image and a backward image and fusing their motion information; it then removes, by means of a trajectory constraint, the false detections of static objects that parallax introduces into the detection result; finally, it analyses the tracking result of the real moving objects using statistics of the motion-trajectory information and removes the influence of noise targets, thereby completing automatic detection and tracking of the aerial video target. The accuracy of the detection and tracking result exceeds 80%.

Description

Automatic detection and tracking of aerial video targets based on trajectory constraint
Technical field
The present invention relates to online automatic detection and tracking of aerial video targets, and in particular to an automatic detection and tracking method for aerial video targets based on trajectory constraint.
Background technology
Detecting moving targets automatically and effectively in complex aerial video images by exploiting their coherent spatial and temporal motion characteristics, and tracking them online in a robust manner, is of great significance. Existing automatic detection and tracking methods for aerial video targets mainly fall into two categories: methods based on background differencing and methods based on statistics of motion characteristics.
The document "Moving object localization in thermal imagery by forward-backward MHI. Computer Vision and Pattern Recognition Workshop, 133-140, 2006" discloses a detection and localization-tracking method for aerial video targets based on forward and backward motion history images (MHI). The method obtains moving-target information over different time spans by constructing a forward and a backward motion frame-difference image, and completes automatic detection of moving targets in aerial video by fusing the two images. However, the method essentially accumulates grey-level frame-difference information, so the accumulated statistic is the pixel-level grey-level change of the image. When an unmanned aerial vehicle (UAV) observes the ground from high altitude with a video sensor, three-dimensional spatial information is projected onto a two-dimensional image, and depth information is lost. As a result, besides real moving targets, the forward-backward MHI method also detects a large number of static objects whose apparent motion is caused by high-altitude depth parallax. Since UAV earth observation takes the ground as the reference plane, static targets affected by depth parallax produce two-dimensional image motion trajectories that differ from those of the ground plane. The forward-backward MHI detection and localization-tracking method therefore produces a large number of false alarms, because such static targets are detected as moving targets, which poses a great challenge to subsequent localization and tracking.
Summary of the invention
Technical problem to be solved
To overcome the deficiencies of the prior art, the present invention proposes an automatic detection and tracking method for aerial video targets based on trajectory constraint.
Technical scheme
An automatic detection and tracking method for aerial video targets based on trajectory constraint: first, a forward image and a backward image are constructed and the motion information of the two images is fused, so that moving-object detection on the currently processed image is completed automatically; the detections contain both real moving targets and static targets caused by parallax. Second, using a back-projection matrix, the corresponding pixel of each pixel in the forward image is computed to obtain the motion trajectory of each pixel. Finally, a trajectory constraint matrix is computed from the epipolar-plane imaging principle of the camera, the tracking results of the real moving targets are analysed, and the influence of the static targets caused by parallax is removed. The method is characterized by the following steps:
Step 1: Extract feature points from frame t and frame t-Δ (Δ ≥ 1) of the forward image I^F by corner detection with non-maximum suppression, and remove outliers between the two frames with the RANSAC method; then compute the motion affine matrix $P^F_{(t,t-\Delta)}$ of the forward image at frame t with an optical-flow feature method, and finally construct the affine transformation relation between $I^F_t$ and $I^F_{t-\Delta}$:

$$I^F_t = P^F_{(t,t-\Delta)} \times I^F_{t-\Delta} \qquad (1)$$

To reduce error, the motion affine matrix $P^F_{(t,t-\Delta)}$ is computed incrementally instead of directly:

$$P^F_{(t,t-\Delta)} = P^F_{(t,t-1)} \times P^F_{(t-1,t-2)} \times \cdots \times P^F_{(t-\Delta+1,t-\Delta)} \qquad (2)$$

where $P^F_{(t,t-1)}, P^F_{(t-1,t-2)}, \ldots, P^F_{(t-\Delta+1,t-\Delta)}$ are the motion affine matrices between consecutive frames;
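For illustration only, a minimal sketch of Step 1 under the assumption of an OpenCV/NumPy implementation (the helper names estimate_affine and compose_affine are illustrative, not part of the patent): corners are detected with non-maximum suppression through the minimum-distance constraint, tracked with pyramidal Lucas-Kanade optical flow, the affine model is fitted with RANSAC to reject outliers, and $P^F_{(t,t-\Delta)}$ is composed incrementally as in Eq. (2).

```python
# Sketch of Step 1 (assumes OpenCV + NumPy and sufficiently textured frames).
import cv2
import numpy as np

def estimate_affine(prev_gray, cur_gray):
    """Estimate the 3x3 motion affine matrix mapping frame t-1 onto frame t."""
    # Corner detection; minDistance acts as non-maximum suppression between corners.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    # Pyramidal Lucas-Kanade optical flow tracks the corners into the current frame.
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_cur = pts_cur[status.ravel() == 1]
    # RANSAC rejects outlier correspondences while fitting the affine model.
    A, _ = cv2.estimateAffinePartial2D(good_prev, good_cur, method=cv2.RANSAC)
    return np.vstack([A, [0.0, 0.0, 1.0]])          # promote the 2x3 matrix to 3x3

def compose_affine(per_frame_affines):
    """Incremental composition of Eq. (2); the list is ordered from P_(t-Delta+1,t-Delta)
    up to P_(t,t-1), so the final product is P_(t,t-1) x ... x P_(t-Delta+1,t-Delta)."""
    P = np.eye(3)
    for A in per_frame_affines:
        P = A @ P
    return P
```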
Step 2: Compute the frame-difference motion image of the forward image I^F at frame t from the two frames $I^F_t$ and $I^F_{t-\Delta}$:

$$D(x,y,t)^F = \left| I^F_t - I^F_{t-\Delta} \right| \qquad (3)$$
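A possible sketch of the motion-compensated frame difference of Eq. (3), assuming OpenCV/NumPy (the function name is illustrative): the frame at t-Δ is re-projected into the coordinates of frame t with the composed affine matrix before differencing, so that residual object motion rather than camera motion dominates D.

```python
import cv2
import numpy as np

def frame_difference(cur_gray, old_gray, P):
    """D(x, y, t)^F = |I_t^F - warp(I_{t-Delta}^F)| after motion compensation with P."""
    h, w = cur_gray.shape
    warped_old = cv2.warpAffine(old_gray, P[:2, :], (w, h))  # warpAffine takes the 2x3 part
    return cv2.absdiff(cur_gray, warped_old)
```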
Step 3: Construct the forward image H(x,y,t)^F from $P^F_{(t,t-\Delta)}$ and $D(x,y,t)^F$ and smooth it with a Gaussian filter to suppress noise; in the same way, compute the motion affine matrix $P^B_{(t,t-\Delta)}$ and the frame-difference motion image $D(x,y,t)^B$ of the backward image I^B at frame t and construct the backward image H(x,y,t)^B:

$$H(x,y,t)^F = \begin{cases} \max\left(0,\; P^F_{(t,t-\Delta)}\, H(x,y,t-1)^F - d\right) & \text{if } D(x,y,t)^F < T^F \\ 255 & \text{if } D(x,y,t)^F \ge T^F \end{cases} \qquad (4)$$

$$H(x,y,t)^B = \begin{cases} \max\left(0,\; P^B_{(t,t-\Delta)}\, H(x,y,t-1)^B - d\right) & \text{if } D(x,y,t)^B < T^B \\ 255 & \text{if } D(x,y,t)^B \ge T^B \end{cases} \qquad (5)$$

where x and y are the pixel coordinates of the image, d is the decay parameter, and $T^F$, $T^B$ are thresholds;
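A minimal sketch of the history-image update of Eqs. (4)-(5), assuming NumPy/OpenCV. Applying P by warping the previous history image and the 5×5 Gaussian kernel are assumptions; only the decay d, the threshold T and the reset value 255 come from the text (the default values 25 and 125 are those of the embodiment).

```python
import cv2
import numpy as np

def update_history(H_prev, D, P, d=25, T=125):
    """One update of H(x, y, t): decay the motion-compensated previous history by d,
    saturate to 255 where the frame difference D reaches the threshold T, then smooth."""
    h, w = D.shape
    H_warped = cv2.warpAffine(H_prev.astype(np.float32), P[:2, :], (w, h))
    H = np.maximum(0.0, H_warped - d)        # max(0, P * H(x, y, t-1) - d)
    H[D >= T] = 255.0                        # strong frame difference -> saturated history
    return cv2.GaussianBlur(H, (5, 5), 0)    # Gaussian smoothing to suppress noise
```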
Step 4: Fuse the forward image H(x,y,t)^F and the backward image H(x,y,t)^B by a binary merge:

$$H(x,y)_t = \max\left(H(x,y,t)^F,\; H(x,y,t)^B\right) \qquad (6)$$

and record all pixels of the image $H(x,y)_t$ whose value is 255 as the set of pixels to be detected $\Theta_t$;
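A short sketch of the binary fusion of Eq. (6) and of collecting the candidate set $\Theta_t$ (NumPy assumed; variable names are illustrative).

```python
import numpy as np

def fuse_and_select(H_forward, H_backward):
    """H(x, y)_t = max(H^F, H^B); Theta_t is the set of pixels whose fused value is 255."""
    H_t = np.maximum(H_forward, H_backward)
    ys, xs = np.nonzero(H_t >= 255)
    candidates = np.stack([xs, ys], axis=1)   # points to be detected, as (x, y) pairs
    return H_t, candidates
```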
Step 5: Using the motion affine matrix of the forward image, back-project each point to be detected $p_t = (x_t, y_t) \in \Theta_t$ of frame t to its corresponding pixel position $p_{t-1}$ in frame t-1:

$$p_{t-1} = \left(P^F_{(t,t-1)}\right)^{T} p_t \qquad (7)$$

Step 6: Within the point set Q of the image region of radius β centred on $p_{t-1}$, use the region search function $F(p_{t-1}, \beta)$ to find the pixel coordinate point $p'_{t-1}$ with maximum similarity to $p_t$, take it as the actual corresponding pixel of $p_t$ in frame t-1, and define the vector $\varepsilon(p'_{t-1}, p_t)$ as the motion trajectory of $p_t$ between frame t-1 and frame t;

$$F(p_{t-1}, \beta) = \max_{q} f(p_{t-1}, q) \qquad (8)$$

$$f(p_{t-1}, q) = \frac{1}{N} \sum_{i=1}^{N} \left( 1 - \frac{\left|\rho_p(i) - \rho_q(i)\right|}{\max\left(\rho_p(i), \rho_q(i)\right)} \right) \qquad (9)$$

where $q \in Q = \{\, q \mid \lVert l(p_{t-1}) - l(q) \rVert < \beta \,\}$, $l(p_{t-1})$ denotes the 3×3-pixel image patch centred on $p_{t-1}$, $l(q)$ denotes the 3×3-pixel image patch centred on q, $\rho_p(i)$ and $\rho_q(i)$ denote the histogram statistics of $l(p_{t-1})$ and $l(q)$ respectively, and N is fixed at 255;
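For illustration, a sketch of Steps 5-6 assuming NumPy. Two details are assumptions rather than the patent's wording: the back-projection here uses the inverse of the affine matrix (Eq. (7) as printed uses a transpose), and the patch histograms use 255 grey-level bins to match N = 255.

```python
import numpy as np

def patch_hist(img, x, y, bins=255):
    """Histogram of the 3x3 patch l(.) centred on (x, y)."""
    patch = img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist.astype(np.float64)

def patch_similarity(rho_p, rho_q):
    """f(p, q) = (1/N) * sum_i (1 - |rho_p(i) - rho_q(i)| / max(rho_p(i), rho_q(i)))."""
    denom = np.maximum(np.maximum(rho_p, rho_q), 1e-9)   # guard against empty bins
    return np.mean(1.0 - np.abs(rho_p - rho_q) / denom)

def best_correspondence(img_prev, img_cur, p_t, P_t_tminus1, beta=1.2):
    """Back-project p_t into frame t-1, then search a radius-beta neighbourhood for the
    pixel with maximum patch similarity (Eqs. 8-9); returns it and the trajectory vector."""
    p = np.array([p_t[0], p_t[1], 1.0])
    p_prev = np.linalg.inv(P_t_tminus1) @ p               # back-projection (see note above)
    x0, y0 = int(round(p_prev[0])), int(round(p_prev[1]))
    rho_p = patch_hist(img_cur, p_t[0], p_t[1])
    best, best_sim = (x0, y0), -1.0
    r = int(np.ceil(beta))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dx * dx + dy * dy > beta * beta:
                continue
            q = (x0 + dx, y0 + dy)
            if not (0 <= q[0] < img_prev.shape[1] and 0 <= q[1] < img_prev.shape[0]):
                continue
            sim = patch_similarity(rho_p, patch_hist(img_prev, q[0], q[1]))
            if sim > best_sim:
                best, best_sim = q, sim
    trajectory = np.array([p_t[0] - best[0], p_t[1] - best[1]])   # epsilon(p'_{t-1}, p_t)
    return best, trajectory
```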
Step 7: Use the trajectory similarity function $\kappa(\varepsilon(p'_{t-1}, p_t))$ to compute, for each point to be detected $p_t$ of frame t, the similarity between its trajectory $\varepsilon(p'_{t-1}, p_t)$ and the trajectories of all points to be detected, and assign the point to the moving-target pixel set $\Theta_t^{Object}$ or the static-scene set $\Theta_t^{No\text{-}Object}$ according to whether the similarity value is 1 or 0:

$$\kappa(\varepsilon_p) = \begin{cases} 1, & \displaystyle\sum_{i=1}^{L} \lVert \varepsilon_p - \varepsilon_i \rVert > \frac{1}{L} \sum_{i=1}^{L} \sum_{j=1}^{L} \lVert \varepsilon_i - \varepsilon_j \rVert \\[2mm] 0, & \displaystyle\sum_{i=1}^{L} \lVert \varepsilon_p - \varepsilon_i \rVert \le \frac{1}{L} \sum_{i=1}^{L} \sum_{j=1}^{L} \lVert \varepsilon_i - \varepsilon_j \rVert \end{cases} \qquad (10)$$

where $\varepsilon_p$ denotes the trajectory $\varepsilon(p'_{t-1}, p_t)$ of the point $p_t$, and $\varepsilon_i$, $\varepsilon_j$ denote the trajectories of the points to be detected in frame t; when $\kappa(\varepsilon_p) = 1$, $p_t \in \Theta_t^{Object}$; when $\kappa(\varepsilon_p) = 0$, $p_t \in \Theta_t^{No\text{-}Object}$;
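A compact sketch of the classification rule of Eq. (10), assuming NumPy: a candidate pixel is assigned to the moving-target set when the summed distance between its trajectory and all candidate trajectories exceeds the mean pairwise trajectory distance.

```python
import numpy as np

def classify_trajectories(eps):
    """eps: (L, 2) array, one displacement vector per candidate pixel in frame t.
    Returns a boolean mask: True = moving-target pixel, False = static-scene pixel."""
    L = len(eps)
    diffs = eps[:, None, :] - eps[None, :, :]          # pairwise trajectory differences
    dists = np.linalg.norm(diffs, axis=2)              # ||eps_i - eps_j||
    threshold = dists.sum() / L                        # (1/L) * sum_i sum_j ||eps_i - eps_j||
    per_point = dists.sum(axis=1)                      # sum_i ||eps_p - eps_i|| for each p
    return per_point > threshold                       # kappa(eps_p) = 1 -> moving target
```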
Step 8: Using nearest-neighbour clustering, merge the motion trajectories $\varepsilon(p'_{t-1}, p_t)$ of the pixels in the moving-target set $\Theta_t^{Object}$ into image-level moving-target trajectories $\varepsilon(o_{t-1}, o_t)$;
Step 9: Judge each moving-target trajectory $\varepsilon(o_{t-1}, o_t)$: if the trajectory is nonlinear, the target is a spurious moving target; if the trajectory is linear, the target is a real moving target. According to the continuous trajectories of the real moving targets, automatic detection and tracking of the aerial video targets is completed.
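Steps 8-9 can be sketched as follows, assuming NumPy and SciPy; the single-linkage distance threshold and the line-fit residual tolerance are illustrative assumptions, since the patent specifies neither. A target whose per-frame centroids fit a straight line within the tolerance is kept as a real moving target; the rest are discarded as parallax or noise.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_moving_pixels(points, max_gap=5.0):
    """Group moving-target pixels into targets by single-linkage (nearest-neighbour) clustering."""
    Z = linkage(points, method="single")
    return fcluster(Z, t=max_gap, criterion="distance")   # one cluster label per pixel

def is_linear_trajectory(centroids, tol=2.0):
    """Fit a line to the per-frame target centroids; a small residual indicates a real
    moving target, a large one a spurious (parallax or noise) detection."""
    pts = np.asarray(centroids, dtype=float)
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean)
    direction = vt[0]                                      # principal direction of the track
    normal = np.array([-direction[1], direction[0]])
    residual = np.abs((pts - mean) @ normal)               # perpendicular distances to the line
    return residual.max() < tol
```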
The decay parameter d takes values in the range [10, 30]; the threshold $T^F$ takes values in the range [100, 150]; the threshold $T^B$ takes values in the range [100, 150].
The value of β lies in the range [0.1, 3].
Beneficial effect
The automatic detection and tracking method for aerial video targets based on trajectory constraint proposed by the present invention first constructs a forward image and a backward image and fuses the motion information of the two images to complete moving-object detection on the image automatically; the detections contain both real moving targets and static targets caused by parallax. Second, using a back-projection matrix, the corresponding pixel of each pixel in the forward image is computed to obtain the motion trajectory of each pixel. Finally, the tracking results of the real moving targets are analysed and the influence of the static targets caused by parallax is removed. The accuracy of the detection and tracking results exceeds 80%.
Embodiment
An automatic detection and tracking method for aerial video targets based on trajectory constraint comprises the following steps:
Step 1. In the images acquired by the UAV sensor, the disturbance of UAV flight makes the static ground scene appear as a motion pattern in the sensor video. Therefore, according to the motion pattern of the camera sensor, the motion affine matrix between two frames is computed to obtain the static ground scene that is fixed between the two frames. First, feature points are extracted from the two frames by corner detection with non-maximum suppression, and outliers between the two frames are removed with the RANSAC method; then the remaining feature points are used to compute the motion affine matrix between the two frames, and on this basis the earlier frame is re-projected onto the later frame with the motion affine matrix and the frame difference is computed; finally, the static scene between the two frames is obtained from the frame-difference result.
Step 2. From the static scene of the two frames, the forward image is constructed from the motion-compensated frame difference between the two frames; a Gaussian filter is then applied to smooth the image and suppress noise, yielding the optimized forward image. The backward image is computed in the same way.
Step 3. The overlapping pixel regions are detected from the maxima of the forward and backward images, and the two images are fused by a binary merge to obtain a binary detection-result image whose saturated pixels are the pixels to be detected. The pixel information contained in the binary detection-result image comprises real moving-target pixels, motion-noise pixels, and static background pixels caused by parallax.
Step 4. For each point to be detected, the affine motion model is used to back-compute the corresponding pixel in the previous frame. A region search function is then used to correct the back-computation error of the affine motion model and determine the actual corresponding pixel of each point to be detected in the previous frame.
Step 5. A trajectory similarity function is used to compute the similarity between the trajectory of each point to be detected and the trajectories of all points to be detected. Based on the principle that static-scene points share consistent motion trajectories, the points to be detected are classified according to the similarity result, yielding the moving-target pixel set and the static-scene point set.
Step 6. Using nearest-neighbour clustering, adjacent moving-target pixels are grouped to obtain all moving targets in the image, which include both real moving-target pixels and motion noise.
Step 7. The motion trajectories of the moving targets are analysed statistically to obtain the real moving targets. If the motion trajectory of a moving target is linear, the target is judged to be a real moving target; if it is nonlinear, the target is judged to be a spurious moving target (caused by noise, occlusion, and similar effects). Finally, each clustered moving target is tracked according to the motion trajectory of the real moving targets.
The invention is further described below with reference to a specific embodiment:
1. Moving-target detection in aerial video
(a) According to the motion pattern of the camera sensor, the motion affine matrix between two frames is computed to obtain the relatively static scene between the two frames. Feature points are extracted from frame t and frame t-Δ (Δ ≥ 1) of the forward image I^F by corner detection with non-maximum suppression, and outliers between the two frames are removed with the RANSAC method;
(b) From the feature points, the motion affine matrix $P^F_{(t,t-\Delta)}$ of the forward image I^F at frame t is obtained with an optical-flow feature method, and the affine transformation relation between $I^F_t$ and $I^F_{t-\Delta}$ is constructed:

$$I^F_t = P^F_{(t,t-\Delta)} \times I^F_{t-\Delta} \qquad (1)$$

To reduce the error of the affine transformation, the motion affine matrix $P^F_{(t,t-\Delta)}$ is computed incrementally instead of directly:

$$P^F_{(t,t-\Delta)} = P^F_{(t,t-1)} \times P^F_{(t-1,t-2)} \times \cdots \times P^F_{(t-\Delta+1,t-\Delta)} \qquad (2)$$

where $P^F_{(t,t-1)}, P^F_{(t-1,t-2)}, \ldots, P^F_{(t-\Delta+1,t-\Delta)}$ are the motion affine matrices between the corresponding consecutive frames, each obtained from the feature points with the optical-flow feature method.
(c) The frame-difference motion image of the forward image I^F at frame t is computed from the two frames $I^F_t$ and $I^F_{t-\Delta}$:

$$D(x,y,t)^F = \left| I^F_t - I^F_{t-\Delta} \right| \qquad (3)$$

(d) The forward image H(x,y,t)^F is constructed from $P^F_{(t,t-\Delta)}$ and $D(x,y,t)^F$ and smoothed with a Gaussian filter to suppress noise; in the same way, the motion affine matrix $P^B_{(t,t-\Delta)}$ and the frame-difference motion image $D(x,y,t)^B$ of the backward image I^B at frame t are computed and the backward image H(x,y,t)^B is constructed:

$$H(x,y,t)^F = \begin{cases} \max\left(0,\; P^F_{(t,t-\Delta)}\, H(x,y,t-1)^F - d\right) & \text{if } D(x,y,t)^F < T^F \\ 255 & \text{if } D(x,y,t)^F \ge T^F \end{cases} \qquad (4)$$

$$H(x,y,t)^B = \begin{cases} \max\left(0,\; P^B_{(t,t-\Delta)}\, H(x,y,t-1)^B - d\right) & \text{if } D(x,y,t)^B < T^B \\ 255 & \text{if } D(x,y,t)^B \ge T^B \end{cases} \qquad (5)$$

where x and y are the pixel coordinates of the image, the decay parameter d = 25, the threshold $T^F$ = 125, and the threshold $T^B$ = 125.
(e) The forward image H(x,y,t)^F and the backward image H(x,y,t)^B are fused by a binary merge:

$$H(x,y)_t = \max\left(H(x,y,t)^F,\; H(x,y,t)^B\right) \qquad (6)$$

and all pixels of the image $H(x,y)_t$ whose value is 255 are recorded as the set of pixels to be detected $\Theta_t$.
2. Trajectory constraint on moving targets in aerial video
(a) Using the motion affine matrix of the forward image, each point to be detected $p_t = (x_t, y_t) \in \Theta_t$ of frame t is back-projected to its corresponding pixel position $p_{t-1}$ in frame t-1:

$$p_{t-1} = \left(P^F_{(t,t-1)}\right)^{T} p_t \qquad (7)$$

(b) Within the point set Q of the image region of radius β = 1.2 centred on $p_{t-1}$, the region search function $F(p_{t-1}, \beta)$ is used to find the pixel coordinate point $p'_{t-1}$ with maximum similarity to $p_t$, which is taken as the actual corresponding pixel of $p_t$ in frame t-1, and the vector $\varepsilon(p'_{t-1}, p_t)$ is defined as the motion trajectory of $p_t$ between frame t-1 and frame t;

$$F(p_{t-1}, \beta) = \max_{q} f(p_{t-1}, q) \qquad (8)$$

$$f(p_{t-1}, q) = \frac{1}{N} \sum_{i=1}^{N} \left( 1 - \frac{\left|\rho_p(i) - \rho_q(i)\right|}{\max\left(\rho_p(i), \rho_q(i)\right)} \right) \qquad (9)$$

where $q \in Q = \{\, q \mid \lVert l(p_{t-1}) - l(q) \rVert < \beta \,\}$, $l(p_{t-1})$ denotes the 3×3-pixel image patch centred on $p_{t-1}$, $l(q)$ denotes the 3×3-pixel image patch centred on q, $\rho_p(i)$ and $\rho_q(i)$ denote the histogram statistics of $l(p_{t-1})$ and $l(q)$ respectively, and N is fixed at 255.
(c) The trajectory similarity function $\kappa(\varepsilon(p'_{t-1}, p_t))$ is used to compute, for each point to be detected $p_t$ of frame t, the similarity between its trajectory $\varepsilon(p'_{t-1}, p_t)$ and the trajectories of all points to be detected, yielding the moving-target pixel set $\Theta_t^{Object}$ and the static-scene set $\Theta_t^{No\text{-}Object}$;

$$\kappa(\varepsilon_p) = \begin{cases} 1, & \displaystyle\sum_{i=1}^{L} \lVert \varepsilon_p - \varepsilon_i \rVert > \frac{1}{L} \sum_{i=1}^{L} \sum_{j=1}^{L} \lVert \varepsilon_i - \varepsilon_j \rVert \\[2mm] 0, & \displaystyle\sum_{i=1}^{L} \lVert \varepsilon_p - \varepsilon_i \rVert \le \frac{1}{L} \sum_{i=1}^{L} \sum_{j=1}^{L} \lVert \varepsilon_i - \varepsilon_j \rVert \end{cases} \qquad (10)$$

where $\varepsilon_p$ denotes the trajectory $\varepsilon(p'_{t-1}, p_t)$ of the point $p_t$, and $\varepsilon_i$, $\varepsilon_j$ denote the trajectories of the points to be detected in frame t. When $\kappa(\varepsilon_p) = 1$, $p_t \in \Theta_t^{Object}$; when $\kappa(\varepsilon_p) = 0$, $p_t \in \Theta_t^{No\text{-}Object}$.
3. Tracking of moving targets in aerial video
(a) Using nearest-neighbour clustering, the motion trajectories $\varepsilon(p'_{t-1}, p_t)$ of the pixels in the moving-target set $\Theta_t^{Object}$ are merged into image-level moving-target trajectories $\varepsilon(o_{t-1}, o_t)$;
(b) Each moving-target trajectory $\varepsilon(o_{t-1}, o_t)$ is judged: if the trajectory is linear, the target is judged to be a real moving target; if it is nonlinear, the target is judged to be a spurious moving target (caused by noise, occlusion, and similar effects). According to the continuous trajectories of the real moving targets, automatic detection and tracking of the aerial video targets is completed.
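The concrete parameter values used in this embodiment, gathered in one place for reference; the dataclass container itself is an illustrative assumption, while the values d = 25, $T^F$ = $T^B$ = 125, β = 1.2, the 3×3 patch, N = 255 and the quoted ranges come from the embodiment and from claims 2-3.

```python
from dataclasses import dataclass

@dataclass
class TrackerParams:
    d: int = 25          # history decay, claimed range [10, 30]
    T_F: int = 125       # forward-difference threshold, claimed range [100, 150]
    T_B: int = 125       # backward-difference threshold, claimed range [100, 150]
    beta: float = 1.2    # correspondence-search radius, claimed range [0.1, 3]
    patch: int = 3       # 3x3 patch for the histogram similarity of Eq. (9)
    N: int = 255         # normalisation constant in Eq. (9)
```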

Claims (3)

1. An automatic detection and tracking method for aerial video targets based on trajectory constraint, characterized by the following steps:
Step 1: Extract feature points from frame t and frame t-Δ (Δ ≥ 1) of the forward image I^F by corner detection with non-maximum suppression, and remove outliers between the two frames with the RANSAC method; then compute the motion affine matrix $P^F_{(t,t-\Delta)}$ of the forward image at frame t with an optical-flow feature method, and finally construct the affine transformation relation between $I^F_t$ and $I^F_{t-\Delta}$:

$$I^F_t = P^F_{(t,t-\Delta)} \times I^F_{t-\Delta} \qquad (1)$$

To reduce error, the motion affine matrix $P^F_{(t,t-\Delta)}$ is computed incrementally instead of directly:

$$P^F_{(t,t-\Delta)} = P^F_{(t,t-1)} \times P^F_{(t-1,t-2)} \times \cdots \times P^F_{(t-\Delta+1,t-\Delta)} \qquad (2)$$

where $P^F_{(t,t-1)}, P^F_{(t-1,t-2)}, \ldots, P^F_{(t-\Delta+1,t-\Delta)}$ are the motion affine matrices between consecutive frames;
Step 2: Compute the frame-difference motion image of the forward image I^F at frame t from the two frames $I^F_t$ and $I^F_{t-\Delta}$:

$$D(x,y,t)^F = \left| I^F_t - I^F_{t-\Delta} \right| \qquad (3)$$

Step 3: Construct the forward image H(x,y,t)^F from $P^F_{(t,t-\Delta)}$ and $D(x,y,t)^F$ and smooth it with a Gaussian filter to suppress noise; in the same way, compute the motion affine matrix $P^B_{(t,t-\Delta)}$ and the frame-difference motion image $D(x,y,t)^B$ of the backward image I^B at frame t and construct the backward image H(x,y,t)^B:

$$H(x,y,t)^F = \begin{cases} \max\left(0,\; P^F_{(t,t-\Delta)}\, H(x,y,t-1)^F - d\right) & \text{if } D(x,y,t)^F < T^F \\ 255 & \text{if } D(x,y,t)^F \ge T^F \end{cases} \qquad (4)$$

$$H(x,y,t)^B = \begin{cases} \max\left(0,\; P^B_{(t,t-\Delta)}\, H(x,y,t-1)^B - d\right) & \text{if } D(x,y,t)^B < T^B \\ 255 & \text{if } D(x,y,t)^B \ge T^B \end{cases} \qquad (5)$$

where x and y are the pixel coordinates of the image, d is the decay parameter, and $T^F$, $T^B$ are thresholds;
Step 4: Fuse the forward image H(x,y,t)^F and the backward image H(x,y,t)^B by a binary merge:

$$H(x,y)_t = \max\left(H(x,y,t)^F,\; H(x,y,t)^B\right) \qquad (6)$$

and record all pixels of the image $H(x,y)_t$ whose value is 255 as the set of pixels to be detected $\Theta_t$;
Step 5: Using the motion affine matrix of the forward image, back-project each point to be detected $p_t = (x_t, y_t) \in \Theta_t$ of frame t to its corresponding pixel position $p_{t-1}$ in frame t-1:

$$p_{t-1} = \left(P^F_{(t,t-1)}\right)^{T} p_t \qquad (7)$$

Step 6: Within the point set Q of the image region of radius β centred on $p_{t-1}$, use the region search function $F(p_{t-1}, \beta)$ to find the pixel coordinate point $p'_{t-1}$ with maximum similarity to $p_t$, take it as the actual corresponding pixel of $p_t$ in frame t-1, and define the vector $\varepsilon(p'_{t-1}, p_t)$ as the motion trajectory of $p_t$ between frame t-1 and frame t;

$$F(p_{t-1}, \beta) = \max_{q} f(p_{t-1}, q) \qquad (8)$$

$$f(p_{t-1}, q) = \frac{1}{N} \sum_{i=1}^{N} \left( 1 - \frac{\left|\rho_p(i) - \rho_q(i)\right|}{\max\left(\rho_p(i), \rho_q(i)\right)} \right) \qquad (9)$$

where $q \in Q = \{\, q \mid \lVert l(p_{t-1}) - l(q) \rVert < \beta \,\}$, $l(p_{t-1})$ denotes the 3×3-pixel image patch centred on $p_{t-1}$, $l(q)$ denotes the 3×3-pixel image patch centred on q, $\rho_p(i)$ and $\rho_q(i)$ denote the histogram statistics of $l(p_{t-1})$ and $l(q)$ respectively, and N is fixed at 255;
Step 7: Use the trajectory similarity function $\kappa(\varepsilon(p'_{t-1}, p_t))$ to compute, for each point to be detected $p_t$ of frame t, the similarity between its trajectory $\varepsilon(p'_{t-1}, p_t)$ and the trajectories of all points to be detected, and assign the point to the moving-target pixel set $\Theta_t^{Object}$ or the static-scene set $\Theta_t^{No\text{-}Object}$ according to whether the similarity value is 1 or 0:

$$\kappa(\varepsilon_p) = \begin{cases} 1, & \displaystyle\sum_{i=1}^{L} \lVert \varepsilon_p - \varepsilon_i \rVert > \frac{1}{L} \sum_{i=1}^{L} \sum_{j=1}^{L} \lVert \varepsilon_i - \varepsilon_j \rVert \\[2mm] 0, & \displaystyle\sum_{i=1}^{L} \lVert \varepsilon_p - \varepsilon_i \rVert \le \frac{1}{L} \sum_{i=1}^{L} \sum_{j=1}^{L} \lVert \varepsilon_i - \varepsilon_j \rVert \end{cases} \qquad (10)$$

where $\varepsilon_p$ denotes the trajectory $\varepsilon(p'_{t-1}, p_t)$ of the point $p_t$, and $\varepsilon_i$, $\varepsilon_j$ denote the trajectories of the points to be detected in frame t; when $\kappa(\varepsilon_p) = 1$, $p_t \in \Theta_t^{Object}$; when $\kappa(\varepsilon_p) = 0$, $p_t \in \Theta_t^{No\text{-}Object}$;
Step 8: Using nearest-neighbour clustering, merge the motion trajectories $\varepsilon(p'_{t-1}, p_t)$ of the pixels in the moving-target set $\Theta_t^{Object}$ into image-level moving-target trajectories $\varepsilon(o_{t-1}, o_t)$;
Step 9: Judge each moving-target trajectory $\varepsilon(o_{t-1}, o_t)$: if the trajectory is nonlinear, the target is a spurious moving target; if the trajectory is linear, the target is a real moving target. According to the continuous trajectories of the real moving targets, automatic detection and tracking of the aerial video targets is completed.
2. The automatic detection and tracking method for aerial video targets based on trajectory constraint according to claim 1, characterized in that the decay parameter d takes values in the range [10, 30], the threshold $T^F$ takes values in the range [100, 150], and the threshold $T^B$ takes values in the range [100, 150].
3. The automatic detection and tracking method for aerial video targets based on trajectory constraint according to claim 1, characterized in that β takes values in the range [0.1, 3].
CN201410421654.1A 2014-08-25 2014-08-25 Automatic detection and tracking method for aerial video targets based on trajectory constraint Active CN104200492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410421654.1A CN104200492B (en) 2014-08-25 2014-08-25 Automatic detection and tracking method for aerial video targets based on trajectory constraint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410421654.1A CN104200492B (en) 2014-08-25 2014-08-25 Automatic detection and tracking method for aerial video targets based on trajectory constraint

Publications (2)

Publication Number Publication Date
CN104200492A true CN104200492A (en) 2014-12-10
CN104200492B CN104200492B (en) 2017-03-29

Family

ID=52085778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410421654.1A Active CN104200492B (en) 2014-08-25 2014-08-25 Automatic detection and tracking method for aerial video targets based on trajectory constraint

Country Status (1)

Country Link
CN (1) CN104200492B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413324A (en) * 2013-07-29 2013-11-27 西北工业大学 Automatic target tracking method for aerially photographed videos
CN103455797B (en) * 2013-09-07 2017-01-11 西安电子科技大学 Detection and tracking method of moving small target in aerial shot video
CN103617636B (en) * 2013-12-02 2016-08-17 西北工业大学 The automatic detecting and tracking method of video object based on movable information and sparse projection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAOZHENG YIN ET AL: "Moving Object Localization in Thermal Imagery by Forward-backward MHI", Computer Vision and Pattern Recognition Workshop, 133-140, 2006 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025658A (en) * 2015-11-13 2017-08-08 本田技研工业株式会社 The method and system of moving object is detected using single camera
CN107025658B (en) * 2015-11-13 2022-06-28 本田技研工业株式会社 Method and system for detecting moving object by using single camera
CN105654516A (en) * 2016-02-18 2016-06-08 西北工业大学 Method for detecting small moving object on ground on basis of satellite image with target significance
CN105654516B (en) * 2016-02-18 2019-03-26 西北工业大学 Satellite image based on target conspicuousness is to ground weak moving target detection method
CN105761279B (en) * 2016-02-18 2019-05-24 西北工业大学 Divide the method for tracking target with splicing based on track
CN106227239A (en) * 2016-09-22 2016-12-14 安徽机电职业技术学院 Four rotor flying robot target lock-ons based on machine vision follow the tracks of system
CN106886745A (en) * 2016-12-26 2017-06-23 西北工业大学 A kind of unmanned plane reconnaissance method based on the generation of real-time online map
CN106886745B (en) * 2016-12-26 2019-09-24 西北工业大学 A kind of unmanned plane reconnaissance method generated based on real-time online map
CN106846358A (en) * 2017-01-13 2017-06-13 西北工业大学深圳研究院 Segmentation of Multi-target and tracking based on the ballot of dense track
CN110555868A (en) * 2019-05-31 2019-12-10 南京航空航天大学 method for detecting small moving target under complex ground background
CN110660085A (en) * 2019-09-29 2020-01-07 凯迈(洛阳)测控有限公司 Real-time video image moving target detection and stable tracking method suitable for photoelectric pod
CN110660085B (en) * 2019-09-29 2023-03-28 凯迈(洛阳)测控有限公司 Real-time video image moving target detection and stable tracking method suitable for photoelectric pod

Also Published As

Publication number Publication date
CN104200492B (en) 2017-03-29


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant