CN104217442B - Aerial video moving object detection method based on multiple model estimation - Google Patents


Info

Publication number
CN104217442B
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201410431932.1A
Other languages
Chinese (zh)
Other versions
CN104217442A (en)
Inventor
张艳宁 (Zhang Yanning)
杨涛 (Yang Tao)
仝小敏 (Tong Xiaomin)
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN201410431932.1A
Publication of CN104217442A
Application granted
Publication of CN104217442B
Legal status: Active


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an aerial video moving object detection method based on multiple model estimation. The method comprises the following steps: first, a mean shift color segmentation method is used to segment the scene into a number of color blocks; then, using dense pyramid optical flow features, a RANSAC (Random Sample Consensus) method is adopted to compute an affine transformation model for each color block whose area exceeds a threshold; the smaller color blocks are then handled by analyzing the motion consistency of the points within each block to compute its degree of membership to the multiple models, where blocks with high membership are moving blocks and the rest are false-alarm targets; finally, the moving blocks are merged and denoised, and the detection result is output and displayed. On a public aerial video database the invention achieves a 5% detection error rate, 5 percentage points lower than the traditional 10% error rate.

Description

Aerial video moving object detection method based on multiple-model estimation
Technical field
The present invention relates to moving target detection, and more particularly to an aerial video moving object detection method based on multiple-model estimation.
Background technology
Moving object detection in aerial video is an important research subject in the field of computer vision. Existing aerial video moving object detection methods are mainly based on detection frameworks that estimate a single background model. The document "Moving object detection in aerial video based on spatiotemporal saliency, Chinese Journal of Aeronautics, 2013, 26(5): 1211-1217" proposes an aerial video moving object detection algorithm based on spatiotemporal saliency analysis. The method first obtains salient regions in the time dimension through background model estimation and frame differencing, which serve as coarsely extracted candidate regions; saliency analysis is then carried out in the spatial dimension to obtain the appearance details of the targets in the candidate regions; finally the temporal and spatial saliency features are combined to obtain an accurate moving object detection result. However, the quality of its detection result depends heavily on the complexity of the scene: once the scene contains static objects that do not lie in the background plane, such as overhead building structures, electric poles, or overpasses, false alarms occur, and the detection error rate averages about 10%.
Content of the invention
Technical problem to be solved
To avoid the weakness of prior-art aerial video moving object detection methods, which are easily affected by scene complexity — when a scene contains multiple background planes, parallax objects such as overhead static buildings and electric poles cause a very high false alarm rate and thus detection errors — the present invention proposes an aerial video moving object detection method based on multiple-model estimation.
Technical scheme
An aerial video moving object detection method based on multiple-model estimation, characterized by the following steps:
Step 1: apply pyramid mean shift color segmentation to the current frame image. Color blocks whose area exceeds a given threshold thresh are background blocks {patchb_i | i = 1, 2, ..., bnum}; the rest are foreground blocks {patchf_j | j = 1, 2, ..., fnum}, where bnum is the number of background blocks in the frame segmentation result. The i-th background block is patchb_i = {area_i, pset_i, pset_i'}, where area_i is the number of points the block contains, pset_i is the set of coordinates of all its points, and pset_i' is the set of coordinates of the corresponding points in the adjacent frame. fnum is the number of foreground blocks; the j-th foreground block is patchf_j = {area_j, pset_j, pset_j'}, where area_j is the number of points in the j-th foreground block, pset_j is the set of coordinates of all its points, and pset_j' is the set of coordinates of the corresponding points in the adjacent frame;
Step 2: apply the pyramid optical flow algorithm to the point set pset_i of each background block to extract dense optical flow features. For a pixel (x0, y0) in pset_i, the coordinates of the corresponding point in the adjacent frame are (x0', y0'):

    x0' = x0 + u(x0, y0)
    y0' = y0 + v(x0, y0)

where u(x0, y0) and v(x0, y0) are the horizontal and vertical optical flow at pixel (x0, y0). The same method is used for the foreground block point sets pset_j: for a pixel (x1, y1), the corresponding point in the adjacent frame is (x1', y1'):

    x1' = x1 + u(x1, y1)
    y1' = y1 + v(x1, y1)
Step 3: use the RANSAC method to compute the affine transformation model af_i of background block patchb_i, and compute the projection error (err_x0, err_y0) of each pixel (x0, y0) in pset_i:

    [err_x0, err_y0]^T = af_i · [x0, y0, 1]^T − [x0', y0']^T

If ||(err_x0, err_y0)|| < dis, then (x0, y0) is a background point that satisfies the affine transformation model af_i; otherwise it is a noise point in this background. The multi-model set is denoted af = {af_i | i = 1, 2, ..., bnum}, with af_i = [r_i | t_i], where r_i and t_i are the rotation matrix and translation matrix respectively;
Step 4: compute the motion vector of the k-th point (x_{j,k}, y_{j,k}) of the current j-th foreground block patchf_j under the i-th background model:

    [v_{j,k}(x), v_{j,k}(y)]^T = [x_{j,k}', y_{j,k}']^T − af_i · [x_{j,k}, y_{j,k}, 1]^T

where (x_{j,k}', y_{j,k}') is the coordinate of the point of pset_j' corresponding to (x_{j,k}, y_{j,k}), and v_{j,k}(x) and v_{j,k}(y) are the horizontal and vertical motion velocities.

Compute the degree of membership of the j-th foreground block to the i-th background model:

    m(i, j) = Σ_{k=1}^{area_j} (v_{j,k}(x) − v̄_j(x))(v_{j,k}(y) − v̄_j(y)) / sqrt( Σ_{k=1}^{area_j} (v_{j,k}(x) − v̄_j(x))² · Σ_{k=1}^{area_j} (v_{j,k}(y) − v̄_j(y))² )

with

    v̄_j(x) = (1/area_j) Σ_{k=1}^{area_j} v_{j,k}(x)
    v̄_j(y) = (1/area_j) Σ_{k=1}^{area_j} v_{j,k}(y)
Step 5: compute the membership matrix p from the degrees of membership:

    p(i, j) = 1 if m(i, j) > θ, and 0 otherwise

where θ is the membership threshold. If there exists an i such that p(i, j) = 1, the j-th foreground block is determined to be a moving block. The moving blocks are then merged:
a) Initialize the model-set queue q and the object-set queue o to empty;
b) Collect the background models to which the current j-th foreground block is subordinate; the set of these background models is denoted s_j:

    s_j = {af_i | p(i, j) = 1}

c) Traverse the model-set queue q; if there exists an m such that q(m) ∩ s_j ≠ ∅, merge s_j into q(m), i.e. q(m) = q(m) ∪ s_j, and merge the j-th foreground block into the corresponding entry of the object-set queue o; otherwise execute step d);
d) Add s_j to the model-set queue q as a new member, and add the corresponding foreground block to the object-set queue o as a new member;
Step 6: traverse the object-set queue o; if two blocks within the same entry have a center distance smaller than a first threshold, they are merged. After merging, noise removal is carried out: moving blocks in the detection result whose area is smaller than a second threshold are removed as false detections. Finally the detection result is displayed and output.
The threshold thresh = 0.01 × imgwidth × imgheight, where imgwidth and imgheight are the width and height of the input video image respectively.
The distance threshold dis is taken as 1.
θ = 0.6.
The first threshold is taken as 1.5 times the sum of the radii of the two blocks.
The second threshold is taken as 100 pixels.
Beneficial effect
The aerial video moving object detection method based on multiple-model estimation proposed by the present invention partitions the scene into multiple models using color segmentation and extracts pyramid optical flow features to perform multiple-model estimation of the scene, so that the detection result no longer depends on a single background estimation and registration result, thereby improving the robustness of the detection algorithm. In addition, by computing the degree of membership of each target to the multiple models, false-alarm targets such as building structures, electric poles, and tall trees can be effectively removed, lowering the detection error rate to 5%.
Specific embodiment
The invention will be further described in conjunction with an embodiment:
1. Mean shift color segmentation
A section of aerial video sequence is input, and pyramid mean shift color segmentation is first applied to the current frame image. To balance computational efficiency and accuracy, this embodiment uses 3 pyramid levels, a color window width of 10, and a spatial window width of 10. Color blocks whose area exceeds the given threshold thresh are background blocks and the rest are foreground blocks; this embodiment takes thresh = 0.01 × imgwidth × imgheight, where imgwidth and imgheight are the width and height of the input video image respectively. {patchb_i | i = 1, 2, ..., bnum} denotes the background blocks in the current frame segmentation result, where bnum is the number of background blocks; the i-th block is patchb_i = {area_i, pset_i, pset_i'}, where area_i is the number of points the block contains, pset_i is the set of coordinates of all its points, and pset_i' is the set of coordinates of the corresponding points in the adjacent frame, initialized to empty. {patchf_j | j = 1, 2, ..., fnum} denotes the foreground blocks in the current frame segmentation result, where fnum is the number of foreground blocks; each foreground block likewise contains an area and point sets, patchf_j = {area_j, pset_j, pset_j'}, where area_j is the number of points in the j-th foreground block, pset_j is the set of coordinates of all its points, and pset_j' is the set of coordinates of the corresponding points in the adjacent frame, initialized to empty.
2. Dense optical flow feature tracking
Considering both timeliness and accuracy, the pyramid optical flow algorithm is used to extract dense optical flow features between the current frame and the adjacent frame, computing the optical flow vector of every pixel, i.e. predicting the position of each pixel in the adjacent frame. This embodiment uses a pyramid scale of 0.5, 3 pyramid levels, a window width of 15, 3 iterations, a smoothing window width of 5, and a variance of 1.2. Let u(x, y) and v(x, y) be the horizontal and vertical optical flow at pixel (x, y); then the coordinates of the corresponding point of pixel (x, y) in the adjacent frame are (x', y'):

    x' = x + u(x, y)    (3)
    y' = y + v(x, y)

From this, the point set pset_i' in the adjacent frame corresponding to the background block point set pset_i can be computed: for every (x0, y0) ∈ pset_i there is (x0', y0') ∈ pset_i', where x0' = x0 + u(x0, y0) and y0' = y0 + v(x0, y0).
The point set pset_j' corresponding to the foreground block point set pset_j is computed likewise: for every (x1, y1) ∈ pset_j there is (x1', y1') ∈ psetj', where x1' = x1 + u(x1, y1) and y1' = y1 + v(x1, y1).
3. Multiple background model estimation
Each background block patchb_i in the segmentation result corresponds to one affine transformation model. The RANSAC method is applied to the pixel point sets pset_i and pset_i' of each background block to compute the affine transformation model af_i between the current frame and the next frame, and all points in pset_i are divided into inliers and outliers according to their projection error under the affine model af_i: an inlier is a background point that satisfies the transformation model af_i, while an outlier is a noise point in this background. The projection error (err_x0, err_y0) of (x0, y0) is computed as

    [err_x0, err_y0]^T = af_i · [x0, y0, 1]^T − [x0', y0']^T    (4)

If ||(err_x0, err_y0)|| < dis, then (x0, y0) is an inlier; otherwise it is an outlier. This embodiment takes the distance threshold dis as 1 and the confidence as 0.99. The multi-model set is denoted af = {af_i | i = 1, 2, ..., bnum}, with

    af_i = [r_i | t_i]    (5)

where r_i and t_i are the rotation matrix and translation matrix respectively.
4. Membership degree computation
The membership matrix m of size bnum × fnum is computed, where m(i, j) is the degree of membership of the j-th foreground block to the i-th background model. Here the degree of membership is defined as the correlation coefficient of the motion vectors of the points in the block, i.e. a measure of the motion consistency of the points in the block. Theoretically, a true target moves consistently with respect to its background model at all of its points and should therefore have a large degree of membership, whereas decoys such as building structures, electric poles, and trees do not possess this property. The motion vector of the k-th point (x_{j,k}, y_{j,k}) of the current j-th foreground block patchf_j under the i-th background model is computed as

    [v_{j,k}(x), v_{j,k}(y)]^T = [x_{j,k}', y_{j,k}']^T − af_i · [x_{j,k}, y_{j,k}, 1]^T    (6)

where (x_{j,k}', y_{j,k}') is the coordinate of the point of pset_j' corresponding to (x_{j,k}, y_{j,k}), and v_{j,k}(x) and v_{j,k}(y) are the horizontal and vertical motion velocities. The mean motion velocity of the j-th foreground block is

    v̄_j(x) = (1/area_j) Σ_{k=1}^{area_j} v_{j,k}(x)    (7)
    v̄_j(y) = (1/area_j) Σ_{k=1}^{area_j} v_{j,k}(y)    (8)

so the degree of membership of the j-th foreground block to the i-th background model is

    m(i, j) = Σ_{k=1}^{area_j} (v_{j,k}(x) − v̄_j(x))(v_{j,k}(y) − v̄_j(y)) / sqrt( Σ_{k=1}^{area_j} (v_{j,k}(x) − v̄_j(x))² · Σ_{k=1}^{area_j} (v_{j,k}(y) − v̄_j(y))² )    (9)

m(i, j) ∈ [0, 1], and the closer m(i, j) is to 1, the larger the degree of membership of the j-th foreground block to the i-th background model, and the higher the probability that it is a moving target within this background.
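A NumPy sketch of Eqs. (6)–(9): the residual motion of every point of one foreground block under one background model, followed by the correlation coefficient of its horizontal and vertical components. The homogeneous coordinate and the square root in the denominator are reconstructions from the surrounding equations; note that a raw correlation coefficient lies in [−1, 1], so clamping or an absolute value may be intended where the text restricts m(i, j) to [0, 1].

```python
import numpy as np

def membership(pset, pset_next, af):
    """m(i, j): motion-consistency of one foreground block under model af (2x3)."""
    ones = np.ones((len(pset), 1))
    pred = np.hstack([pset, ones]) @ af.T       # af_i · (x, y, 1)^T, Eq. (6)
    v = np.asarray(pset_next, float) - pred     # residual motion (v_x, v_y)
    vx = v[:, 0] - v[:, 0].mean()               # centered components, Eqs. (7)-(8)
    vy = v[:, 1] - v[:, 1].mean()
    denom = np.sqrt((vx ** 2).sum() * (vy ** 2).sum())
    return float((vx * vy).sum() / denom) if denom > 0 else 0.0   # Eq. (9)
```

A block whose points all drift coherently relative to the model scores near 1; scattered, inconsistent residuals score near 0.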
5. Detection result management and output
First the membership matrix p is computed from the degrees of membership as

    p(i, j) = 1 if m(i, j) > θ, and 0 otherwise    (10)

where θ is the membership threshold; taking noise and erroneous optical flow at edges into account, this embodiment takes θ = 0.6. If there exists an i such that p(i, j) = 1, the j-th foreground block is determined to be a moving block. Since a moving target may itself be split into several moving blocks due to inconsistent color, the moving blocks need to be merged.
The model-set queue q and the object-set queue o are initialized to empty. The membership matrix p is traversed one foreground block at a time and the following operations are performed:
a) Collect the background models to which the current j-th foreground block is subordinate; the set of these background models is denoted s_j:

    s_j = {af_i | p(i, j) = 1}    (11)

b) Traverse the model-set queue q; if there exists an m such that q(m) ∩ s_j ≠ ∅, merge s_j into q(m), i.e. q(m) = q(m) ∪ s_j, and merge the j-th foreground block into the corresponding entry of the object-set queue o; otherwise execute step c);
c) Add s_j to the model-set queue q as a new member, and add the corresponding foreground block to the object-set queue o as a new member.
This yields the model-set queue and the corresponding object-set queue; the foreground blocks are then merged. The object-set queue is traversed, and two blocks within the same entry whose center distance is smaller than a given threshold (in this embodiment, 1.5 times the sum of the radii of the two blocks) are merged. After merging, noise removal is carried out: moving blocks in the detection result whose area is smaller than a given threshold (in this embodiment, 100 pixels) are removed as false detections. Finally the detection result is displayed and output.
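The queue bookkeeping of steps a)–c) can be sketched in plain Python. P is the 0/1 membership matrix of Eq. (10); blocks whose model set s_j overlaps an existing entry of the model-set queue are merged into that entry's object set. The list-of-sets representation and the skipping of blocks subordinate to no model are illustrative choices.

```python
def group_moving_blocks(P):
    """Group foreground blocks that share background-model membership.

    P: bnum x fnum 0/1 matrix. Returns the model-set queue q and the
    object-set queue o; each o[m] is one candidate moving object.
    """
    q, o = [], []                        # model-set queue, object-set queue
    bnum, fnum = len(P), len(P[0])
    for j in range(fnum):                # one foreground block at a time
        s_j = {i for i in range(bnum) if P[i][j] == 1}
        if not s_j:                      # subordinate to no model: not a moving block
            continue
        for m in range(len(q)):
            if q[m] & s_j:               # overlapping model sets -> merge, step b)
                q[m] |= s_j
                o[m].append(j)
                break
        else:                            # no overlap: open a new group, step c)
            q.append(s_j)
            o.append([j])
    return q, o
```

The center-distance merge and the small-area noise removal would then run over each o[m] before display.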

Claims (6)

1. An aerial video moving object detection method based on multiple-model estimation, characterized by the following steps:
Step 1: apply pyramid mean shift color segmentation to the current frame image. Color blocks whose area exceeds a given threshold thresh are background blocks {patchb_i | i = 1, 2, ..., bnum}; the rest are foreground blocks {patchf_j | j = 1, 2, ..., fnum}, where bnum is the number of background blocks in the frame segmentation result. The i-th background block is patchb_i = {area_i, pset_i, pset_i'}, where area_i is the number of points the block contains, pset_i is the set of coordinates of all its points, and pset_i' is the set of coordinates of the corresponding points in the adjacent frame; fnum is the number of foreground blocks, and the j-th foreground block is patchf_j = {area_j, pset_j, pset_j'}, where area_j is the number of points in the j-th foreground block, pset_j is the set of coordinates of all its points, and pset_j' is the set of coordinates of the corresponding points in the adjacent frame;
Step 2: apply the pyramid optical flow algorithm to the point set pset_i of each background block to extract dense optical flow features; for a pixel (x0, y0) in pset_i, the coordinates of the corresponding point in the adjacent frame are (x0', y0'):

    x0' = x0 + u(x0, y0)
    y0' = y0 + v(x0, y0)

where u(x0, y0) and v(x0, y0) are the horizontal and vertical optical flow at pixel (x0, y0); the same method is used for the foreground block point sets pset_j: for a pixel (x1, y1), the corresponding point in the adjacent frame is (x1', y1'):

    x1' = x1 + u(x1, y1)
    y1' = y1 + v(x1, y1)
Step 3: use the RANSAC method to compute the affine transformation model af_i of background block patchb_i, and compute the projection error (err_x0, err_y0) of each pixel (x0, y0) in pset_i:

    [err_x0, err_y0]^T = af_i · [x0, y0, 1]^T − [x0', y0']^T

If ||(err_x0, err_y0)|| < dis, then (x0, y0) is a background point that satisfies the affine transformation model af_i; otherwise it is a noise point in this background. The multi-model set is denoted af = {af_i | i = 1, 2, ..., bnum}, with af_i = [r_i | t_i], where r_i and t_i are the rotation matrix and translation matrix respectively;
Step 4: compute the motion vector of the k-th point (x_{j,k}, y_{j,k}) of the current j-th foreground block patchf_j under the i-th background model:

    [v_{j,k}(x), v_{j,k}(y)]^T = [x_{j,k}', y_{j,k}']^T − af_i · [x_{j,k}, y_{j,k}, 1]^T

where (x_{j,k}', y_{j,k}') is the coordinate of the point of pset_j' corresponding to (x_{j,k}, y_{j,k}), and v_{j,k}(x) and v_{j,k}(y) are the horizontal and vertical motion velocities;

Compute the degree of membership of the j-th foreground block to the i-th background model:

    m(i, j) = Σ_{k=1}^{area_j} (v_{j,k}(x) − v̄_j(x))(v_{j,k}(y) − v̄_j(y)) / sqrt( Σ_{k=1}^{area_j} (v_{j,k}(x) − v̄_j(x))² · Σ_{k=1}^{area_j} (v_{j,k}(y) − v̄_j(y))² )

with

    v̄_j(x) = (1/area_j) Σ_{k=1}^{area_j} v_{j,k}(x)
    v̄_j(y) = (1/area_j) Σ_{k=1}^{area_j} v_{j,k}(y)
Step 5: compute the membership matrix p from the degrees of membership:

    p(i, j) = 1 if m(i, j) > θ, and 0 otherwise

where θ is the membership threshold; if there exists an i such that p(i, j) = 1, the j-th foreground block is determined to be a moving block. The moving blocks are then merged:
a) Initialize the model-set queue q and the object-set queue o to empty;
b) Collect the background models to which the current j-th foreground block is subordinate; the set of these background models is denoted s_j:

    s_j = {af_i | p(i, j) = 1}

c) Traverse the model-set queue q; if there exists an m such that q(m) ∩ s_j ≠ ∅, merge s_j into q(m), i.e. q(m) = q(m) ∪ s_j, and merge the j-th foreground block into the corresponding entry of the object-set queue o; otherwise execute step d);
d) Add s_j to the model-set queue q as a new member, and add the corresponding foreground block to the object-set queue o as a new member;
Step 6: traverse the object-set queue o; if two blocks within the same entry have a center distance smaller than a first threshold, they are merged; after merging, noise removal is carried out, and moving blocks in the detection result whose area is smaller than a second threshold are removed as false detections; finally the detection result is displayed and output.
2. The aerial video moving object detection method based on multiple-model estimation according to claim 1, characterized in that the threshold thresh = 0.01 × imgwidth × imgheight, where imgwidth and imgheight are the width and height of the input video image respectively.
3. The aerial video moving object detection method based on multiple-model estimation according to claim 1, characterized in that dis is taken as 1.
4. The aerial video moving object detection method based on multiple-model estimation according to claim 1, characterized in that θ = 0.6.
5. The aerial video moving object detection method based on multiple-model estimation according to claim 1, characterized in that the first threshold is taken as 1.5 times the sum of the radii of the two blocks.
6. The aerial video moving object detection method based on multiple-model estimation according to claim 1, characterized in that the second threshold is taken as 100 pixels.
CN201410431932.1A 2014-08-28 2014-08-28 Aerial video moving object detection method based on multiple model estimation Active CN104217442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410431932.1A CN104217442B (en) 2014-08-28 2014-08-28 Aerial video moving object detection method based on multiple model estimation


Publications (2)

Publication Number Publication Date
CN104217442A CN104217442A (en) 2014-12-17
CN104217442B true CN104217442B (en) 2017-01-25

Family

ID=52098884


Country Status (1)

Country Link
CN (1) CN104217442B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975918B (en) * 2016-04-29 2019-04-02 厦门大学 The moving target detecting method towards mobile camera based on multiple-model estimator
CN107316030A (en) * 2017-07-04 2017-11-03 西北工业大学深圳研究院 Unmanned plane is to terrain vehicle automatic detection and sorting technique
CN107330922A (en) * 2017-07-04 2017-11-07 西北工业大学 Video moving object detection method of taking photo by plane based on movable information and provincial characteristics
CN108540769B (en) * 2018-02-05 2019-03-29 东营金丰正阳科技发展有限公司 Unmanned flight's platform real-time image transmission system
CN108491818B (en) * 2018-03-30 2019-07-05 北京三快在线科技有限公司 Detection method, device and the electronic equipment of target object
CN109102503A (en) * 2018-08-13 2018-12-28 北京市遥感信息研究所 It is a kind of based on color space smoothly and improve the significant model of frequency tuning high score image change detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110020961A (en) * 2009-08-25 2011-03-04 삼성전자주식회사 Method of detecting and tracking moving object for mobile platform
CN103413324A (en) * 2013-07-29 2013-11-27 西北工业大学 Automatic target tracking method for aerially photographed videos
CN103426172A (en) * 2013-08-08 2013-12-04 深圳一电科技有限公司 Vision-based target tracking method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9147260B2 (en) * 2010-12-20 2015-09-29 International Business Machines Corporation Detection and tracking of moving objects


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Shen Hao et al., "Moving object detection in aerial video based on spatiotemporal saliency", Chinese Journal of Aeronautics, 2013, vol. 26, no. 5, pp. 1211-1217 *
Martin A. Fischler et al., "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography", Communications of the ACM, June 1981, vol. 24, no. 6, pp. 381-395 *
Gunnar Farneback, "Two-Frame Motion Estimation Based on Polynomial Expansion", SCIA 2003, vol. 2749, pp. 363-370 *
Mo Jinhua et al., "Infrared small target background suppression method based on joint global-local filtering", Chinese Journal of Stereology and Image Analysis, 2011, vol. 16, no. 3, pp. 223-231 *
Ma Yu et al., "Adaptive mean shift segmentation algorithm", Laser & Infrared, October 2013, vol. 43, no. 10, pp. 1162-1165 *
Yu Yingxue, "Fast locking of ground moving targets in aerial video images", Telecom World, June 2014, pp. 8-9 *

Also Published As

Publication number Publication date
CN104217442A (en) 2014-12-17

Similar Documents

Publication Publication Date Title
CN104217442B (en) Aerial video moving object detection method based on multiple model estimation
CN111539273B (en) Traffic video background modeling method and system
CN106910203B (en) The quick determination method of moving target in a kind of video surveillance
CN103164706B (en) Object counting method and device based on video signal analysis
Sidla et al. Pedestrian detection and tracking for counting applications in crowded situations
EP2858008B1 (en) Target detecting method and system
CN110175576A (en) A kind of driving vehicle visible detection method of combination laser point cloud data
CN109598794B (en) Construction method of three-dimensional GIS dynamic model
CN103077521B (en) A kind of area-of-interest exacting method for video monitoring
CN110490911B (en) Multi-camera multi-target tracking method based on non-negative matrix factorization under constraint condition
US8712096B2 (en) Method and apparatus for detecting and tracking vehicles
CN102799883B (en) Method and device for extracting movement target from video image
CN106875424A (en) A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN102156880A (en) Method for detecting abnormal crowd behavior based on improved social force model
CN109460764A (en) A kind of satellite video ship monitoring method of combination brightness and improvement frame differential method
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
CN106778633B (en) Pedestrian identification method based on region segmentation
Hinz et al. Car detection in aerial thermal images by local and global evidence accumulation
CN109636771A (en) Airbound target detection method and system based on image procossing
CN103747240A (en) Fusion color and motion information vision saliency filtering method
CN103735269A (en) Height measurement method based on video multi-target tracking
CN103679677A (en) Dual-model image decision fusion tracking method based on mutual updating of models
CN106156714A (en) The Human bodys' response method merged based on skeletal joint feature and surface character
CN106657948A (en) low illumination level Bayer image enhancing method and enhancing device
Wang et al. Low-altitude infrared small target detection based on fully convolutional regression network and graph matching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant