CN109767454A - UAV video moving-target detection method based on temporal-spatial-frequency saliency - Google Patents

UAV video moving-target detection method based on temporal-spatial-frequency saliency

Info

Publication number
CN109767454A
CN109767454A (application CN201811552410.1A)
Authority
CN
China
Prior art keywords
saliency
frequency
image
region
domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811552410.1A
Other languages
Chinese (zh)
Other versions
CN109767454B (en)
Inventor
李映
汪亦文
李静玉
白宗文
聂金苗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201811552410.1A priority Critical patent/CN109767454B/en
Publication of CN109767454A publication Critical patent/CN109767454A/en
Application granted granted Critical
Publication of CN109767454B publication Critical patent/CN109767454B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to a UAV aerial-video moving-target detection method based on temporal-spatial-frequency saliency. The temporal saliency of the video is extracted with the Lucas-Kanade optical-flow method, the spatial saliency of the image is extracted from its color distribution, and the image is transformed from the spatial domain to the frequency domain, where its frequency-domain saliency is extracted with the spectral residual method. The temporal, spatial, and frequency-domain saliency maps are fused by linear weighting into a single saliency confidence map, the confidence map is binarized with a threshold, and the moving targets are extracted from the aerial video. By fusing the temporal, spatial, and frequency-domain saliency, the weakness of each domain is compensated by the saliency of the other two domains, which improves detection accuracy and robustness; the algorithm is simple and efficient.

Description

UAV video moving-target detection method based on temporal-spatial-frequency saliency
Technical field
The present invention relates to a method for detecting moving targets in UAV video, and belongs to the field of computer vision.
Background art
Moving-target detection in UAV video is one of the important branches of intelligent aerial-video analysis, with particularly important military and civil applications. Experts and scholars at home and abroad have carried out research on aerial-video moving-target detection. An earlier approach is based on inter-frame difference: adjacent frames are first registered using feature points or regions, the registered frames are then differenced, and the position of the moving target is judged from the difference image. However, this method is sensitive to the accuracy of the registration algorithm: if the registration is inaccurate, the difference result is also inaccurate, which strongly affects the subsequent localization of the moving target. In addition, because targets in aerial video are relatively small, some techniques detect moving targets with background-model estimation. This method, however, depends on the quality of the background model: if the established model contains the target itself, the subsequent detection cannot reach the expected performance. The temporal-spatial-frequency saliency method, by contrast, extracts saliency separately in the temporal, spatial, and frequency domains and fuses the three saliency maps to detect moving targets. Saliency mainly exploits the characteristics of the human visual system to obtain candidate target regions in the image, which are combined with the motion information in the video to detect the moving targets.
Summary of the invention
Technical problem to be solved
To avoid the shortcomings of the prior art, the present invention applies temporal-spatial-frequency saliency fusion to the field of UAV-video moving-target detection, in order to solve problems such as insufficient detection accuracy.
Technical solution
A UAV video moving-target detection method based on temporal-spatial-frequency saliency, characterized by the following steps:
Step 1: extract the temporal saliency of the video with the Lucas-Kanade optical-flow method;
Step 2: extract the spatial saliency of the image from its color distribution;
Step 3: transform the image from the spatial domain to the frequency domain and extract the frequency-domain saliency of the image with the spectral residual method;
Step 4: fuse the temporal, spatial, and frequency-domain saliency by linear weighting into a saliency confidence map, binarize the confidence map with a set threshold, and extract the moving targets from the aerial video, with the following specific steps:
1) fuse the temporal, spatial, and frequency-domain saliency by linear weighting into a saliency confidence map S(x, y):
S(x, y) = μ1St(x, y) + μ2Ss(x, y) + μ3Sf(x, y)
where St(x, y) is the temporal saliency, Ss(x, y) is the spatial saliency, Sf(x, y) is the frequency-domain saliency, and μi are the weights;
2) binarize S(x, y) with the set threshold to obtain the binary map B, and find all connected regions region_c1 in B using 8-connectivity;
3) set the circumscribed rectangle of each qualified region of region_c1 to 1 in B, and again find the connected regions region_c2 using 8-connectivity; at the same time, extract the edge map of the original input gray image with the Prewitt operator; if, at the position of the edge map corresponding to a connected region of region_c2, more than 5 rows have a row-wise gray-value sum greater than 5, the region is kept as region_c3;
4) initialize a zero matrix of the same size as the original input image, set the positions corresponding to region_c3 to 1 and the rest to 0, obtaining the binary image Y1;
5) apply a disk-shaped morphological closing of radius 7 to the binary map B to fill holes, obtaining Y2; AND the corresponding elements of Y1 and Y2 to obtain the final binary map Y;
6) find all connected regions of Y that meet the criterion, region_cfinal, using 8-connectivity; the positions of region_cfinal are the positions of the moving targets extracted from the aerial video.
The specific steps of step 1 are as follows:
Step 11: normalize the optical-flow direction map:
where θi denotes the flow angle at point (x, y); then apply a disk-shaped morphological closing of radius 3 to the normalized direction map to obtain the gray image C;
Step 12: count, in C, the frequency with which each gray value from 0 to 255 occurs, and take the negative logarithm of the frequency to obtain the direction saliency of the point:
where Ni is the number of points whose gray value equals that of point (x, y), and N is the total number of pixels in C;
the temporal saliency map Sa based on the optical-flow magnitude is obtained by the same method, with the magnitude normalized analogously; the remaining steps are identical to those of the direction saliency;
the final temporal saliency map St(x, y) is defined as the linear weighted sum of the magnitude-based and direction-based temporal saliency maps:
St(x, y) = w1Sa(x, y) + w2Sd(x, y).
The specific steps of step 2 are as follows:
Step 21: traverse the 4-neighborhood of each pixel starting from coordinate (0, 0) of the gray image; if the gray-value difference is below a threshold, assign the neighbor to the same connected region, otherwise take it as the seed of a new connected region; repeat until the whole image has been traversed;
Step 22: compute the mean gray value of each connected region and assign it uniformly to all pixels of the region, obtaining the image M;
Step 23: count the number of pixels in each connected region of M, compute the frequency with which the pixels of each region occur, and take the negative logarithm of the frequency to obtain the spatial saliency:
where Nconnect(i) is the number of pixels in the same connected region as point (x, y), and Nconnect is the total number of pixels in M.
The specific steps of step 3 are as follows:
Step 31: given a gray image H(x, y), transform it from the spatial domain to the frequency domain with the two-dimensional discrete Fourier transform F, obtaining the frequency-domain representation F[H(x, y)] of the image;
Step 32: obtain the magnitude A(f) and phase P(f) of F[H(x, y)]:
A(f) = |F[H(x, y)]|
where |·| denotes the magnitude operation and the phase is obtained with the angle operation;
Step 33: take the logarithm of the magnitude A(f) of F[H(x, y)] to obtain the log spectrum L(f):
L(f) = log(A(f))
Step 34: smooth the log spectrum with a local average filter hn(f):
M(f) = L(f) * hn(f)
where hn(f) is an n × n matrix all of whose entries are equal (each 1/n²);
Step 35: the difference between the log spectrum and its mean-filtered version is the spectral residual:
R(f) = L(f) − M(f)
Step 36: apply the two-dimensional inverse discrete Fourier transform to the spectral residual R(f) together with the phase P(f), converting back from the frequency domain to the spatial domain:
T(x, y) = |F⁻¹[exp{R(f) + iP(f)}]|²
Step 37: apply Gaussian filtering to the result in the spatial domain to reconstruct an image representing the saliency of each pixel of the original image, i.e. the saliency map:
Sf(x, y) = T(x, y) * Gaussian.
Beneficial effects
The proposed UAV video moving-target detection method based on temporal-spatial-frequency saliency fuses the temporal, spatial, and frequency-domain saliency, so that the weakness of each domain is compensated by the saliency of the other two. This improves detection accuracy and robustness; the algorithm is simple and efficient.
Description of the drawings
Fig. 1 is the flow chart of aerial-video moving-target detection based on temporal-spatial-frequency saliency
Detailed description of the embodiments
The invention is now further described with reference to the embodiments and the drawing:
This scheme uses the aerial-video moving-target detection method based on temporal-spatial-frequency saliency, with the following specific steps:
Step 1: extract the temporal saliency of the video with the Lucas-Kanade optical-flow method.
Step 2: extract the spatial saliency of the image from its color distribution.
Step 3: transform the image from the spatial domain to the frequency domain and extract the frequency-domain saliency of the image with the spectral residual method.
Step 4: fuse the temporal, spatial, and frequency-domain saliency by linear weighting into a saliency confidence map, binarize the confidence map with a suitable threshold, and extract the moving targets from the aerial video.
A preferred embodiment of the invention proceeds as follows:
Step 1: extract the temporal saliency of the video with the Lucas-Kanade optical-flow method.
Let I(x, y, t) be the gray value of pixel (x, y) in the image at time t, and suppose that by time t + dt the pixel at position (x, y) of the original image has been displaced by dx and dy in the x and y directions, respectively. Since the gray value of the corresponding pixel remains unchanged within a very short time:
I(x, y, t) = I(x + dx, y + dy, t + dt) (1)
Expanding the right-hand side with the Taylor formula and omitting the higher-order terms (the motion being sufficiently small) gives:
where the partial derivatives of the image at (x, y, t) are its gradients in the x, y and t directions, and Vx and Vy are the velocities of the pixel along the x and y directions, respectively.
Assuming that the motion within a local region is consistent, a 5x5 neighborhood is chosen in this embodiment; when a pixel lies at the image border, the missing gray values of the neighborhood are padded with 0. The above equation can then be established for every pixel in the neighborhood:
Here pixel1, pixel2, ..., pixeln are the pixels in the 5x5 neighborhood of pixel (x, y) in image I. This series of equations can be written uniformly as Qv = b, in which:
The system contains only two unknowns, Vx and Vy. Lucas and Kanade solve it with least squares, taking the least-squares solution as the optical flow of pixel (x, y):
v = (QᵀQ)⁻¹Qᵀb (5)
The magnitude and the direction of the flow are then computed, and the temporal saliency is derived from each of them.
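As an illustration, the least-squares solution (5) for a single pixel can be sketched in NumPy. This is a hedged sketch, not the patented implementation: the function name lk_flow_at and the central-difference gradients are assumptions of this example, while the 5x5 window and the zero-padding at the border follow the embodiment.

```python
import numpy as np

def lk_flow_at(I1, I2, x, y, win=5):
    """Lucas-Kanade flow v = (Q^T Q)^(-1) Q^T b at pixel (x, y).

    I1, I2: consecutive gray frames (2-D float arrays). Border
    neighbourhoods are zero-padded, as in the embodiment.
    """
    r = win // 2
    P1 = np.pad(I1.astype(float), r)   # zero padding for border pixels
    P2 = np.pad(I2.astype(float), r)
    Ix = np.gradient(P1, axis=1)       # spatial gradient in x
    Iy = np.gradient(P1, axis=0)       # spatial gradient in y
    It = P2 - P1                       # temporal gradient
    win_sl = (slice(y, y + win), slice(x, x + win))  # window in padded coords
    Q = np.stack([Ix[win_sl].ravel(), Iy[win_sl].ravel()], axis=1)
    b = -It[win_sl].ravel()
    # least-squares solution of Q v = b, i.e. v = (Q^T Q)^(-1) Q^T b
    v, *_ = np.linalg.lstsq(Q, b, rcond=None)
    return v  # (Vx, Vy)
```

For a horizontal intensity ramp shifted by one pixel between frames, the recovered flow at an interior pixel is (1, 0).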
The direction case is taken as an example:
1) Normalize the optical-flow direction map as follows:
where θi denotes the flow angle at point (x, y). Then apply a disk-shaped morphological closing of radius 3 to the normalized direction map to obtain the gray image C;
2) Count, in C, the frequency with which each gray value from 0 to 255 occurs, and take the negative logarithm of the frequency to obtain the direction saliency of the point. It is as follows:
where Ni is the number of points whose gray value equals that of point (x, y), and N is the total number of pixels in C.
The temporal saliency map Sa based on the optical-flow magnitude is obtained by the same method, with the magnitude normalized analogously; the remaining steps are identical to those of the direction saliency.
The final temporal saliency map St(x, y) is defined as the linear weighted sum of the magnitude-based and direction-based temporal saliency maps:
St(x, y) = w1Sa(x, y) + w2Sd(x, y) (11)
In this embodiment, w1 and w2 are taken as 0.7 and 0.3, respectively.
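Assuming a Python environment with NumPy and SciPy (an assumption of this sketch; the patent prescribes no implementation), steps 1)-2) and the combination (11) can be sketched as follows. Since the normalization formula itself is not reproduced in the text, mapping the angle range linearly onto 0..255 is also an assumption of this example.

```python
import numpy as np
from scipy import ndimage

def direction_saliency(theta):
    """Direction-based temporal saliency: rare flow directions are salient.

    theta: per-pixel flow angles in [0, 2*pi). The linear mapping onto
    0..255 is an assumed normalization; the radius-3 disk closing and the
    negative-log frequency follow the embodiment.
    """
    C = np.rint(theta / (2 * np.pi) * 255).astype(np.uint8)
    yy, xx = np.mgrid[-3:4, -3:4]
    disk = (xx ** 2 + yy ** 2) <= 9          # radius-3 disk element
    C = ndimage.grey_closing(C, footprint=disk)
    hist = np.bincount(C.ravel(), minlength=256)
    with np.errstate(divide="ignore"):
        sal = -np.log(hist / C.size)         # S_d = -log(N_i / N)
    Sd = sal[C]
    return Sd / Sd.max()                     # scaled for later fusion

def temporal_saliency(Sa, Sd, w1=0.7, w2=0.3):
    """Equation (11): S_t = w1*S_a + w2*S_d, weights from the embodiment."""
    return w1 * Sa + w2 * Sd
```

A small patch of pixels moving in a direction different from the dominant flow receives a higher saliency value than the background.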
Step 2: extract the spatial saliency of the image from its color distribution.
The image is first segmented with mean shift; then, borrowing the saliency computation used for the motion information, the negative logarithm of the distribution frequency is taken to obtain the spatial saliency of the image. The specific steps are as follows:
1) Traverse the 4-neighborhood of each pixel starting from coordinate (0, 0) of the gray image; if the gray-value difference is below a threshold (5 in this embodiment), assign the neighbor to the same connected region, otherwise take it as the seed of a new connected region; repeat until the whole image has been traversed.
2) Compute the mean gray value of each connected region and assign it uniformly to all pixels of the region, obtaining the image M.
3) Count the number of pixels in each connected region of M, compute the frequency with which the pixels of each region occur, and take the negative logarithm of the frequency to obtain the spatial saliency.
where Nconnect(i) is the number of pixels in the same connected region as point (x, y), and Nconnect is the total number of pixels in M.
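The region growing and negative-log frequency of steps 1)-3) can be sketched in plain Python/NumPy. This is an illustrative sketch under the assumption of a gray-scale input; the mean-shift pre-segmentation is not included, and the helper name spatial_saliency is not from the patent.

```python
import numpy as np
from collections import deque

def spatial_saliency(gray, diff_thresh=5):
    """Color-distribution spatial saliency: pixels in small (rare) regions
    are more salient. Region growing over 4-neighborhoods with a gray-value
    difference threshold (5 in the embodiment), then S_s = -log(N_c(i)/N).
    """
    h, w = gray.shape
    labels = -np.ones((h, w), dtype=int)
    n_lab = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] >= 0:
                continue
            # grow a new connected region from this seed
            q = deque([(sy, sx)])
            labels[sy, sx] = n_lab
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] < 0
                            and abs(int(gray[ny, nx]) - int(gray[y, x])) < diff_thresh):
                        labels[ny, nx] = n_lab
                        q.append((ny, nx))
            n_lab += 1
    counts = np.bincount(labels.ravel())
    Ss = -np.log(counts[labels] / gray.size)  # rarer regions -> larger value
    return Ss / Ss.max()
```

A small bright patch on a uniform background forms a small region and therefore scores higher than the large background region.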
Step 3: transform the image from the spatial domain to the frequency domain and extract the frequency-domain saliency of the image with the spectral residual method.
1) Given a gray image H(x, y), transform it from the spatial domain to the frequency domain with the two-dimensional discrete Fourier transform F, obtaining the frequency-domain representation F[H(x, y)] of the image.
2) Obtain the magnitude A(f) and phase P(f) of F[H(x, y)]:
A(f) = |F[H(x, y)]| (13)
where |·| denotes the magnitude operation and the phase is obtained with the angle operation.
3) Take the logarithm of the magnitude A(f) of F[H(x, y)] to obtain the log spectrum L(f):
L(f) = log(A(f)) (15)
4) Smooth the log spectrum with a local average filter hn(f), as shown below, to obtain its general shape:
M(f) = L(f) * hn(f) (16)
Here hn(f) is an n × n matrix (3 × 3 in this embodiment) all of whose entries are equal (each 1/n²).
5) The difference between the log spectrum and its mean-filtered version is the spectral residual, computed as follows:
R(f) = L(f) − M(f) (18)
6) The spectral residual captures the anomalous frequency components of the image and can therefore serve well for salient-target detection. Apply the two-dimensional inverse discrete Fourier transform to the spectral residual R(f) together with the phase P(f), converting back from the frequency domain to the spatial domain:
T(x, y) = |F⁻¹[exp{R(f) + iP(f)}]|² (19)
7) Apply Gaussian filtering (in this scheme a 3 × 3 Gaussian low-pass filter with standard deviation 1) to the result in the spatial domain to reconstruct an image representing the saliency of each pixel of the original image, i.e. the saliency map:
Sf(x, y) = T(x, y) * Gaussian (20)
Step 4: fuse the temporal, spatial, and frequency-domain saliency by linear weighting into a saliency confidence map, binarize the confidence map with a suitable threshold, and extract the moving targets from the aerial video.
1) Fuse the temporal, spatial, and frequency-domain saliency by linear weighting into a saliency confidence map S(x, y):
S(x, y) = μ1St(x, y) + μ2Ss(x, y) + μ3Sf(x, y) (21)
where μi are the weights; in this embodiment μ1, μ2 and μ3 are taken as 0.52, 0.2 and 0.28, respectively.
2) Binarize S(x, y) with a suitable threshold (0.2 in this embodiment) to obtain the binary map B, and find all connected regions region_c1 in B using 8-connectivity; in this scheme each connected region must have an area between 20 × 20 and 200 × 200 pixels, with both the width-to-height and height-to-width ratios at most 5.
3) Set the circumscribed rectangle of each qualified region of region_c1 to 1 in B, and again find the connected regions region_c2 using 8-connectivity. At the same time, extract the edge map of the original input gray image with the Prewitt operator; if, at the position of the edge map corresponding to a connected region of region_c2, more than 5 rows have a row-wise gray-value sum greater than 5, the region is kept as region_c3.
4) Initialize a zero matrix of the same size as the original input image, set the positions corresponding to region_c3 to 1 and the rest to 0, obtaining the binary image Y1.
5) Apply a disk-shaped morphological closing of radius 7 to the binary map B to fill holes, obtaining Y2. AND the corresponding elements of Y1 and Y2 to obtain the final binary map Y.
6) Find all connected regions of Y that meet the criterion, region_cfinal, using 8-connectivity; the criterion in this embodiment is that the pixel count of each connected region must be at least 0.6 times the area of its bounding rectangle.
The positions of region_cfinal are the positions of the moving targets extracted from the aerial video.
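A simplified sketch of the fusion and post-processing of step 4, assuming NumPy/SciPy. The Prewitt edge-consistency test and the bounding-box re-labelling (steps 3-4) are omitted for brevity, so this is not the complete patented procedure; only the fusion (21), thresholding, the size/aspect filters and the radius-7 closing are shown.

```python
import numpy as np
from scipy import ndimage

EIGHT = np.ones((3, 3), dtype=bool)  # 8-connectivity structuring element

def detect_moving_targets(St, Ss, Sf, mu=(0.52, 0.2, 0.28), thresh=0.2):
    """Fuse the three saliency maps and keep plausible target regions."""
    # (21): linear weighted fusion into a confidence map
    S = mu[0] * St + mu[1] * Ss + mu[2] * Sf
    B = S > thresh                       # binarize with the threshold
    labels, _ = ndimage.label(B, structure=EIGHT)
    keep = np.zeros_like(B)
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        area = h * w
        # embodiment filters: area in [20*20, 200*200], aspect ratios <= 5
        if 400 <= area <= 40000 and max(h / w, w / h) <= 5:
            keep[sl] = B[sl]
    # radius-7 disk closing of B fills holes (Y2); AND with the kept regions
    yy, xx = np.mgrid[-7:8, -7:8]
    disk = (xx ** 2 + yy ** 2) <= 49
    Y2 = ndimage.binary_closing(B, structure=disk)
    return keep & Y2
```

For a single 30x30 salient block present in all three maps, the output keeps exactly that block.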

Claims (4)

1. A UAV video moving-target detection method based on temporal-spatial-frequency saliency, characterized by the following steps:
Step 1: extract the temporal saliency of the video with the Lucas-Kanade optical-flow method;
Step 2: extract the spatial saliency of the image from its color distribution;
Step 3: transform the image from the spatial domain to the frequency domain and extract the frequency-domain saliency of the image with the spectral residual method;
Step 4: fuse the temporal, spatial, and frequency-domain saliency by linear weighting into a saliency confidence map, binarize the confidence map with a set threshold, and extract the moving targets from the aerial video, with the following specific steps:
1) fuse the temporal, spatial, and frequency-domain saliency by linear weighting into a saliency confidence map S(x, y):
S(x, y) = μ1St(x, y) + μ2Ss(x, y) + μ3Sf(x, y)
where St(x, y) is the temporal saliency, Ss(x, y) is the spatial saliency, Sf(x, y) is the frequency-domain saliency, and μi are the weights;
2) binarize S(x, y) with the set threshold to obtain the binary map B, and find all connected regions region_c1 in B using 8-connectivity;
3) set the circumscribed rectangle of each qualified region of region_c1 to 1 in B, and again find the connected regions region_c2 using 8-connectivity; at the same time, extract the edge map of the original input gray image with the Prewitt operator; if, at the position of the edge map corresponding to a connected region of region_c2, more than 5 rows have a row-wise gray-value sum greater than 5, the region is kept as region_c3;
4) initialize a zero matrix of the same size as the original input image, set the positions corresponding to region_c3 to 1 and the rest to 0, obtaining the binary image Y1;
5) apply a disk-shaped morphological closing of radius 7 to the binary map B to fill holes, obtaining Y2; AND the corresponding elements of Y1 and Y2 to obtain the final binary map Y;
6) find all connected regions of Y that meet the criterion, region_cfinal, using 8-connectivity; the positions of region_cfinal are the positions of the moving targets extracted from the aerial video.
2. The UAV video moving-target detection method based on temporal-spatial-frequency saliency according to claim 1, characterized in that the specific steps of step 1 are as follows:
Step 11: normalize the optical-flow direction map:
where θi denotes the flow angle at point (x, y); then apply a disk-shaped morphological closing of radius 3 to the normalized direction map to obtain the gray image C;
Step 12: count, in C, the frequency with which each gray value from 0 to 255 occurs, and take the negative logarithm of the frequency to obtain the direction saliency of the point:
where Ni is the number of points whose gray value equals that of point (x, y), and N is the total number of pixels in C;
the temporal saliency map Sa based on the optical-flow magnitude is obtained by the same method, with the magnitude normalized analogously; the remaining steps are identical to those of the direction saliency;
the final temporal saliency map St(x, y) is defined as the linear weighted sum of the magnitude-based and direction-based temporal saliency maps:
St(x, y) = w1Sa(x, y) + w2Sd(x, y).
3. The UAV video moving-target detection method based on temporal-spatial-frequency saliency according to claim 1, characterized in that the specific steps of step 2 are as follows:
Step 21: traverse the 4-neighborhood of each pixel starting from coordinate (0, 0) of the gray image; if the gray-value difference is below a threshold, assign the neighbor to the same connected region, otherwise take it as the seed of a new connected region; repeat until the whole image has been traversed;
Step 22: compute the mean gray value of each connected region and assign it uniformly to all pixels of the region, obtaining the image M;
Step 23: count the number of pixels in each connected region of M, compute the frequency with which the pixels of each region occur, and take the negative logarithm of the frequency to obtain the spatial saliency:
where Nconnect(i) is the number of pixels in the same connected region as point (x, y), and Nconnect is the total number of pixels in M.
4. The UAV video moving-target detection method based on temporal-spatial-frequency saliency according to claim 1, characterized in that the specific steps of step 3 are as follows:
Step 31: given a gray image H(x, y), transform it from the spatial domain to the frequency domain with the two-dimensional discrete Fourier transform F, obtaining the frequency-domain representation F[H(x, y)] of the image;
Step 32: obtain the magnitude A(f) and phase P(f) of F[H(x, y)]:
A(f) = |F[H(x, y)]|
where |·| denotes the magnitude operation and the phase is obtained with the angle operation;
Step 33: take the logarithm of the magnitude A(f) of F[H(x, y)] to obtain the log spectrum L(f):
L(f) = log(A(f))
Step 34: smooth the log spectrum with a local average filter hn(f):
M(f) = L(f) * hn(f)
where hn(f) is an n × n matrix all of whose entries are equal (each 1/n²);
Step 35: the difference between the log spectrum and its mean-filtered version is the spectral residual:
R(f) = L(f) − M(f)
Step 36: apply the two-dimensional inverse discrete Fourier transform to the spectral residual R(f) together with the phase P(f), converting back from the frequency domain to the spatial domain:
T(x, y) = |F⁻¹[exp{R(f) + iP(f)}]|²
Step 37: apply Gaussian filtering to the result in the spatial domain to reconstruct an image representing the saliency of each pixel of the original image, i.e. the saliency map:
Sf(x, y) = T(x, y) * Gaussian.
CN201811552410.1A 2018-12-18 2018-12-18 Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance Active CN109767454B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811552410.1A CN109767454B (en) 2018-12-18 2018-12-18 Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance


Publications (2)

Publication Number Publication Date
CN109767454A true CN109767454A (en) 2019-05-17
CN109767454B CN109767454B (en) 2022-05-10

Family

ID=66450293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811552410.1A Active CN109767454B (en) 2018-12-18 2018-12-18 Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance

Country Status (1)

Country Link
CN (1) CN109767454B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148149A (en) * 2019-05-20 2019-08-20 哈尔滨工业大学(威海) Underwater vehicle thermal trail segmentation method based on local contrast accumulation
CN111950549A (en) * 2020-08-12 2020-11-17 上海大学 Sea surface obstacle detection method based on fusion of sea antennas and visual saliency
CN112001991A (en) * 2020-10-27 2020-11-27 中国空气动力研究与发展中心高速空气动力研究所 High-speed wind tunnel dynamic oil flow map image processing method
CN113449658A (en) * 2021-07-05 2021-09-28 四川师范大学 Night video sequence significance detection method based on spatial domain, frequency domain and time domain
CN113591708A (en) * 2021-07-30 2021-11-02 金陵科技学院 Meteorological disaster monitoring method based on satellite-borne hyperspectral image
CN114511851A (en) * 2022-01-30 2022-05-17 南水北调中线干线工程建设管理局 Hairspring algae cell statistical method based on microscope image
CN115861365A (en) * 2022-10-11 2023-03-28 海南大学 Moving object detection method, system, computer device and storage medium
CN116449332A (en) * 2023-06-14 2023-07-18 西安晟昕科技股份有限公司 Airspace target detection method based on MIMO radar

Citations (10)

Publication number Priority date Publication date Assignee Title
CN101303727A (en) * 2008-07-08 2008-11-12 北京中星微电子有限公司 Intelligent management method based on video human number Stat. and system thereof
CN101634706A (en) * 2009-08-19 2010-01-27 西安电子科技大学 Method for automatically detecting bridge target in high-resolution SAR images
CN103077533A (en) * 2012-12-26 2013-05-01 中国科学技术大学 Method for positioning moving target based on frogeye visual characteristics
CN103075998A (en) * 2012-12-31 2013-05-01 华中科技大学 Monocular space target distance-measuring and angle-measuring method
CN103679196A (en) * 2013-12-05 2014-03-26 河海大学 Method for automatically classifying people and vehicles in video surveillance
CN104050477A (en) * 2014-06-27 2014-09-17 西北工业大学 Infrared image vehicle detection method based on auxiliary road information and significance detection
CN104777453A (en) * 2015-04-23 2015-07-15 西北工业大学 Wave beam domain time-frequency analysis method for warship line spectrum noise source positioning
CN105303571A (en) * 2015-10-23 2016-02-03 苏州大学 Time-space saliency detection method for video processing
CN107122715A (en) * 2017-03-29 2017-09-01 哈尔滨工程大学 It is a kind of based on frequency when conspicuousness combine moving target detecting method
CN108229487A (en) * 2016-12-12 2018-06-29 南京理工大学 A kind of conspicuousness detection method of combination spatial domain and frequency domain


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
"Saliency directed sampling for flicker removal from video streams", RESEARCH DISCLOSURE, vol. 487, no. 30, 10 November 2004 (2004-11-10) *
BING XU et al.: "Accurate Object Segmentation for Video Sequences via Temporal-Spatial-Frequency Saliency Model", IEEE INTELLIGENT SYSTEMS *
YANG CHANGHUI et al.: "Overlapped fruit recognition for citrus harvesting robot in natural scenes", 2017 2ND INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION ENGINEERING *
SONG YAO et al.: "Detection of abnormal vehicle behavior in traffic surveillance video", VIDEO APPLICATION AND ENGINEERING *
LI PENG et al.: "Moving target tracking framework based on visual saliency and enhanced feature point matching", JOURNAL OF HUNAN UNIVERSITY OF SCIENCE AND TECHNOLOGY (NATURAL SCIENCE EDITION) *
XIAO LIJUN et al.: "Video moving object segmentation based on the symmetric difference algorithm", JOURNAL OF JILIN UNIVERSITY (SCIENCE EDITION) *
CAI JIALI: "Research on saliency-based moving object detection", CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148149A (en) * 2019-05-20 2019-08-20 哈尔滨工业大学(威海) Thermal wake segmentation method for underwater vehicles based on local contrast accumulation
CN110148149B (en) * 2019-05-20 2024-01-30 哈尔滨工业大学(威海) Thermal wake segmentation method for underwater vehicles based on local contrast accumulation
CN111950549A (en) * 2020-08-12 2020-11-17 上海大学 Sea surface obstacle detection method based on fusion of the sea-sky line and visual saliency
CN112001991A (en) * 2020-10-27 2020-11-27 中国空气动力研究与发展中心高速空气动力研究所 High-speed wind tunnel dynamic oil flow map image processing method
CN112001991B (en) * 2020-10-27 2021-01-26 中国空气动力研究与发展中心高速空气动力研究所 High-speed wind tunnel dynamic oil flow map image processing method
CN113449658A (en) * 2021-07-05 2021-09-28 四川师范大学 Night video sequence significance detection method based on spatial domain, frequency domain and time domain
CN113591708B (en) * 2021-07-30 2023-06-23 金陵科技学院 Meteorological disaster monitoring method based on satellite-borne hyperspectral image
CN113591708A (en) * 2021-07-30 2021-11-02 金陵科技学院 Meteorological disaster monitoring method based on satellite-borne hyperspectral image
CN114511851A (en) * 2022-01-30 2022-05-17 南水北调中线干线工程建设管理局 Hairspring algae cell statistical method based on microscope image
CN115861365B (en) * 2022-10-11 2023-08-15 海南大学 Moving object detection method, system, computer device and storage medium
CN115861365A (en) * 2022-10-11 2023-03-28 海南大学 Moving object detection method, system, computer device and storage medium
CN116449332A (en) * 2023-06-14 2023-07-18 西安晟昕科技股份有限公司 Airspace target detection method based on MIMO radar
CN116449332B (en) * 2023-06-14 2023-08-25 西安晟昕科技股份有限公司 Airspace target detection method based on MIMO radar

Also Published As

Publication number Publication date
CN109767454B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN109767454A (en) UAV video moving object detection method based on spatio-temporal-frequency saliency
Ni et al. Visual tracking using neuromorphic asynchronous event-based cameras
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
Zhu et al. Object tracking in structured environments for video surveillance applications
CN106709472A (en) Video target detection and tracking method based on optical flow features
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN111311647B (en) Global-local and Kalman filtering-based target tracking method and device
Liu et al. Adaptive object tracking by learning hybrid template online
CN106570893A (en) Rapid stable visual tracking method based on correlation filtering
CN104036524A (en) Fast target tracking method with improved SIFT algorithm
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
CN111881790A (en) Automatic extraction method and device for road crosswalk in high-precision map making
CN110276785A (en) Anti-occlusion infrared object tracking method
CN109544635A (en) Automatic camera calibration method based on enumeration search
CN102609945A (en) Automatic registration method of visible light and thermal infrared image sequences
CN107808524A (en) UAV-based intersection vehicle detection method
CN105405138A (en) Water surface target tracking method based on saliency detection
CN112489088A (en) Twin network visual tracking method based on memory unit
Feng Mask RCNN-based single shot multibox detector for gesture recognition in physical education
CN110473255A (en) Ship bollard localization method based on multi-grid segmentation
CN111899278B (en) Unmanned aerial vehicle image rapid target tracking method based on mobile terminal
CN109410246A (en) The method and device of vision tracking based on correlation filtering
CN110689559B (en) Visual target tracking method based on dense convolutional network characteristics
CN105741317B (en) Infrared moving target detection method based on spatio-temporal saliency analysis and sparse representation
CN106780541A (en) Improved background subtraction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant