CN109767454B - Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance - Google Patents
Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance
- Publication number
- CN109767454B (application number CN201811552410.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- significance
- saliency
- frequency
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to a method for detecting moving targets in unmanned aerial vehicle (UAV) aerial video based on temporal-spatial-frequency saliency. The method extracts the temporal saliency of the video with the Lucas-Kanade optical flow method, extracts the spatial saliency of each image from its color distribution, converts the image from the spatial domain to the frequency domain and extracts its frequency-domain saliency with the spectral residual method, linearly weights and fuses the temporal, spatial, and frequency-domain saliency into a saliency confidence map, binarizes the confidence map with a threshold, and extracts the moving targets from the aerial video. Because the temporal, spatial, and frequency domains are fused, the saliency of any two domains compensates for the weaknesses of the third, which improves detection accuracy and robustness; the algorithm is simple and executes efficiently.
Description
Technical Field
The invention relates to a method for detecting a moving target from an unmanned aerial vehicle aerial video, and belongs to the field of computer vision.
Background
Moving target detection in UAV aerial video is an important branch of intelligent aerial video analysis, with critical applications in both military and civilian fields. Researchers in China and abroad have already carried out some work on aerial video moving target detection. Early methods were based on inter-frame differencing: adjacent frames are registered using feature points or regions, the registered frames are differenced, and the position of the moving target is determined from the difference image. However, this approach is sensitive to the accuracy of the registration algorithm; if the registration precision is low, the difference result is inaccurate, which severely affects the subsequent localization of the moving target. In addition, because targets in aerial video are relatively small, some techniques detect moving targets with background model estimation. This approach, however, depends on the quality of the estimated background model: if the model contains targets, subsequent detection performs poorly. The moving target detection method based on temporal-spatial-frequency saliency extracts saliency from the temporal, spatial, and frequency domains respectively and then fuses the three to detect moving targets. It mainly exploits the characteristics of the human visual system to obtain candidate target regions in an image and combines them with the motion information in the video to detect moving targets.
Disclosure of Invention
Technical problem to be solved
To overcome the shortcomings of the prior art, the invention provides a UAV aerial video moving target detection method based on temporal-spatial-frequency saliency fusion, which addresses problems such as low detection accuracy.
Technical scheme
An unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance is characterized by comprising the following steps:
step 1: extracting the time significance of the video by using a Lucas-Kanade optical flow method;
step 2: extracting the spatial significance of the image by utilizing the color distribution;
step 3: converting the image from the spatial domain to the frequency domain, and extracting the frequency-domain saliency of the image with the spectral residual method;
step 4: performing linear weighted fusion of the temporal, spatial, and frequency-domain saliency to obtain a saliency confidence map, binarizing the confidence map with a set threshold, and extracting the moving targets from the aerial video; the specific steps are as follows:
1) carrying out linear weighted fusion on the time, space and frequency domain significance to obtain a significance confidence map S (x, y):
S(x,y)=μ1St(x,y)+μ2Ss(x,y)+μ3Sf(x,y)
wherein St(x, y) is the temporal saliency, Ss(x, y) is the spatial saliency, Sf(x, y) is the frequency-domain saliency, and μi is the weight of each term;
2) binarizing S(x, y) with a set threshold to obtain a binary image B, and finding all connected regions region_c1 in B using 8-connectivity;
3) setting the circumscribed rectangle of each qualifying region_c1 to 1 and finding the connected regions region_c2 using 8-connectivity; meanwhile, extracting the edge information map of the original input gray-scale image with the Prewitt operator; if, in the edge information map at the position corresponding to a connected region of region_c2, more than 5 rows each have a gray-value sum exceeding 5, retaining that connected region as region_c3;
4) initializing a zero matrix of the same size as the original input image, setting the positions corresponding to region_c3 to 1 and the rest to 0 to obtain a binary image Y1;
5) applying a disk-shaped morphological closing with radius 7 to the binary image B to fill hole regions, obtaining Y2; the element-wise AND of Y1 and Y2 gives the final binary image Y;
6) finding all connected regions region_cfinal in Y that meet the criterion, using 8-connectivity; the positions of region_cfinal are the positions of the moving targets extracted from the aerial video.
The specific steps of step 1 are as follows:
step 11: normalizing the optical flow directional diagram:
wherein θi represents the angle value of the optical flow at point (x, y); a disk-shaped morphological closing with radius 3 is then applied to the normalized optical flow direction map to obtain a gray-scale map C;
step 12: counting the occurrences of gray values 0 to 255 in the gray-scale map C, computing the frequency of each gray value, and taking the negative logarithm of the frequency to obtain the directional saliency of the point:
Sd(x,y)=-log(Ni/N)
wherein Ni is the number of points with the same gray value as point (x, y), and N is the number of pixels in C;
the same method yields a temporal saliency map Sa based on the optical flow magnitude, where the magnitude is normalized as follows and the remaining steps are the same as for the directional saliency;
the final temporal saliency map St(x, y) is defined as a linear weighted sum of the magnitude-based and direction-based temporal saliency maps:
St(x,y)=w1Sa(x,y)+w2Sd(x,y)。
the specific steps of step 2 are as follows:
step 21: traversing the gray-scale image from pixel (0, 0) and examining the 4-neighborhood of each pixel; if the gray-value difference is smaller than a threshold, the pixels belong to the same connected region, otherwise a new connected region is started; repeating until the whole image has been traversed;
step 22: calculating the gray average value of each connected region, and uniformly assigning values to all pixel points in the region to obtain an image M;
step 23: counting the number of pixels in each connected region of the image M, computing each region's frequency of occurrence, and taking the negative logarithm of the frequency to obtain the spatial saliency:
Ss(x,y)=-log(Nconnect(i)/Nconnect)
wherein Nconnect(i) is the number of pixels in the same connected region as point (x, y), and Nconnect is the number of pixels in M.
The specific steps of step 3 are as follows:
step 31: given a gray-scale image H(x, y), converting it from the spatial domain to the frequency domain by the two-dimensional discrete Fourier transform F to obtain its frequency-domain representation F[H(x, y)];
step 32: obtaining the amplitude A(f) and phase P(f) of F[H(x, y)]:
A(f)=|F[H(x,y)]|
step 33: taking the logarithm of the amplitude A(f) of F[H(x, y)] to obtain the log spectrum L(f):
L(f)=log(A(f))
step 34: smoothing the log spectrum with a local smoothing filter hn(f):
M(f)=L(f)*hn(f)
wherein hn(f) is an n × n matrix whose entries are all equal, i.e., hn(f)=(1/n²)·1n×n, where 1n×n is the n × n all-ones matrix;
step 35: the spectral residual is the difference between the log spectrum and its mean-filtered version:
R(f)=L(f)-M(f)
step 36: applying the two-dimensional inverse discrete Fourier transform to the spectral residual R(f) and the phase P(f) converts them from the frequency domain back to the spatial domain, as shown below;
T(x,y)=|F-1[exp{R(f)+iP(f)}]|2
step 37: reconstructing an image by Gaussian-filtering the spatial-domain result of the spectral residual; this image represents the saliency of each pixel of the original image and serves as the saliency map:
Sf(x,y)=T(x,y)*Gaussian。
advantageous effects
In the UAV aerial video moving target detection method based on temporal-spatial-frequency saliency, the temporal, spatial, and frequency-domain saliency are fused so that the saliency of any two domains compensates for the weaknesses of the third. This improves detection accuracy and robustness; the algorithm is simple and executes efficiently.
Drawings
FIG. 1 is a flow chart of aerial video moving object detection based on time-space-frequency saliency
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
the scheme adopts an aerial video moving target detection method based on time-space-frequency significance, and comprises the following specific steps:
step 1: and extracting the time significance of the video by using a Lucas-Kanade optical flow method.
Step 2: the spatial saliency of the image is extracted by using the color distribution.
Step 3: convert the image from the spatial domain to the frequency domain and extract the frequency-domain saliency of the image with the spectral residual method.
Step 4: perform linear weighted fusion of the temporal, spatial, and frequency-domain saliency to obtain a saliency confidence map, binarize the confidence map with a suitable threshold, and extract the moving targets from the aerial video.
A preferred embodiment of the invention comprises the following steps:
step 1: and extracting the time significance of the video by using a Lucas-Kanade optical flow method.
Assume I(x, y, t) is the gray value of pixel (x, y) at time t, and that by time t + dt the pixel originally at (x, y) has moved by dx and dy in the x and y directions. Since the gray value of a pixel is assumed constant over a short time:
I(x,y,t)=I(x+dx,y+dy,t+dt) (1)
The right-hand side is expanded with Taylor's formula, and the higher-order infinitesimals are omitted because the motion is small enough, giving:
IxVx+IyVy+It=0
wherein Ix, Iy, and It are the gradients of the image at (x, y, t) in the x, y, and t directions, and Vx and Vy are the motion velocities of the pixel in the x and y directions.
Assuming the pixel motion within a local area is consistent (a 5 × 5 neighborhood in this embodiment, with missing values at the image border filled with 0), a system of the above equations can be established for the pixel over its neighborhood:
wherein pixel1, pixel2, …, pixeln are the pixels in the 5 × 5 neighborhood of pixel (x, y) in image I. This system of equations can be written uniformly as Qv = b, where:
the system of equations contains only two unknowns, VxAnd Vy. Lucas and Kanade solve the least square solution of the equation set by using the least square method as the optical flow of the pixel (x, y), and the solution is obtained:
v=(QTQ)-1QTb (5)
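As an illustration, the least-squares solution above can be sketched in Python with NumPy. This is a minimal sketch, not the patent's implementation; the gradient values below are made-up stand-ins for the 5 × 5-neighborhood measurements:

```python
import numpy as np

def lk_flow(Ix, Iy, It):
    """Least-squares optical flow for one pixel: v = (Q^T Q)^-1 Q^T b.

    Ix, Iy, It: flattened spatial/temporal gradients over the pixel's
    neighborhood (e.g. 25 values for a 5x5 window).
    """
    Q = np.stack([Ix, Iy], axis=1)   # n x 2 gradient matrix
    b = -It                          # right-hand side of Qv = b
    # lstsq solves the normal equations (Q^T Q) v = Q^T b robustly
    v, *_ = np.linalg.lstsq(Q, b, rcond=None)
    return v                         # (Vx, Vy)

# Hypothetical gradients of a patch translating by (1, 0.5) px/frame
rng = np.random.default_rng(0)
Ix = rng.normal(size=25)
Iy = rng.normal(size=25)
It = -(Ix * 1.0 + Iy * 0.5)          # consistent with Vx = 1, Vy = 0.5
print(lk_flow(Ix, Iy, It))           # ≈ [1.0, 0.5]
```

Using `lstsq` instead of forming (QᵀQ)⁻¹ explicitly avoids numerical trouble when the neighborhood has little texture and QᵀQ is nearly singular.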
Then the magnitude and direction of the optical flow at the point are calculated:
temporal saliency is found based on magnitude and direction of optical flow, respectively.
The directions are taken as examples:
1) normalizing the optical flow directional diagram as follows:
wherein θi represents the angle value of the optical flow at point (x, y). A disk-shaped morphological closing with radius 3 is then applied to the normalized optical flow direction map to obtain a gray-scale map C;
2) count the occurrences of gray values 0 to 255 in the gray-scale map C, compute the frequency of each gray value, and take the negative logarithm of the frequency to obtain the directional saliency of the point:
Sd(x,y)=-log(Ni/N)
wherein Ni is the number of points with the same gray value as point (x, y), and N is the number of pixels in C.
The same method yields a temporal saliency map Sa based on the optical flow magnitude, where the magnitude is normalized as follows and the remaining steps are the same as for the directional saliency.
The final temporal saliency map St(x, y) is defined as a linear weighted sum of the magnitude-based and direction-based temporal saliency maps:
St(x,y)=w1Sa(x,y)+w2Sd(x,y) (11)
In this embodiment, w1 and w2 are set to 0.7 and 0.3, respectively.
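The negative-log rarity measure used for the directional saliency above can be sketched in NumPy. This is an illustrative sketch under the patent's description; the tiny input map is a made-up stand-in for the quantized optical-flow direction map C:

```python
import numpy as np

def rarity_saliency(C):
    """Per-pixel saliency -log(N_i / N) from a quantized gray map C (uint8)."""
    counts = np.bincount(C.ravel(), minlength=256)  # occurrences of each gray value
    N = C.size                                      # total number of pixels
    freq = counts[C].astype(float) / N              # N_i / N at each pixel
    return -np.log(freq)                            # rare values -> high saliency

# Toy map: mostly background value 10, a small "moving" patch of value 200
C = np.full((8, 8), 10, dtype=np.uint8)
C[3:5, 3:5] = 200
S = rarity_saliency(C)
# The rare patch is more salient than the common background
print(S[3, 3] > S[0, 0])                            # True
```

The same function would serve for the magnitude-based map Sa, since only the normalization of the input differs.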
Step 2: the spatial saliency of the image is extracted by using the color distribution.
The image is first segmented, and then, borrowing the computation of motion-information saliency, the negative logarithm of the distribution frequency is taken to obtain the spatial saliency of the image. The specific steps are as follows:
1) traverse the gray-scale image from pixel (0, 0), examining the 4-neighborhood of each pixel; if the gray-value difference is smaller than a threshold (5 in this embodiment), the pixels belong to the same connected region, otherwise a new connected region is started; repeat until the whole image has been traversed.
2) And calculating the gray average value of each connected region, and uniformly assigning values to all pixel points in the region to obtain an image M.
3) Count the number of pixels in each connected region of image M, compute each region's frequency of occurrence, and take the negative logarithm of the frequency to obtain the spatial saliency:
Ss(x,y)=-log(Nconnect(i)/Nconnect)
wherein Nconnect(i) is the number of pixels in the same connected region as point (x, y), and Nconnect is the number of pixels in M.
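The gray-value region growing of steps 1)–2) can be sketched as follows. The 4-neighborhood flood fill and the threshold of 5 follow the embodiment, while the tiny test image is a made-up example, not data from the patent:

```python
import numpy as np
from collections import deque

def label_regions(img, thresh=5):
    """4-neighborhood region growing: a neighbor whose gray difference from the
    current pixel is below `thresh` joins the current connected region."""
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)   # -1 = unvisited
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue                    # already assigned to a region
            labels[sy, sx] = next_label
            q = deque([(sy, sx)])
            while q:                        # breadth-first flood fill
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                            and abs(int(img[ny, nx]) - int(img[y, x])) < thresh):
                        labels[ny, nx] = next_label
                        q.append((ny, nx))
            next_label += 1
    return labels

img = np.array([[10, 10, 100],
                [10, 11, 101],
                [12, 12, 102]], dtype=np.uint8)
labels = label_regions(img)
print(labels)   # two regions: the dark block and the bright column
```

Averaging the gray values per label would then give the image M of step 2), and the per-region pixel counts feed the negative-log formula above.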
Step 3: convert the image from the spatial domain to the frequency domain and extract the frequency-domain saliency of the image with the spectral residual method.
1) Given a gray scale image H (x, y), it is transformed from the spatial domain to the frequency domain by a two-dimensional discrete Fourier transform F, resulting in a representation F [ H (x, y) ] of the image in the frequency domain.
2) Obtain the amplitude A(f) and phase P(f) of F[H(x, y)]:
A(f)=|F[H(x,y)]| (13)
3) Take the logarithm of the amplitude A(f) of F[H(x, y)] to obtain the log spectrum L(f):
L(f)=log(A(f)) (15)
4) Smooth the log spectrum with a local smoothing filter hn(f), as shown below, to obtain the general shape of the log spectrum:
M(f)=L(f)*hn(f) (16)
wherein hn(f) is an n × n matrix (3 × 3 in this embodiment) whose entries are all equal, i.e., hn(f)=(1/n²)·1n×n, where 1n×n is the n × n all-ones matrix:
5) The spectral residual is the difference between the log spectrum and its mean-filtered version, computed as follows;
R(f)=L(f)-M(f) (18)
6) The spectral residual captures the anomalous frequency components of the image and can therefore be used for salient object detection. Applying the two-dimensional inverse discrete Fourier transform to the spectral residual R(f) and the phase P(f) converts them from the frequency domain back to the spatial domain, as shown below;
T(x,y)=|F-1[exp{R(f)+iP(f)}]|2 (19)
7) Reconstruct an image by Gaussian-filtering the spatial-domain result of the spectral residual (this scheme uses a 3 × 3 Gaussian low-pass filter with standard deviation 1); this image represents the saliency of each pixel of the original image and serves as the saliency map.
Sf(x,y)=T(x,y)*Gaussian (20)
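Steps 1)–7) of the spectral residual method can be sketched in NumPy. This is a minimal sketch, not the patent's exact implementation: the 3 × 3 mean filter and 3 × 3 Gaussian (σ = 1) follow the embodiment, the filters use wrap-around (circular) boundaries for brevity, and the small random image is only a placeholder input:

```python
import numpy as np

def spectral_residual_saliency(H):
    """Spectral residual saliency map, following steps 1)-7)."""
    F = np.fft.fft2(H.astype(float))
    A = np.abs(F)                        # amplitude spectrum A(f)
    P = np.angle(F)                      # phase spectrum P(f)
    L = np.log(A + 1e-8)                 # log spectrum L(f)
    # 3x3 mean filter h_n via shifted sums (wrap-around boundaries)
    M = sum(np.roll(np.roll(L, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    R = L - M                            # spectral residual R(f)
    # back to the spatial domain: T = |F^-1[exp(R + iP)]|^2
    T = np.abs(np.fft.ifft2(np.exp(R + 1j * P))) ** 2
    # separable 3x3 Gaussian smoothing, sigma = 1 (wrap-around)
    g = np.array([0.27406862, 0.45186276, 0.27406862])
    T = sum(g[i + 1] * np.roll(T, i, 0) for i in (-1, 0, 1))
    T = sum(g[i + 1] * np.roll(T, i, 1) for i in (-1, 0, 1))
    return T / T.max()                   # normalized saliency map S_f

rng = np.random.default_rng(1)
H = rng.integers(0, 50, size=(64, 64))
H[20:28, 20:28] = 255                    # a bright anomalous patch
S = spectral_residual_saliency(H)
```

Keeping the original phase P(f) while replacing the log amplitude by the residual R(f) is what suppresses the statistically regular part of the spectrum and leaves the anomalies.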
Step 4: perform linear weighted fusion of the temporal, spatial, and frequency-domain saliency to obtain a saliency confidence map, binarize the confidence map with a suitable threshold, and extract the moving targets from the aerial video.
1) Carrying out linear weighted fusion on the time, space and frequency domain significance to obtain a significance confidence map S (x, y):
S(x,y)=μ1St(x,y)+μ2Ss(x,y)+μ3Sf(x,y) (21)
μi is the weight of each term; in this embodiment μ1, μ2, and μ3 are 0.52, 0.2, and 0.28, respectively.
2) Binarize S(x, y) with a suitable threshold (0.2 in this embodiment) to obtain a binary image B, and find all connected regions region_c1 in B using 8-connectivity; in this scheme each connected region's area must be between 20 × 20 and 200 × 200 pixels, and both its height-to-width and width-to-height ratios must be at most 5.
3) Set the circumscribed rectangle of each qualifying region_c1 to 1 and find the connected regions region_c2 using 8-connectivity. Meanwhile, extract the edge information map of the original input gray-scale image with the Prewitt operator; if, in the edge information map at the position corresponding to a connected region of region_c2, more than 5 rows each have a gray-value sum greater than 5, retain that connected region as region_c3.
4) Initialize a zero matrix of the same size as the original input image, set the positions corresponding to region_c3 to 1 and the rest to 0, obtaining a binary image Y1.
5) Apply a disk-shaped morphological closing with radius 7 to the binary image B to fill hole regions, obtaining Y2. The element-wise AND of Y1 and Y2 gives the final binary image Y.
6) Find all connected regions region_cfinal in Y that meet the criterion, using 8-connectivity; the criterion in this embodiment is that the pixel count of each connected region must be at least 0.6 times the area of its circumscribed rectangle.
The positions of region_cfinal are the positions of the moving targets extracted from the aerial video.
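The fusion and thresholding of steps 1)–2) can be sketched as follows (the subsequent connected-region filtering is omitted). The weights 0.52/0.2/0.28 and threshold 0.2 are the embodiment's values; the three saliency maps here are synthetic placeholders, not real outputs:

```python
import numpy as np

def fuse_and_binarize(St, Ss, Sf, w=(0.52, 0.2, 0.28), thresh=0.2):
    """Linear weighted fusion S = mu1*St + mu2*Ss + mu3*Sf, then thresholding."""
    S = w[0] * St + w[1] * Ss + w[2] * Sf   # saliency confidence map
    return (S > thresh).astype(np.uint8)     # binary map B

# Synthetic maps: a target near the center that all three domains agree on
shape = (32, 32)
St = np.zeros(shape); Ss = np.zeros(shape); Sf = np.zeros(shape)
St[12:18, 12:18] = 0.8      # temporal saliency (motion)
Ss[12:18, 12:18] = 0.6      # spatial saliency (color rarity)
Sf[12:18, 12:18] = 0.7      # frequency-domain saliency (spectral residual)
B = fuse_and_binarize(St, Ss, Sf)
print(B.sum())              # 36: the 6x6 target region survives the threshold
```

Because the fused value at the target (0.52·0.8 + 0.2·0.6 + 0.28·0.7 = 0.732) far exceeds the threshold while the background stays at 0, a target need not be strong in every single domain to survive the fusion.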
Claims (4)
1. An unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance is characterized by comprising the following steps:
step 1: extracting the time significance of the video by using a Lucas-Kanade optical flow method;
step 2: extracting the spatial significance of the image by utilizing the color distribution;
step 3: converting the image from the spatial domain to the frequency domain, and extracting the frequency-domain saliency of the image with the spectral residual method;
step 4: performing linear weighted fusion of the temporal, spatial, and frequency-domain saliency to obtain a saliency confidence map, binarizing the confidence map with a set threshold, and extracting the moving targets from the aerial video; the specific steps are as follows:
1) carrying out linear weighted fusion on the time, space and frequency domain significance to obtain a significance confidence map S (x, y):
S(x,y)=μ1St(x,y)+μ2Ss(x,y)+μ3Sf(x,y)
wherein St(x, y) is the temporal saliency, Ss(x, y) is the spatial saliency, Sf(x, y) is the frequency-domain saliency, and μi is the weight of each term;
2) binarizing S(x, y) with a set threshold to obtain a binary image B, and finding all connected regions region_c1 in B using 8-connectivity;
3) setting the circumscribed rectangle of each qualifying region_c1 to 1 and finding the connected regions region_c2 using 8-connectivity; meanwhile, extracting the edge information map of the original input gray-scale image with the Prewitt operator; if, in the edge information map at the position corresponding to a connected region of region_c2, more than 5 rows each have a gray-value sum exceeding 5, retaining that connected region as region_c3;
4) initializing a zero matrix of the same size as the original input image, setting the positions corresponding to region_c3 to 1 and the rest to 0 to obtain a binary image Y1;
5) applying a disk-shaped morphological closing with radius 7 to the binary image B to fill hole regions, obtaining Y2; the element-wise AND of Y1 and Y2 gives the final binary image Y;
6) finding all connected regions region_cfinal in Y that meet the criterion, using 8-connectivity; the positions of region_cfinal are the positions of the moving targets extracted from the aerial video.
2. The unmanned aerial vehicle aerial video moving object detection method based on time-space-frequency saliency as claimed in claim 1, wherein the specific steps of step 1 are as follows:
step 11: normalizing the optical flow directional diagram:
wherein θi represents the angle value of the optical flow at point (x, y); a disk-shaped morphological closing with radius 3 is then applied to the normalized optical flow direction map to obtain a gray-scale map C;
step 12: counting the occurrences of gray values 0 to 255 in the gray-scale map C, computing the frequency of each gray value, and taking the negative logarithm of the frequency to obtain the directional saliency of the point:
Sd(x,y)=-log(Ni/N)
wherein Ni is the number of points with the same gray value as point (x, y), and N is the number of pixels in C;
the same method yields a temporal saliency map Sa based on the optical flow magnitude, where the magnitude is normalized as follows and the remaining steps are the same as for the directional saliency;
the final temporal saliency map St(x, y) is defined as a linear weighted sum of the magnitude-based and direction-based temporal saliency maps:
St(x,y)=w1Sa(x,y)+w2Sd(x,y)。
3. the unmanned aerial vehicle aerial video moving object detection method based on time-space-frequency saliency of claim 1, characterized in that the specific steps of step 2 are as follows:
step 21: traversing the gray-scale image from pixel (0, 0) and examining the 4-neighborhood of each pixel; if the gray-value difference is smaller than a threshold, the pixels belong to the same connected region, otherwise a new connected region is started; repeating until the whole image has been traversed;
step 22: calculating the gray average value of each connected region, and uniformly assigning values to all pixel points in the region to obtain an image M;
step 23: counting the number of pixels in each connected region of the image M, computing each region's frequency of occurrence, and taking the negative logarithm of the frequency to obtain the spatial saliency:
Ss(x,y)=-log(Nconnect(i)/Nconnect)
wherein Nconnect(i) is the number of pixels in the same connected region as point (x, y), and Nconnect is the number of pixels in M.
4. The unmanned aerial vehicle aerial video moving object detection method based on time-space-frequency saliency of claim 1, characterized in that the specific steps of step 3 are as follows:
step 31: given a gray-scale image H(x, y), converting it from the spatial domain to the frequency domain by the two-dimensional discrete Fourier transform F to obtain its frequency-domain representation F[H(x, y)];
step 32: obtaining the amplitude A(f) and phase P(f) of F[H(x, y)]:
A(f)=|F[H(x,y)]|
step 33: taking the logarithm of the amplitude A(f) of F[H(x, y)] to obtain the log spectrum L(f):
L(f)=log(A(f))
step 34: smoothing the log spectrum with a local smoothing filter hn(f):
M(f)=L(f)*hn(f)
wherein hn(f) is an n × n matrix whose entries are all equal, i.e., hn(f)=(1/n²)·1n×n, where 1n×n is the n × n all-ones matrix;
step 35: the spectral residual is the difference between the log spectrum and its mean-filtered version:
R(f)=L(f)-M(f)
step 36: applying the two-dimensional inverse discrete Fourier transform to the spectral residual R(f) and the phase P(f) converts them from the frequency domain back to the spatial domain, as shown below;
T(x,y)=|F-1[exp{R(f)+iP(f)}]|2
step 37: reconstructing an image by Gaussian-filtering the spatial-domain result of the spectral residual; this image represents the saliency of each pixel of the original image and serves as the saliency map:
Sf(x,y)=T(x,y)*Gaussian。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811552410.1A CN109767454B (en) | 2018-12-18 | 2018-12-18 | Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811552410.1A CN109767454B (en) | 2018-12-18 | 2018-12-18 | Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109767454A CN109767454A (en) | 2019-05-17 |
CN109767454B true CN109767454B (en) | 2022-05-10 |
Family
ID=66450293
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811552410.1A Active CN109767454B (en) | 2018-12-18 | 2018-12-18 | Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109767454B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110148149B (en) * | 2019-05-20 | 2024-01-30 | 哈尔滨工业大学(威海) | Hot wake segmentation method of underwater vehicle based on local contrast accumulation |
CN111950549B (en) * | 2020-08-12 | 2022-07-22 | 上海大学 | Sea surface obstacle detection method based on fusion of sea antennas and visual saliency |
CN112001991B (en) * | 2020-10-27 | 2021-01-26 | 中国空气动力研究与发展中心高速空气动力研究所 | High-speed wind tunnel dynamic oil flow map image processing method |
CN113449658A (en) * | 2021-07-05 | 2021-09-28 | 四川师范大学 | Night video sequence significance detection method based on spatial domain, frequency domain and time domain |
CN113591708B (en) * | 2021-07-30 | 2023-06-23 | 金陵科技学院 | Meteorological disaster monitoring method based on satellite-borne hyperspectral image |
CN114511851B (en) * | 2022-01-30 | 2023-04-04 | 中国南水北调集团中线有限公司 | Hairspring algae cell statistical method based on microscope image |
CN115861365B (en) * | 2022-10-11 | 2023-08-15 | 海南大学 | Moving object detection method, system, computer device and storage medium |
CN116449332B (en) * | 2023-06-14 | 2023-08-25 | 西安晟昕科技股份有限公司 | Airspace target detection method based on MIMO radar |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101303727A (en) * | 2008-07-08 | 2008-11-12 | 北京中星微电子有限公司 | Intelligent management method and system based on video people-counting statistics
CN101634706A (en) * | 2009-08-19 | 2010-01-27 | 西安电子科技大学 | Method for automatically detecting bridge target in high-resolution SAR images |
CN103075998A (en) * | 2012-12-31 | 2013-05-01 | 华中科技大学 | Monocular space target distance-measuring and angle-measuring method |
CN103077533A (en) * | 2012-12-26 | 2013-05-01 | 中国科学技术大学 | Method for positioning moving target based on frogeye visual characteristics |
CN103679196A (en) * | 2013-12-05 | 2014-03-26 | 河海大学 | Method for automatically classifying people and vehicles in video surveillance |
CN104050477A (en) * | 2014-06-27 | 2014-09-17 | 西北工业大学 | Infrared image vehicle detection method based on auxiliary road information and significance detection |
CN104777453A (en) * | 2015-04-23 | 2015-07-15 | 西北工业大学 | Wave beam domain time-frequency analysis method for warship line spectrum noise source positioning |
CN105303571A (en) * | 2015-10-23 | 2016-02-03 | 苏州大学 | Time-space saliency detection method for video processing |
CN107122715A (en) * | 2017-03-29 | 2017-09-01 | 哈尔滨工程大学 | It is a kind of based on frequency when conspicuousness combine moving target detecting method |
CN108229487A (en) * | 2016-12-12 | 2018-06-29 | 南京理工大学 | A kind of conspicuousness detection method of combination spatial domain and frequency domain |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101303727A (en) * | 2008-07-08 | 2008-11-12 | 北京中星微电子有限公司 | Intelligent management method and system based on video people-counting statistics
CN101634706A (en) * | 2009-08-19 | 2010-01-27 | 西安电子科技大学 | Method for automatically detecting bridge target in high-resolution SAR images |
CN103077533A (en) * | 2012-12-26 | 2013-05-01 | 中国科学技术大学 | Method for positioning moving target based on frogeye visual characteristics |
CN103075998A (en) * | 2012-12-31 | 2013-05-01 | 华中科技大学 | Monocular space target distance-measuring and angle-measuring method |
CN103679196A (en) * | 2013-12-05 | 2014-03-26 | 河海大学 | Method for automatically classifying people and vehicles in video surveillance |
CN104050477A (en) * | 2014-06-27 | 2014-09-17 | 西北工业大学 | Infrared image vehicle detection method based on auxiliary road information and significance detection |
CN104777453A (en) * | 2015-04-23 | 2015-07-15 | 西北工业大学 | Beam-domain time-frequency analysis method for warship line-spectrum noise source localization |
CN105303571A (en) * | 2015-10-23 | 2016-02-03 | 苏州大学 | Time-space saliency detection method for video processing |
CN108229487A (en) * | 2016-12-12 | 2018-06-29 | 南京理工大学 | A saliency detection method combining the spatial and frequency domains |
CN107122715A (en) * | 2017-03-29 | 2017-09-01 | 哈尔滨工程大学 | A moving target detection method combining frequency-domain and temporal saliency |
Non-Patent Citations (6)
Title |
---|
Accurate Object Segmentation for Video Sequences via Temporal-Spatial-Frequency Saliency Model; Bing Xu et al.; IEEE Intelligent Systems; 2017-10-11; pp. 18-28 * |
Overlapped fruit recognition for citrus harvesting robot in natural scenes; Yang ChangHui et al.; 2017 2nd International Conference on Robotics and Automation Engineering; 2018-02-15; pp. 398-402 * |
Vehicle abnormal behavior detection in traffic surveillance video; Song Yao et al.; Video Application and Engineering; 2015-12-31; vol. 39, no. 14, pp. 107-111 * |
Video moving object segmentation based on a symmetric difference algorithm; Xiao Lijun et al.; Journal of Jilin University (Science Edition); 2008-07-31; vol. 46, no. 4, pp. 691-696 * |
Research on saliency-based moving target detection; Cai Jiali; China Masters' Theses Full-text Database, Information Science and Technology; 2016-08-15; vol. 2016, no. 8, I138-991 * |
A moving target tracking framework based on visual saliency and enhanced feature point matching; Li Peng et al.; Journal of Hunan University of Science and Technology (Natural Science Edition); 2017-12-31; vol. 32, no. 4, pp. 61-68 * |
Also Published As
Publication number | Publication date |
---|---|
CN109767454A (en) | 2019-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109767454B (en) | Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance | |
CN111862126B (en) | Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm | |
CN109272489B (en) | Infrared weak and small target detection method based on background suppression and multi-scale local entropy | |
WO2018024030A1 (en) | Saliency-based method for extracting road target from night vision infrared image | |
CN103325112B (en) | Fast moving target detection method in dynamic scenes | |
WO2019042232A1 (en) | Fast and robust multimodal remote sensing image matching method and system | |
CN109598794B (en) | Construction method of three-dimensional GIS dynamic model | |
CN108446634B (en) | Aircraft continuous tracking method based on combination of video analysis and positioning information | |
CN106683119B (en) | Moving vehicle detection method based on aerial video image | |
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT | |
CA2780595A1 (en) | Method and multi-scale attention system for spatiotemporal change determination and object detection | |
CN104063711B (en) | A fast corridor end point detection algorithm based on the K-means method | |
JP2014504410A (en) | Detection and tracking of moving objects | |
CN114187665B (en) | Multi-person gait recognition method based on human skeleton heat map | |
CN105405138B (en) | Water surface target tracking based on saliency detection | |
CN112446436A (en) | Anti-fuzzy unmanned vehicle multi-target tracking method based on generation countermeasure network | |
CN110245600B (en) | Unmanned aerial vehicle road detection method for self-adaptive initial quick stroke width | |
CN113608663B (en) | Fingertip tracking method based on deep learning and K-curvature method | |
Cho et al. | Semantic segmentation with low light images by modified CycleGAN-based image enhancement | |
Zhang et al. | Multiple Saliency Features Based Automatic Road Extraction from High‐Resolution Multispectral Satellite Images | |
CN113379789B (en) | Moving target tracking method in complex environment | |
Zhang et al. | Multi-FEAT: Multi-feature edge alignment for targetless camera-LiDAR calibration | |
CN111899278A (en) | Unmanned aerial vehicle image rapid target tracking method based on mobile terminal | |
Zhang et al. | An IR and visible image sequence automatic registration method based on optical flow | |
Du et al. | A high-precision vision-based mobile robot slope detection method in unknown environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
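For context alongside these citations, the abstract's pipeline (frequency-domain saliency via the spectral residual method, then linear weighting of the temporal, spatial, and frequency maps and thresholding) can be sketched in NumPy. This is a minimal illustrative sketch, not the patented implementation; the function names, the 3x3 averaging window, the equal fusion weights, and the 0.5 threshold are assumptions.

```python
import numpy as np

def spectral_residual_saliency(gray):
    """Frequency-domain saliency via the spectral residual method:
    subtract the locally averaged log-amplitude spectrum from the
    log-amplitude spectrum, then invert the FFT."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # 3x3 circular box filter of the log-amplitude spectrum (pure NumPy)
    avg = sum(np.roll(np.roll(log_amp, i, axis=0), j, axis=1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    residual = log_amp - avg
    # Back to the spatial domain; squared magnitude gives the saliency map
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def fuse_and_binarize(temporal, spatial, frequency,
                      weights=(1/3, 1/3, 1/3), thresh=0.5):
    """Linearly weight the three saliency maps into a confidence map,
    then threshold it into a binary moving-target mask."""
    conf = (weights[0] * temporal + weights[1] * spatial
            + weights[2] * frequency)
    return (conf >= thresh).astype(np.uint8)
```

In the patent's pipeline the temporal map would come from the Lucas-Kanade optical flow and the spatial map from color distribution; here any three maps normalized to [0, 1] can be fused.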