CN111311644B - Moving target detection method based on video SAR - Google Patents

Moving target detection method based on video SAR Download PDF

Info

Publication number
CN111311644B
CN111311644B CN202010040411.9A CN202010040411A CN111311644B CN 111311644 B CN111311644 B CN 111311644B CN 202010040411 A CN202010040411 A CN 202010040411A CN 111311644 B CN111311644 B CN 111311644B
Authority
CN
China
Prior art keywords
image
shadow
frames
adjacent
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202010040411.9A
Other languages
Chinese (zh)
Other versions
CN111311644A (en
Inventor
李晋
罗先明
闵锐
皮亦鸣
曹宗杰
崔宗勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202010040411.9A priority Critical patent/CN111311644B/en
Publication of CN111311644A publication Critical patent/CN111311644A/en
Application granted granted Critical
Publication of CN111311644B publication Critical patent/CN111311644B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the radar remote sensing application technology, and particularly relates to a moving target detection method based on a video SAR. The method utilizes the shadow generated by the moving target in the video SAR imaging result, considers the influence of points in the vertical direction of the image on the basis of the Wellner adaptive threshold algorithm, and simultaneously utilizes the gray histogram of the image to jointly process to obtain the threshold segmentation result. Then obtaining a background image through background modeling, subtracting the background image from a corresponding threshold segmentation result to obtain a foreground image, and subsequently performing morphological processing to filter partial clutter; and finally, considering that the shadow movement of the adjacent frame of the video SAR is small, and acquiring the final moving target shadow by utilizing the image phase and subtraction filtering interference of the adjacent frame. The method can accurately detect the moving target in the video SAR image by detecting the shadow of the moving target.

Description

Moving target detection method based on video SAR
Technical Field
The invention belongs to the radar remote sensing application technology, and particularly relates to a moving target detection method based on a video SAR.
Background
The requirements of modern battlefields on reconnaissance, accurate positioning and striking are gradually increased, and the requirements on moving target detection, positioning and tracking are higher and higher. The SAR can image the moving target, and the position, the movement direction, the speed of the moving target and even the shape information of the target are represented in a high-resolution image. The conventional SAR moving target detection technology has some defects: firstly, because the SAR imaging frequency is positively correlated with the radar working frequency, the traditional SAR has low working frequency and low imaging frame rate, and the target motion track in the synthetic aperture time can be lost, so that the real-time tracking of the moving target can not be realized; secondly, the minimum detectable speed is high, and the slow target is easy to be interfered by clutter and cannot be detected; moreover, the SAR system which has high resolution and GMTI (ground Moving Target indication) capability has high complexity, and has larger limit on a carrying platform. The video SAR is an SAR system capable of realizing high frame rate imaging, and can realize real-time monitoring of a target by continuously monitoring and imaging a ground target area to form at least 5 frames of images per second. The imaging characteristics of high resolution and high frame rate play an important role, and the moving target detection technology based on the video SAR also becomes a new research hotspot in the field of radar remote sensing.
The moving target detection technology aiming at the traditional SAR has a single channel as follows: spectrum filtering, time-frequency analysis, RDM, Keystone algorithm and the like, but the single-channel algorithm has poor target detection capability; there are also multi-channel algorithms such as: phase center offset (DPCA), along-track interference (ATI), STAP, etc., but the multi-channel algorithm system has higher complexity. The methods aim at the echo of a moving target, and then calculate a back-term scattering coefficient (RCS) to detect the moving target, and the Doppler frequency shift and defocusing generated by the moving target lead to poor detection performance of directly detecting the echo signal of the moving target, small speed measurement range and error generation of position positioning of the moving target.
The moving target detection method for the video SAR has a few researches, has various problems, has a poor detection effect when applied to a video SAR mode, generates shadows due to the fact that an emission signal is blocked in a video SAR imaging result, the shadows correspond to the real position of the target, and the detection through the shadows in the video SAR imaging result has many advantages, such as: (1) the slow target detection performance is good, (2) the moving target is accurately positioned, (3) continuous monitoring can be realized according to the shadow, (4) the detection process is simple to realize, and the like.
The moving target can be detected well through shadow detection, the detection of the moving target is realized by detecting the shadow in the imaging result of the video SAR at present, but the processing flow is more complex, and the threshold segmentation technology adopts the globally same threshold. Because the shadow of the moving target is detected by analyzing the contrast of the shadow and the background clutter and comparing the local area characteristics, the following formula is adopted:
Figure BDA0002367553900000021
wherein DTCR is the ratio of clutter to shadow intensity, σoHRepresenting the intensity, σ, of ambient noise in the shadowoLIndicating the intensity of the echo in the shaded area, σNThe detection effect of the shadow is dependent on the value of the local area DTCR for the total noise of the image, and by using the global same threshold, a better result can be obtained for a considerable part of the objects, but it may cause false detection or false alarm in the case that the shadow itself is bright.
Disclosure of Invention
The invention aims to solve the problems and the defects, and provides a moving target detection method based on a video SAR (synthetic aperture radar). on the basis of the shadow detection technology of the existing video SAR mode, the moving target is detected by detecting the shadow, the threshold segmentation is considered from the local information, the processing flow is simplified, the moving target can be detected more accurately, the false alarm or missing detection is reduced, and the moving target detection effect is improved.
The moving target detection method based on the video SAR is realized by the following steps, and the whole block diagram of the detection process is shown as the attached figure 1. The whole introduction of the process: the invention aims at the detection of the moving target in the imaging result of the video SAR, the video SAR has high frame rate, and if the moving target detection can be realized, the real-time observation of the moving target in the monitoring scene is facilitated. A moving object is detected by detecting a change in shading in successive frame images. Firstly, the registration can match the feature points of continuous multi-frame images, thereby facilitating the subsequent detection of shadow movement. Then, a shadow, a background and a clutter region are distinguished by utilizing binarization, and a background image can be obtained by utilizing a binarization result and background modeling, wherein the background image represents the background of a plurality of continuous images. The subtraction between the images can result in a foreground image. The foreground image is displayed as a changed shadow, but a false alarm caused by partial clutter interference exists, and the moving shadow can be finally obtained by utilizing the regularity of adjacent frame targets and the randomness of the clutter, so that the moving target can be detected.
The technical scheme of the invention is as follows: a moving target detection method based on a video SAR is characterized by comprising the following steps:
s1, acquiring the continuous multi-frame imaging result of the video SAR, and carrying out image acquisition on any continuous 9-frame images Ii-4,Ii-3,Ii-2,Ii-1,Ii,Ii+1,Ii+2,Ii+3,Ii+4The selection of 7 frames is: i isi-4,Ii-2,Ii-1,Ii,Ii+1,Ii+2,Ii+4Wherein IiFor the ith frame image in the continuous image sequence, then adopting SIFT algorithm to carry out image registration, and registering to the 4 th frame I in 7 framesiAs a reference image;
s2, carrying out threshold segmentation on the image: the m × n matrix data of the image is computationally expressed as 1 row of m × n columns, the influence of points in the transverse and longitudinal directions adjacent to any point in the m × n matrix is respectively considered, and points adjacent to the point in the two directions are synthesized for summation, wherein the summation in the transverse direction is as follows:
Figure BDA0002367553900000031
the summation in the longitudinal direction is:
Figure BDA0002367553900000032
wherein p isnIs the gray value of any point in the image gray value matrix, s1、s2Respectively represent p in the transverse and longitudinal directionsnAdjacent points taken as centers, fs1(n) represents the transverse direction s1+1 points of the sum of the gray values, fs2(n) represents s in the longitudinal direction2+1 dot of the sum of the gray values, where s1,s2Value takingOf transverse and longitudinal extent, respectively, of the original image
Figure BDA0002367553900000036
Calculating the whole single-row matrix according to a transverse and longitudinal summation formula to respectively obtain two single-row matrices Fs1(n)、Fs2(n) reverse recovery of the single-row matrix into two mxn matrices F1、F2Then, the two matrixes are summed to obtain a matrix F which simultaneously considers the transverse and longitudinal influencesallWill FallNumber of summations per coordinate s1+s2+2 average to get the average
Figure BDA0002367553900000033
Obtaining a first threshold segmentation result T1(n):
Figure BDA0002367553900000034
Wherein T is a set common variable representing the influence of the value of a point adjacent to the point on the point, T1(n) represents whether the point value is 0 or 1, wherein the result is that 1 represents shadow, 0 represents non-shadow and corresponds to the threshold segmentation result of the video SAR image;
using the value p with the largest number of gray values in the gray value distribution of the whole imagemostTaking 0.9 of the value as preprocessing to obtain a second threshold segmentation result T2(n):
Figure BDA0002367553900000035
Dividing the first threshold into results T1(n) and a second threshold segmentation result T2(n) and operation:
Figure BDA0002367553900000041
Tresultrepresenting the final threshold segmentation result;
s3, acquiring a background image of the current image: obtaining the threshold segmentation result of the 7 frames of images registered in the step S1 according to the step S2, taking 5 frames from the 7 frames of images, and taking: i isi-4,Ii-2,Ii,Ii+2,Ii+4The 5 frames of images are subjected to background modeling under the condition that the sum of the same points is more than 3 and is taken as 1:
Figure BDA0002367553900000042
acquired background image IbackRepresenting the part of the 5 frames with unchanged shadow, namely a static object, the reference image IiAnd its left and right adjacent images Ii-1,Ii+1Is obtained by thresholdingbinarySubtracting the background image to obtain a corresponding foreground image IprospectForeground image IprospectThe part displayed in (1) is the corresponding threshold segmentation result IbinaryMoving object shadow in (1):
Iprospect=Ibinary-Iback
s4, filtering interference between adjacent frames:
adopting the adjacent frame phase-and-phase mode to filter different interference between adjacent frames, and setting the foreground image corresponding to the ith frame image as
Figure BDA0002367553900000049
The adjacent previous frame is
Figure BDA0002367553900000047
Adjacent next frame is
Figure BDA0002367553900000048
Figure BDA0002367553900000043
Wherein I1For the first two of the three adjacent frame foregroundsThe frame sum result is used for filtering random clutter interference of different areas in adjacent frames, and meanwhile, a common part is reserved; performing an etching operation of1Enlarged middle shadow part and then
Figure BDA0002367553900000046
And (3) carrying out the following operation:
Figure BDA0002367553900000044
performing a subtraction operation to filter out the same interference between adjacent frames:
Figure BDA0002367553900000045
I3for the result obtained by subtracting the adjacent frames, after performing the etching operation, the result is compared with I2And operation is carried out to obtain a final shadow detection result:
Figure BDA0002367553900000051
wherein I4And detecting the moving object according to the final shadow detection result.
The method has the advantages of simple flow, and can reduce false alarm and missing detection by adopting a self-adaptive mode during threshold segmentation, and accurately detect the moving target.
Drawings
FIG. 1 is a flow chart of the overall implementation of the present invention;
FIG. 2 is a schematic diagram of a horizontal and vertical processing scheme for each frame of image;
FIG. 3 is a single row matrix after image matrix conversion in a certain direction;
fig. 4 shows the results of the experimental processing flow, where (a) the 4 th frame of the video SAR (b) is the result of threshold segmentation, (c) the background image (d) the foreground image (e) is the result of interference shadow filtering, (f) the result of shadow detection is marked back to the original image.
Detailed Description
The invention is described in detail below with reference to the attached drawing
The invention comprises the following steps:
step 1: processing the imaging video result of the video SAR to generate a multi-frame image, selecting 7 frames from the continuous 9-frame images, and performing image registration by adopting an SIFT algorithm, wherein the registration takes the 4 th frame of the 7 frames as a reference image, and the selection method comprises the following steps: i isi-4、Ii-2、Ii-1、Ii、Ii+1、Ii+2、Ii+4Wherein IiIs the ith frame image in the image sequence.
Step 2: the image needs to be thresholded after registration, so that the edge between the shadow and the environmental clutter is clearer, the threshold segmentation mode is based on a Wellner self-adaptive threshold processing algorithm, the influence of points in the vertical direction of the image is considered on the basis of the algorithm, and meanwhile, the mode in the gray value distribution of the whole image is utilized to filter some interference generated by the threshold segmentation method.
(1) The thresholding method computationally represents the m × n data of an image as 1 row m × n columns, yielding the data format of fig. 3. The threshold used by the algorithm for thresholding each point in the image is constantly changing, and each point is not affected by all points in the whole image, so the values of the points in the transverse and longitudinal directions adjacent to the point are considered as shown in fig. 2, wherein red represents the transverse transformation and blue represents the longitudinal transformation, and the points adjacent to the point in the two directions are summed, wherein the summation in the transverse direction is as follows:
Figure BDA0002367553900000061
the sum in the longitudinal direction is as follows:
Figure BDA0002367553900000062
wherein p isnFor grey scale of imagesThe grey value of a point in the matrix of values, s1、s2Respectively represent p in the transverse and longitudinal directionsnTaking a total of several points adjacent to the centre, fs1(n) represents the transverse direction s1+1 dot of the sum of the gray values. f. ofs2(n) represents s in the longitudinal direction2+1 dot of the sum of the gray values, where s1,s2Taking values of the transverse and longitudinal lengths of the original image respectively
Figure BDA0002367553900000065
Two single-row matrixes F can be obtained by calculating the whole single-row matrix according to a transverse and longitudinal summation formulas1(n)、Fs2(n) wherein due to the start and end of a single row of matrices
Figure BDA0002367553900000066
Or
Figure BDA0002367553900000067
The method can not satisfy simultaneous calculation of left and right, and can directly select the original value corresponding to the image matrix without summation calculation, and then reversely restore the single-row matrix into two m multiplied by n matrixes F1、F2Then, the two matrixes are summed to obtain a matrix F which simultaneously considers the transverse and longitudinal influencesallWill FallThe number s of each coordinate summation1+s2+2 average to get the average
Figure BDA0002367553900000068
Judging the point pnFinally, 0 or 1 is taken according to the following formula:
Figure BDA0002367553900000063
where T is a common variable representing the effect of the value of a point adjacent to the point on the point, T1And (n) represents whether the final value of the point is 0 or 1, wherein the result is that 1 represents shadow, 0 represents non-shadow and corresponds to the threshold segmentation result of the video SAR image.
(2) By the method, a good threshold segmentation effect can be obtained, but the gray value around the local point is large, so that even if the gray value of the current point is large in the whole image, the gray value is finally judged to be 1 to form a false shadow, and the interference to a real target shadow is caused. The method detects the shadow to realize the detection of the moving target, and utilizes the value p with the maximum number of gray values in the gray value distribution in the whole imagemostTaking 0.9 of this value as a pre-treatment, the following equation:
Figure BDA0002367553900000064
wherein p isnThe threshold segmentation result T is obtained according to the gray value of a certain point in the video SAR image2(n) and the threshold value division result T1And (n) filtering false dark regions in the result of the threshold segmentation method to reduce interference to real target shadows.
And step 3: background modeling obtains a background image of the current image. Obtaining the binarization result of the registered adjacent 7 frames of images according to the step 2, and taking 5 frames from the 7 frames of images because the imaging frame rate of the video SAR is high, the time difference between the adjacent frames of images is small, and the moving distance of the moving target is small, wherein the 5 frames of images are taken every 1 frame, for example: i isi-4,Ii-2,Ii,Ii+2,Ii+4And carrying out background modeling on the condition that the sum of the same points of the 5 frames of images is more than 3 and is taken as 1:
Figure BDA0002367553900000071
obtaining a background image IbackRepresenting the part of the 5 frames with unchanged shade, namely a static object, and then segmenting the threshold value of the image of the (I-1, I, I + 1) th frame to obtain an image IbinarySubtracting the background image to obtain a corresponding foreground image IprospectForeground image IprospectThe part displayed in (1) is also the corresponding threshold segmentation nodeFruit IbinaryThe moving object shadow in (1) is expressed by the following relation:
Iprospect=Ibinary-Iback
wherein the variables are defined as described above, and then scattered and fine clutter interference is filtered out by adopting connected domain detection.
And 4, step 4: the foreground image obtained in the step 3 is subjected to connected domain detection to filter scattered and fine clutter, but still more clutter interference is left to cause the possibility of false alarm, and because the clutter interference generated between adjacent frames is different, the background image I in the step 3 isbackNot very accurate, and different random clutter interferences may exist in different frames in the foreground image obtained thereby. For the clutter, the random clutter is filtered by adopting a mode of adjacent frame phase and to reduce the possibility of false alarm, wherein the mode is as follows:
(1) obtaining a foreground image corresponding to the i-1, i, i +1 frame image according to the previous steps, wherein the foreground image corresponding to the i frame image is set as
Figure BDA0002367553900000073
The adjacent previous frame is
Figure BDA0002367553900000074
Adjacent next frame is
Figure BDA0002367553900000075
For these three frames of images, first, the operation is as follows:
Figure BDA0002367553900000072
in the formula I1Processing the sum result of the first two frames in the foreground of the three adjacent frames to filter random clutter interference of different areas in the adjacent frames and simultaneously reserve a common part; these common parts arise for two reasons: firstly, as the time difference between adjacent frames is very small in the video SAR imaging result, the same moving target does not change greatly between the adjacent frames, and an overlapping part still exists; second oneRandom clutter interference exists between adjacent frames, interference of the same region exists at the same time, but the regions are fixed and only appear in adjacent images, and although the regions are static, the background modeling is not judged as the background in the step 3.
The intersection of two frame images can obtain the moving target part, and simultaneously filters out the random clutter interference in different areas, but the processing with the operation is carried out, I1As a result, the shadow of the moving object in the image becomes smaller, and the direct intersection is taken, which may result in missed detection, so that proper corrosion operation is carried out to reduce I1All the shadow parts are enlarged and then are compared with
Figure BDA0002367553900000081
The following processing steps are carried out:
Figure BDA0002367553900000082
I2the results obtained for the AND operation of the first two frames appropriately erode the results of the AND operation of the third frame, in which the random interference portions between adjacent frames are filtered.
(2) The subtraction operation may filter out the same interference between adjacent frames, as:
Figure BDA0002367553900000083
wherein I3The same interference can be filtered for the result of the subtraction between adjacent frames. Also, the shadow of the target is reduced, and after the etching operation is required, the target is etched2And operating to obtain a final shadow detection result, as shown in the formula:
Figure BDA0002367553900000084
wherein I4And the final shadow detection result is obtained.
The following combined simulation verifies the practicability of the invention:
setting an experimental environment: intel i3-4170 processor, Windows operating system, Matlab R2017 a;
setting parameters: when the binarization algorithm in the step 2 is adopted, the values of the horizontal direction and the longitudinal direction are the lengths in all directions
Figure BDA0002367553900000085
the value of t is 15, 8 neighborhoods are adopted during the detection processing of the connected domain, and the number of the connected objects is more than 50 for reservation. The etching operation used a flat disc structure of radius 4, strel ('disk',4) in Matlab.
Video results were imaged using video SAR published by the National laboratory of Sandia National Laboratories, SNL, usa.
According to the technical scheme, the shadow is detected and marked back to the original image, the experimental result is shown in figure 4, and the moving target in the SAR imaging result of the video can be accurately detected by observing the experimental result.

Claims (1)

1. A moving target detection method based on video SAR utilizes shadow generated by a moving target in video SAR imaging results to realize detection of the moving target through detection of the shadow, and is characterized by comprising the following steps:
s1, acquiring the continuous multi-frame imaging result of the video SAR, and carrying out image acquisition on any continuous 9-frame images Ii-4,Ii-3,Ii-2,Ii-1,Ii,Ii+1,Ii+2,Ii+3,Ii+4Selecting 7 frames: i isi-4,Ii-2,Ii-1,Ii,Ii+1,Ii+2,Ii+4Wherein IiFor the ith frame image in the continuous image sequence, then adopting SIFT algorithm to carry out image registration, and registering to the 4 th frame I in 7 framesiAs a reference image;
s2, carrying out threshold segmentation on the image: the m × n matrix data of the image is computationally expressed as 1 row of m × n columns, the influence of points in the transverse and longitudinal directions adjacent to any point in the m × n matrix is respectively considered, and points adjacent to the point in the two directions are synthesized for summation, wherein the summation in the transverse direction is as follows:
Figure FDA0002788412210000011
the summation in the longitudinal direction is:
Figure FDA0002788412210000012
wherein p isnIs the gray value of any point in the image gray value matrix, s1、s2Respectively represent p in the transverse and longitudinal directionsnNumber of adjacent points, f, taken as the centres1(n) represents the transverse direction s1+1 points of the sum of the gray values, fs2(n) represents s in the longitudinal direction2+1 dot of the sum of the gray values, where s1,s21/8 with the values of the horizontal length and the vertical length of the original image respectively, and two single-row matrixes F are obtained by calculating the whole single-row matrix according to a horizontal summation formula and a vertical summation formulas1(n)、Fs2(n) reverse recovery of the single-row matrix into two mxn matrices F1、F2Then, the two matrixes are summed to obtain a matrix F which simultaneously considers the transverse and longitudinal influencesallWill FallNumber of summations per coordinate s1+s2+2 average to get the average
Figure FDA0002788412210000013
Obtaining a first threshold segmentation result T1(n):
Figure FDA0002788412210000014
Wherein T is a set common variable representing the influence of the value of a point adjacent to the point on the point, T1(n) indicates whether the point value is 0 or 1, where a result of 1 represents a shadow0 represents a non-shadow corresponding to a threshold segmentation result of the video SAR image;
using the value p with the largest number of gray values in the gray value distribution of the whole imagemostTaking 0.9 of the value as preprocessing to obtain a second threshold segmentation result T2(n):
Figure FDA0002788412210000021
Dividing the first threshold into results T1(n) and a second threshold segmentation result T2(n) and operation:
Tresult=T1(n)∩T2(n)
Tresultrepresenting the final threshold segmentation result;
s3, acquiring a background image of the current image: obtaining the threshold segmentation result of the 7 frames of images registered in the step S1 according to the step S2, taking 5 frames from the 7 frames of images, and taking: i isi-4,Ii-2,Ii,Ii+2,Ii+4The 5 frames of images are subjected to background modeling under the condition that the sum of the same points is more than 3 and is taken as 1:
Figure FDA0002788412210000022
acquired background image IbackRepresenting the part of the 5 frames with unchanged shadow, namely a static object, the reference image IiAnd its left and right adjacent images Ii-1,Ii+1Is obtained by thresholdingbinarySubtracting the background image to obtain a corresponding foreground image IprospectForeground image IprospectThe part displayed in (1) is the corresponding threshold segmentation result IbinaryMoving object shadow in (1):
Iprospect=Ibinary-Iback
s4, filtering interference between adjacent frames:
filtering phases by taking the phase sum of adjacent framesDifferent interference between adjacent frames is set as the foreground image corresponding to the ith frame image
Figure FDA0002788412210000023
The adjacent previous frame is
Figure FDA0002788412210000024
Adjacent next frame is
Figure FDA0002788412210000025
Figure FDA0002788412210000026
Wherein I1The sum results of the first two frames in the three adjacent frames of the foreground are used for filtering random clutter interference of different areas in the adjacent frames, and meanwhile, the common part is reserved; performing an etching operation of1Enlarged middle shadow part and then
Figure FDA0002788412210000027
And (3) carrying out the following operation:
Figure FDA0002788412210000031
performing a subtraction operation to filter out the same interference between adjacent frames:
Figure FDA0002788412210000032
I3for the result obtained by subtracting the adjacent frames, after performing the etching operation, the result is compared with I2And operation is carried out to obtain a final shadow detection result:
I4=I3∩I2
wherein I4Is the final shadow detection result, and then detectsAnd moving the object.
CN202010040411.9A 2020-01-15 2020-01-15 Moving target detection method based on video SAR Expired - Fee Related CN111311644B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010040411.9A CN111311644B (en) 2020-01-15 2020-01-15 Moving target detection method based on video SAR

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010040411.9A CN111311644B (en) 2020-01-15 2020-01-15 Moving target detection method based on video SAR

Publications (2)

Publication Number Publication Date
CN111311644A CN111311644A (en) 2020-06-19
CN111311644B true CN111311644B (en) 2021-03-30

Family

ID=71161411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010040411.9A Expired - Fee Related CN111311644B (en) 2020-01-15 2020-01-15 Moving target detection method based on video SAR

Country Status (1)

Country Link
CN (1) CN111311644B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784059B (en) * 2020-07-06 2022-04-29 贵州工程应用技术学院 Method for predicting dominant development azimuth of coal seam macroscopic crack
CN113313007B (en) * 2021-05-26 2022-10-14 每日互动股份有限公司 Pedestrian static state identification method based on video, electronic equipment and storage medium
CN114119627B (en) * 2021-10-19 2022-05-17 北京科技大学 High-temperature alloy microstructure image segmentation method and device based on deep learning
CN114511504B (en) * 2022-01-04 2023-11-10 电子科技大学 Video SAR moving target shadow detection method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103744068A (en) * 2014-01-21 2014-04-23 西安电子科技大学 Moving target detection imaging method of dual-channel frequency modulation continuous wave SAR system
CN104318589A (en) * 2014-11-04 2015-01-28 中国电子科技集团公司第十四研究所 ViSAR-based anomalous change detection and tracking method
US8994577B1 (en) * 2012-07-05 2015-03-31 Sandia Corporation Synthetic aperture radar images with composite azimuth resolution
CN105261037A (en) * 2015-10-08 2016-01-20 重庆理工大学 Moving object detection method capable of automatically adapting to complex scenes
CN107230188A (en) * 2017-04-19 2017-10-03 湖北工业大学 A kind of method of video motion shadow removing
CN109917378A (en) * 2018-12-26 2019-06-21 西安电子科技大学 Utilize the VideoSAR moving target detecting method of space time correlation
CN110033455A (en) * 2018-01-11 2019-07-19 上海交通大学 A method of extracting information on target object from video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10078791B2 (en) * 2014-01-09 2018-09-18 Irvine Sensors Corporation Methods and devices for cognitive-based image data analytics in real time

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8994577B1 (en) * 2012-07-05 2015-03-31 Sandia Corporation Synthetic aperture radar images with composite azimuth resolution
CN103744068A (en) * 2014-01-21 2014-04-23 西安电子科技大学 Moving target detection imaging method of dual-channel frequency modulation continuous wave SAR system
CN104318589A (en) * 2014-11-04 2015-01-28 中国电子科技集团公司第十四研究所 ViSAR-based anomalous change detection and tracking method
CN105261037A (en) * 2015-10-08 2016-01-20 重庆理工大学 Moving object detection method capable of automatically adapting to complex scenes
CN107230188A (en) * 2017-04-19 2017-10-03 湖北工业大学 A kind of method of video motion shadow removing
CN110033455A (en) * 2018-01-11 2019-07-19 上海交通大学 A method of extracting information on target object from video
CN109917378A (en) * 2018-12-26 2019-06-21 西安电子科技大学 Utilize the VideoSAR moving target detecting method of space time correlation

Also Published As

Publication number Publication date
CN111311644A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN111311644B (en) Moving target detection method based on video SAR
CN109740445B (en) Method for detecting infrared dim target with variable size
CN107808383B (en) Rapid detection method for SAR image target under strong sea clutter
US20120328161A1 (en) Method and multi-scale attention system for spatiotemporal change determination and object detection
CN110728697A (en) Infrared dim target detection tracking method based on convolutional neural network
CN109633633B (en) Life signal enhancement method based on segmented classification enhancement processing
CN108961255B (en) Sea-land noise scene segmentation method based on phase linearity and power
CN110400294B (en) Infrared target detection system and detection method
CN111079596A (en) System and method for identifying typical marine artificial target of high-resolution remote sensing image
CN110717934B (en) Anti-occlusion target tracking method based on STRCF
CN108647693B (en) Sea surface infrared target detection method based on binary significance characteristics
CN101482969A (en) SAR image speckle filtering method based on identical particle computation
CN111369570A (en) Multi-target detection tracking method for video image
CN110095774B (en) Moving target detection method for circular track video SAR
CN114549642B (en) Low-contrast infrared dim target detection method
Liu et al. Moving dim and small target detection in multiframe infrared sequence with low SCR based on temporal profile similarity
CN111881837B (en) Shadow extraction-based video SAR moving target detection method
CN111161308A (en) Dual-band fusion target extraction method based on key point matching
CN107369163B (en) Rapid SAR image target detection method based on optimal entropy dual-threshold segmentation
CN108828549B (en) Target extraction method based on airport scene surveillance radar system
CN112435249A (en) Dynamic small target detection method based on periodic scanning infrared search system
CN116188510B (en) Enterprise emission data acquisition system based on multiple sensors
CN109544574B (en) Target extraction method based on all-solid-state VTS radar
CN112215146B (en) Weak and small target joint detection and tracking system and method based on random finite set
CN112099018B (en) Moving object detection method and device based on combination of radial speed and regional energy

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210330

Termination date: 20220115