CN104899873B - SAR image salient region detection method based on Anisotropic diffusion space - Google Patents

SAR image salient region detection method based on Anisotropic diffusion space

Info

Publication number
CN104899873B
Authority
CN
China
Prior art keywords
scale
pixel point
matrix
significance
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510254252.1A
Other languages
Chinese (zh)
Other versions
CN104899873A (en)
Inventor
张强
吴艳
王凡
张磊
樊建伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Huoyanwei Optoelectronic Technology Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201510254252.1A priority Critical patent/CN104899873B/en
Publication of CN104899873A publication Critical patent/CN104899873A/en
Application granted granted Critical
Publication of CN104899873B publication Critical patent/CN104899873B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a SAR image salient region detection method based on an anisotropic diffusion space, which mainly solves the problem that existing algorithms cannot accurately and effectively detect salient regions of SAR images under speckle noise. The implementation steps are: (1) using detection windows of different scales, calculate the edge strength and diffusion coefficient of each pixel at each scale; (2) using the row edge parameter matrices and column edge parameter matrices of the different scales, build the scale map and its contrast map at each scale; (3) build corresponding scale windows on each scale map and its contrast map, calculate the scale saliency measures, judge saliency from them, and determine the saliency measure and saliency scale of each pixel; (4) obtain stable salient region coordinates and their region extents by iteration. The invention reduces the influence of speckle noise, improves detection accuracy, and can effectively give the extent of salient regions; it can be used for SAR image target detection and target recognition.

Description

SAR image salient region detection method based on anisotropic diffusion space
Technical Field
The invention belongs to the technical field of image processing, relates to SAR image salient region detection, and can be used for SAR image target detection and target identification.
Background Art
Synthetic aperture radar (SAR) is an active microwave imaging radar. Owing to its day-and-night, all-weather and penetrating imaging capability, it has become an important tool for acquiring data in the remote sensing field. With the growth of SAR image data volume and the development of image analysis technology, the demand for automatic SAR image processing is increasingly strong. In particular, SAR image target detection not only reduces the workload of manual interpretation, but is also the basis and key link of SAR automatic target recognition (ATR). Effectively and accurately obtaining the target regions of an SAR image therefore improves the target recognition efficiency and target positioning accuracy of the SAR image.
The target regions in an SAR image are usually clearly distinguished from the background. In the human visual system, such regions are salient regions of low-level vision that do not depend on scene content, so target regions can be obtained by detecting the salient regions of the SAR image. A classical optical-image salient region detection method is the multi-scale salient region detection method proposed by Laurent Itti, Christof Koch and Ernst Niebur (L. Itti, C. Koch and E. Niebur. A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259). The method first performs a Gaussian pyramid decomposition of the image; early visual features are then obtained from center-surround differences between fine center scales and coarse surround scales; after the center-surround differences are normalized into a saliency map, a winner-take-all strategy is adopted to obtain the positions of the salient regions. Because the method is simple and robust, it has been adopted by many optical image target recognition systems. However, research has found that edge information is the key to judging and locating salient regions, and the Gaussian pyramid decomposition adopted by this method cannot preserve accurate edge positions, which degrades the positioning accuracy of the salient regions. Moreover, because the method cannot give a clear size for a salient region, the extent of the salient region in the image cannot be accurately marked. When the method is applied to SAR images, the large amount of speckle noise generated in the SAR imaging process alters the true image intensity, causing false edges in homogeneous regions and blurring the true edges of brighter salient regions, so the salient regions cannot be judged from accurate edge information and the positioning accuracy decreases.
Disclosure of Invention
The purpose of the invention is to overcome the shortcomings of the prior art described above and to provide an SAR image salient region detection method based on an anisotropic diffusion space, so as to reduce misjudgment of region saliency and of its position coordinates, improve the accuracy of the algorithm, effectively give the extent of the salient regions, and lay a good foundation for subsequent target detection and recognition.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
(1) Input an SAR image SI of size I×J, and calculate the equivalent number of looks ENL of the image using a rectangular homogeneous region R on the image;
(2) Given a false alarm probability p_fa, calculate an initial edge threshold T from the equivalent number of looks ENL;
(3) Set the maximum scale λ_max, the minimum scale λ_min and the scale interval Δλ; letting k take in turn all integers from 0 to (λ_max − λ_min)/Δλ, calculate at scale λ_min + k×Δλ the edge strength g_{i,j,k} of each pixel point (i, j) using a (λ_min + k×Δλ)×(λ_min + k×Δλ) detection window, where i is the row of the pixel point, j is its column, 1 ≤ i ≤ I and 1 ≤ j ≤ J;
(4) From the edge strength g_{i,j,k}, calculate the diffusion coefficient div_{i,j,k} of each pixel point (i, j) at scale λ_min + k×Δλ;
(5) Letting m take in turn all integers from 1 to I and n take in turn all integers from 1 to J, calculate from the diffusion coefficients at scale λ_min + k×Δλ the row edge parameter matrix A_{m,k} of the m-th row and the column edge parameter matrix A′_{n,k} of the n-th column at that scale;
(6) Letting k take in turn all integers from 0 to (λ_max − λ_min)/Δλ − 1, calculate from the row edge parameter matrices and column edge parameter matrices at scale λ_min + k×Δλ the scale map U_k and the contrast map U′_k at scale λ_min + k×Δλ, using an additive operator splitting strategy;
(7) From the scale maps U_k and contrast maps U′_k calculated in (6), calculate an initial saliency matrix Y_T:
7a) calculate the saliency measure S_{i,j,k} of each pixel point (i, j) at scale λ_min + k×Δλ;
7b) find the largest of the (λ_max − λ_min)/Δλ scale saliency measures obtained in 7a); if the scale R_{i,j} corresponding to that measure is λ_min or λ_max − Δλ, then pixel point (i, j) has no salient region and no saliency measure is defined for it; otherwise pixel point (i, j) has a salient region, its saliency measure S_{i,j} is the measure at scale R_{i,j}, and the row vector (i, j, R_{i,j}, S_{i,j}) is added to the initial saliency matrix Y_T;
(8) Select from the initial saliency matrix Y_T the rows corresponding to the largest l% of saliency measures (0 < l ≤ 100) to build a new saliency matrix Y_T′, obtain a stable saliency matrix Y_S by iteration, extract each row of the stable saliency matrix Y_S, and draw the corresponding square salient region in the SAR image, taking the row and column coordinates given by the first two elements of the row as the center and the third element as the side length of the square in pixels.
Compared with the prior art, the invention has the following advantages:
(1) The invention uses detection windows of different scales to judge pixel edge strength and builds the corresponding scale maps by anisotropic diffusion driven by the edge strengths at those scales; it can therefore describe edge information at different scales while giving accurate edge positions, which improves the accuracy of judging and locating region saliency;
(2) When judging region saliency, the saliency measure is calculated with square windows corresponding to the different scales, and the scale of the selected measure is taken as the saliency scale of the region, so the actual extent of the salient region in the image is given.
Simulation results show that, compared with the existing SM salient region detection method, the proposed method effectively describes edge information at different scales, improves the detection accuracy of salient regions, and effectively gives the extent of the salient regions.
Drawings
FIG. 1 is a general flow chart of an implementation of the present invention;
FIG. 2 is a sub-flow chart of the present invention for calculating edge strength;
FIG. 3 is a sub-flow chart of the present invention for calculating diffusion coefficients;
FIG. 4 is a sub-flow diagram of the present invention for calculating a row edge parameter matrix and a column edge parameter matrix;
FIG. 5 is a sub-flow diagram of a computed dimension map and its comparison map of the present invention;
FIG. 6 is a sub-flow diagram of the calculation of a stable significance matrix in the present invention;
FIG. 7 shows scale maps obtained by anisotropic diffusion of a measured SAR image in the present invention;
FIG. 8 is a graph of the salient region detection result of a low-resolution measured SAR image containing a vehicle target according to the present invention;
FIG. 9 is a graph of the salient region detection result of a high resolution measured SAR image containing a vehicle target according to the present invention;
fig. 10 is a diagram showing the detection result of the salient region of the measured SAR image containing the ship target according to the present invention.
Detailed Description
The embodiments and effects of the present invention are further described below with reference to the accompanying drawings:
referring to fig. 1, the specific implementation steps of the present invention are as follows:
step 1, inputting an SAR image SI with the size of I multiplied by J, and calculating the equivalent vision ENL of the image by utilizing a rectangular homogeneous region R on the image:
where mean (-) is the mean, var (-) is the variance, (m, n) ∈ R indicates that the pixel (m, n) is contained in the region R, and xm,nAnd (3) representing the pixel value of the pixel point (m, n), wherein I is the row number of the image, and J is the column number of the image.
Step 2, given the false alarm probability p_fa, calculate an initial edge threshold T from the equivalent number of looks ENL:
where Qinv(·,·) is the inverse incomplete gamma function; the false alarm probability p_fa is set according to the degree of saliency of the targets in the image and is set to 10% in this example.
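The closed-form expression for T is not reproduced in this text. Purely as an illustration, the sketch below shows one plausible reading, assuming T is the (1 − p_fa) quantile of a unit-mean Gamma(ENL) speckle model obtained with the inverse regularized incomplete gamma function; it should be treated as an assumption, not as the patented formula.

from scipy.special import gammaincinv

def initial_edge_threshold(p_fa, enl):
    # Assumed form (see note above): T such that a unit-mean Gamma(ENL) speckle
    # sample exceeds T with probability p_fa, via the inverse regularized
    # incomplete gamma function Qinv.
    return gammaincinv(enl, 1.0 - p_fa) / enl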
Step 3, calculate the edge strength g_{i,j,k} of each pixel point using the detection windows.
Referring to fig. 2, the specific implementation of this step is as follows:
3a) Set the parameters: maximum scale λ_max, minimum scale λ_min and scale interval Δλ; set the initial row coordinate of pixel point (i, j) to i = 1, the initial column coordinate to j = 1, and the scale coefficient to k = 0, where λ_max, λ_min and Δλ are set according to the possible size of the targets in the image; in this example λ_max ≤ 60, λ_min ≥ 2 and 2 ≤ Δλ ≤ 10;
3b) Using all pixel points in the (λ_min + k×Δλ)×(λ_min + k×Δλ) detection window centered on pixel point (i, j), calculate the mean μ_{i,j,k} and the squared mean μ′_{i,j,k} of the pixel point at scale λ_min + k×Δλ:
where μ_{i,j,k} is the average pixel value of the pixel points in the (λ_min + k×Δλ)×(λ_min + k×Δλ) detection window centered on pixel point (i, j), and μ′_{i,j,k} is the average of the squares of the pixel values of the pixel points in that detection window;
3c) Using the initial edge threshold T and the mean μ_{i,j,k}, calculate the edge threshold T_{i,j,k} of pixel point (i, j) at scale λ_min + k×Δλ:
T_{i,j,k} = μ_{i,j,k} × T;
3d) Count, in the (λ_min + k×Δλ)×(λ_min + k×Δλ) detection window centered on pixel point (i, j), the number num_{i,j,k} of pixel points whose value is greater than or equal to T_{i,j,k} and the number num′_{i,j,k} of pixel points whose value is less than T_{i,j,k}, and calculate the edge strength g_{i,j,k} of pixel point (i, j) at scale λ_min + k×Δλ:
g_{i,j,k} = (μ′_{i,j,k}/μ_{i,j,k} − 1) × min(num_{i,j,k}/(λ_min + k×Δλ)^2, num′_{i,j,k}/(λ_min + k×Δλ)^2)
where min(·,·) denotes taking the smaller of the two;
3e) Jump to the corresponding step according to the scale: when λ_min + k×Δλ < λ_max, let k = k + 1 and return to step 3b); otherwise execute step 3f);
3f) Jump to the corresponding step according to the pixel point coordinates: when j ≠ J, let j = j + 1 and k = 0, and return to step 3b); when i ≠ I and j = J, let i = i + 1, j = 1 and k = 0, and return to step 3b); when i = I and j = J, execute step 4.
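A direct, unoptimized Python sketch of steps 3b) to 3d) for a single scale follows; the reflect-padding used at the image border is an implementation assumption, and the ratio term (μ′/μ − 1) is taken exactly as printed above.

import numpy as np

def edge_strength_at_scale(SI, win, T):
    # Edge strength g at one scale (detection window win x win), per steps 3b)-3d).
    SI = SI.astype(float)
    half = win // 2
    I, J = SI.shape
    g = np.zeros_like(SI)
    P = np.pad(SI, half, mode='reflect')          # border handling: an assumption
    for i in range(I):
        for j in range(J):
            w = P[i:i + win, j:j + win]           # window centred on (i, j)
            mu = w.mean()                         # 3b) window mean
            mu2 = (w ** 2).mean()                 #     mean of squared values
            T_ij = mu * T                         # 3c) local edge threshold
            num = np.count_nonzero(w >= T_ij)     # 3d) pixels at or above threshold
            num_p = win * win - num               #     pixels below threshold
            g[i, j] = (mu2 / mu - 1.0) * min(num, num_p) / float(win * win)
    return g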
It should be noted that the calculation of SAR image edge strength is not limited to the method given above in this example; any of the following prior-art methods may also be employed:
First, the ratio of averages (ROA) method, see R. Touzi, A. Lopès, and P. Bousquet. A statistical and geometrical edge detector for SAR images. IEEE Trans. Geosci. Remote Sensing, 1988, 26(6): 764-773;
Second, the likelihood ratio (LR) method, see C. J. Oliver, D. Blacknell, and R. G. White. Optimum edge detection in SAR. IEE Proceedings - Radar, Sonar and Navigation, 1996, 143(1): 31-40;
Third, the ratio of exponentially weighted averages (ROEWA) method, see R. Fjørtoft, A. Lopès, P. Marthon, and E. Cubero-Castan. An Optimal Multiedge Detector for SAR Image Segmentation. IEEE Trans. Geosci. Remote Sensing, 1998, 36(3): 793-802.
Step 4, from the edge strength g_{i,j,k}, calculate the diffusion coefficient div_{i,j,k} of each pixel point at the different scales.
Referring to fig. 3, the specific implementation of this step is as follows:
4a) Set the initial row coordinate of pixel point (i, j) to i = 1, the initial column coordinate to j = 1, and the scale coefficient to k = 0;
4b) From the edge strength g_{i,j,k}, calculate the diffusion coefficient div_{i,j,k} of pixel point (i, j) at scale λ_min + k×Δλ:
where gt is a preset edge parameter whose value range is determined from the edge strengths g_{i,j,0} measured over all i from 1 to I and j from 1 to J;
4c) Jump to the corresponding step according to the scale: when λ_min + k×Δλ < λ_max, let k = k + 1 and return to step 4b); otherwise execute step 4d);
4d) Jump to the corresponding step according to the pixel point coordinates: when j ≠ J, let j = j + 1 and k = 0, and return to step 4b); when i ≠ I and j = J, let i = i + 1, j = 1 and k = 0, and return to step 4b); when i = I and j = J, execute step 5.
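The display formula for div_{i,j,k} is likewise not reproduced in this text. The sketch below uses the classic Perona-Malik conductance as a stand-in that matches the qualitative behaviour described here (diffusion close to 1 in homogeneous areas, close to 0 at strong edges); it is an assumption, not the patented expression.

import numpy as np

def diffusion_coefficient(g, gt):
    # Stand-in Perona-Malik conductance (assumption, see note above): close to 1
    # where the edge strength g is small, close to 0 at strong edges; gt plays
    # the role of the edge parameter described in step 4b).
    return np.exp(-(g / gt) ** 2)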
Step 5, from the diffusion coefficients div_{i,j,k} of each pixel point at each scale, calculate the row edge parameter matrix A_{m,k} and the column edge parameter matrix A′_{n,k} at each scale.
Referring to fig. 4, the specific implementation of this step is as follows:
5a) Set the initial row coordinate of the image to m = 1, the initial column coordinate to n = 1, and the scale coefficient to k = 0;
5b) Calculate the row edge parameter matrix A_{m,k} of the m-th row at scale λ_min + k×Δλ:
where: a_{i,i-1} = −Δλ×(div_{m,i,k} + div_{m,i-1,k}),
a_{i,i+1} = −Δλ×(div_{m,i,k} + div_{m,i+1,k}),
a_{i,i} = 1 + Δλ×(2×div_{m,i,k} + div_{m,i-1,k} + div_{m,i+1,k}), 1 ≤ i ≤ J;
5c) Jump to the corresponding step according to the row number: when m ≠ I, let m = m + 1 and return to step 5b); when m = I, execute step 5d);
5d) Calculate the column edge parameter matrix A′_{n,k} of the n-th column at scale λ_min + k×Δλ:
where: a′_{j,j-1} = −Δλ×(div_{j,n,k} + div_{j-1,n,k}),
a′_{j,j+1} = −Δλ×(div_{j,n,k} + div_{j+1,n,k}),
a′_{j,j} = 1 + Δλ×(2×div_{j,n,k} + div_{j-1,n,k} + div_{j+1,n,k}), 1 ≤ j ≤ I;
5e) Jump to the corresponding step according to the column number: when n ≠ J, let n = n + 1 and return to step 5d); when n = J, execute step 5f);
5f) Jump to the corresponding step according to the scale: when λ_min + k×Δλ ≠ λ_max, let m = 1, n = 1 and k = k + 1, and return to step 5b); when λ_min + k×Δλ = λ_max, execute step 6.
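For illustration, the following Python sketch assembles the tridiagonal row edge parameter matrix A_{m,k} from the diffusion coefficients of one image row, using exactly the entries of step 5b); the replication of boundary neighbours is an implementation assumption, since the text does not spell out the boundary case. The column matrix A′_{n,k} is built in the same way from one image column.

import numpy as np

def row_edge_parameter_matrix(div_row, d_lam):
    # Tridiagonal matrix A_{m,k} for one image row; entries as in step 5b).
    J = len(div_row)
    d = np.pad(np.asarray(div_row, dtype=float), 1, mode='edge')  # replicated boundary neighbours
    A = np.zeros((J, J))
    for i in range(J):
        c, left, right = d[i + 1], d[i], d[i + 2]   # centre and neighbour diffusion coefficients
        A[i, i] = 1.0 + d_lam * (2.0 * c + left + right)
        if i > 0:
            A[i, i - 1] = -d_lam * (c + left)
        if i < J - 1:
            A[i, i + 1] = -d_lam * (c + right)
    return A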
Step 6, from the row edge parameter matrices A_{m,k} and column edge parameter matrices A′_{n,k} at the different scales, calculate the corresponding scale maps U_k and contrast maps U′_k using an additive operator splitting strategy.
Referring to fig. 5, the specific implementation of this step is as follows:
6a) Set the initial row coordinate of the image to m = 1, the initial column coordinate to n = 1, and the scale coefficient to k = 0;
6b) Set the initial scale map U_k to the image SI, set the initial contrast map U′_k to the image SI, and set the calculation counter cn = 1;
6c) Using the row edge parameter matrix A_{m,k} and the m-th row elements of the scale map U_k, calculate the m-th row elements of the row scale map by the Thomas algorithm;
6d) Using the row edge parameter matrix A_{m,k+1} and the m-th row elements of the contrast map U′_k, calculate the m-th row elements of the row contrast map by the Thomas algorithm;
6e) Jump to the corresponding step according to the row number: when m ≠ I, let m = m + 1 and return to step 6c); when m = I, execute step 6f);
6f) Using the column edge parameter matrix A′_{n,k} and the n-th column elements of the scale map U_k, calculate the n-th column elements of the column scale map by the Thomas algorithm;
6g) Using the column edge parameter matrix A′_{n,k+1} and the n-th column elements of the contrast map U′_k, calculate the n-th column elements of the column contrast map by the Thomas algorithm;
6h) Jump to the corresponding step according to the column number: when n ≠ J, let n = n + 1 and return to step 6f); when n = J, execute step 6i);
6i) Calculate the scale map U_k and the contrast map U′_k at scale λ_min + k×Δλ;
6j) Jump to the corresponding step according to the calculation counter: when cn < (λ_min + k×Δλ)/2, let m = 1, n = 1 and cn = cn + 1, and return to step 6c); otherwise execute step 6k);
6k) Jump to the corresponding step according to the scale: when λ_min + k×Δλ ≠ λ_max − Δλ, let m = 1, n = 1 and k = k + 1, and return to step 6b); when λ_min + k×Δλ = λ_max − Δλ, execute step 7.
The Thomas algorithm, proposed by the British physicist and mathematician Llewellyn H. Thomas, solves tridiagonal systems of equations by a simplified form of Gaussian elimination; see H. R. Schwarz. Numerische Mathematik. Stuttgart, Germany: Teubner, 1988, 43-45.
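For reference, a self-contained Python sketch of the Thomas solver, together with one row-wise semi-implicit pass of the kind used in steps 6c) and 6d), is given below; the matrices are assumed to be supplied as full arrays and only their three diagonals are used. The column-wise passes of steps 6f) and 6g) are analogous, applied to image columns.

import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    # Thomas algorithm: solve a tridiagonal system by simplified Gaussian
    # elimination. lower/upper have length n-1, diag and rhs have length n.
    n = len(diag)
    c = np.zeros(n - 1)
    d = np.zeros(n)
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / denom
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def row_pass(U, row_matrices):
    # One row-wise semi-implicit pass (step 6c)): for every image row m,
    # solve A_{m,k} * u_new = u_old with the Thomas algorithm.
    V = np.empty_like(U, dtype=float)
    for m, A in enumerate(row_matrices):
        V[m, :] = thomas_solve(np.diag(A, -1), np.diag(A), np.diag(A, 1),
                               U[m, :].astype(float))
    return V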
Step 7, from the scale maps U_k and contrast maps U′_k, calculate an initial saliency matrix Y_T.
7a) Set the initial row coordinate of pixel point (i, j) to i = 1 and the initial column coordinate to j = 1;
7b) Letting k take in turn all integers from 0 to (λ_max − λ_min)/Δλ − 1, calculate the saliency measure S_{i,j,k} of each pixel point (i, j) at scale λ_min + k×Δλ:
where x is a pixel value varying from 0 to 255, p_{i,j,k}(x) is the probability of pixel value x among the pixel points of the (λ_min + k×Δλ)×(λ_min + k×Δλ) square window of the scale map U_k centered on pixel point (i, j), and p′_{i,j,k}(x) is the probability of pixel value x among the pixel points of the (λ_min + k×Δλ)×(λ_min + k×Δλ) square window of the contrast map U′_k centered on pixel point (i, j);
7c) Find the largest of the (λ_max − λ_min)/Δλ scale saliency measures determined in 7b); if the scale R_{i,j} corresponding to that measure is λ_min or λ_max − Δλ, then pixel point (i, j) has no salient region and no saliency measure is defined for it; otherwise pixel point (i, j) has a salient region, its saliency measure S_{i,j} is the measure at scale R_{i,j}, and the row vector (i, j, R_{i,j}, S_{i,j}) is added to the initial saliency matrix Y_T;
7d) Jump to the corresponding step according to the pixel point coordinates: when j ≠ J, let j = j + 1 and return to step 7b); when i ≠ I and j = J, let i = i + 1 and j = 1, and return to step 7b); when i = I and j = J, execute step 8.
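The display formula for S_{i,j,k} is not reproduced in this text; it is defined through the two window histograms p_{i,j,k} and p′_{i,j,k} described above. The sketch below uses a symmetric Kullback-Leibler divergence between those histograms purely as an illustrative stand-in and should not be read as the patented measure.

import numpy as np

def window_histogram(U, i, j, win, bins=256):
    # Normalised grey-level histogram of the win x win window of U centred on
    # (i, j); pixel values are assumed to lie in 0..255 as stated above.
    half = win // 2
    w = U[max(i - half, 0):i + half + 1, max(j - half, 0):j + half + 1]
    h, _ = np.histogram(w, bins=bins, range=(0, bins))
    return h / h.sum()

def saliency_measure(U_k, U_contrast_k, i, j, win, eps=1e-12):
    # Stand-in saliency measure (assumption, see note above): symmetric KL
    # divergence between the window histograms of the scale map and contrast map.
    p = window_histogram(U_k, i, j, win) + eps
    q = window_histogram(U_contrast_k, i, j, win) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))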
Step 8, from the initial saliency matrix Y_T, obtain a stable saliency matrix Y_S.
Referring to fig. 6, the specific implementation of this step is as follows:
8a) Select from the initial saliency matrix Y_T the rows corresponding to the largest l% of saliency measures to build a new saliency matrix Y_T′, where 0 < l ≤ 100 and in this example l is no more than 20;
8b) Set a region saliency ratio sr of not less than 0.3, and set the stable saliency matrix Y_S to an empty matrix;
8c) Select the pixel point with the largest saliency measure in the new saliency matrix Y_T′ as the candidate point, construct a square window centered on the first two elements of the corresponding row with side length given by its third element, and calculate the ratio sr′ of the number of pixel points with a salient region inside the square window to the total number of pixel points in the window;
8d) Compare sr′ with sr: if sr′ < sr, remove the row of the candidate point from the new saliency matrix Y_T′; otherwise, add the row of the candidate point to the stable saliency matrix Y_S, and then remove the candidate point and all pixel points with salient regions inside the square window from the new saliency matrix Y_T′;
8e) Judge whether the new saliency matrix Y_T′ is empty: if it is empty, stop and output the stable saliency matrix Y_S; otherwise return to step 8c).
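A direct Python sketch of this iteration follows, assuming Y_T′ is supplied as an array whose rows are (i, j, R_{i,j}, S_{i,j}); treating a pixel as inside the window when both coordinate offsets are at most half the side length is an implementation assumption.

import numpy as np

def stable_saliency_matrix(Y_Tp, sr):
    # Iterative selection of the stable saliency matrix Y_S (steps 8b)-8e)).
    # Y_Tp: rows (i, j, R_ij, S_ij) already restricted to the top l% of measures.
    rows = [tuple(r) for r in np.asarray(Y_Tp, dtype=float)]
    Y_S = []
    while rows:
        cand = max(rows, key=lambda r: r[3])        # 8c) largest remaining measure
        ci, cj, R = cand[0], cand[1], cand[2]
        half = R / 2.0
        inside = [r for r in rows
                  if abs(r[0] - ci) <= half and abs(r[1] - cj) <= half]
        sr_prime = len(inside) / float(R * R)       # salient pixels / window pixels
        if sr_prime < sr:
            rows.remove(cand)                       # 8d) discard the candidate
        else:
            Y_S.append(cand)                        # 8d) keep it as a stable region
            for r in inside:                        #     and clear the covered pixels
                rows.remove(r)
    return np.array(Y_S)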
Step 9, extract each row of the stable saliency matrix Y_S and draw the corresponding square salient region in the SAR image, where the first two elements of the row are the row and column coordinates of the center of the square region and the third element is the side length of the square region.
The effects of the invention can be further illustrated by the following simulations:
1. conditions of the experiment
The experimental simulation environment is: MATLAB R2011b, Intel(R) Core i5-3470 CPU 3.2 GHz, Windows 7 Professional Edition.
2. The experimental contents and results are as follows:
Experiment 1: 9 × 9 and 17 × 17 detection windows are applied to a measured SAR image, and the scale map of the corresponding scale is then obtained with the additive operator splitting strategy. Fig. 7(a) is the measured SAR image, fig. 7(b) is the scale map at scale 9, and fig. 7(c) is the scale map at scale 17.
As can be seen from fig. 7(b), since the edge scale of the vehicle and the terrain is larger than the size of the 9 × 9 detection window, when the edge strength obtained by the window is applied to the anisotropic diffusion, although the homogeneous region becomes blurred, the edge information of the vehicle and the terrain is well maintained.
As can be seen from fig. 7(c), since the vehicle edge dimension is smaller than the 17 × 17 detection window size, when the edge strength of the vehicle edge obtained by the window is applied to the anisotropic diffusion, the vehicle edge becomes blurred, and the corresponding terrain edge is always larger than the detection window size, so that the terrain edge information is still maintained.
Experiment 2: salient region detection is performed on a low-resolution measured SAR image containing vehicle targets using the method of the present invention and the existing SM salient region detection algorithm (L. Itti, C. Koch and E. Niebur. A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259).
The parameters are set as follows: the homogeneous region R is the rectangular region with row coordinates 58-225 and column coordinates 109-268; the false alarm probability p_fa is 0.1; the maximum scale λ_max is 33, the minimum scale λ_min is 5, and the scale interval Δλ is 4; l is 15; the region saliency ratio sr is 1.
The detection result is shown in fig. 8, where fig. 8(a) is a low-resolution actually measured SAR image containing a vehicle target, fig. 8(b) is the detection result of the significant region of fig. 8(a) by using the SM significant region detection algorithm, and fig. 8(c) is the detection result of the significant region of fig. 8(a) by the method of the present invention.
As can be seen from fig. 8(a), the vehicle object has a saliency in the entire image.
As can be seen from fig. 8(b), the SM salient region detection algorithm detects only 6 of the 13 vehicle targets and falsely detects part of a homogeneous region as a target; in addition, although the method marks fairly accurate vehicle target positions in this low-resolution image, it cannot accurately mark the target extent.
As can be seen from fig. 8(c), the present invention can accurately detect the positions of all vehicle targets, and no false detection occurs in the homogeneous region, while the marked target range matches the actual size of the vehicle target.
Experiment 3, the method of the invention and the existing SM salient region detection algorithm are respectively used for carrying out salient region detection on the high-resolution actually-measured SAR image containing the vehicle target.
The parameters are set as follows: the homogeneous region R is the rectangular region with row coordinates 35-74 and column coordinates 180-209; the false alarm probability p_fa is 0.1; the maximum scale λ_max is 57, the minimum scale λ_min is 9, and the scale interval Δλ is 8; l is 10; the region saliency ratio sr is 0.8.
The detection result is shown in fig. 9, where fig. 9(a) is a high-resolution actually measured SAR image containing a vehicle target, fig. 9(b) is the detection result of the significant region of fig. 9(a) by using the SM significant region detection algorithm, and fig. 9(c) is the detection result of the significant region of fig. 9(a) by the method of the present invention.
As can be seen from fig. 9(a), the vehicle object has a saliency in the entire image.
As can be seen from fig. 9(b), the SM salient region detection algorithm detects all 13 vehicle targets, but it is also affected by terrain variations and falsely detects terrain-variation regions as targets; moreover, the method can neither accurately give the positions of the vehicle targets nor mark an accurate target extent.
As can be seen from fig. 9(c), the present invention is not affected by the terrain variations and produces no false detections; it accurately detects the positions of all vehicle targets and marks target extents that match the actual sizes of the vehicle targets.
Experiment 4, the method of the invention and the existing SM salient region detection algorithm are respectively used for carrying out salient region detection on the actually measured SAR image containing the ship target.
The parameters are set as follows: the homogeneous region R is the rectangular region with row coordinates 287-335 and column coordinates 226-292; the false alarm probability p_fa is 0.1; the maximum scale λ_max is 31, the minimum scale λ_min is 7, and the scale interval Δλ is 2; l is 6.6; the region saliency ratio sr is 0.31.
The detection result is shown in fig. 10, where fig. 10(a) is a measured SAR image containing a ship target, fig. 10(b) is the detection result of the significant region of fig. 10(a) by using SM significant region detection algorithm, and fig. 10(c) is the detection result of the significant region of fig. 10(a) by the method of the present invention.
As can be seen from fig. 10(a), the ship target has a saliency in the entire image.
As can be seen from fig. 10(b), the SM salient region detection algorithm cannot accurately mark the positions of the ship targets.
As can be seen from fig. 10(c), for the elongated ship targets the method not only accurately detects the positions of all ship targets, but also covers a target extent matching the actual size of each ship target by overlapping several salient regions.

Claims (9)

1. A SAR image salient region detection method based on an anisotropic diffusion space comprises the following steps:
(1) inputting an SAR image SI of size I×J, and calculating the equivalent number of looks ENL of the image using a rectangular homogeneous region R on the image;
(2) given a false alarm probability p_fa, calculating an initial edge threshold T from the equivalent number of looks ENL;
(3) setting the maximum scale λ_max, the minimum scale λ_min and the scale interval Δλ; letting k take in turn all integers from 0 to (λ_max − λ_min)/Δλ, calculating at scale λ_min + k×Δλ the edge strength g_{i,j,k} of each pixel point (i, j) using a (λ_min + k×Δλ)×(λ_min + k×Δλ) detection window, where i is the row of the pixel point, j is its column, 1 ≤ i ≤ I and 1 ≤ j ≤ J;
(4) from the edge strength g_{i,j,k}, calculating the diffusion coefficient div_{i,j,k} of each pixel point (i, j) at scale λ_min + k×Δλ;
(5) letting m take in turn all integers from 1 to I and n take in turn all integers from 1 to J, calculating from the diffusion coefficients at scale λ_min + k×Δλ the row edge parameter matrix A_{m,k} of the m-th row and the column edge parameter matrix A′_{n,k} of the n-th column at that scale;
(6) letting k take in turn all integers from 0 to (λ_max − λ_min)/Δλ − 1, calculating from the row edge parameter matrices and column edge parameter matrices at scale λ_min + k×Δλ the scale map U_k and the contrast map U′_k at scale λ_min + k×Δλ, using an additive operator splitting strategy;
(7) from the scale maps U_k and contrast maps U′_k calculated in (6), calculating an initial saliency matrix Y_T:
7a) calculating the saliency measure S_{i,j,k} of each pixel point (i, j) at scale λ_min + k×Δλ, with k taking in turn all integers from 0 to (λ_max − λ_min)/Δλ − 1;
7b) finding the largest of the (λ_max − λ_min)/Δλ scale saliency measures obtained in 7a); if the scale R_{i,j} corresponding to that measure is λ_min or λ_max − Δλ, then the pixel point (i, j) has no salient region and no saliency measure is defined for it; otherwise the pixel point (i, j) has a salient region, its saliency measure S_{i,j} is the measure at scale R_{i,j}, and the row vector (i, j, R_{i,j}, S_{i,j}) is added to the initial saliency matrix Y_T;
(8) selecting from the initial saliency matrix Y_T the rows corresponding to the largest l% of saliency measures (0 < l ≤ 100) to build a new saliency matrix Y_T′, obtaining a stable saliency matrix Y_S by iteration, extracting each row of the stable saliency matrix Y_S, and drawing the corresponding square salient region in the SAR image, taking the row and column coordinates given by the first two elements of the row as the center and the third element as the side length of the square in pixels.
2. The method of claim 1, wherein the equivalent number of looks ENL in step (1) is calculated as follows:
ENL = [mean(x_{m,n})]^2 / var(x_{m,n}), (m, n) ∈ R,
where mean(·) is the mean, var(·) is the variance, (m, n) ∈ R indicates that pixel point (m, n) lies in the region R, and x_{m,n} is the pixel value of pixel point (m, n).
3. The method of claim 1, wherein the initial edge threshold T in step (2) is calculated according to the following formula:
where Qinv(·,·) is the inverse incomplete gamma function.
4. The method of claim 1, wherein the edge strength g_{i,j,k} of each pixel point (i, j) in step (3) is calculated through the following steps:
(3a) using all pixel points in the (λ_min + k×Δλ)×(λ_min + k×Δλ) detection window centered on pixel point (i, j), calculating the mean μ_{i,j,k} and the squared mean μ′_{i,j,k} of the pixel point at scale λ_min + k×Δλ:
where μ_{i,j,k} is the average pixel value of the pixel points in the (λ_min + k×Δλ)×(λ_min + k×Δλ) detection window centered on pixel point (i, j), and μ′_{i,j,k} is the average of the squares of the pixel values of the pixel points in that detection window;
(3b) using the initial edge threshold T and the mean μ_{i,j,k}, calculating the edge threshold T_{i,j,k} of pixel point (i, j) at scale λ_min + k×Δλ:
T_{i,j,k} = μ_{i,j,k} × T;
(3c) counting, in the (λ_min + k×Δλ)×(λ_min + k×Δλ) detection window centered on pixel point (i, j), the number num_{i,j,k} of pixel points whose value is greater than or equal to T_{i,j,k} and the number num′_{i,j,k} of pixel points whose value is less than T_{i,j,k}, and calculating the edge strength g_{i,j,k} of pixel point (i, j) at scale λ_min + k×Δλ:
g_{i,j,k} = (μ′_{i,j,k}/μ_{i,j,k} − 1) × min(num_{i,j,k}/(λ_min + k×Δλ)^2, num′_{i,j,k}/(λ_min + k×Δλ)^2)
where min(·,·) denotes taking the smaller of the two.
5. The method of claim 1, wherein the diffusion coefficient div_{i,j,k} of each pixel point (i, j) at scale λ_min + k×Δλ in step (4) is calculated as follows:
where gt is a preset edge parameter whose value range is determined from the edge strengths g_{i,j,0} measured over all i from 1 to I and j from 1 to J.
6. The method of claim 1, wherein the row edge parameter matrix A_{m,k} of the m-th row and the column edge parameter matrix A′_{n,k} of the n-th column at scale λ_min + k×Δλ in step (5) are calculated as follows:
where: a_{i,i-1} = −Δλ×(div_{m,i,k} + div_{m,i-1,k}),
a_{i,i+1} = −Δλ×(div_{m,i,k} + div_{m,i+1,k}),
a_{i,i} = 1 + Δλ×(2×div_{m,i,k} + div_{m,i-1,k} + div_{m,i+1,k}), 1 ≤ i ≤ J,
a′_{j,j-1} = −Δλ×(div_{j,n,k} + div_{j-1,n,k}),
a′_{j,j+1} = −Δλ×(div_{j,n,k} + div_{j+1,n,k}),
a′_{j,j} = 1 + Δλ×(2×div_{j,n,k} + div_{j-1,n,k} + div_{j+1,n,k}), 1 ≤ j ≤ I.
7. The method of claim 1, wherein the scale map U_k and the contrast map U′_k at scale λ_min + k×Δλ in step (6) are calculated through the following steps:
(6a) setting the initial scale map U_k to the image SI and the initial contrast map U′_k to the image SI;
(6b) using the row edge parameter matrix A_{m,k} and the m-th row elements of the scale map U_k, calculating the m-th row elements of the row scale map by the Thomas algorithm, 1 ≤ m ≤ I;
(6c) using the row edge parameter matrix A_{m,k+1} and the m-th row elements of the contrast map U′_k, calculating the m-th row elements of the row contrast map by the Thomas algorithm;
(6d) using the column edge parameter matrix A′_{n,k} and the scale map U_k, calculating the n-th column elements of the column scale map by the Thomas algorithm, 1 ≤ n ≤ J;
(6e) using the column edge parameter matrix A′_{n,k+1} and the contrast map U′_k, calculating the n-th column elements of the column contrast map by the Thomas algorithm;
(6f) calculating the scale map U_k and the contrast map U′_k at scale λ_min + k×Δλ;
(6g) judging whether steps (6b) to (6f) have been calculated k times: if so, stopping and outputting the scale map U_k and the contrast map U′_k; otherwise returning to step (6b).
8. The method of claim 1, wherein the saliency measure S_{i,j,k} of each pixel point (i, j) at scale λ_min + k×Δλ in step 7a) is calculated as follows:
where x is a pixel value varying from 0 to 255, p_{i,j,k}(x) is the probability of pixel value x among the pixel points of the (λ_min + k×Δλ)×(λ_min + k×Δλ) square window of the scale map U_k centered on pixel point (i, j), and p′_{i,j,k}(x) is the probability of pixel value x among the pixel points of the (λ_min + k×Δλ)×(λ_min + k×Δλ) square window of the contrast map U′_k centered on pixel point (i, j).
9. The method of claim 1, wherein the stable saliency matrix Y_S in step (8) is obtained by iteration through the following steps:
(8a) setting the region saliency ratio sr, and setting the stable saliency matrix Y_S to an empty matrix;
(8b) selecting the pixel point with the largest saliency measure in the new saliency matrix Y_T′ as the candidate point, constructing a square window centered on the first two elements of the corresponding row with side length given by its third element, and calculating the ratio sr′ of the number of pixel points with a salient region inside the square window to the total number of pixel points in the window;
(8c) comparing sr′ with sr: if sr′ < sr, removing the row of the candidate point from the saliency matrix Y_T′; otherwise, adding the row of the candidate point to the stable saliency matrix Y_S, and then removing the candidate point and all pixel points with salient regions inside the square window from the saliency matrix Y_T′;
(8d) judging whether the saliency matrix Y_T′ is empty: if it is empty, stopping and outputting the stable saliency matrix Y_S; otherwise returning to step (8b).
CN201510254252.1A 2015-05-18 2015-05-18 SAR image salient region detection method based on Anisotropic diffusion space Active CN104899873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510254252.1A CN104899873B (en) 2015-05-18 2015-05-18 SAR image salient region detection method based on Anisotropic diffusion space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510254252.1A CN104899873B (en) 2015-05-18 2015-05-18 SAR image salient region detection method based on Anisotropic diffusion space

Publications (2)

Publication Number Publication Date
CN104899873A CN104899873A (en) 2015-09-09
CN104899873B (en) 2017-10-24

Family

ID=54032518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510254252.1A Active CN104899873B (en) 2015-05-18 2015-05-18 SAR image salient region detection method based on Anisotropic diffusion space

Country Status (1)

Country Link
CN (1) CN104899873B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301643B (en) * 2017-06-06 2019-08-06 西安电子科技大学 Well-marked target detection method based on robust rarefaction representation Yu Laplce's regular terms

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500453A (en) * 2013-10-13 2014-01-08 西安电子科技大学 SAR(synthetic aperture radar) image significance region detection method based on Gamma distribution and neighborhood information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9330334B2 (en) * 2013-10-24 2016-05-03 Adobe Systems Incorporated Iterative saliency map estimation

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500453A (en) * 2013-10-13 2014-01-08 西安电子科技大学 SAR(synthetic aperture radar) image significance region detection method based on Gamma distribution and neighborhood information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multiple-Scale Salient-Region Detection of SAR Image Based on Gamma Distribution and Local Intensity Variation; Qiang Zhang et al.; IEEE Geoscience and Remote Sensing Letters; Aug. 2014; vol. 11, no. 8; pp. 1370-1374 *
Saliency detection based on local contrast and global rarity; He Liangjie et al.; Application Research of Computers; Sep. 2014; vol. 31, no. 9; pp. 2832-2835, 2840 *
Scale-adaptive saliency detection method for SAR images; Xie Huijie et al.; Computer Engineering and Applications; Apr. 17, 2015; vol. 51, no. 20; pp. 145-152 *

Also Published As

Publication number Publication date
CN104899873A (en) 2015-09-09

Similar Documents

Publication Publication Date Title
US6263089B1 (en) Method and equipment for extracting image features from image sequence
US20120328161A1 (en) Method and multi-scale attention system for spatiotemporal change determination and object detection
CN108986152B (en) Foreign matter detection method and device based on difference image
CN102968799B (en) Integral image-based quick ACCA-CFAR SAR (Automatic Censored Cell Averaging-Constant False Alarm Rate Synthetic Aperture Radar) image target detection method
CN103697855B (en) A kind of hull horizontal attitude measuring method detected based on sea horizon
CN111476159A (en) Method and device for training and detecting detection model based on double-angle regression
CN109242870A (en) A kind of sea horizon detection method divided based on image with textural characteristics
Hu et al. Local edge distributions for detection of salient structure textures and objects
CN105389799B (en) SAR image object detection method based on sketch map and low-rank decomposition
CN110428425B (en) Sea-land separation method of SAR image based on coastline vector data
CN111160477B (en) Image template matching method based on feature point detection
CN112013921B (en) Method, device and system for acquiring water level information based on water level gauge measurement image
CN108230375A (en) Visible images and SAR image registration method based on structural similarity fast robust
CN110889843A (en) SAR image ship target detection method based on maximum stable extremal region
CN114494371A (en) Optical image and SAR image registration method based on multi-scale phase consistency
CN107835998B (en) Hierarchical tiling method for identifying surface types in digital images
CN110390338A (en) A kind of SAR high-precision matching process based on non-linear guiding filtering and ratio gradient
CN104899873B (en) SAR image salient region detection method based on Anisotropic diffusion space
CN107369163B (en) Rapid SAR image target detection method based on optimal entropy dual-threshold segmentation
CN111027512B (en) Remote sensing image quayside ship detection and positioning method and device
CN107729903A (en) SAR image object detection method based on area probability statistics and significance analysis
CN115294439B (en) Method, system, equipment and storage medium for detecting air weak and small moving target
CN114742849B (en) Leveling instrument distance measuring method based on image enhancement
CN111369507B (en) Trail detection method based on normalized gray scale Hough transform and local CFAR
CN114677428A (en) Power transmission line icing thickness detection method based on unmanned aerial vehicle image processing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230620

Address after: Room 124, Room 70102, Xinglinggu Podium Building, Entrepreneurship Research and Development Park, No. 69 Jinye Road, High tech Zone, Xi'an City, Shaanxi Province, 710076

Patentee after: Xi'an Huoyanwei Optoelectronic Technology Co.,Ltd.

Address before: 710071 No. 2 Taibai South Road, Shaanxi, Xi'an

Patentee before: XIDIAN University