CN102902979A - Method for automatic target recognition of synthetic aperture radar (SAR) - Google Patents


Info

Publication number
CN102902979A
Authority
CN
China
Prior art keywords
matrix
training sample
test sample
expression
centerdot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103386300A
Other languages
Chinese (zh)
Other versions
CN102902979B (en)
Inventor
Huang Yulin (黄钰林)
Wang Bing (王兵)
Yang Jianyu (杨建宇)
Wang Tao (王涛)
Wu Junjie (武俊杰)
Li Wenchao (李文超)
Liu Xian (刘娴)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201210338630.0A priority Critical patent/CN102902979B/en
Publication of CN102902979A publication Critical patent/CN102902979A/en
Application granted granted Critical
Publication of CN102902979B publication Critical patent/CN102902979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatic target recognition (ATR) of synthetic aperture radar (SAR). SAR ATR mainly comprises three stages: SAR image preprocessing, feature extraction, and target classification. The method applies to the feature extraction and target classification stages and solves the problem that effective discriminative information cannot be extracted from high-dimensional SAR images. It introduces manifold-structure theory and is based on a neighborhood discriminant embedding criterion. The method comprises the steps of: A, initializing; B, constructing a similarity matrix and a difference matrix; C, calculating an objective matrix on the basis of the maximum margin criterion; D, calculating a projection matrix; E, extracting features from the training samples and from the SAR images to be classified according to the projection matrix, obtaining training-sample and test-sample features; and F, classifying the SAR images under test with a nearest-neighbor classifier. Steps A to E constitute the feature extraction stage and step F the target classification stage. The method improves the probability of correct target recognition.

Description

A method for automatic target recognition with synthetic aperture radar
Technical field:
The invention belongs to the field of automatic target recognition (Automatic Target Recognition, ATR) with synthetic aperture radar (Synthetic Aperture Radar, SAR), and in particular to the fields of SAR image feature extraction and target classification.
Background technology:
As is well known, SAR automatic target recognition is an important application of SAR imaging. It combines modern signal processing with pattern recognition theory and uses a computer to analyze the collected data automatically, accomplishing target detection, location, and recognition, so that ATR technology can provide information such as target attributes and classes. Through SAR ATR, interference and redundant target information can be removed from the SAR image and discriminative target features extracted, which both improves the ability to recognize unknown targets and shortens the classification time.
SAR ATR mainly comprises three steps: SAR image preprocessing, feature extraction, and target classification. Feature extraction is the key issue of SAR ATR and directly affects classification performance. Its purpose is to use various transform techniques to improve the distribution structure of the raw data in feature space and to remove redundant information, thereby improving the separability of the data and reducing the computational load. The paper "Efficient and Robust Feature Extraction by Maximum Margin Criterion" (Haifeng Li, Tao Jiang, Keshu Zhang. IEEE Transactions on Neural Networks, Vol. 17, No. 1, January 2006, pages 157-165) proposed a feature extraction method based on the maximum margin criterion (MMC). The method rests on a globally linear distribution structure of the data and fuses the class information of the training samples; it seeks a projection matrix such that, after the samples are mapped by the projection matrix, samples of the same class are close to each other while samples of different classes are far apart.
To facilitate processing of SAR image data, a two-dimensional SAR image is usually converted into a one-dimensional vector, which yields a set of high-dimensional data. The distribution of such high-dimensional data in space is a nonlinear structure rather than a globally linear one, so the MMC method, built on a globally linear structure, cannot describe the spatial distribution structure of SAR images well, which inevitably degrades recognition performance.
Summary of the invention:
To solve the problem that the feature extraction stage of SAR ATR cannot extract discriminative information from high-dimensional data, the present invention proposes a synthetic aperture radar automatic target recognition method based on manifold theory. Building on manifold learning theory, the method uses the maximum margin criterion to extract the discriminative information of the data, so that samples of different classes move apart while samples of the same class move together, improving the discriminability of the features. The invention features a more reasonable model of the spatial structure of the data, high robustness, and high recognition performance.
For convenience of description, the following terms are first defined:
Definition 1: manifold
Let M be a Hausdorff space. If every point x in M has a neighborhood U in M that is homeomorphic to an open set of the m-dimensional Euclidean space R^m, then M is called an m-dimensional manifold. See "Chern Shiing-Shen. Lectures on Differential Geometry. Peking University Press" for details.
Definition 2: maximum margin criterion
The maximum margin criterion (MMC) is a criterion for feature extraction methods; see "Haifeng Li, Tao Jiang, Keshu Zhang. Efficient and robust feature extraction by maximum margin criterion. IEEE Transactions on Neural Networks, Vol. 17, No. 1, January 2006" for details.
Definition 3: eigenvalues and eigenvectors
Let A be an n-th order square matrix. If there exist a number λ and a nonzero n-dimensional vector α such that Aα = λα, then λ is called an eigenvalue of the matrix A, and α an eigenvector of A corresponding to the eigenvalue λ. See "Huang Tingzhu, Cheng Xiaoyu. Linear Algebra and Analytic Geometry (2nd edition). Higher Education Press, 2003" for details.
Definition 4: Euclidean distance
The Euclidean distance is a measure of the distance between two vectors: for vectors x and y, the Euclidean distance between them is ||x − y||_2. See "Huang Tingzhu, Zhong Shouming, Li Zhengliang. Matrix Theory. Higher Education Press, 2003" for details.
Definition 5: vector 2-norm
Let the column vector x = (x_1, x_2, …, x_n)^T ∈ R^n, where T denotes the transpose. Then ||·||_2 is the vector 2-norm,

||x||_2 = (x_1^2 + x_2^2 + … + x_n^2)^(1/2).

See "Huang Tingzhu, Zhong Shouming, Li Zhengliang. Matrix Theory. Higher Education Press, 2003" for details.
Definition 6: diagonal matrix
Let the square matrix A = (a_ij)_{n×n} satisfy a_ij = 0 for i ≠ j; then A is called a diagonal matrix, denoted A = diag(a_11, a_22, …, a_nn). See "Huang Tingzhu, Cheng Xiaoyu. Linear Algebra and Analytic Geometry (2nd edition). Higher Education Press, 2003" for details.
Definition 7: minimum-distance classifier
Let T = [t_1, t_2, …, t_L] be L samples of known class, where each sample t_i has class label ω_i, i = 1, 2, …, L. Given a sample s to be recognized, compute the Euclidean distance d(s, t_i) from s to every sample t_i; s is then assigned to the class represented by the nearest sample, i.e., if d(s, t_i) = min_j d(s, t_j), then s ∈ ω_i. See "Cover, T. Estimation by the Nearest Neighbor Rule. IEEE Transactions on Information Theory, Vol. 14, No. 1, January 1968" for details.
The present invention proposes a synthetic aperture radar automatic target recognition method based on manifold theory, comprising the following steps:
Step 1: initialization
Read the SAR images and splice each image column by column into a column vector. Define the training set of N SAR images as the matrix X = (x_1, x_2, …, x_i, …, x_N) ∈ R^{m×N}, where N is the number of samples in the training set (a positive integer), x_i is the i-th training sample, i ∈ {1, 2, …, N}, each training sample has dimension m × 1, m is the number of pixels of a SAR image, and R denotes the set of real numbers. Define the class label set of the training samples as the matrix Y = (y_1, y_2, …, y_i, …, y_N), where y_i is the class label of training sample x_i, i ∈ {1, 2, …, N}. Define the SAR image test set as the matrix X′ = (x′_1, x′_2, …, x′_t, …, x′_N′) ∈ R^{m×N′}, where x′_t is the t-th test sample and N′ is the number of test samples. Each test sample x′_t has dimension m × 1, and t ∈ {1, 2, …, N′} is the index of the test sample.
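As an illustration of step 1, the column-wise vectorisation of image chips can be sketched in NumPy. This is a hedged sketch, not code from the patent: the function name, the toy chip size, and the labels are invented for the example (the embodiment's m = 3721 suggests 61 × 61 chips, since 61 × 61 = 3721).

```python
import numpy as np

def build_training_matrix(images):
    # Flatten each 2-D SAR chip column by column (column-major order),
    # matching the "splice by column" operation of step 1, then stack the
    # resulting m-dimensional column vectors into an m x N matrix X.
    cols = [img.flatten(order="F") for img in images]
    return np.stack(cols, axis=1)

# Toy data standing in for SAR chips
rng = np.random.default_rng(0)
imgs = [rng.random((3, 3)) for _ in range(4)]
X = build_training_matrix(imgs)          # X in R^{9 x 4}: m = 9, N = 4
y = np.array([0, 0, 1, 1])               # one class label per column of X
```

Column-major (`order="F"`) flattening is chosen here to mirror "splice by row/column" literally; either order works as long as it is used consistently for training and test images.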
Step 2: construct the similarity matrix W^(w) and the difference matrix W^(b)
Step 2 comprises steps 2.1, 2.2, and 2.3.
Step 2.1: from the training set X obtained in step 1, construct the Euclidean distance matrix G between all training samples using formula (1):

G = (g_ij)_{N×N}, i, j ∈ {1, 2, …, N}   (1)

where the element g_ij = ||x_i − x_j||_2 of G is the Euclidean distance between training samples x_i and x_j in X, ||·||_2 is the vector 2-norm, and N is the number of samples in the training set (a positive integer).
Step 2.2: from the class label set Y of step 1 and the Euclidean distance matrix G of step 2.1, construct the similarity matrix W^(w) between all samples using formula (2):

W^(w) = (w_ij^(w))_{N×N}, i, j ∈ {1, 2, …, N}   (2)

where the element w_ij^(w) of W^(w) is the similarity weight between training samples x_i and x_j in X:

w_ij^(w) = exp(−g_ij) if (g_ij ≤ ε_1 ∩ y_i = y_j); 0 otherwise   (3)

where ∩ denotes the logical "and", y_i and y_j are the class labels of samples x_i and x_j, and ε_1 is the neighborhood threshold for same-class samples, an empirical value determined by simulation in practice.
Step 2.3: from the class label set Y of step 1 and the Euclidean distance matrix G of step 2.1, construct the difference matrix W^(b) between samples using formula (4):

W^(b) = (w_ij^(b))_{N×N}, i, j ∈ {1, 2, …, N}   (4)

where the element w_ij^(b) of W^(b) is the difference weight between training samples x_i and x_j in X:

w_ij^(b) = exp(−g_ij) if (g_ij ≤ ε_2 ∩ y_i ≠ y_j); 0 otherwise   (5)

where ∩ denotes the logical "and", y_i and y_j are the class labels of samples x_i and x_j, and ε_2 is the neighborhood threshold for samples of different classes, an empirical value determined by simulation in practice.
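The construction of G, W^(w), and W^(b) in step 2 can be sketched in NumPy. The array layout (columns as samples) and the toy thresholds are illustrative assumptions, not the patent's values:

```python
import numpy as np

def build_weight_matrices(X, y, eps1, eps2):
    # Eq. (1): Euclidean distance matrix G between all training columns.
    G = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    same = y[:, None] == y[None, :]          # same-class indicator matrix
    # Eqs. (2)-(3): similarity weights for near, same-class pairs.
    Ww = np.where((G <= eps1) & same, np.exp(-G), 0.0)
    # Eqs. (4)-(5): difference weights for near, different-class pairs.
    Wb = np.where((G <= eps2) & ~same, np.exp(-G), 0.0)
    return G, Ww, Wb

# Tiny illustration; the thresholds here are arbitrary, not the patent's.
X = np.array([[0.0, 0.1, 1.0],
              [0.0, 0.0, 1.0]])             # m = 2, N = 3 (columns = samples)
y = np.array([0, 0, 1])
G, Ww, Wb = build_weight_matrices(X, y, eps1=0.5, eps2=2.0)
```

Both weight matrices are symmetric because G and the class-agreement mask are symmetric, which is what step 3 relies on.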
Step 3: calculate the objective matrix J based on the maximum margin criterion
From the training set X of step 1, the similarity matrix W^(w) of step 2.2, and the difference matrix W^(b) of step 2.3, calculate the objective matrix J using formula (6):

J = X(D^(w) − D^(b))X^T − X(W^(w) − W^(b))X^T   (6)

where X is the training sample matrix; the superscripts w and b denote similarity and difference, respectively; D^(w) is the diagonal matrix D^(w) = diag(d_1^(w), d_2^(w), …, d_i^(w), …, d_N^(w)) ∈ R^{N×N} with elements d_i^(w) = Σ_{j=1}^{N} w_ij^(w), i ∈ {1, 2, …, N}; D^(b) is the diagonal matrix D^(b) = diag(d_1^(b), d_2^(b), …, d_i^(b), …, d_N^(b)) ∈ R^{N×N} with elements d_i^(b) = Σ_{j=1}^{N} w_ij^(b), i ∈ {1, 2, …, N}; and X^T is the transpose of the matrix X.
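Eq. (6) admits a direct NumPy sketch; the helper name is mine, and the hand-checkable case at the end is only there so the computation can be verified by eye:

```python
import numpy as np

def objective_matrix(X, Ww, Wb):
    # D^(w), D^(b): diagonal matrices of row sums, d_i = sum_j w_ij.
    Dw = np.diag(Ww.sum(axis=1))
    Db = np.diag(Wb.sum(axis=1))
    # Eq. (6): J = X(D^(w) - D^(b))X^T - X(W^(w) - W^(b))X^T.
    return X @ (Dw - Db) @ X.T - X @ (Ww - Wb) @ X.T

# Hand-checkable case: with X the 2x2 identity, J reduces to
# (D^(w) - W^(w)) - (D^(b) - W^(b)).
X = np.eye(2)
Ww = np.array([[0.0, 0.5], [0.5, 0.0]])
Wb = np.zeros((2, 2))
J = objective_matrix(X, Ww, Wb)
```

Because W^(w) and W^(b) are symmetric, J is symmetric, which step 4 exploits when computing its eigendecomposition.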
Step 4: calculate the projection matrix V
The projection matrix V is used to reduce the dimensionality of the training set.
From the objective matrix J obtained in step 3, calculate the m eigenvalue/eigenvector pairs (λ_i, v_i) of J, i ∈ {1, 2, …, m}, where λ_i is the i-th eigenvalue of J, v_i is the eigenvector corresponding to λ_i, R denotes the set of real numbers, and the sample dimension m is a positive integer.
Sort the m eigenvalues in descending order, written λ_1 ≥ λ_2 ≥ … ≥ λ_i ≥ … ≥ λ_m, where λ_i is the i-th eigenvalue of J after sorting from largest to smallest.
Choose the eigenvectors v_1 ~ v_K corresponding to the K largest eigenvalues λ_1 ~ λ_K among λ_1 ~ λ_m to form the projection matrix V, where λ_K is the K-th eigenvalue of J, v_1 is the eigenvector corresponding to λ_1, v_K is the eigenvector corresponding to λ_K, K is the dimension of the extracted features, and K is any integer between 1 and the sample dimension m.
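Step 4 amounts to an eigendecomposition of the symmetric matrix J followed by selection of the top-K eigenvectors. A minimal NumPy sketch (function name and toy matrix are illustrative):

```python
import numpy as np

def projection_matrix(J, K):
    # J from Eq. (6) is symmetric, so eigh applies; eigh returns the
    # eigenvalues in ascending order, so reverse for a descending sort.
    vals, vecs = np.linalg.eigh(J)
    order = np.argsort(vals)[::-1]
    # Keep the eigenvectors of the K largest eigenvalues as columns of V.
    return vecs[:, order[:K]]

# For a diagonal J the eigenvectors are coordinate axes, so the chosen
# top-K directions are easy to verify by eye.
J = np.diag([1.0, 3.0, 2.0])
V = projection_matrix(J, 2)   # directions of eigenvalues 3 and 2
```

For the full-scale embodiment (m = 3721), J is 3721 × 3721, so a dense `eigh` is feasible but not cheap; a partial eigensolver would be a natural substitution.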
Step 5: feature extraction
Using the projection matrix V obtained in step 4, compute the feature z_i of every training sample x_i, i ∈ {1, 2, …, N}, in the training set X from step 1 according to formula (7):

z_i = V^T x_i, i ∈ {1, 2, …, N}   (7)

This yields the training feature set, written as the matrix Z_train = (z_1, z_2, …, z_i, …, z_N), where z_i is the feature of training sample x_i and V^T is the transpose of the matrix V.
Using the projection matrix V obtained in step 4, compute the feature z′_t of every test sample x′_t, t ∈ {1, 2, …, N′}, in the test set X′ from step 1 according to formula (8):

z′_t = V^T x′_t, t ∈ {1, 2, …, N′}   (8)

This yields the test feature set, written as the matrix Z_test = (z′_1, z′_2, …, z′_t, …, z′_N′), where z′_t is the feature of test sample x′_t, t is the index of the test sample, and N′ is the number of test samples.
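The projections of Eqs. (7) and (8) reduce to a single matrix product per sample set. A minimal sketch with an invented toy projection that simply keeps the first two of three coordinates:

```python
import numpy as np

def extract_features(V, X):
    # Eqs. (7)-(8): z = V^T x for every column x of X, i.e. Z = V^T X.
    return V.T @ X

# Toy projection keeping the first two coordinates (illustrative only).
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])                  # m = 3, K = 2
X_train = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [5.0, 6.0]])            # two training samples as columns
Z_train = extract_features(V, X_train)      # K x N feature matrix
```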
Step 6: target classification
Using the training feature set Z_train and the test feature set Z_test obtained in step 5, classify the feature z′_t of every test sample in Z_test with a conventional minimum-distance classifier to obtain the class label y′_t of the test sample, t ∈ {1, 2, …, N′}.
The class label set of all test samples is written as the matrix Y′ = (y′_1, y′_2, …, y′_t, …, y′_N′), where y′_t is the class label of test sample x′_t. The class label y′_t of a test sample x′_t is exactly the class of the unknown target observed by the synthetic aperture radar.
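The minimum-distance (nearest-neighbor) classification of step 6 can be sketched as follows; the toy feature matrices are invented for illustration:

```python
import numpy as np

def classify_nn(Z_train, y_train, Z_test):
    # Step 6 / Definition 7: each test feature takes the label of the
    # nearest training feature in Euclidean distance (columns = samples).
    d = np.linalg.norm(Z_train[:, :, None] - Z_test[:, None, :], axis=0)
    return y_train[np.argmin(d, axis=0)]

Z_train = np.array([[0.0, 10.0],
                    [0.0, 10.0]])           # features of two training samples
y_train = np.array([0, 1])
Z_test = np.array([[1.0, 9.0],
                   [0.0, 9.0]])             # features of two test samples
labels = classify_nn(Z_train, y_train, Z_test)
```

Here the first test column is nearest the class-0 training sample and the second is nearest the class-1 sample, so `labels` is `[0, 1]`.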
Through the above steps, synthetic aperture radar target recognition is accomplished.
It should be noted that steps 1-5 perform feature extraction on the SAR images to obtain features that are easy to classify; step 3 uses the maximum margin criterion to compute the objective matrix J so that same-class samples cluster together while samples of different classes move apart; and the purpose of the target classification in step 6 is to compare the features of the training samples with those of the test samples and determine the class of each test sample from its features.
The invention requires no manual intervention: the whole process, from reading the SAR images to outputting the classes of the targets in them, is completed automatically, achieving the goal of SAR automatic target recognition.
Essence and innovations of the invention:
The invention uses manifold learning theory to effectively extract easily classifiable features from high-dimensional SAR data. The innovation lies in addressing the inability of globally linear structures to effectively extract features from high-dimensional SAR data: manifold learning theory is introduced and the maximum margin criterion is used, so that discriminative, easily classifiable features are extracted while the dimensionality of the SAR image features is reduced.
Advantages of the invention:
1. A more reasonable model of the spatial structure of the data: based on manifold theory, the method matches the nonlinear distribution structure of high-dimensional SAR images in space. Manifold theory is used to recover the low-dimensional manifold structure from the high-dimensional SAR image data, which not only reduces the feature dimension but also yields discriminative information that is easy to classify.
2. Increased robustness: the method adopts the maximum margin criterion and avoids the curse of dimensionality faced when processing high-dimensional data, improving its robustness.
3. Increased recognition performance: compared with the feature extraction method based on the maximum margin criterion alone, the features of high-dimensional data extracted by this method are easier to classify.
Description of drawings
Fig. 1 is the workflow block diagram of the invention.
Fig. 2 lists the types and sample sizes of the training and test samples used by the invention.
Fig. 3 shows the target recognition rate achieved with the method of the invention, where the abscissa is the extracted feature dimension and the ordinate is the rate at which targets are correctly recognized.
Fig. 4 shows the best recognition rate for each target class achieved with the invention.
Embodiment
As shown in Fig. 1, the implementation steps of the invention are as follows:
Step 1: initialization
Let the SAR image training set be the matrix X = (x_1, x_2, …, x_N) ∈ R^{m×N}, where N = 698 is the number of training samples, each training sample x_i has dimension m × 1 with m = 3721 (the number of pixels of a SAR image), i ∈ {1, 2, …, 698}, and R denotes the set of real numbers. At the same time, let the class label set of the training samples be the matrix Y = (y_1, y_2, …, y_698), where y_i is the class label of training sample x_i, i ∈ {1, 2, …, 698}. Let the SAR image test set be the matrix X′ = (x′_1, x′_2, …, x′_N′) ∈ R^{m×N′}, where N′ = 1365 is the number of test samples, each test sample x′_t has dimension m × 1, and t ∈ {1, 2, …, 1365} is the index of the test sample.
Step 2: construct the similarity matrix W^(w) and the difference matrix W^(b)
This step is divided into the following three sub-steps.
Step 2.1: from the training set X obtained in step 1, construct the Euclidean distance matrix G between all training samples using formula (1):

G = (g_ij)_{698×698}, i, j ∈ {1, 2, …, 698}   (1)

where the element g_ij = ||x_i − x_j||_2 of G is the Euclidean distance between samples x_i and x_j in X and ||·||_2 is the vector 2-norm.
Step 2.2: from the class label set Y of step 1 and the Euclidean distance matrix G of step 2.1, construct the similarity matrix W^(w) between all samples using formula (2):

W^(w) = (w_ij^(w))_{698×698}, i, j ∈ {1, 2, …, 698}   (2)

where the element w_ij^(w) of W^(w) is the similarity weight between training samples x_i and x_j in X:

w_ij^(w) = exp(−g_ij) if (g_ij ≤ ε_1 ∩ y_i = y_j); 0 otherwise   (3)

where ∩ denotes the logical "and", y_i and y_j are the class labels of samples x_i and x_j, and ε_1 = 4.5 is the upper bound, determined by simulation, on the Euclidean distance between same-class samples.
Step 2.3: from the class label set Y of step 1 and the Euclidean distance matrix G of step 2.1, construct the difference matrix W^(b) between samples using formula (4):

W^(b) = (w_ij^(b))_{698×698}, i, j ∈ {1, 2, …, 698}   (4)

where the element w_ij^(b) of W^(b) is the difference weight between training samples x_i and x_j in X:

w_ij^(b) = exp(−g_ij) if (g_ij ≤ ε_2 ∩ y_i ≠ y_j); 0 otherwise   (5)

where ∩ denotes the logical "and", y_i and y_j are the class labels of samples x_i and x_j, and ε_2 = 5 is the upper bound, determined by simulation, on the Euclidean distance between samples of different classes.
Step 3: calculate the objective matrix J based on the maximum margin criterion
From the training set X of step 1, the similarity matrix W^(w) of step 2.2, and the difference matrix W^(b) of step 2.3, calculate the objective matrix J using formula (6):

J = X(D^(w) − D^(b))X^T − X(W^(w) − W^(b))X^T   (6)

where D^(w) = diag(d_1^(w), d_2^(w), …, d_i^(w), …, d_698^(w)) ∈ R^{698×698} with d_i^(w) = Σ_{j=1}^{698} w_ij^(w), D^(b) = diag(d_1^(b), d_2^(b), …, d_i^(b), …, d_698^(b)) ∈ R^{698×698} with d_i^(b) = Σ_{j=1}^{698} w_ij^(b), i ∈ {1, 2, …, 698}, and X^T is the transpose of the matrix X.
Step 4: calculate the projection matrix V
From the objective matrix J obtained in step 3, calculate the 3721 eigenvalue/eigenvector pairs (λ_i, v_i) of J, i ∈ {1, 2, …, 3721}. Sort the 3721 eigenvalues in descending order, written λ_1 ≥ λ_2 ≥ … ≥ λ_3721. Choose the eigenvectors v_1 ~ v_K corresponding to the K largest eigenvalues λ_1 ~ λ_K to form the projection matrix V = (v_1, v_2, …, v_K), V ∈ R^{3721×K}, where K, the dimension of the extracted features, is any integer between 1 and the sample dimension 3721.
Step 5: feature extraction
Using the projection matrix V obtained in step 4, compute the feature z_i of every training sample x_i, i ∈ {1, 2, …, 698}, in the training set X from step 1 according to formula (7):

z_i = V^T x_i, i ∈ {1, 2, …, 698}   (7)

where V^T is the transpose of the matrix V. This yields the training feature set, written as the matrix Z_train = (z_1, z_2, …, z_698).
Using the projection matrix V obtained in step 4, compute the feature z′_t of every test sample x′_t, t ∈ {1, 2, …, 1365} (t is the index of the test sample), in the test set X′ from step 1 according to formula (8):

z′_t = V^T x′_t, t ∈ {1, 2, …, 1365}   (8)

This yields the test feature set, written as the matrix Z_test = (z′_1, z′_2, …, z′_1365).
Step 6: target classification
Using the training feature set Z_train and the test feature set Z_test obtained in step 5, classify each test sample feature z′_t in Z_test with a minimum-distance classifier to obtain its class label y′_t, t ∈ {1, 2, …, 1365}.
The class label set of the test samples is finally obtained as the matrix Y′ = (y′_1, y′_2, …, y′_t, …, y′_1365). Through the above steps, recognition of the test samples is accomplished.
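Assuming the images have already been vectorised as in step 1, the whole embodiment (steps 2-6) can be condensed into one hedged NumPy sketch. The toy sizes and thresholds below are deliberately tiny and are not the embodiment's m = 3721, N = 698, N′ = 1365, ε_1 = 4.5, ε_2 = 5:

```python
import numpy as np

def sar_atr(X, y, X_test, eps1, eps2, K):
    # Steps 2-6 of the embodiment, on already-vectorised sample columns.
    G = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)       # Eq. (1)
    same = y[:, None] == y[None, :]
    Ww = np.where((G <= eps1) & same, np.exp(-G), 0.0)              # Eq. (3)
    Wb = np.where((G <= eps2) & ~same, np.exp(-G), 0.0)             # Eq. (5)
    J = (X @ (np.diag(Ww.sum(1)) - np.diag(Wb.sum(1))) @ X.T
         - X @ (Ww - Wb) @ X.T)                                     # Eq. (6)
    vals, vecs = np.linalg.eigh(J)
    V = vecs[:, np.argsort(vals)[::-1][:K]]                         # step 4
    Z_train, Z_test = V.T @ X, V.T @ X_test                         # Eqs. (7)-(8)
    d = np.linalg.norm(Z_train[:, :, None] - Z_test[:, None, :], axis=0)
    return y[np.argmin(d, axis=0)]                                  # step 6

# Two well-separated toy classes in 5 dimensions.
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(5.0, 0.1, (5, 2))])
y = np.array([0, 0, 1, 1])
X_test = np.hstack([rng.normal(0.0, 0.1, (5, 1)), rng.normal(5.0, 0.1, (5, 1))])
pred = sar_atr(X, y, X_test, eps1=1.0, eps2=20.0, K=2)
```

The function returns one predicted class label per test column; at the embodiment's scale the O(N²m) distance computation and the 3721 × 3721 eigendecomposition would dominate the cost.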

Claims (1)

1. A synthetic aperture radar automatic target recognition method, characterized by comprising the following steps:
Step 1: initialization
Read the SAR images and splice each image column by column into a column vector; define the training set of N SAR images as the matrix X = (x_1, x_2, …, x_i, …, x_N) ∈ R^{m×N}, where N is the number of samples in the training set (a positive integer), x_i is the i-th training sample, i ∈ {1, 2, …, N}, each training sample has dimension m × 1, m is the number of pixels of a SAR image, and R denotes the set of real numbers; define the class label set of the training samples as the matrix Y = (y_1, y_2, …, y_i, …, y_N), where y_i is the class label of training sample x_i, i ∈ {1, 2, …, N}; define the SAR image test set as the matrix X′ = (x′_1, x′_2, …, x′_t, …, x′_N′) ∈ R^{m×N′}, where x′_t is the t-th test sample and N′ is the number of test samples; each test sample x′_t has dimension m × 1, and t ∈ {1, 2, …, N′} is the index of the test sample;
Step 2: construct the similarity matrix W^(w) and the difference matrix W^(b)
Step 2 comprises steps 2.1, 2.2, and 2.3:
Step 2.1: from the training set X obtained in step 1, construct the Euclidean distance matrix G between all training samples using formula (1):

G = (g_ij)_{N×N}, i, j ∈ {1, 2, …, N}   (1)

where the element g_ij = ||x_i − x_j||_2 of G is the Euclidean distance between training samples x_i and x_j in X, ||·||_2 is the vector 2-norm, and N is the number of samples in the training set (a positive integer);
Step 2.2: from the class label set Y of step 1 and the Euclidean distance matrix G of step 2.1, construct the similarity matrix W^(w) between all samples using formula (2):

W^(w) = (w_ij^(w))_{N×N}, i, j ∈ {1, 2, …, N}   (2)

where the element w_ij^(w) of W^(w) is the similarity weight between training samples x_i and x_j in X:

w_ij^(w) = exp(−g_ij) if (g_ij ≤ ε_1 ∩ y_i = y_j); 0 otherwise   (3)

where ∩ denotes the logical "and", y_i and y_j are the class labels of samples x_i and x_j, and ε_1 is the neighborhood threshold for same-class samples, an empirical value determined by simulation in practice;
Step 2.3: from the class label set Y of step 1 and the Euclidean distance matrix G of step 2.1, construct the difference matrix W^(b) between samples using formula (4):

W^(b) = (w_ij^(b))_{N×N}, i, j ∈ {1, 2, …, N}   (4)

where the element w_ij^(b) of W^(b) is the difference weight between training samples x_i and x_j in X:

w_ij^(b) = exp(−g_ij) if (g_ij ≤ ε_2 ∩ y_i ≠ y_j); 0 otherwise   (5)

where ∩ denotes the logical "and", y_i and y_j are the class labels of samples x_i and x_j, and ε_2 is the neighborhood threshold for samples of different classes, an empirical value determined by simulation in practice;
Step 3: Compute the objective matrix J based on the maximum margin criterion

Using the training sample set X obtained in Step 1, the similarity matrix W^(w) obtained in Step 2.2, and the dissimilarity matrix W^(b) obtained in Step 2.3, compute the objective matrix J according to formula (6):

J = X(D^(w) - D^(b))X^T - X(W^(w) - W^(b))X^T    (6)

where X is the training sample set; the superscript w of the similarity matrix W^(w) denotes similarity, and the superscript b of the dissimilarity matrix W^(b) denotes dissimilarity; D^(w) is the diagonal matrix D^(w) = diag(d_1^(w), d_2^(w), ..., d_i^(w), ..., d_N^(w)) ∈ R^(N×N), whose i-th diagonal element is the i-th row sum of W^(w), d_i^(w) = Σ_j w_ij^(w); D^(b) is the diagonal matrix D^(b) = diag(d_1^(b), d_2^(b), ..., d_i^(b), ..., d_N^(b)) ∈ R^(N×N), whose i-th diagonal element is the i-th row sum of W^(b), d_i^(b) = Σ_j w_ij^(b); and X^T is the transpose of matrix X;
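Formula (6) can be sketched directly in NumPy. This assumes, as in the usual graph-Laplacian construction, that the diagonal entries d_i are the row sums of the corresponding weight matrix; samples are stored one per column of X:

```python
import numpy as np

def objective_matrix(X, Ww, Wb):
    """Sketch of formula (6): J = X(D_w - D_b)X^T - X(W_w - W_b)X^T,
    i.e. X (L_w - L_b) X^T with L = D - W the graph Laplacian."""
    Dw = np.diag(Ww.sum(axis=1))  # d_i^(w) = row sums of W^(w)
    Db = np.diag(Wb.sum(axis=1))  # d_i^(b) = row sums of W^(b)
    return X @ (Dw - Db) @ X.T - X @ (Ww - Wb) @ X.T
```

Since W^(w) and W^(b) are symmetric, J is symmetric, which is what allows the eigendecomposition in Step 4.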
Step 4: Compute the projection matrix V

The projection matrix V is used to reduce the dimensionality of the training sample set;

From the objective matrix J obtained in Step 3, compute the m eigenvalue–eigenvector pairs (λ_i, v_i) of J, where λ_i is the i-th eigenvalue of the objective matrix J, v_i is the eigenvector corresponding to λ_i, R denotes the set of real numbers, and the sample dimension m is a positive integer;

Sort the m eigenvalues in descending order, λ_1 ≥ λ_2 ≥ ... ≥ λ_i ≥ ... ≥ λ_m, where λ_i is the i-th eigenvalue of the objective matrix J;

Select the eigenvectors v_1 ~ v_K corresponding to the K largest eigenvalues λ_1 ~ λ_K to form the projection matrix V, where K is the dimensionality of the extracted features, any integer between 1 and the sample dimension m;
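Step 4 amounts to a symmetric eigendecomposition followed by selecting the top-K eigenvectors. A minimal sketch (the function name is illustrative; `np.linalg.eigh` applies because J is symmetric):

```python
import numpy as np

def projection_matrix(J, K):
    """Sketch of Step 4: columns of V are the eigenvectors of J
    belonging to its K largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(J)  # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]     # re-sort indices descending
    return eigvecs[:, order[:K]]          # m x K projection matrix V
```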
Step 5: Feature extraction

Using the projection matrix V obtained in Step 4, for every training sample x_i, i ∈ {1, 2, ..., N}, in the training sample set X obtained in Step 1, compute the feature z_i of training sample x_i according to formula (7):

z_i = V^T x_i,  i ∈ {1, 2, ..., N}    (7)

This yields the training feature set, expressed as the matrix Z_train = (z_1, z_2, ..., z_i, ..., z_N), where z_i is the feature of training sample x_i and V^T is the transpose of matrix V;

Similarly, using the projection matrix V obtained in Step 4, for every test sample x'_t, t ∈ {1, 2, ..., N'}, in the test sample set X' obtained in Step 1, compute the feature z'_t of test sample x'_t according to formula (8):

z'_t = V^T x'_t,  t ∈ {1, 2, ..., N'}    (8)

This yields the test feature set, expressed as the matrix Z_test = (z'_1, z'_2, ..., z'_t, ..., z'_{N'}), where z'_t is the feature of test sample x'_t, t is the index of the test sample, and N' is the number of test samples;
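Formulas (7) and (8) are the same projection applied to training and test samples; with samples stored as columns, both reduce to a single matrix product. A sketch (names illustrative):

```python
import numpy as np

def extract_features(V, X):
    """Sketch of formulas (7)/(8): z = V^T x for every sample.
    X holds one m-dimensional sample per column; the result is
    a K x N matrix with one K-dimensional feature per column."""
    return V.T @ X
```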
Step 6: Target classification

Using the training feature set Z_train and the test feature set Z_test obtained in Step 5, classify the feature z'_t of each test sample in Z_test with a conventional minimum-distance classifier, obtaining the class label y'_t of each test sample, t ∈ {1, 2, ..., N'};

The class labels of all test samples are expressed as the matrix Y' = (y'_1, y'_2, ..., y'_t, ..., y'_{N'}), where y'_t is the class label of test sample x'_t; the class label y'_t of test sample x'_t is precisely the unknown target type detected by the synthetic aperture radar;
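The classification step can be sketched with a nearest-neighbor rule in feature space, which is one common realization of the minimum-distance classifier the claim names; the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def classify(Z_train, y_train, Z_test):
    """Sketch of Step 6: each test feature column gets the label of
    the closest training feature column in Euclidean distance."""
    labels = []
    for t in range(Z_test.shape[1]):
        dists = np.linalg.norm(Z_train - Z_test[:, [t]], axis=0)
        labels.append(y_train[np.argmin(dists)])
    return np.array(labels)
```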
Through the above steps, automatic target recognition for synthetic aperture radar is accomplished.
CN201210338630.0A 2012-09-13 2012-09-13 A kind of method of synthetic-aperture radar automatic target detection Active CN102902979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210338630.0A CN102902979B (en) 2012-09-13 2012-09-13 A kind of method of synthetic-aperture radar automatic target detection

Publications (2)

Publication Number Publication Date
CN102902979A true CN102902979A (en) 2013-01-30
CN102902979B CN102902979B (en) 2015-08-19

Family

ID=47575200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210338630.0A Active CN102902979B (en) 2012-09-13 2012-09-13 A kind of method of synthetic-aperture radar automatic target detection

Country Status (1)

Country Link
CN (1) CN102902979B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6750805B1 (en) * 2002-12-20 2004-06-15 The Boeing Company Full polarization synthetic aperture radar automatic target detection algorithm
CN101526995A (en) * 2009-01-19 2009-09-09 西安电子科技大学 Synthetic aperture radar target identification method based on diagonal subclass judgment analysis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王义敏 et al.: "Research on SAR image target recognition methods based on feature vectors", Computer Engineering and Applications *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218623B (en) * 2013-04-24 2016-04-13 南京理工大学 The radar target feature extraction method differentiating projection is kept based on self-adaptation neighbour
CN103218623A (en) * 2013-04-24 2013-07-24 南京理工大学 Radar target feature extraction method based on self-adaption neighborhood preserving identification projection
CN104199007A (en) * 2014-09-09 2014-12-10 西安电子科技大学 Radar distributed ground target discrimination method based on neighbor one-class classifiers
CN104199007B (en) * 2014-09-09 2016-10-12 西安电子科技大学 Radar Area Objects discrimination method in a distributed manner based on arest neighbors oneclass classification device
CN104636758A (en) * 2015-02-12 2015-05-20 华中科技大学 Support vector regression-based SAR (synthetic aperture radar) image adaptability predicting method
CN104636758B (en) * 2015-02-12 2018-02-16 华中科技大学 A kind of SAR image suitability Forecasting Methodology based on support vector regression
CN106897730B (en) * 2016-12-30 2020-04-10 陕西师范大学 SAR target model identification method based on fusion category information and local preserving projection
CN106874841A (en) * 2016-12-30 2017-06-20 陕西师范大学 SAR Morph Target recognition methods based on regularization locality preserving projections
CN106897730A (en) * 2016-12-30 2017-06-27 陕西师范大学 SAR target model recognition methods based on fusion classification information with locality preserving projections
CN107194329A (en) * 2017-05-05 2017-09-22 南京航空航天大学 A kind of one-dimensional range profile recognition methods based on the sparse holding projection of adaptive local
CN107194329B (en) * 2017-05-05 2020-12-08 南京航空航天大学 One-dimensional range profile identification method based on adaptive local sparse preserving projection
CN107678006A (en) * 2017-09-06 2018-02-09 电子科技大学 A kind of true and false target one-dimensional range profile feature extracting method of the radar of largest interval subspace
CN109117739A (en) * 2018-07-18 2019-01-01 成都识达科技有限公司 One kind identifying projection properties extracting method based on neighborhood sample orientation
CN108845302A (en) * 2018-08-23 2018-11-20 电子科技大学 A kind of true and false target's feature-extraction method of k nearest neighbor transformation
CN110119716A (en) * 2019-05-15 2019-08-13 中国科学院自动化研究所 A kind of multi-source image processing method
CN111121939A (en) * 2020-01-02 2020-05-08 深圳市汉德网络科技有限公司 High-precision vehicle-mounted area weighing method
CN116071667A (en) * 2023-04-07 2023-05-05 北京理工大学 Method and system for detecting abnormal aircraft targets in specified area based on historical data

Also Published As

Publication number Publication date
CN102902979B (en) 2015-08-19

Similar Documents

Publication Publication Date Title
CN102902979B (en) A kind of method of synthetic-aperture radar automatic target detection
Jiang et al. SuperPCA: A superpixelwise PCA approach for unsupervised feature extraction of hyperspectral imagery
Jia et al. A novel ranking-based clustering approach for hyperspectral band selection
CN101350069B (en) Computer implemented method for constructing classifier from training data and detecting moving objects in test data using classifier
CN105138972A (en) Face authentication method and device
CN105069811B (en) A kind of Multitemporal Remote Sensing Images change detecting method
Lin et al. Multiple instance feature for robust part-based object detection
CN104318219A (en) Face recognition method based on combination of local features and global features
CN102622607A (en) Remote sensing image classification method based on multi-feature fusion
CN102663371B (en) Low-resolution face recognition method coupling gait characteristics
CN105930873B (en) A kind of walking across mode matching method certainly based on subspace
Zhang et al. Sparse reconstruction for weakly supervised semantic segmentation
Niu et al. Spatial-DiscLDA for visual recognition
CN106203483A (en) A kind of zero sample image sorting technique of multi-modal mapping method of being correlated with based on semanteme
CN102930300A (en) Method and system for identifying airplane target
Cruz et al. Feature representation selection based on classifier projection space and oracle analysis
Van de Weijer et al. Fusing color and shape for bag-of-words based object recognition
Chen et al. Spatial weighting for bag-of-visual-words and its application in content-based image retrieval
CN109034213A (en) Hyperspectral image classification method and system based on joint entropy principle
Li et al. Supervised learning on local tangent space
CN105023239B (en) The high-spectral data dimension reduction method being distributed based on super-pixel and maximum boundary
CN101751554B (en) Method for filtering internet hemp image
CN101877065A (en) Extraction and identification method of non-linear authentication characteristic of facial image under small sample condition
DelPozo et al. Detecting specular surfaces on natural images
CN104050489A (en) SAR ATR method based on multicore optimization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant