CN107239740A - SAR image automatic target recognition method based on multi-source feature fusion - Google Patents

SAR image automatic target recognition method based on multi-source feature fusion (Download PDF)

Info

Publication number
CN107239740A
CN107239740A (application CN201710312180.0A, granted as CN107239740B)
Authority
CN
China
Prior art keywords
target
image
sar image
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710312180.0A
Other languages
Chinese (zh)
Other versions
CN107239740B (en)
Inventor
李波
李长军
李辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201710312180.0A
Publication of CN107239740A
Application granted
Publication of CN107239740B
Expired - Fee Related (current legal status)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/52 Scale-space analysis, e.g. wavelet analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a SAR image target recognition method based on multi-source feature fusion, which mainly addresses the severe impact on target recognition of changes in target size, orientation and rotation and of strong clutter backgrounds, namely non-robust recognition results and a low recognition probability. The invention combines the respective advantages of cosine Fourier moment features and peak features and performs cascaded fusion recognition on the two classes of extracted features. The scheme is: read the SAR images of different targets and the two-dimensional planar projection images of their three-dimensional models, and standardize them; extract moment features from the projection images using the cosine Fourier invariant moment method; extract SAR image peak features using a Rayleigh-distribution-based CFAR detection method; and identify the target using a cascaded fusion classifier combining an SVM with a matching algorithm. The invention effectively improves recognition accuracy and robustness for targets with high feature dimensionality and attitude variation, adds no extra overhead to the guidance and control system, and can be used to improve the probability of automatic target recognition in SAR images.

Description

SAR image automatic target recognition method based on multi-source feature fusion
Technical field
The invention belongs to the field of radar target recognition technology, and more particularly relates to a SAR image automatic target recognition method based on multi-source feature fusion, which can be used for target classification and recognition in SAR images.
Background technology
Synthetic aperture radar (Synthetic Aperture Radar, SAR), as an active microwave imaging sensor applied in active radar seekers, offers all-weather, day-and-night detection capability and strong autonomy and anti-jamming capability in complex battlefield environments. However, the relatively low resolution of SAR imaging, image distortion and strongly cluttered backgrounds pose severe challenges to target recognition.
At present, research on multi-source feature fusion over the original SAR image, which increases the utilization of target information, overcomes the one-sidedness of the information obtained by a single-source sensor, and improves the accuracy and robustness of automatic image target recognition, is both a difficulty and a hot topic. On the one hand, because the original SAR image is sensitive to parameters such as azimuth, fusing images from different viewing angles into a single image gives unsatisfactory results. On the other hand, fusing features of different categories is inherently difficult, so the multi-source feature fusion target recognition method needs further exploration and study.
Content of the invention
The object of the invention is as follows: to address the severe impact on target recognition of changes in target size, orientation and rotation and of strong clutter backgrounds, the invention projects the three-dimensional target model onto a two-dimensional plane, extracts cosine Fourier moment features and peak features using the cosine Fourier invariant moment method and a Rayleigh-distribution-based CFAR detection method respectively, and performs feature-level fusion recognition of the target with a cascaded fusion classifier, so as to recognize targets with high feature dimensionality and attitude variation without adding overhead to the guidance and control system. The invention features good recognition real-time performance, robust recognition results and high recognition accuracy.
The technical scheme of the invention is as follows:
1. Scheme concept
Cosine Fourier moments and a Rayleigh-distribution CFAR detection method are used to extract features from the two-dimensional projection image of the target and from the original SAR image, respectively. A cascaded fusion classifier is then built from the moment and peak feature vectors of the target image, achieving multi-source feature fusion for target recognition under high feature dimensionality and target attitude variation.
2. Implementation steps
The SAR image automatic target recognition method based on multi-source feature fusion proposed by the invention comprises the following steps:
Step S1: Input the original SAR images of different targets as the training sample set, and preprocess the training sample set:
S101: Based on target three-dimensional contour simulation data, build a three-dimensional shape model of the target in the original SAR image, and project the three-dimensional model onto a two-dimensional plane to obtain the two-dimensional image f(x, y) in Cartesian coordinates. Standardize the image f(x, y) to obtain the standardized image f(m, n), and compute the polar-coordinate image of f(m, n) to obtain the model projection polar-coordinate image f(r, θ);
S102: Binarize the original SAR image, then perform edge detection to obtain an edge image, and transform the edge image from Cartesian to polar coordinates to obtain the SAR polar-coordinate image f'(r', θ'). The edge detection may use any usual method, for example a gradient edge detection algorithm.
S103: Perform target slicing on the original SAR image to obtain the original SAR image target slice.
Step S2: Using the cosine Fourier moment feature extraction method, perform moment feature extraction on the model projection polar-coordinate image f(r, θ) and the SAR polar-coordinate image f'(r', θ') of the training sample set, respectively, to obtain the moment features of the training samples.
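For illustration only (this sketch is not part of the claimed method), the moment feature extraction on a polar-coordinate image f(r, θ) could look as follows in Python. The radial kernel cos(nπr), the angular kernel exp(-jmθ), the orders n_max = m_max = 4 and the sampling grid are assumptions made for the example; the magnitudes |M_nm| are used because they are invariant to rotation of the image.

```python
import numpy as np

def cosine_fourier_moments(f_polar, r_vals, theta_vals, n_max=4, m_max=4):
    """Moments M[n, m] of a polar image f(r, theta) on the grid (r_vals, theta_vals).

    Assumed kernel: cos(n*pi*r) * exp(-1j*m*theta); |M[n, m]| is rotation invariant.
    f_polar has shape (len(r_vals), len(theta_vals)), with r normalized to [0, 1].
    """
    dr = r_vals[1] - r_vals[0]
    dtheta = theta_vals[1] - theta_vals[0]
    moments = np.zeros((n_max + 1, m_max + 1), dtype=complex)
    for n in range(n_max + 1):
        radial = np.cos(n * np.pi * r_vals)                      # radial kernel
        for m in range(m_max + 1):
            angular = np.exp(-1j * m * theta_vals)               # angular kernel
            kernel = np.outer(radial, angular)
            moments[n, m] = np.sum(f_polar * kernel * r_vals[:, None]) * dr * dtheta
    return np.abs(moments).ravel()                               # magnitudes -> rotation-invariant feature vector

# Example: a 64 x 90 polar image sampled on r in [0, 1], theta in [0, 2*pi)
r = np.linspace(0.0, 1.0, 64)
theta = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
img = np.random.rand(64, 90)
features = cosine_fourier_moments(img, r, theta)
```

Concatenating the magnitudes computed from f(r, θ) and from f'(r', θ') would give one moment feature vector per sample.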
Step S3: Perform peak feature extraction on the original SAR image target slices of the training sample set.
Step S301: Perform target and background detection on the original SAR image target slice:
Substituting the Rayleigh distribution f(z) = (z / b_s^2) exp(-z^2 / (2 b_s^2)) into the CFAR (constant false alarm rate) detection operator gives p_FA = ∫_T^∞ f(z) dz = exp(-T^2 / (2 b_s^2)), where b_s is the Rayleigh shape parameter and z denotes the noise intensity. Solving for T from p_FA yields the threshold of the Rayleigh-distribution sliding-window CFAR detection operator, T = b_s √(-2 ln p_FA).
Based on the threshold T, segment the target from the background in the original SAR image target slice: if the local center pixel x_c of the slice satisfies x_c > T, then x_c is a target pixel; otherwise x_c is a background pixel. The target pixels of the original SAR image target slice form the target segmentation image;
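As a hedged illustration of step S301, the following Python sketch applies the threshold T = b_s √(-2 ln p_FA) in a sliding window. The choice of p_FA, the guard and window sizes, and the maximum-likelihood estimate of b_s from the surrounding clutter ring are assumptions; the patent only fixes the form of the threshold.

```python
import numpy as np

def rayleigh_cfar_segment(chip, p_fa=1e-3, guard=2, window=7):
    """Rayleigh sliding-window CFAR segmentation of a SAR target chip.

    For every pixel, the shape parameter b_s is estimated from the clutter ring
    between the guard area and the outer window (maximum-likelihood estimate,
    an assumption), and the pixel is a target pixel if it exceeds
    T = b_s * sqrt(-2 * ln(p_fa)).
    """
    rows, cols = chip.shape
    mask = np.zeros((rows, cols), dtype=bool)
    scale = np.sqrt(-2.0 * np.log(p_fa))
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - window), min(rows, i + window + 1)
            c0, c1 = max(0, j - window), min(cols, j + window + 1)
            ring = chip[r0:r1, c0:c1].astype(float)
            # blank out the guard area (including the cell under test)
            g0, g1 = max(0, i - guard) - r0, min(rows, i + guard + 1) - r0
            h0, h1 = max(0, j - guard) - c0, min(cols, j + guard + 1) - c0
            ring[g0:g1, h0:h1] = np.nan
            samples = ring[~np.isnan(ring)]
            b_s = np.sqrt(np.mean(samples ** 2) / 2.0)   # Rayleigh ML estimate of b_s
            mask[i, j] = chip[i, j] > b_s * scale        # x_c > T  ->  target pixel
    return mask
```

Applied to an original SAR target chip, the returned boolean mask corresponds to the target segmentation image described above.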
Step S302: Apply a closing filter to the segmentation image of step S301 with an ω × ω rectangular morphological filter, then apply a counting filter to the closed segmentation image: reject every center pixel whose peak-pixel filling rate within the filter window structure (the proportion of pixels in the window exceeding a preset threshold) is not greater than (20 ± 5)%, which gives the counting filter result. The filter window structure of the counting filter may be rectangular or circular: if rectangular, ω is the length of its longest side; if circular, ω is its diameter. ω is a preset value chosen according to the size of the original SAR image target slice, for example 5 or 6.
Set the non-zero values of the counting-filtered target segmentation image to 1 and all other values to 0 to obtain a mask template of the same size as the target segmentation image. To enhance the information of the target region in the filtered image, multiply the mask template element-wise (same-position pixels) with the original SAR image to obtain the final target segmentation image.
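A minimal sketch of step S302, assuming scipy is available and interpreting the "filling rate" as the fraction of foreground pixels inside the ω × ω counting window; the 20% rejection level is taken from the (20 ± 5)% range stated above, and the square window shape is a design choice.

```python
import numpy as np
from scipy import ndimage

def clean_segmentation(seg, sar_chip, omega=5, fill_rate=0.20):
    """Morphological closing + counting filter + masking of the CFAR segmentation.

    seg      : boolean target/background segmentation from step S301
    sar_chip : original SAR target chip (same shape)
    omega    : structuring-element / counting-window size (e.g. 5 or 6)
    """
    struct = np.ones((omega, omega), dtype=bool)
    closed = ndimage.binary_closing(seg, structure=struct)        # omega x omega closing

    # counting filter: keep a center pixel only if the fraction of foreground
    # pixels inside its omega x omega window exceeds the filling rate (~20 %)
    counts = ndimage.uniform_filter(closed.astype(float), size=omega)
    kept = closed & (counts > fill_rate)

    mask = kept.astype(sar_chip.dtype)                            # 1 on the target region, 0 elsewhere
    return mask * sar_chip                                        # final target segmentation image
```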
Step S303: Perform peak feature extraction on the final target segmentation image obtained in step S302:
Compute the metric p_ij for each pixel of the final target segmentation image, where the subscripts are the pixel coordinate labels:
p_ij = 1 if min(a_ij - a_{m,n}) > σ for a_{m,n} ∈ U(a_ij), and p_ij = 0 otherwise,
where a_ij is the value of the current pixel, U(a_ij) is the neighborhood centered on a_ij (for example the eight-neighborhood), a_{m,n} is a single pixel value in the neighborhood U(a_ij), and σ is the standard deviation of the pixel intensities of the final target segmentation image.
If the metric p_ij is 1, the current pixel is a peak pixel, i.e. a peak point; if p_ij is 0, the current pixel is a non-peak pixel.
Step S304: Normalize the amplitudes of all peak points extracted from the final target segmentation image to obtain the relative target peak amplitudes, where X_j denotes the j-th peak point of the final target segmentation image, V denotes the number of peak points of the final target segmentation image, and a(X_j) denotes the amplitude of the j-th peak point X_j.
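The peak criterion of steps S303 and S304 (a pixel is a peak when it exceeds every pixel of its eight-neighborhood by more than σ) could be sketched as below; normalizing each amplitude by the largest peak amplitude is an assumption, since the exact normalization formula is not reproduced here.

```python
import numpy as np

def extract_peaks(target_img):
    """Peak extraction on the final target segmentation image.

    A pixel a_ij is a peak (p_ij = 1) if min(a_ij - a_mn) > sigma over its
    eight-neighborhood U(a_ij), where sigma is the std of the image intensities.
    Returns peak coordinates and their amplitudes normalized to the largest peak
    (the normalization convention is an assumption).
    """
    sigma = target_img.std()
    rows, cols = target_img.shape
    peaks, amps = [], []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            center = target_img[i, j]
            neigh = target_img[i - 1:i + 2, j - 1:j + 2].copy()
            neigh[1, 1] = -np.inf                      # exclude the center itself
            if center - neigh.max() > sigma:           # min(a_ij - a_mn) > sigma
                peaks.append((i, j))
                amps.append(center)
    amps = np.asarray(amps, dtype=float)
    if amps.size:
        amps = amps / amps.max()                       # relative target peak amplitudes
    return peaks, amps
```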
Step S4: Establish the cascaded fusion classifier.
Step S401: Use an SVM (support vector machine) classifier as the first-level feature classifier. Based on the preset number of classes h, perform SVM classifier training on the moment feature vectors of the training samples to obtain h SVM class templates, completing the training of the first-level feature classifier.
Step S402: Use a peak matching classifier as the second-level feature classifier. Based on the preset number of classes h, perform peak matching classifier training on the peak feature vectors of the training samples to obtain the templates of the h peak-feature classes, completing the training of the second-level feature classifier.
Step S403: Cascade the first-level and second-level feature classifiers to obtain the cascaded fusion classifier.
Step S5: Input the SAR image to be identified, perform feature extraction, and complete the target recognition processing.
Step S501: Preprocess the SAR image to be identified:
Using the same processing as step S101, obtain the model projection polar-coordinate image f(r, θ) of the SAR image to be identified; using the same processing as step S102, obtain the SAR polar-coordinate image f'(r', θ') of the SAR image to be identified;
Perform target slicing on the SAR image to be identified to obtain the target slice of the SAR image to be identified;
Step S502: Using the cosine Fourier moment feature extraction method, perform moment feature extraction on the model projection polar-coordinate image f(r, θ) and the SAR polar-coordinate image f'(r', θ') of the SAR image to be identified, respectively, to obtain the moment features of the SAR image to be identified;
Step S503: Input the moment feature vector of the SAR image to be identified (referred to below as the object to be identified) into the first-level feature classifier for preliminary classification. The set of posterior probabilities, output by the first-level feature classifier, that the object to be identified belongs to each class is denoted P_set = {p_1, p_2, ..., p_h}. The object to be identified is then assigned a classification confidence for class K, computed from p_K and the set formed by the other elements of P_set (P_set with p_K removed), where i = 1, ..., h and p_i denotes the posterior probability that the object to be identified belongs to class i.
If the confidence is less than or equal to the confidence threshold, continue with the recognition processing of the second-level feature classifier and use the posterior probability set P_set as prior information for the peak feature matching recognition; otherwise, output the recognition result of the first-level feature classifier, i.e. the class corresponding to the largest posterior probability in P_set = {p_1, p_2, ..., p_h} is the recognition result for the current object to be identified.
Step S504: Perform the recognition processing of the second-level feature classifier using the peak matching classifier:
With G={ g1,g2,...,ghRepresent between the SAR image of identification and the class template of second level feature classifiers Similarity set, i.e. sharp peaks characteristic similarity collection are combined into G, the target similarity g that peak value matched classifier is exportedi(i= 1 ..., increase conversion h) is carried out, then the similarity g ' after convertingiFor:Wherein, k is sharp peaks characteristic Mutual exclusion characteristic similarity label in similarity set G;
Step S505: The sum of the posterior probability p_i and the transformed target similarity g'_i gives the classification recognition metric D_i of the cascaded fusion classifier, i.e. D_i = p_i + g'_i, where i = 1, ..., h;
The class corresponding to the maximum of the h values D_i is the class of the currently input SAR image to be identified.
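The decision logic of steps S503 to S505 can be summarized by the following sketch. The concrete confidence measure (gap between the two largest posteriors) and the difference-enhancing similarity transform are illustrative assumptions standing in for the formulas not reproduced here; only the gate on the confidence threshold and the fusion rule D_i = p_i + g'_i follow the text directly.

```python
import numpy as np

def cascade_decision(posteriors, peak_similarities, conf_threshold=0.3):
    """Cascaded fusion of the first-level SVM stage and the peak matching stage.

    posteriors        : P_set = [p_1, ..., p_h] from the first-level classifier
    peak_similarities : G = [g_1, ..., g_h] from the peak matching classifier
    conf_threshold    : confidence gate for accepting the first-level decision
    """
    p = np.asarray(posteriors, dtype=float)
    top = int(np.argmax(p))
    conf = p[top] - np.max(np.delete(p, top))     # assumed confidence: top-two posterior gap
    if conf > conf_threshold:
        return top                                # high confidence: keep the SVM result

    g = np.asarray(peak_similarities, dtype=float)
    k = int(np.argsort(g)[-2])                    # strongest competing (mutually exclusive) label, an assumption
    g_prime = g - g[k]                            # assumed difference-enhancing transform
    d = p + g_prime                               # D_i = p_i + g'_i (fusion rule from the text)
    return int(np.argmax(d))
```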
When extracting the moment features of the original SAR image, the invention first standardizes the projection images, which significantly reduces the number of target feature views. Step 2 uses cosine Fourier invariant moments to extract moment features from the projection of the three-dimensional object model onto the two-dimensional plane, making full use of their invariance under translation, rotation and scaling, so that the feature dimensionality and the amount of computation are reduced. The use of the cascaded fusion classifier improves the recognition probability and robustness of the invention. Throughout the recognition process, the invention completes target recognition automatically without manual intervention.
In summary, by adopting the above technical scheme, the beneficial effects of the invention are as follows:
1. Improved real-time performance of recognition: the invention extracts invariant moment features with cosine Fourier moments, avoiding the need to store in advance information such as all continuously varying positions, distances and attitudes, and reducing redundant view features, so the amount of computation is reduced.
2. Enhanced robustness of recognition: the invention uses a cascaded fusion classifier combining an SVM and a matching algorithm to recognize moment features and peak features in two stages, enhancing the robustness of the recognition results.
3. Improved recognition performance: compared with using a single SVM method or a matching algorithm alone, the multi-source feature fusion method of the invention makes target recognition easier and adds no extra overhead to the guidance system.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the method according to an embodiment of the invention;
Fig. 2 is a diagram of the implementation of an embodiment of the invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings.
To address the problem that a single recognition algorithm has a low recognition probability and weak robustness when azimuth templates of the target are missing, the invention introduces a target CAD (Computer Aided Design) model projection image method to fill in missing target azimuths, and extracts moment features from the projections to assist the peak feature recognition, so as to improve the real-time performance, robustness and accuracy of target recognition in the original SAR image. Referring to Figs. 1 and 2, the specific implementation steps of the invention are as follows:
Step S1: Read the original SAR images of different targets as the training sample set and the test sample set of this embodiment, and preprocess the training sample set and the test sample set:
S101: Based on target three-dimensional contour simulation data, build a three-dimensional shape model of the target in the original SAR image, and project the three-dimensional model onto a two-dimensional plane to obtain the two-dimensional image f(x, y) in Cartesian coordinates. Standardize the image f(x, y) to obtain the standardized image f(m, n), and compute the polar-coordinate image of f(m, n) to obtain the model projection polar-coordinate image f(r, θ);
S102: Binarize the original SAR image, then perform edge detection on the binarized original SAR image with a gradient edge detection algorithm to obtain an edge image, and transform the edge image from Cartesian to polar coordinates to obtain the SAR polar-coordinate image f'(r', θ');
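For step S102, a numpy-only sketch of the binarization, gradient edge detection and Cartesian-to-polar resampling is given below; the mean-value binarization threshold, the sampling densities and the nearest-neighbour interpolation are assumptions made for the example.

```python
import numpy as np

def sar_edge_polar(sar_img, num_r=64, num_theta=90):
    """Binarize -> gradient edge detection -> resample the edges to polar coordinates."""
    binary = (sar_img > sar_img.mean()).astype(float)          # simple global binarization (assumed threshold)

    gy, gx = np.gradient(binary)                               # gradient edge detection
    edges = (np.hypot(gx, gy) > 0).astype(float)

    rows, cols = edges.shape
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    r_max = min(cy, cx)
    r = np.linspace(0, r_max, num_r)
    theta = np.linspace(0, 2 * np.pi, num_theta, endpoint=False)

    # nearest-neighbour resampling of the edge image onto the (r, theta) grid
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    yy = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, rows - 1)
    xx = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, cols - 1)
    return edges[yy, xx]                                       # SAR polar-coordinate image f'(r', theta')
```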
S103: Perform target slicing on the original SAR image to obtain the original SAR image target slice.
Step S2: Using the cosine Fourier moment feature extraction method, extract the moment features of the training sample set and of the test sample set respectively. That is, perform moment feature extraction on the model projection polar-coordinate image f(r, θ) and the SAR polar-coordinate image f'(r', θ') of the training sample set to obtain the moment features of the training samples, and perform moment feature extraction on the model projection polar-coordinate image f(r, θ) and the SAR polar-coordinate image f'(r', θ') of the test sample set to obtain the moment features of the test samples.
Step S3: Perform peak feature extraction on the training samples and the test samples respectively; the extraction object is the original SAR image target slice.
Step S301: Detect the target and background in the original SAR image target slice according to the Rayleigh distribution:
f(z) = (z / b_s^2) exp(-z^2 / (2 b_s^2)), z >= 0   (1)
where b_s is the shape parameter of the Rayleigh distribution and z denotes the noise intensity.
Substituting formula (1) into the CFAR detection operator gives:
p_FA = ∫_T^∞ f(z) dz = exp(-T^2 / (2 b_s^2))   (2)
Solving formula (2) gives:
T^2 = -2 b_s^2 ln p_FA   (3)
Therefore, the threshold T of the Rayleigh-distribution sliding-window CFAR detection algorithm is:
T = b_s √(-2 ln p_FA)   (4)
Step S302: Segment the target from the background in the original SAR image target slice based on the threshold T: if the local center pixel x_c of the original SAR image target slice satisfies x_c > T, then x_c is a target pixel; otherwise x_c is a background pixel. The target pixels of the original SAR image target slice form the target segmentation image;
Step S303: Apply a closing filter to the target segmentation image of step S302 with a 5 × 5 rectangular morphological filter, then apply a counting filter to the closed target segmentation image, rejecting every center pixel whose peak-pixel filling rate within the filter window structure is not greater than (20 ± 5)%, to obtain the counting filter result;
Set the non-zero values of the counting filter result to 1 and all other values to 0 to obtain a mask template of the same size as the target segmentation image. To enhance the information of the target region in the filtered image, multiply the mask template element-wise (same-position pixels) with the original SAR image to obtain the final target segmentation image.
Step S304: Perform peak feature extraction on the final target segmentation image obtained in step S303:
Peak feature extraction is performed on the row peaks, column peaks and two-dimensional peaks of the target in the final target segmentation image, which yields the original SAR image peak features. Compute the metric p_ij for each pixel of the final target segmentation image:
p_ij = 1 if min(a_ij - a_{m,n}) > σ for a_{m,n} ∈ U(a_ij), and p_ij = 0 otherwise   (5)
where the subscripts i, j are the coordinate labels of the pixel, a_ij is the value of the current pixel, U(a_ij) is the eight-neighborhood centered on a_ij, a_{m,n} is a single pixel value in the eight-neighborhood U(a_ij), and σ is the standard deviation of the pixel intensities of the final target segmentation image;
If the metric p_ij of the current pixel is 1, the current pixel is recorded as a peak pixel; otherwise it is background clutter;
Step S305: Normalize the amplitudes of all peak points in the final target segmentation image to obtain the relative target peak amplitudes, where X_j denotes the j-th peak point of the final target segmentation image, V denotes the number of peak points of the final target segmentation image, and a(X_j) denotes the amplitude value of the j-th peak point X_j.
This completes the extraction and normalization of the peak features of the training samples and the test samples.
Step S4: Build the cascaded fusion classifier based on the training samples:
Step S401: Based on the preset number of classes h, perform SVM classifier training on the moment feature vectors of the training samples to obtain h SVM class templates, completing the training of the first-level feature classifier.
Step S402: Based on the preset number of classes h, perform peak matching classifier training on the peak feature vectors of the training samples to obtain the matching templates of the h peak-feature classes, completing the training of the second-level feature classifier.
Step S403: Cascade the first-level and second-level feature classifiers to obtain the cascaded fusion classifier.
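As one possible (not prescribed) realization of steps S401 to S403, the two stages could be trained as follows with scikit-learn: a multi-class SVM with probability outputs as the first-level feature classifier, and one mean peak-feature vector per class as the matching template of the second level. The use of scikit-learn, the class-mean templates and the cosine similarity are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_cascade(moment_feats, peak_feats, labels):
    """Train both stages of the cascaded fusion classifier.

    moment_feats : (N, d_m) moment feature vectors of the training samples
    peak_feats   : (N, d_p) peak feature vectors (e.g. padded/truncated to d_p)
    labels       : (N,) class indices in {0, ..., h-1}
    """
    svm = SVC(probability=True)                 # first-level feature classifier
    svm.fit(moment_feats, labels)

    # second level: one peak-feature template per class (class mean, an assumption)
    classes = np.unique(labels)
    templates = np.stack([peak_feats[labels == c].mean(axis=0) for c in classes])
    return svm, templates

def peak_similarities(peak_vec, templates):
    """Similarity of a test peak-feature vector to each class template (cosine similarity, assumed)."""
    num = templates @ peak_vec
    den = np.linalg.norm(templates, axis=1) * np.linalg.norm(peak_vec) + 1e-12
    return num / den
```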
Step S5: Perform classification and recognition of the test samples based on the trained cascaded fusion classifier:
Step S501: Input the moment feature vectors of the test samples into the first-level feature classifier for preliminary classification. The set of posterior probabilities, output by the first-level feature classifier, that a test sample belongs to each class is denoted P_set = {p_1, p_2, ..., p_h}; the classification confidence of each test sample is therefore expressed as conf(K), where K is the identifier of the test sample and conf(K) is computed from p_K and the set formed by the other elements of P_set (P_set with p_K removed).
For a test sample K, if its classification confidence conf(K) is greater than the given threshold, the recognition result of the first-level feature classifier is output directly, i.e. the class label corresponding to the largest posterior probability in P_set = {p_1, p_2, ..., p_h} is the recognition result of the current test sample.
Otherwise, continue with the secondary classification of the second-level feature classifier, i.e. perform step S502.
Step S502: Input the peak feature vector of the test sample into the second-level feature classifier, and use the posterior probability set of the first-level feature classifier as prior information for the peak feature matching recognition.
To increase the difference between the extracted peak feature similarities, the set of similarities between the test sample and the selected peak class templates is denoted G = {g_1, g_2, ..., g_h}.
A difference-enhancing transform is applied to the target similarities g_i output by the peak matching classifier, giving the transformed similarities g'_i, where k is the label of the mutually exclusive feature similarity in the peak feature similarity set G.
Step S503: Take the posterior probability p_i of the SVM classifier as the decision value of the first-level feature classifier and the similarity g'_i of the peak matching classifier as the decision value of the second-level feature classifier; the sum of the two recognition metrics gives the classification recognition metric D_i of the cascaded fusion classifier:
D_i = p_i + g'_i   (9)
The class corresponding to the maximum of the h values D_i is judged to be the class of the current test sample.
In summary, the SAR image automatic target recognition method based on multi-source feature fusion of the embodiment of the invention still guarantees a high recognition rate when the target exhibits various attitude variations.

Claims (1)

1. A SAR image automatic target recognition method based on multi-source feature fusion, characterized by comprising the following steps:
Step S1: Input the original SAR images of different targets as the training sample set, and preprocess the training sample set:
S101: Based on target three-dimensional contour simulation data, build a three-dimensional shape model of the target in the original SAR image, and project the three-dimensional model onto a two-dimensional plane to obtain the two-dimensional image f(x, y) in Cartesian coordinates; standardize the image f(x, y) to obtain the standardized image f(m, n), and compute the polar-coordinate image of f(m, n) to obtain the model projection polar-coordinate image f(r, θ);
S102: Binarize the original SAR image, then perform edge detection to obtain an edge image, and transform the edge image from Cartesian to polar coordinates to obtain the SAR polar-coordinate image f'(r', θ');
S103: Perform target slicing on the original SAR image to obtain the original SAR image target slice;
Step S2: Using the cosine Fourier moment feature extraction method, perform moment feature extraction on the model projection polar-coordinate image f(r, θ) and the SAR polar-coordinate image f'(r', θ') of the training sample set, respectively, to obtain the moment features of the training samples;
Step S3: Perform peak feature extraction on the original SAR image target slices of the training sample set:
Step S301: Perform target and background detection on the original SAR image target slice:
Substituting the Rayleigh distribution f(z) = (z / b_s^2) exp(-z^2 / (2 b_s^2)) into the constant false alarm rate detection operator gives p_FA = ∫_T^∞ f(z) dz = exp(-T^2 / (2 b_s^2)), where b_s is the Rayleigh shape parameter and z denotes the noise intensity; solving for p_FA gives T^2 = -2 b_s^2 ln p_FA,
so that the threshold T = b_s √(-2 ln p_FA) is obtained;
Segment the target from the background in the original SAR image target slice based on the threshold T: if the local center pixel x_c of the original SAR image target slice satisfies x_c > T, then x_c is a target pixel; otherwise x_c is a background pixel; the target pixels of the original SAR image target slice form the target segmentation image;
Step S302: Apply a closing filter to the target segmentation image of step S301 with an ω × ω rectangular morphological filter, then apply a counting filter to the closed target segmentation image, rejecting every center pixel whose peak-pixel filling rate within the filter window structure is not greater than τ%, to obtain the counting filter result, where the value range of τ is 15 to 25;
Set the non-zero values of the counting-filtered target segmentation image to 1 and all other values to 0 to obtain a mask template; multiply the mask template with the original SAR image to obtain the final target segmentation image;
Step S303: Perform peak feature extraction on the final target segmentation image obtained in step S302:
Compute the metric p_ij for each pixel of the final target segmentation image:
p_ij = 1 if min(a_ij - a_{m,n}) > σ for a_{m,n} ∈ U(a_ij), and p_ij = 0 otherwise,
where the subscripts i, j are the coordinate labels of the pixel, a_ij is the value of the current pixel, U(a_ij) is the neighborhood centered on a_ij, a_{m,n} is a single pixel value in the neighborhood U(a_ij), and σ is the standard deviation of the pixel intensities of the final target segmentation image;
If the metric p_ij of the current pixel is 1, the current pixel is recorded as a peak point; otherwise it is background clutter;
Step S304: Normalize the amplitudes of all peak points extracted from the final target segmentation image to obtain the relative target peak amplitudes, where X_j denotes the j-th peak point of the final target segmentation image, V denotes the number of peak points of the final target segmentation image, and a(X_j) denotes the amplitude of the j-th peak point X_j;
Step S4: Establish the cascaded fusion classifier:
Step S401: Based on the preset number of classes h, perform SVM classifier training on the moment feature vectors of the training samples to obtain h SVM class templates, completing the training of the first-level feature classifier;
Step S402: Based on the preset number of classes h, perform peak matching classifier training on the peak feature vectors of the training samples to obtain the matching templates of the h peak-feature classes, completing the training of the second-level feature classifier;
Step S403: Cascade the first-level and second-level feature classifiers to obtain the cascaded fusion classifier;
Step S5: Input the SAR image to be identified, perform feature extraction, and complete the target recognition processing:
Step S501: Preprocess the SAR image to be identified:
Using the same processing as step S101, obtain the model projection polar-coordinate image f(r, θ) of the SAR image to be identified; using the same processing as step S102, obtain the SAR polar-coordinate image f'(r', θ') of the SAR image to be identified;
Perform target slicing on the SAR image to be identified to obtain the target slice of the SAR image to be identified;
Step S502: Using the cosine Fourier moment feature extraction method, perform moment feature extraction on the model projection polar-coordinate image f(r, θ) and the SAR polar-coordinate image f'(r', θ') of the SAR image to be identified, respectively, to obtain the moment features of the SAR image to be identified;
Using the peak feature extraction of step S3, obtain the peak features of the SAR image to be identified;
Step S503: Input the moment features of the SAR image to be identified into the first-level feature classifier to obtain the set of posterior probabilities of belonging to each class, P_set = {p_1, p_2, ..., p_h};
Based on the posterior probability set P_set, compute the classification confidence conf(K) of the SAR image to be identified from p_K and the set formed by the other elements of P_set (P_set with p_K removed), where i = 1, ..., h and K is the identifier of the SAR image to be identified;
When the classification confidence conf(K) of the SAR image to be identified is greater than the confidence threshold, take the class corresponding to the largest posterior probability in the posterior probability set P_set as the recognition result of the current object to be identified; otherwise perform step S504;
Step S504: Use the posterior probability set P_set as prior information for the second-level feature classifier, and perform peak matching classification on the peak features of the SAR image to be identified to obtain the set of target similarities of belonging to each class, G = {g_1, g_2, ..., g_h};
Apply the difference-enhancing transform to the target peak matching similarities g_i to obtain the transformed target similarities g'_i, where k is the label of the mutually exclusive feature similarity in the peak feature similarity set G and i = 1, ..., h;
Step S505: The sum of the posterior probability p_i and the transformed target similarity g'_i gives the classification decision value D_i of the cascaded fusion classifier, i.e. D_i = p_i + g'_i, where i = 1, ..., h;
The class corresponding to the maximum of the h values D_i is the class of the currently input SAR image to be identified.
CN201710312180.0A 2017-05-05 2017-05-05 SAR image automatic target recognition method based on multi-source feature fusion Expired - Fee Related CN107239740B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710312180.0A CN107239740B (en) 2017-05-05 2017-05-05 SAR image automatic target recognition method based on multi-source feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710312180.0A CN107239740B (en) 2017-05-05 2017-05-05 SAR image automatic target recognition method based on multi-source feature fusion

Publications (2)

Publication Number Publication Date
CN107239740A true CN107239740A (en) 2017-10-10
CN107239740B CN107239740B (en) 2019-11-05

Family

ID=59984739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710312180.0A Expired - Fee Related CN107239740B (en) 2017-05-05 2017-05-05 SAR image automatic target recognition method based on multi-source feature fusion

Country Status (1)

Country Link
CN (1) CN107239740B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102081791A (en) * 2010-11-25 2011-06-01 西北工业大学 SAR (Synthetic Aperture Radar) image segmentation method based on multi-scale feature fusion
CN104331707A (en) * 2014-11-02 2015-02-04 西安电子科技大学 Polarized SAR (synthetic aperture radar) image classification method based on depth PCA (principal component analysis) network and SVM (support vector machine)
CN104680183A (en) * 2015-03-14 2015-06-03 西安电子科技大学 SAR target identification method based on scattering point and K-center one-class classifier
CN105842694A (en) * 2016-03-23 2016-08-10 中国电子科技集团公司第三十八研究所 FFBP SAR imaging-based autofocus method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Changjun et al.: "Application of three-dimensional models in SAR image automatic target recognition", Abstracts of the 2016 National Doctoral Academic Forum on Aeronautical Science and Technology *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830242A (en) * 2018-06-22 2018-11-16 北京航空航天大学 SAR image targets in ocean classification and Detection method based on convolutional neural networks
CN109409286A (en) * 2018-10-25 2019-03-01 哈尔滨工程大学 Ship target detection method based on the enhancing training of pseudo- sample
CN109816634A (en) * 2018-12-29 2019-05-28 歌尔股份有限公司 Detection method, model training method, device and equipment
CN109816634B (en) * 2018-12-29 2023-07-11 歌尔股份有限公司 Detection method, model training method, device and equipment
CN110148146A (en) * 2019-05-24 2019-08-20 重庆大学 A kind of plant leaf blade dividing method and system using generated data
CN110210403B (en) * 2019-06-04 2022-10-14 电子科技大学 SAR image target identification method based on feature construction
CN110210403A (en) * 2019-06-04 2019-09-06 电子科技大学 A kind of SAR image target recognition method based on latent structure
CN112070151A (en) * 2020-09-07 2020-12-11 北京环境特性研究所 Target classification and identification method of MSTAR data image
CN112070151B (en) * 2020-09-07 2023-12-29 北京环境特性研究所 Target classification and identification method for MSTAR data image
CN112800980A (en) * 2021-02-01 2021-05-14 南京航空航天大学 SAR target recognition method based on multi-level features
CN112800980B (en) * 2021-02-01 2021-12-07 南京航空航天大学 SAR target recognition method based on multi-level features
WO2023284698A1 (en) * 2021-07-14 2023-01-19 浙江大学 Multi-target constant false alarm rate detection method based on deep neural network
US12044799B2 (en) 2021-07-14 2024-07-23 Zhejiang University Deep neural network (DNN)-based multi-target constant false alarm rate (CFAR) detection methods
CN113743481A (en) * 2021-08-20 2021-12-03 北京电信规划设计院有限公司 Method and system for identifying human-like image
CN113743481B (en) * 2021-08-20 2024-04-16 北京电信规划设计院有限公司 Method and system for identifying humanized image
CN113591804B (en) * 2021-09-27 2022-02-22 阿里巴巴达摩院(杭州)科技有限公司 Image feature extraction method, computer-readable storage medium, and computer terminal
CN113591804A (en) * 2021-09-27 2021-11-02 阿里巴巴达摩院(杭州)科技有限公司 Image feature extraction method, computer-readable storage medium, and computer terminal
CN114782480A (en) * 2022-03-19 2022-07-22 中国电波传播研究所(中国电子科技集团公司第二十二研究所) Automatic extraction method of vehicle targets in SAR image
CN114782480B (en) * 2022-03-19 2024-04-09 中国电波传播研究所(中国电子科技集团公司第二十二研究所) Automatic extraction method for vehicle targets in SAR image
CN114627089A (en) * 2022-03-21 2022-06-14 成都数之联科技股份有限公司 Defect identification method, defect identification device, computer equipment and computer readable storage medium
CN115034257A (en) * 2022-05-09 2022-09-09 西北工业大学 Cross-modal information target identification method and device based on feature fusion

Also Published As

Publication number Publication date
CN107239740B (en) 2019-11-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191105