CN107491793A - Polarimetric SAR image classification method based on sparse scattering full convolution - Google Patents
Polarimetric SAR image classification method based on sparse scattering full convolution
- Publication number
- CN107491793A CN107491793A CN201710786485.5A CN201710786485A CN107491793A CN 107491793 A CN107491793 A CN 107491793A CN 201710786485 A CN201710786485 A CN 201710786485A CN 107491793 A CN107491793 A CN 107491793A
- Authority
- CN
- China
- Prior art keywords
- sar image
- scattering
- classification
- sparse
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Radar Systems Or Details Thereof (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a polarimetric SAR image classification method based on sparse scattering full convolution. First, the original polarimetric SAR image data to be classified are input; the data are then converted into polarization scattering matrices; sparse scattering encoding is applied to each polarization scattering matrix; the encoded matrices are fed into a fully convolutional network, which is initialized and trained to learn features from the raw image data; finally, classification is performed to obtain the classification result. The invention considers both the complete feature set and the spatial structure of the image, improving the accuracy of polarimetric SAR terrain classification.
Description
Technical field
The invention belongs to the field of polarimetric SAR image processing, and in particular relates to a polarimetric SAR image classification method based on sparse scattering full convolution, usable for feature extraction and terrain classification of polarimetric SAR images.
Background technology
Polarimetric SAR is a high-resolution, active, coherent, multi-channel microwave remote-sensing imaging radar and an important branch of SAR. It operates in all weather, day and night, offers high resolution and side-looking imaging, and is widely applied in military, agricultural, navigation, land-use, geographic-monitoring, and many other fields. Because polarimetric SAR captures richer target information, it has received great attention in remote sensing, and polarimetric SAR image classification, as an important means of interpretation, has become one of the hot research directions in polarimetric SAR information processing.
Existing polarimetric SAR image classification methods can be divided into two stages: feature extraction and classifier design. Classical feature extraction algorithms are based on coherent or incoherent target decomposition. Coherent target decomposition algorithms include Pauli decomposition, sphere-diplane-helix (SDH) decomposition, and Cameron decomposition. Incoherent target decomposition algorithms include Huynen decomposition, Freeman-Durden decomposition, Yamaguchi four-component decomposition, and Cloude-Pottier decomposition. The classifier design stage comprises unsupervised and supervised classification algorithms: unsupervised algorithms mainly include the H/α-Wishart and K-means algorithms, while supervised algorithms mainly include neural networks and support vector machines. A number of semi-supervised classification algorithms have also been proposed recently.
Summary of the invention
In view of the above deficiencies in the prior art, the technical problem to be solved by the present invention is to provide a polarimetric SAR terrain classification method based on a sparse scattering fully convolutional network, which uses the spatial structure information of the image to better represent and learn from the raw data space, extract more effective features for classification, and improve classification accuracy.
The present invention uses following technical scheme:
A polarimetric SAR image classification method based on sparse scattering full convolution: first, the original polarimetric SAR image data to be classified are input; the data are then converted into polarization scattering matrices; sparse scattering encoding is applied to each matrix; the encoded matrices are fed into a fully convolutional network, which is initialized and trained to learn features from the raw image data; finally, classification is performed to obtain the classification result.
Further, the specific steps are as follows:
S1: input the raw polarimetric SAR image data to be classified and encode them into the polarization scattering matrix S;
S2: apply sparse scattering encoding to the polarization scattering matrix to obtain the sparse scattering matrix;
S3: according to the labels in the ground-truth reference map of the polarimetric SAR image, randomly select training samples for each class to form the training set;
S4: initialize the parameters of the fully convolutional network;
S5: split the selected training samples into batches, normalize them to [0.1, 0.9], and train the FCN;
S6: repeat step S5 until the stopping condition is met (a maximum of 2000 iterations in this method) to obtain the FCN model parameters;
S7: predict classes with the trained network;
S8: output the image and compute the classification accuracy.
Further, in step S1, the polarization scattering matrix S is:
$$S=\begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix}$$
where a, b, c, d, e, f, g, h denote the channel values, S_HH = a+bi, S_HV = c+di, S_VH = e+fi, S_VV = g+hi, and i denotes the imaginary unit.
Further, in step S2, the sparse scattering matrix is:
where a, b, c, d, e, f, g, h denote the channel values.
Further, the above encoding operation is defined as follows:
where x denotes the real part, y the imaginary part, and i the imaginary unit.
Further, in step S3, the number of training samples per class obtained by sampling is 512.
Further, step S5 specifically includes the following steps:
S501: during training, determine the gradient δ^(l) of the objective function with respect to the neurons z^(l) of layer l:
$$\delta^{(l)} \triangleq \frac{\partial J(W,b;X,Y)}{\partial z^{(l)}}$$
where J denotes the loss, W the weights, b the bias, X the input data, and Y the labels;
S502: with layer l a convolutional layer and layer l+1 a sub-sampling layer, determine the gradient of the objective function with respect to the bias b^(l) of the k-th feature map of layer l:
$$\frac{\partial J(W,b;X,Y)}{\partial b^{(l,k)}} = \delta^{(l,k)};$$
S503: with layer l a sub-sampling layer and layer l+1 a convolutional layer, determine the gradient of the objective function with respect to the neuron filter W^(l,k,p) of the k-th feature map of layer l:
$$\frac{\partial J(W,b;X,Y)}{\partial W_{i,j}^{(l,k,p)}} = \sum_{i,j}\mathrm{down}\bigl(X^{(l-1,k)}\bigr)\cdot\bigl(\delta^{(l,k)}\bigr)_{i,j}$$
where X denotes the input data, Y the labels, W the weights, b the bias, down denotes down-sampling, · denotes element-wise multiplication, and p denotes the channel.
Further, in step S7, the original test data of the polarimetric SAR image to be classified are sparse-scattering encoded and normalized to [0.1, 0.9], then input to the trained network, which classifies the polarimetric SAR image to be classified and yields the class of each pixel.
Further, step S8 specifically is:
S801: for the pixel classes predicted by the classifier, assign colors using red R, green G, and blue B as the three primary colors according to the three-primary-color method, obtain the colored polarimetric SAR image, and output it;
S802: compare the pixel classes obtained for the polarimetric SAR image with the ground truth, and take the ratio of the number of correctly classified pixels to the total number of pixels as the classification accuracy of the polarimetric SAR image.
Compared with the prior art, the present invention has at least the following beneficial effects:
In the terrain classification method of the present invention based on a sparse scattering fully convolutional network, the original polarimetric SAR image data are first transformed into scattering matrices; sparse scattering encoding is then applied to the scattering matrices; next, the network is initialized and trained to learn better features from the raw image data; finally, classes are predicted and the classification accuracy is computed. The method clearly preserves the spatial structure information of the image and removes classification noise, thereby improving the classification result.
Further, the present invention proposes a sparse scattering encoding scheme specific to polarimetric SAR data and designs the corresponding feature extraction and classification algorithm around it. By combining feature extraction with classifier design, a polarimetric SAR terrain classification method based on a sparse scattering fully convolutional network is proposed; experimental results show that it has good classification performance.
Further, splitting the selected training samples into batches and normalizing them to [0.1, 0.9] before training the FCN accelerates network convergence, so the optimal solution is reached quickly.
Further, predicting classes with the trained network involves only forward propagation, without differentiation, so computation is fast and efficient, which helps evaluate the algorithm's performance.
Further, for the pixel classes predicted by the classifier, taking the ratio of the number of correctly classified pixels to the total number of pixels as the classification accuracy of the polarimetric SAR image quantifies the classification result, making it easier to judge.
The technical scheme of the present invention is described in further detail below through the drawings and embodiments.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the pseudo-color image generated from the polarimetric SAR data;
Fig. 3 is the true ground-object distribution reference map;
Fig. 4 is the classification result of the Wishart classifier method;
Fig. 5 is the classification result of the method based on Cloude and Freeman decompositions with convolutional neural network classification;
Fig. 6 is the classification result of the present invention.
Embodiment
The invention provides a polarimetric SAR image classification method based on sparse scattering full convolution: first, the original polarimetric SAR image data to be classified are input; the data are then converted into polarization scattering matrices; sparse scattering encoding is applied to each polarization scattering matrix; the encoded matrices are fed into a fully convolutional network for classification, yielding the classification result. Compared with existing methods, the present invention considers both the complete feature set and the spatial structure of the image, clearly improves polarimetric SAR terrain classification accuracy, and solves the problems of incomplete feature extraction and loss of image spatial structure in existing polarimetric SAR terrain classification methods.
Referring to Fig. 1, the specific steps of the polarimetric SAR image classification method based on sparse scattering full convolution of the present invention are as follows:
S1: input the raw polarimetric SAR image data to be classified and encode them into the polarization scattering matrix.
The raw polarimetric SAR image data have eight channels in total. Considering the eight channel values of one pixel, denoted a through h, and letting the polarization scattering matrix be S, we have:
$$S=\begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix}$$
where the complex matrix elements are S_HH = a+bi, S_HV = c+di, S_VH = e+fi, S_VV = g+hi,
with i the imaginary unit; this yields the polarization scattering matrix S.
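Step S1 can be sketched as follows. This is a hypothetical illustration, not the patent's code; the function name `scattering_matrix` and the example channel values are made up.

```python
import numpy as np

# Hypothetical sketch (not the patent's code): assemble the 2x2 polarization
# scattering matrix S of one pixel from its eight real channel values a..h,
# following S_HH = a+bi, S_HV = c+di, S_VH = e+fi, S_VV = g+hi.
def scattering_matrix(a, b, c, d, e, f, g, h):
    return np.array([[complex(a, b), complex(c, d)],
                     [complex(e, f), complex(g, h)]])

# example pixel with made-up channel values
S = scattering_matrix(1.0, -2.0, 0.5, 0.0, 0.5, 0.0, 3.0, 1.5)
```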
S2: apply sparse scattering encoding to the polarization scattering matrix to obtain the sparse scattering matrix.
Sparse scattering encoding is proposed herein. First assume the following encoding operation, illustrated by the formula below:
the first row stores the positions of positive values and the second row the positions of negative values, for which the absolute value is taken; the first column stores the real part and the second column the imaginary part. The whole mapping, with the symbol below denoting this encoding operation, can thus be represented as follows.
For the scattering matrix S, the sparse scattering encoding process can then be represented as follows:
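A minimal sketch of this encoding, under my reading of the description (each complex entry becomes a 2x2 block: first column real part, second column imaginary part; positive values go in the first row, absolute values of negative ones in the second). The names `phi` and `sparse_scatter_encode` are hypothetical:

```python
import numpy as np

def phi(z):
    """Sketch of the encoding operation: x+yi -> 2x2 block where column 0
    stores the real part and column 1 the imaginary part; row 0 holds a
    positive value, row 1 the absolute value of a negative one, so at most
    one entry per column is nonzero (hence "sparse")."""
    out = np.zeros((2, 2))
    for col, v in enumerate((z.real, z.imag)):
        if v > 0:
            out[0, col] = v
        elif v < 0:
            out[1, col] = abs(v)
    return out

def sparse_scatter_encode(S):
    # apply phi element-wise to the 2x2 scattering matrix -> 4x4 sparse matrix
    return np.block([[phi(S[i, j]) for j in range(2)] for i in range(2)])

# example: encode a made-up scattering matrix
E = sparse_scatter_encode(np.array([[1 - 2j, 3 + 0j], [0 + 1j, -1 + 1j]]))
```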
S3: according to the labels in the ground-truth reference map of the polarimetric SAR image, randomly select training samples for each class to form the training set.
In the present invention, the number of training samples sampled per class is 512.
S4: initialize the parameters of the fully convolutional network (Fully Convolutional Network, FCN).
S5: split the selected training samples into batches, normalize them to [0.1, 0.9], and train the FCN.
S501: during training, the gradient of the objective function with respect to the neurons z^(l) of layer l is
$$\delta^{(l)} \triangleq \frac{\partial J(W,b;X,Y)}{\partial z^{(l)}}.$$
S502: convolutional-layer gradient. Assume layer l is a convolutional layer and layer l+1 a sub-sampling layer.
Because the sub-sampling layer performs down-sampling, the error of one neuron of layer l+1 corresponds to a region of the corresponding feature map of the convolutional layer (the previous layer). Each neuron in the k-th feature map of layer l is connected by one edge to a neuron in the k-th feature map of layer l+1. By the chain rule, the error term δ^(l,k) of a feature map of layer l is obtained by up-sampling the error term δ^(l+1,k) of the corresponding feature map of layer l+1, multiplying it element-wise by the partial derivative of the activation of layer l, and then multiplying by the weight w^(l+1,k).
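The verbal derivation above is the standard back-propagation rule through a sub-sampling layer; written out (with f_l the activation function of layer l, ⊙ element-wise multiplication, and up(·) the up-sampling operator), a consistent reading is the following. The patent's own equation image is not reproduced here, so treat this as a reconstruction:

```latex
\delta^{(l,k)}
  = \frac{\partial J(W,b;X,Y)}{\partial z^{(l,k)}}
  = w^{(l+1,k)} \left( f'_l\!\left(z^{(l,k)}\right) \odot
      \mathrm{up}\!\left(\delta^{(l+1,k)}\right) \right)
```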
The specific derivation of the error term δ^(l,k) of the k-th feature map of layer l is as follows,
where Z denotes the output of the upper layer, X the input data, Y the labels, J the loss, b the bias, and W the weights.
Having obtained the error term δ^(l,k) of the k-th feature map of layer l, the gradient of the objective function with respect to the neuron filter W^(l,k,p) of the k-th feature map of layer l is,
where Z denotes the output of the upper layer, X the input data, Y the labels, wt the width of the kernel, b the bias, W the weights, r, j the weight indices, ht the height of the kernel, p the channel, s-i+u and t-j+v the positions, s the row index, and t the column index.
The gradient of the objective function with respect to the bias b^(l) of the k-th feature map of layer l can be written as
$$\frac{\partial J(W,b;X,Y)}{\partial b^{(l,k)}} = \delta^{(l,k)}.$$
S503: sub-sampling-layer gradient. Assume layer l is a sub-sampling layer and layer l+1 a convolutional layer. Because the sub-sampling layer performs down-sampling, the error term of one neuron of layer l+1 corresponds to a region of the corresponding feature map of the convolutional layer (the previous layer).
The gradient of the objective function with respect to the neuron filter of the k-th feature map of layer l can thus be written as
$$\frac{\partial J(W,b;X,Y)}{\partial W_{i,j}^{(l,k,p)}} = \sum_{i,j}\mathrm{down}\bigl(X^{(l-1,k)}\bigr)\cdot\bigl(\delta^{(l,k)}\bigr)_{i,j}$$
where X denotes the input data, Y the labels, W the weights, b the bias, down denotes down-sampling, · denotes element-wise multiplication, and p denotes the channel.
S6: repeat step S5 until the stopping condition is met (a maximum of 2000 iterations in this method) to obtain the FCN model parameters.
S7: predict classes with the trained network.
The original test data of the polarimetric SAR image to be classified are sparse-scattering encoded and normalized to [0.1, 0.9], then input to the trained network; the hidden-layer features are jointly represented, and these features are fed to the trained classifier, which classifies the polarimetric SAR image to be classified and yields the class of each pixel.
S8: output the image and compute the classification accuracy.
S801: for the pixel classes predicted by the classifier, assign colors using R (red), G (green), and B (blue) as the three primary colors according to the three-primary-color method, obtain the colored polarimetric SAR image, and output it;
S802: compare the pixel classes obtained for the polarimetric SAR image with the ground truth, and take the ratio of the number of correctly classified pixels to the total number of pixels as the classification accuracy of the polarimetric SAR image.
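Steps S801/S802 amount to palette lookup plus a pixel-agreement ratio, as this hypothetical sketch shows (the palette colors and function names are made up):

```python
import numpy as np

def colorize(pred, palette):
    # S801: map every pixel's predicted class index to an RGB triple
    return palette[pred]

def classification_accuracy(pred, truth):
    # S802: ratio of pixels whose predicted class matches the reference map
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float((pred == truth).sum()) / truth.size

palette = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)
img = colorize(np.array([[0, 1], [2, 1]]), palette)
acc = classification_accuracy([0, 1, 2, 2], [0, 1, 1, 2])
```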
Embodiment
1. Experimental conditions and methods
Hardware platform: Titan X 16GB, 64GB RAM;
Software platform: Ubuntu 16.04.2, TensorFlow;
Experimental methods: the method of the present invention is compared with two existing methods: the Wishart classifier, and feature extraction by Cloude and Freeman decompositions followed by convolutional neural network classification; both existing methods are classical in polarimetric SAR image classification.
2. Simulation content and results
In the simulation experiment, Fig. 2 is the pseudo-color image generated from the polarimetric SAR data, from which the true ground-object distribution can be seen. Fig. 3 is the manually labelled ground-truth reference map, used for training and for evaluating algorithm performance. According to Fig. 3, 512 training samples are randomly selected per class, and the remaining samples form the test set for computing accuracy; the per-class accuracies and the overall accuracy serve as the evaluation indices.
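The 512-per-class train/test split can be sketched as follows; the function name and the tiny label map are hypothetical (3 samples per class here for brevity instead of 512):

```python
import numpy as np

def sample_per_class(labels, n_per_class, seed=0):
    """Randomly pick n_per_class pixel indices from every class of the
    reference map; the remaining indices form the test set (a sketch of
    the experiment's sampling scheme, not the actual code)."""
    rng = np.random.default_rng(seed)
    train_idx = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        train_idx.extend(rng.choice(idx, size=n_per_class, replace=False))
    train_idx = np.array(sorted(train_idx))
    test_idx = np.setdiff1d(np.arange(labels.size), train_idx)
    return train_idx, test_idx

labels = np.array([0] * 10 + [1] * 10)
train_idx, test_idx = sample_per_class(labels, 3)
```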
The evaluation results are shown in Table 1, where M1 is the Wishart classifier method, M2 is the method that extracts features by Cloude and Freeman decompositions and classifies them with a convolutional neural network, and M3 is the method of the present invention.
Table 1: per-class and overall classification accuracies obtained in the simulation experiment by the present invention and the two comparison methods
Analysis of experimental results
Fig. 4 shows the classification result of the comparison Wishart classifier method, Fig. 5 the result of the method based on Cloude and Freeman decompositions with convolutional neural network classification, and Fig. 6 the result obtained by the present invention; the statistics are given in Table 1. It is evident that the result in Fig. 6 is better than those of the other two methods: its regions are more uniform, with less noise, every per-class accuracy exceeds both comparison methods, and the overall accuracy is significantly improved. Although the region edges in the Wishart classifier result of Fig. 4 are relatively smooth, it suffers from serious misclassification and many scattered errors; the result of the Cloude and Freeman decomposition with convolutional neural network method in Fig. 5 is improved but loses part of the image's detail information.
In summary, the polarimetric SAR terrain classification method based on a sparse scattering fully convolutional network proposed by the present invention can significantly preserve the spatial structure information of the image and remove classification noise, thereby improving the classification result.
Claims (9)
- 1. A polarimetric SAR image classification method based on sparse scattering full convolution, characterized in that: first, the original polarimetric SAR image data to be classified are input; the data are then converted into polarization scattering matrices; sparse scattering encoding is applied to each polarization scattering matrix; the encoded matrices are fed into a fully convolutional network, which is initialized and trained to learn features from the raw image data; finally, classification is performed to obtain the classification result.
- 2. The polarimetric SAR image classification method based on sparse scattering full convolution according to claim 1, characterized in that the specific steps are as follows:
S1: input the raw polarimetric SAR image data to be classified and encode them into the polarization scattering matrix S;
S2: apply sparse scattering encoding to the polarization scattering matrix to obtain the sparse scattering matrix;
S3: according to the labels in the ground-truth reference map of the polarimetric SAR image, randomly select training samples for each class to form the training set;
S4: initialize the parameters of the fully convolutional network;
S5: split the selected training samples into batches, normalize them to [0.1, 0.9], and train the FCN;
S6: repeat step S5 until the stopping condition is met (a maximum of 2000 iterations in this method) to obtain the FCN model parameters;
S7: predict classes with the trained network;
S8: output the image and compute the classification accuracy.
- 3. The polarimetric SAR image classification method based on sparse scattering full convolution according to claim 2, characterized in that, in step S1, the polarization scattering matrix S is:
$$S=\begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix}$$
where a, b, c, d, e, f, g, h denote the channel values, S_HH = a+bi, S_HV = c+di, S_VH = e+fi, S_VV = g+hi, and i denotes the imaginary unit.
- 4. The polarimetric SAR image classification method based on sparse scattering full convolution according to claim 2, characterized in that, in step S2, the sparse scattering matrix is as defined, where a, b, c, d, e, f, g, h denote the channel values.
- 5. The polarimetric SAR image classification method based on sparse scattering full convolution according to claim 4, characterized in that the above encoding operation is as defined, where x denotes the real part, y the imaginary part, and i the imaginary unit.
- 6. The polarimetric SAR image classification method based on sparse scattering full convolution according to claim 2, characterized in that, in step S3, the number of training samples per class obtained by sampling is 512.
- 7. The polarimetric SAR image classification method based on sparse scattering full convolution according to claim 2, characterized in that step S5 specifically includes the following steps:
S501: during training, determine the gradient δ^(l) of the objective function with respect to the neurons z^(l) of layer l:
$$\delta^{(l)} \triangleq \frac{\partial J(W,b;X,Y)}{\partial z^{(l)}}$$
where J denotes the loss, W the weights, b the bias, X the input data, and Y the labels;
S502: with layer l a convolutional layer and layer l+1 a sub-sampling layer, determine the gradient of the objective function with respect to the bias b^(l) of the k-th feature map of layer l:
$$\frac{\partial J(W,b;X,Y)}{\partial b^{(l,k)}} = \delta^{(l,k)};$$
S503: with layer l a sub-sampling layer and layer l+1 a convolutional layer, determine the gradient of the objective function with respect to the neuron filter W^(l,k,p) of the k-th feature map of layer l:
$$\frac{\partial J(W,b;X,Y)}{\partial W_{i,j}^{(l,k,p)}} = \sum_{i,j}\mathrm{down}\bigl(X^{(l-1,k)}\bigr)\cdot\bigl(\delta^{(l,k)}\bigr)_{i,j}$$
where X denotes the input data, Y the labels, W the weights, b the bias, down denotes down-sampling, · denotes element-wise multiplication, and p denotes the channel.
- 8. The polarimetric SAR image classification method based on sparse scattering full convolution according to claim 1, characterized in that, in step S7, the original test data of the polarimetric SAR image to be classified are sparse-scattering encoded and normalized to [0.1, 0.9], then input to the trained network, which classifies the polarimetric SAR image to be classified and yields the class of each pixel.
- 9. The polarimetric SAR image classification method based on sparse scattering full convolution according to claim 1, characterized in that step S8 specifically is:
S801: for the pixel classes predicted by the classifier, assign colors using red R, green G, and blue B as the three primary colors according to the three-primary-color method, obtain the colored polarimetric SAR image, and output it;
S802: compare the pixel classes obtained for the polarimetric SAR image with the ground truth, and take the ratio of the number of correctly classified pixels to the total number of pixels as the classification accuracy of the polarimetric SAR image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710786485.5A CN107491793B (en) | 2017-09-04 | 2017-09-04 | Polarized SAR image classification method based on sparse scattering complete convolution |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710786485.5A CN107491793B (en) | 2017-09-04 | 2017-09-04 | Polarized SAR image classification method based on sparse scattering complete convolution |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107491793A true CN107491793A (en) | 2017-12-19 |
CN107491793B CN107491793B (en) | 2020-05-01 |
Family
ID=60651540
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710786485.5A Active CN107491793B (en) | 2017-09-04 | 2017-09-04 | Polarized SAR image classification method based on sparse scattering complete convolution |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107491793B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108564006A (en) * | 2018-03-26 | 2018-09-21 | 西安电子科技大学 | Based on the polarization SAR terrain classification method from step study convolutional neural networks |
CN108846426A (en) * | 2018-05-30 | 2018-11-20 | 西安电子科技大学 | Polarization SAR classification method based on the twin network of the two-way LSTM of depth |
CN110096994A (en) * | 2019-04-28 | 2019-08-06 | 西安电子科技大学 | A kind of small sample PolSAR image classification method based on fuzzy label semanteme priori |
CN112206063A (en) * | 2020-09-01 | 2021-01-12 | 广东工业大学 | Multi-mode multi-angle dental implant registration method |
CN112560966A (en) * | 2020-12-18 | 2021-03-26 | 西安电子科技大学 | Polarimetric SAR image classification method, medium and equipment based on scattergram convolution network |
CN113627480A (en) * | 2021-07-09 | 2021-11-09 | 武汉大学 | Polarized SAR image classification method based on reinforcement learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105913076A (en) * | 2016-04-07 | 2016-08-31 | 西安电子科技大学 | Polarimetric SAR image classification method based on depth direction wave network |
CN106096652A (en) * | 2016-06-12 | 2016-11-09 | 西安电子科技大学 | Based on sparse coding and the Classification of Polarimetric SAR Image method of small echo own coding device |
CN106934419A (en) * | 2017-03-09 | 2017-07-07 | 西安电子科技大学 | Classification of Polarimetric SAR Image method based on plural profile ripple convolutional neural networks |
-
2017
- 2017-09-04 CN CN201710786485.5A patent/CN107491793B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105913076A (en) * | 2016-04-07 | 2016-08-31 | 西安电子科技大学 | Polarimetric SAR image classification method based on depth direction wave network |
CN106096652A (en) * | 2016-06-12 | 2016-11-09 | 西安电子科技大学 | Based on sparse coding and the Classification of Polarimetric SAR Image method of small echo own coding device |
CN106934419A (en) * | 2017-03-09 | 2017-07-07 | 西安电子科技大学 | Classification of Polarimetric SAR Image method based on plural profile ripple convolutional neural networks |
Non-Patent Citations (2)
Title |
---|
BIAO HOU ET.AL: "Classification of Polarimetric SAR Images Using Multilayer Autoencoders and Superpixels", 《IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING》 * |
WANG Xuesong et al.: "Research progress and prospects of high-performance detection imaging and recognition", 《SCIENTIA SINICA INFORMATIONIS (中国科学: 信息科学)》 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108564006A (en) * | 2018-03-26 | 2018-09-21 | 西安电子科技大学 | Polarized SAR terrain classification method based on self-learning convolutional neural network |
CN108564006B (en) * | 2018-03-26 | 2021-10-29 | 西安电子科技大学 | Polarized SAR terrain classification method based on self-learning convolutional neural network |
CN108846426A (en) * | 2018-05-30 | 2018-11-20 | 西安电子科技大学 | Polarization SAR classification method based on deep bidirectional LSTM twin network |
CN108846426B (en) * | 2018-05-30 | 2022-01-11 | 西安电子科技大学 | Polarization SAR classification method based on deep bidirectional LSTM twin network |
CN110096994A (en) * | 2019-04-28 | 2019-08-06 | 西安电子科技大学 | Small-sample PolSAR image classification method based on fuzzy label semantic prior |
CN112206063A (en) * | 2020-09-01 | 2021-01-12 | 广东工业大学 | Multi-mode multi-angle dental implant registration method |
CN112560966A (en) * | 2020-12-18 | 2021-03-26 | 西安电子科技大学 | Polarimetric SAR image classification method, medium and equipment based on scattergram convolution network |
CN112560966B (en) * | 2020-12-18 | 2023-09-15 | 西安电子科技大学 | Polarized SAR image classification method, medium and equipment based on scattering map convolution network |
CN113627480A (en) * | 2021-07-09 | 2021-11-09 | 武汉大学 | Polarized SAR image classification method based on reinforcement learning |
CN113627480B (en) * | 2021-07-09 | 2023-08-08 | 武汉大学 | Polarization SAR image classification method based on reinforcement learning |
Also Published As
Publication number | Publication date |
---|---|
CN107491793B (en) | 2020-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107491793A (en) | Polarimetric SAR image classification method based on scattering sparse fully convolutional network | |
CN113159051B (en) | Remote sensing image lightweight semantic segmentation method based on edge decoupling | |
CN107292317B (en) | Polarization SAR classification method based on shallow feature and T matrix deep learning | |
CN104077599B (en) | Polarization SAR image classification method based on deep neural network | |
CN104156728B (en) | Polarized SAR image classification method based on stacked code and softmax | |
CN104123555B (en) | Super-pixel polarimetric SAR land feature classification method based on sparse representation | |
CN110852227A (en) | Hyperspectral image deep learning classification method, device, equipment and storage medium | |
CN104331707A (en) | Polarized SAR (synthetic aperture radar) image classification method based on depth PCA (principal component analysis) network and SVM (support vector machine) | |
CN107563422A (en) | Polarimetric SAR classification method based on semi-supervised convolutional neural networks | |
CN108764173A (en) | Hyperspectral image classification method based on multi-class generative adversarial networks | |
CN105913076A (en) | Polarimetric SAR image classification method based on deep directionlet network | |
CN106023065A (en) | Tensor hyperspectral image spectrum-space dimensionality reduction method based on deep convolutional neural network | |
CN108846426A (en) | Polarization SAR classification method based on deep bidirectional LSTM twin network | |
CN109145992A (en) | Hyperspectral image classification method based on cooperative generative adversarial network and joint spatial-spectral features | |
CN105046268B (en) | Polarimetric SAR image classification method based on Wishart deep networks | |
CN108133173A (en) | Polarimetric SAR image classification method based on semi-supervised ladder network | |
CN106096652A (en) | Polarimetric SAR image classification method based on sparse coding and wavelet autoencoder | |
CN106960415A (en) | Image restoration method based on pixel-recursive super-resolution model | |
CN107844751A (en) | Classification method for hyperspectral remote sensing images based on guided filtering and long short-term memory neural networks | |
CN104318246A (en) | Polarimetric SAR (synthetic aperture radar) image classification based on deep adaptive ridgelet network | |
CN105069478A (en) | Hyperspectral remote sensing surface feature classification method based on superpixel-tensor sparse coding | |
CN106203444A (en) | Polarimetric SAR image classification method based on bandelet transform and convolutional neural networks | |
CN105894018A (en) | Polarized SAR image classification method based on deep multi-instance learning | |
CN105894013A (en) | Method for classifying polarized SAR image based on CNN and SMM | |
CN105160353A (en) | Polarimetric SAR data ground object classification method based on multiple feature sets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||