CN107491793B - Polarized SAR image classification method based on sparse scattering complete convolution - Google Patents
- Publication number
- CN107491793B (Application No. CN201710786485.5A)
- Authority
- CN
- China
- Legal status: Active
Classifications
- G06F18/24 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Classification techniques
- G06N3/045 — Physics; Computing arrangements based on specific computational models; Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
Abstract
The invention discloses a polarized SAR image classification method based on sparse scattering complete convolution. The method first inputs the original polarized SAR image data to be classified, converts the data into a polarization scattering matrix, and applies sparse scattering coding to that matrix. The coded matrix is then fed into a full convolution network, which is initialized and trained to learn features from the original image data; classification finally yields the classification result. Because the method considers all channel features together with the spatial structure of the image, it improves the terrain classification accuracy of polarized SAR images.
Description
Technical Field
The invention belongs to the technical field of polarized SAR image processing, and particularly relates to a polarized SAR image classification method based on sparse scattering complete convolution, which can be used for feature extraction and terrain classification of polarized SAR images.
Background
Polarized SAR is an important branch of SAR. It works in all weather and at all times, offers high resolution and side-looking imaging, and is widely applied in military affairs, agriculture, navigation, land use, geographic monitoring, and other fields. Because polarized SAR acquires richer target information, it has attracted great attention in the remote sensing field, and polarized SAR image classification, as an important interpretation step, has become a hot research direction in polarized SAR information processing.
Existing polarized SAR image classification methods can be divided into two stages: a feature extraction stage and a classifier design stage. Classical feature extraction algorithms include coherent target decomposition and incoherent target decomposition; coherent target decomposition algorithms include Pauli decomposition, sphere-diplane-helix (SDH) decomposition, Cameron decomposition, and others.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, addressing the defects of the prior art, a polarized SAR image terrain classification method based on a sparse scattering full convolution network, in which the spatial structure information of the image is fully exploited to better represent and learn the original data space, so that more effective features are extracted for classification and the classification accuracy is improved.
The invention adopts the following technical scheme:
A polarized SAR image classification method based on sparse scattering complete convolution first inputs the original polarized SAR image data to be classified; the data is then converted into a polarization scattering matrix; sparse scattering coding is then applied to the polarization scattering matrix; the matrix obtained by sparse scattering coding is input into a full convolution network, the network is initialized and trained to learn features from the original image data, and classification finally yields the classification result.
Further, the method comprises the following specific steps:
S1, inputting the original data of the polarized SAR image to be classified, and encoding it into a polarization scattering matrix S;
S2, performing sparse scattering coding on the polarization scattering matrix S to obtain a sparse scattering matrix;
S3, randomly selecting training samples of each type according to the marked information in the ground feature distribution reference map of the polarized SAR image to obtain a training sample set;
S4, initializing the relevant parameters of the full convolution network;
S5, training the FCN after the selected training samples are divided into batches and normalized to [0.1, 0.9];
S6, repeating step S5 until the termination condition is met (the maximum number of iterations in the method is 2000), obtaining the model parameters of the FCN;
S7, predicting and classifying by using the trained network;
S8, outputting the image and calculating the classification accuracy.
Further, in step S1, the polarization scattering matrix S is specifically:

$$S = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix}$$

wherein a, b, c, d, e, f, g, h represent the channel values, $S_{HH}=a+bi$, $S_{HV}=c+di$, $S_{VH}=e+fi$, $S_{VV}=g+hi$, and $i$ represents the imaginary unit.
Further, in step S2, the sparse scattering matrix is obtained by applying, to each element $x+yi$ of S, the encoding

$$f(x+yi) = \begin{bmatrix} \max(x,0) & \max(y,0) \\ \max(-x,0) & \max(-y,0) \end{bmatrix}$$

wherein a, b, c, d, e, f, g, h represent the channel values, x denotes the real part, y denotes the imaginary part, and i denotes the imaginary unit.
Further, in step S3, the number of training samples of each type obtained by sampling is 512.
Further, step S5 specifically includes the following steps:
S501, in the training process, determining the gradient $\delta^{(l)}$ of the objective function with respect to the neurons $z^{(l)}$ of layer $l$ as:

$$\delta^{(l)} = \frac{\partial J(W,b;X,Y)}{\partial z^{(l)}}$$

wherein J represents the loss, W the weights, b the biases, X the input data, and Y the labels;
S502, taking the convolutional layer as layer $l$ and the sub-sampling layer as layer $l+1$, determining the gradient of the objective function with respect to the bias $b^{(l,k)}$ of the $k$-th feature map of layer $l$ as:

$$\frac{\partial J}{\partial b^{(l,k)}} = \sum_{i,j}\left[\delta^{(l,k)}\right]_{i,j}$$
S503, taking the sub-sampling layer as layer $l$ and layer $l+1$ as a convolutional layer, determining the gradient of the objective function with respect to the filter weight $w^{(l,k)}$ of the $k$-th feature map of layer $l$ as:

$$\frac{\partial J}{\partial w^{(l,k)}} = \sum_{i,j}\left[\delta^{(l,k)} \odot \mathrm{down}\!\left(X^{(l-1,k)}\right)\right]_{i,j}$$

wherein X represents the input data, Y the labels, W the weights, b the bias, down represents down-sampling, $\odot$ represents element-wise (dot) product, and p represents a channel.
Further, in step S7, after the original test data of the polarized SAR image to be classified is sparse-scattering coded and normalized to [0.1, 0.9], it is input into the trained network, which classifies the polarized SAR image to be classified and yields the class of each pixel.
Further, step S8 is specifically:
S801, predicting the class of each classified pixel with the classifier, coloring the result with red (R), green (G) and blue (B) as the three primary colors according to a three-primary-color coloring scheme to obtain a colored polarized SAR image, and outputting it;
S802, comparing the pixel classes obtained for the polarized SAR image with the real ground-object classes, and taking the ratio of the number of correctly classified pixels to the total number of pixels as the classification accuracy of the polarized SAR image.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention relates to a ground feature classification method based on a sparse scattering complete convolution network, which comprises the steps of firstly converting original polarization SAR image data into a scattering matrix, and secondly carrying out coefficient scattering coding on the scattering matrix; then, initializing and training a network, performing better feature learning on the original data of the image, and training the network; and finally, predicting classification and calculating classification accuracy, so that the spatial structure information of the image can be obviously maintained, and classification noise is removed, thereby improving the classification result of the image.
Furthermore, the invention provides a sparse scattering coding scheme dedicated to polarized SAR data, designs a corresponding feature extraction and classification algorithm around that coding, and combines feature extraction with classifier design into a polarized SAR terrain classification method based on a sparse scattering full convolution network; experimental results show that it has good classification performance.
Furthermore, the FCN is trained after the selected training samples are divided into batches and normalized to [0.1,0.9] so as to accelerate the network convergence speed and achieve the optimal solution quickly.
Furthermore, classification is predicted with the trained network, which involves only forward propagation: no gradient computation is needed, so inference is fast and efficient, making it easy to evaluate the algorithm's performance.
Furthermore, the classifier is used to predict the class of each classified pixel, and the ratio of the number of pixels whose class matches the ground truth to the total number of pixels is taken as the classification accuracy of the polarized SAR image; quantifying the classification result in this way allows it to be judged more objectively.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a pseudo-color map generated from the polarized SAR data;
FIG. 3 is a reference map of the real terrain distribution;
fig. 4 is a diagram of a classification result of the Wishart classifier method;
FIG. 5 is a graph of the results of Cloude decomposition and Freeman decomposition and convolutional neural network classification;
FIG. 6 is a diagram of the classification results of the present invention.
Detailed Description
The invention provides a polarized SAR image classification method based on sparse scattering complete convolution: the original polarized SAR image data to be classified is input first; the data is then converted into a polarization scattering matrix; sparse scattering coding is then applied to the polarization scattering matrix; and the matrix obtained by sparse scattering coding is input into a full convolution network and classified to obtain the classification result. Compared with existing methods, this method considers all channel features together with the spatial structure of the image, markedly improves the terrain classification accuracy of polarized SAR images, and addresses the problems that existing polarized SAR terrain classification methods extract incomplete features and fail to preserve the spatial structure of the image.
Referring to fig. 1, the method for classifying a polarized SAR image based on sparse scattering complete convolution according to the present invention includes the following specific steps:
S1, inputting the original data of the polarized SAR image to be classified, and encoding it into a polarization scattering matrix;

The original polarized SAR image data has eight channels. Considering the eight channel values of one pixel, denoted a through h, the polarization scattering matrix S is

$$S = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix}$$

where the complex matrix elements are $S_{HH}=a+bi$, $S_{HV}=c+di$, $S_{VH}=e+fi$, $S_{VV}=g+hi$, and $i$ denotes the imaginary unit, so that the polarization scattering matrix S is obtained.
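As a minimal illustration of step S1, the following NumPy sketch maps the eight channel values a–h of one pixel to the 2×2 complex scattering matrix; the helper name `to_scattering_matrix` is hypothetical, not from the patent:

```python
import numpy as np

def to_scattering_matrix(channels):
    """Map the eight real channel values a..h of one pixel to the 2x2
    complex polarization scattering matrix S, with S_HH = a + bi,
    S_HV = c + di, S_VH = e + fi, S_VV = g + hi."""
    a, b, c, d, e, f, g, h = channels
    return np.array([[a + 1j * b, c + 1j * d],
                     [e + 1j * f, g + 1j * h]])

S = to_scattering_matrix([1.0, -2.0, 0.5, 0.0, 0.5, 0.0, -3.0, 4.0])
```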
S2, carrying out sparse scattering coding on the polarized scattering matrix to obtain a sparse scattering matrix;
The sparse scattering coding proposed here performs the following encoding operation. Schematically, each complex value $x+yi$ is mapped to a 2×2 block in which the first row holds the positive values and the second row the absolute values of the negative ones, while the first column stores the real part and the second column the imaginary part:

$$f(x+yi) = \begin{bmatrix} \max(x,0) & \max(y,0) \\ \max(-x,0) & \max(-y,0) \end{bmatrix}$$

Applying this encoding to each element of the scattering matrix S yields the sparse scattering matrix, a real, non-negative 4×4 matrix.
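The encoding above can be sketched as follows; `encode_complex` and `sparse_scattering` are illustrative names, and the block layout follows the positive/negative row convention described in the text:

```python
import numpy as np

def encode_complex(z):
    """Encode one complex value x+yi as a 2x2 non-negative block:
    positive parts in row 0, magnitudes of negative parts in row 1;
    column 0 holds the real part, column 1 the imaginary part."""
    x, y = z.real, z.imag
    return np.array([[max(x, 0.0), max(y, 0.0)],
                     [max(-x, 0.0), max(-y, 0.0)]])

def sparse_scattering(S):
    """Apply the block encoding to each element of the 2x2 complex
    scattering matrix S, giving a real, non-negative 4x4 matrix."""
    blocks = [[encode_complex(S[i, j]) for j in range(2)] for i in range(2)]
    return np.block(blocks)

S = np.array([[1.0 - 2.0j, 0.5 + 0.0j],
              [0.5 + 0.0j, -3.0 + 4.0j]])
S_hat = sparse_scattering(S)
```

Note that at most one entry per row of each 2×2 block is nonzero for the real part (and one for the imaginary part), which is what makes the coded matrix sparse.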
S3, randomly selecting training samples of each class according to the labelled information in the terrain distribution reference map of the polarized SAR image to obtain a training sample set:
the number of each type of training samples obtained by sampling in the invention is 512;
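One possible way to draw the per-class samples; label 0 is assumed to mark unlabelled pixels (the patent does not state the unlabelled convention), and small counts are used here only for illustration:

```python
import numpy as np

def sample_training_set(labels, per_class=512, seed=0):
    """Randomly draw `per_class` pixel indices for every labelled class.

    `labels` is the flattened ground-truth reference map; label 0 is
    assumed to mean 'unlabelled' (an assumption, not from the patent).
    """
    rng = np.random.default_rng(seed)
    picked = []
    for c in np.unique(labels):
        if c == 0:                      # skip unlabelled pixels
            continue
        idx = np.flatnonzero(labels == c)
        picked.append(rng.choice(idx, size=per_class, replace=False))
    return np.concatenate(picked)

labels = np.array([1] * 10 + [2] * 10 + [0] * 5)
train_idx = sample_training_set(labels, per_class=5)
```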
S4, initializing the relevant parameters of the full convolution network (FCN);
S5, training the FCN after the selected training samples are divided into batches and normalized to [0.1, 0.9];
S501, in the training process, the gradient of the objective function with respect to the neurons $z^{(l)}$ of layer $l$ is:

$$\delta^{(l)} = \frac{\partial J(W,b;X,Y)}{\partial z^{(l)}}$$
S502, gradient of the convolutional layer, where the convolutional layer is layer $l$ and the sub-sampling layer is layer $l+1$.

Since the sub-sampling layer performs a down-sampling operation, the error term of one neuron of layer $l+1$ corresponds to a region of the corresponding feature map of the convolutional layer (the previous layer). Each neuron in the $k$-th feature map of layer $l$ is connected by an edge to a neuron in the $k$-th feature map of layer $l+1$. By the chain rule, the error term $\delta^{(l,k)}$ of a feature map of layer $l$ is obtained by up-sampling the error term $\delta^{(l+1,k)}$ of layer $l+1$, multiplying it element-wise by the derivative of the activation of the layer-$l$ features, and multiplying by the weight $w^{(l+1,k)}$:

$$\delta^{(l,k)} = w^{(l+1,k)}\left(f'\!\left(z^{(l,k)}\right) \odot \mathrm{up}\!\left(\delta^{(l+1,k)}\right)\right)$$

where Z denotes the upper-layer output, X the input data, Y the labels, J the loss, b the bias, and W the weights.
From the error term $\delta^{(l,k)}$ of the $k$-th feature map of layer $l$, the gradient with respect to the filter $W^{(l,k,p)}$ of the $k$-th feature map of layer $l$ is:

$$\left[\frac{\partial J}{\partial W^{(l,k,p)}}\right]_{u,v} = \sum_{i}\sum_{j} \delta^{(l,k)}_{i,j}\, X^{(l-1,p)}_{s,t}, \qquad s=i+u,\; t=j+v$$

wherein Z represents the upper-layer output, X the input data, Y the labels, $w_t$ the width of the kernel, b the bias, W the weights, r and j weight indices, $h_t$ the height of the kernel, p the channel, s the row index, and t the column index.
The gradient of the objective function with respect to the bias $b^{(l,k)}$ of the $k$-th feature map of layer $l$ can be written as:

$$\frac{\partial J}{\partial b^{(l,k)}} = \sum_{i,j}\left[\delta^{(l,k)}\right]_{i,j}$$
S503, gradient of the sub-sampling layer, where the sub-sampling layer is layer $l$ and layer $l+1$ is a convolutional layer. Since the sub-sampling layer performs a down-sampling operation, the error term of one neuron of layer $l+1$ corresponds to a region of the corresponding feature map of the convolutional layer (the previous layer).

The gradient of the objective function with respect to the filter weight $w^{(l,k)}$ of the $k$-th feature map of layer $l$ can thus be written as:

$$\frac{\partial J}{\partial w^{(l,k)}} = \sum_{i,j}\left[\delta^{(l,k)} \odot \mathrm{down}\!\left(X^{(l-1,k)}\right)\right]_{i,j}$$

where X denotes the input data, Y the labels, W the weights, b the bias, down denotes down-sampling, $\odot$ denotes element-wise (dot) product, and p denotes a channel.
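On the assumption that `down` is average pooling (the patent does not fix the pooling type), the bias and sub-sampling-layer weight gradients above reduce to simple sums, as in this toy NumPy check:

```python
import numpy as np

def avg_down(x, s=2):
    """s x s average down-sampling: one plausible reading of 'down'."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

rng = np.random.default_rng(0)
delta = rng.normal(size=(4, 4))      # error term of the k-th feature map of layer l
x_prev = rng.normal(size=(8, 8))     # activation of layer l-1, before down-sampling

grad_b = delta.sum()                          # dJ/db: sum over the error map
grad_w = (delta * avg_down(x_prev)).sum()     # dJ/dw: sum of delta ⊙ down(X)
```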
S6, repeating step S5 until the termination condition is met (the maximum number of iterations in the method is 2000), obtaining the model parameters of the FCN:
S7, predicting and classifying by using the trained network:

After the original test data of the polarized SAR image to be classified is sparse-scattering coded and normalized to [0.1, 0.9], it is input into the trained network to obtain jointly represented hidden-layer features, which are then fed into the trained classifier; the polarized SAR image to be classified is thus classified and the class of each pixel is obtained;
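The test-time normalization to [0.1, 0.9] can be sketched as a linear min-max rescaling; this is one plausible reading, since the patent does not spell out the normalization formula:

```python
import numpy as np

def normalize(x, lo=0.1, hi=0.9, eps=1e-12):
    """Linearly rescale features into [0.1, 0.9] (min-max normalization,
    an assumed form of the patent's normalization step)."""
    xmin, xmax = x.min(), x.max()
    return lo + (hi - lo) * (x - xmin) / (xmax - xmin + eps)

y = normalize(np.array([0.0, 1.0, 2.0]))
```

Keeping the targets away from 0 and 1 in this way is a common trick to speed up convergence of sigmoid-style networks.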
S8, outputting the image and calculating the classification accuracy.
S801, predicting the class of each classified pixel with the classifier, coloring the result with R (red), G (green) and B (blue) as the three primary colors according to a three-primary-color coloring scheme to obtain a colored polarized SAR image, and outputting it;

S802, comparing the pixel classes obtained for the polarized SAR image with the real ground-object classes, and taking the ratio of the number of correctly classified pixels to the total number of pixels as the classification accuracy of the polarized SAR image.
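Steps S801 and S802 can be sketched as follows; the palette values are arbitrary examples, and label 0 is assumed to mark unlabelled ground truth (both assumptions, not from the patent):

```python
import numpy as np

# hypothetical 4-class palette: index 0 (unlabelled) -> black, then R, G, B
palette = np.array([[0, 0, 0],
                    [255, 0, 0],
                    [0, 255, 0],
                    [0, 0, 255]], dtype=np.uint8)

def colorize(pred, palette):
    """Map per-pixel class labels to an RGB image (three-primary-color coloring)."""
    return palette[pred]

def overall_accuracy(pred, truth):
    """Ratio of correctly classified pixels to all labelled pixels;
    label 0 in the ground truth is assumed to mean 'unlabelled'."""
    mask = truth > 0
    return (pred[mask] == truth[mask]).mean()

pred = np.array([[1, 2], [3, 1]])
truth = np.array([[1, 2], [0, 2]])
```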
Examples
1. Experimental conditions and methods
The hardware platform is as follows: titan X16 GB, 64GB RAM;
the software platform is as follows: ubuntu16.04.2, TensorFlow;
the experimental method comprises the following steps: the method of the invention and the existing Wishart classifier respectively, and the method based on the features extracted by the cloud decomposition and the Freeman decomposition, and the classification by the convolutional neural network are utilized, wherein the existing two methods are both classical methods in the polarized SAR image classification.
2. Simulation content and results
In the simulation experiments, fig. 2 is a pseudo-color image generated from the polarized SAR data, in which the real terrain distribution can be seen. FIG. 3 is a manually labelled terrain distribution reference map used to train and to test the performance and effectiveness of the algorithms. According to fig. 3, 512 training samples per class are randomly selected, the remaining samples are used as the test set, and the per-class and overall classification accuracies are computed as evaluation indices.
The evaluation results are shown in table 1, where M1 is the Wishart classifier method, M2 is the method that extracts features based on Cloude decomposition and Freeman decomposition and then classifies with a convolutional neural network, and M3 is the method of the present invention.
Table 1 shows the classification accuracy and total classification accuracy obtained in simulation experiments by the present invention and two comparison methods
3. Analysis of experimental results
Fig. 4 shows the classification result of the comparison Wishart classifier method, fig. 5 the result of the method based on Cloude decomposition, Freeman decomposition and convolutional neural network classification, and fig. 6 the result of the invention; the statistics are given in table 1. It can clearly be seen that the invention (fig. 6) obtains better experimental results than the other two methods: its result regions are more uniform and contain less noise, every per-class accuracy exceeds that of the two comparison methods, and the overall classification accuracy is markedly improved. Although the classification result of the Wishart classifier method (fig. 4) has relatively smooth region edges, it suffers from severe misclassification and many stray points. The method based on Cloude decomposition, Freeman decomposition and a convolutional neural network (fig. 5) improves on this, but loses part of the detail information of the image.
In conclusion, the polarized SAR image terrain classification method based on the sparse scattering full convolution network can obviously keep the spatial structure information of the image, removes the classification noise and improves the classification result of the image.
Claims (5)
1. A polarized SAR image classification method based on sparse scattering complete convolution, characterized in that the original polarized SAR image data to be classified is input first; the data is then converted into a polarization scattering matrix; sparse scattering coding is then applied to the polarization scattering matrix; the matrix obtained by sparse scattering coding is input into a full convolution network, the network is initialized and trained to learn features from the original image data, and classification finally yields the classification result; the method comprises the following specific steps:
S1, inputting the original data of the polarized SAR image to be classified, and encoding it into a polarization scattering matrix S, specifically:

$$S = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix}$$

wherein a, b, c, d, e, f, g, h represent the channel values, $S_{HH}=a+bi$, $S_{HV}=c+di$, $S_{VH}=e+fi$, $S_{VV}=g+hi$, and i represents the imaginary unit;

S2, performing sparse scattering coding on the polarization scattering matrix to obtain the sparse scattering matrix, specifically by applying to each element $x+yi$ of S the encoding

$$f(x+yi) = \begin{bmatrix} \max(x,0) & \max(y,0) \\ \max(-x,0) & \max(-y,0) \end{bmatrix}$$

wherein a, b, c, d, e, f, g, h represent the channel values;
S3, randomly selecting training samples of each class according to the labelled information in the terrain distribution reference map of the polarized SAR image to obtain a training sample set, wherein the number of training samples sampled per class is 512;

S4, initializing the relevant parameters of the full convolution network;

S5, training the FCN after the selected training samples are divided into batches and normalized to [0.1, 0.9];

S6, repeating step S5 until the termination condition is met (the maximum number of iterations in the method is 2000), obtaining the model parameters of the FCN;

S7, predicting and classifying by using the trained network;

S8, outputting the image and calculating the classification accuracy.
3. The sparse scattering complete convolution-based polarimetric SAR image classification method as claimed in claim 1, wherein step S5 specifically comprises the following steps:
S501, in the training process, determining the gradient $\delta^{(l)}$ of the objective function with respect to the neurons $z^{(l)}$ of layer $l$ as:

$$\delta^{(l)} = \frac{\partial J(W,b;X,Y)}{\partial z^{(l)}}$$

wherein J represents the loss, W the weights, b the biases, X the input data, and Y the labels;

S502, taking the convolutional layer as layer $l$ and the sub-sampling layer as layer $l+1$, determining the gradient of the objective function with respect to the bias $b^{(l,k)}$ of the $k$-th feature map of layer $l$ as:

$$\frac{\partial J}{\partial b^{(l,k)}} = \sum_{i,j}\left[\delta^{(l,k)}\right]_{i,j}$$

S503, taking the sub-sampling layer as layer $l$ and layer $l+1$ as a convolutional layer, determining the gradient of the objective function with respect to the filter weight $w^{(l,k)}$ of the $k$-th feature map of layer $l$ as:

$$\frac{\partial J}{\partial w^{(l,k)}} = \sum_{i,j}\left[\delta^{(l,k)} \odot \mathrm{down}\!\left(X^{(l-1,k)}\right)\right]_{i,j}$$

wherein X denotes the input data, Y the labels, W the weights, b the bias, down denotes down-sampling, $\odot$ denotes element-wise (dot) product, and p denotes a channel.
4. The sparse scattering complete convolution-based polarimetric SAR image classification method as claimed in claim 1, wherein in step S7, after the original test data of the polarimetric SAR image to be classified is sparse-scattering coded and normalized to [0.1, 0.9], it is input into the trained network, which classifies the polarimetric SAR image to be classified and yields the class of each pixel.
5. The sparse scattering complete convolution-based polarimetric SAR image classification method according to claim 1, characterized in that step S8 specifically includes:
S801, predicting the class of each classified pixel with the classifier, coloring the result with red (R), green (G) and blue (B) as the three primary colors according to a three-primary-color coloring scheme to obtain a colored polarized SAR image, and outputting it;

S802, comparing the pixel classes obtained for the polarized SAR image with the real ground-object classes, and taking the ratio of the number of correctly classified pixels to the total number of pixels as the classification accuracy of the polarized SAR image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710786485.5A CN107491793B (en) | 2017-09-04 | 2017-09-04 | Polarized SAR image classification method based on sparse scattering complete convolution |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107491793A CN107491793A (en) | 2017-12-19 |
CN107491793B true CN107491793B (en) | 2020-05-01 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105913076A (en) * | 2016-04-07 | 2016-08-31 | 西安电子科技大学 | Polarimetric SAR image classification method based on depth direction wave network |
CN106096652A (en) * | 2016-06-12 | 2016-11-09 | 西安电子科技大学 | Based on sparse coding and the Classification of Polarimetric SAR Image method of small echo own coding device |
CN106934419A (en) * | 2017-03-09 | 2017-07-07 | 西安电子科技大学 | Classification of Polarimetric SAR Image method based on plural profile ripple convolutional neural networks |
Non-Patent Citations (2)
Title |
---|
Biao Hou et al., "Classification of Polarimetric SAR Images Using Multilayer Autoencoders and Superpixels," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 7, pp. 3072-3081, July 2016. *
Wang Xuesong et al., "Research Progress and Prospects of High-Performance Detection Imaging and Recognition," Scientia Sinica Informationis, vol. 46, no. 9, pp. 1211-1235, September 2016. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |