CN112836736B - Hyperspectral image semi-supervised classification method based on depth self-encoder composition - Google Patents
- Publication number: CN112836736B (application CN202110116366.5A)
- Authority: CN (China)
- Prior art date: 2021-01-28
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (G06F18/00 Pattern recognition; G06F18/20 Analysing; G06F18/24 Classification techniques)
- G06N3/08 — Learning methods (G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks)
Abstract
A hyperspectral image semi-supervised classification method based on depth self-encoder composition relates to the technical field of remote sensing image processing and addresses the poor classification performance of existing hyperspectral image classification methods. The technical points of the method comprise: constructing a sparse self-encoder to obtain the spectral domain characteristics of the hyperspectral image data; constructing a graph structure with a method based on a self-expression model; optimizing the graph structure with a variational graph self-encoder (VGAE); correcting the coefficient matrix of the optimized graph structure; and classifying with Gaussian random fields and harmonic functions (GRF). The invention fully considers the interrelations among the hyperspectral samples, exploits both the spectral domain information and the spatial domain information of the hyperspectral data, and achieves high classification accuracy under small-sample conditions.
Description
Technical Field
The invention relates to the technical field of remote sensing image processing, and in particular to a hyperspectral image semi-supervised classification method based on depth self-encoder composition.
Background
With the rapid development of remote sensing technology, hyperspectral imagery has been widely applied thanks to its abundant spectral domain and spatial domain information, and is characterized by the integration of image and spectrum. An important reason for classifying ground features with hyperspectral remote sensing is that different substances exhibit different spectral curves, so the rich spectral information greatly helps target identification and classification. Hyperspectral remote sensing images play an important role in agriculture, forestry, water body management, geological exploration, soil monitoring, urban ground object classification and other fields.
At present, representative supervised classification methods such as K-nearest neighbors, support vector machines and neural networks need a large amount of labeled sample data to train a model, and the lack of sufficient labeled samples in practical applications limits their use in hyperspectral image classification. In addition, semi-supervised classification methods such as self-training and co-training do not exploit the similarity relations among samples, which degrades the classification of hyperspectral images; moreover, in terms of feature extraction, common methods usually focus only on the features of a single sample and ignore the interconnections among sample features.
Disclosure of Invention
In view of the above problems, the invention provides a hyperspectral image semi-supervised classification method based on depth self-encoder composition, which is used for solving the technical problem of poor classification effect in the existing hyperspectral image classification method.
In order to achieve the above purpose, the specific steps of the invention comprise:
firstly, acquiring hyperspectral image training and testing data; the training data comprises class labels, and the test data does not comprise the class labels;
secondly, preprocessing the hyperspectral image training and testing data;
thirdly, constructing a sparse self-encoder for obtaining spectral domain characteristics of the hyperspectral image data;
step four, constructing a graph structure by using a method based on a self-expression model;
fifthly, optimizing the graph structure by using a variational graph self-encoder (VGAE) to obtain a reconstructed coefficient matrix;
step six, correcting the reconstructed coefficient matrix to obtain a corrected graph structure;
step seven, classifying the test data by utilizing a Gaussian random field and a harmonic function (GRF);
and step eight, using the classified test data and the initial training data as new training data, and iterating and circulating the step three to the step seven until all the test data are classified.
Further, the preprocessing in the second step comprises dimensionality reduction processing and normalization processing; and the dimension reduction processing is to convert the original three-dimensional hyperspectral data into two-dimensional hyperspectral data.
Further, the constructing the sparse self-encoder in step three comprises the following steps:
step three-one, the sparse self-encoder comprises an input layer, a hidden layer and an output layer; firstly, the number of nodes of each layer, the activation function and the sparsity condition are determined, and the weight w_1 and bias b_1 between the input layer and the hidden layer and the weight w_2 and bias b_2 between the hidden layer and the output layer are initialized;
step three-two, training the sparse self-encoder with the hyperspectral image training data to obtain weights and biases satisfying the sparsity condition, and reconstructing the hyperspectral image test data with the trained sparse self-encoder.
Further, the specific calculation process of step three-two includes:

Encoding process: z = f(w_1 x + b_1);

Decoding process: x̂ = g(w_2 z + b_2);

wherein x represents the original input; f represents the encoder function; z represents the hidden variable; g represents the reconstruction function; x̂ represents the encoder reconstruction output. The loss function consists of a reconstruction loss and a KL-divergence sparsity regularization:

J_sparse(w, b) = J(w, b) + β Σ_{j=1}^{s_2} KL(ρ ‖ ρ̂_j)

wherein J(w, b) represents the loss between the original input and the reconstruction, as a function of the weights w and biases b; β represents the weight controlling the sparsity penalty; s_2 represents the number of hidden nodes; ρ represents the set sparsity; ρ̂_j represents the average activation output of the j-th hidden node.
Further, the graph construction in step four specifically comprises: representing each reconstructed test sample as a linear combination of the reconstructed training samples; forming the nodes of the graph from the reconstructed training samples and reconstructed test samples, using the linear representation coefficients as the edge weights of the graph, and solving the self-expression-based model to obtain an optimal coefficient matrix.
Further, the self-expression-based model is specifically expressed by the following formula (1):

min_W (1/2)‖X − XW‖_F^2 + λ_1‖W‖_1, s.t. diag(W) = 0, W ≥ 0 (1)

wherein X represents the feature matrix; λ_1 is the parameter controlling the compromise between sparsity and reconstruction error; W = [W_1, W_2, …, W_{a+k}] is the coefficient matrix, and W_i is the expression coefficient of the i-th sample point X_(i); the constraint diag(W) = 0 prevents trivial self-representation.
Formula (1) is solved with the alternating direction method of multipliers (ADMM). Specifically, an augmented Lagrangian function is first constructed by introducing an auxiliary variable J (with the constraint W = J), expressed as the following formula (2):

L(W, J, Λ) = (1/2)‖X − XJ‖_F^2 + λ_1‖W‖_1 + ⟨Λ, W − J⟩ + (μ/2)‖W − J‖_F^2 (2)

wherein Λ represents the Lagrange multiplier and μ > 0 is a penalty parameter;

formula (2) is then minimized by alternately updating the coefficient matrix W, the auxiliary variable J and the Lagrange multiplier Λ with ADMM, thereby obtaining the optimal coefficient matrix.
Further, the specific steps of step five include:

step five-one, forming the node feature matrix from the reconstructed training samples and reconstructed test samples, and inputting the feature matrix and the coefficient matrix of the nodes into the variational graph self-encoder;

step five-two, obtaining a latent variable from the posterior probability, and reconstructing the coefficient matrix from the latent variable to obtain the reconstructed coefficient matrix.
Further, in step six, the edge weights of the graph structure are corrected by adding a penalty term, so that spatially adjacent samples have similar expression coefficients and the edge weights between samples of different classes are small, thereby correcting the reconstructed coefficient matrix; the corrected graph structure model is expressed as the following formula (3):

min_W (1/2)‖X − XW‖_F^2 + λ_1‖W‖_1 + λ_2 Σ_{i,j} C_ij ‖W_i − W_j‖_2^2, s.t. diag(W) = 0, W ≥ 0 (3)

wherein λ_2 is the parameter controlling the spatial-domain penalty term; C_ij is the spatial-domain connection flag of the i-th and j-th samples; and W_i, W_j are the expression coefficient vectors of the i-th and j-th samples.
Further, the specific steps of step seven include:

step seven-one, converting the corrected coefficient matrix into a symmetric matrix;

step seven-two, constructing a Laplacian matrix from the symmetric matrix;

step seven-three, dividing the Laplacian matrix into four block matrices according to the training data and the test data, thereby obtaining the label classification of the test data.
Further, the calculation process of obtaining the label classification of the test data in step seven-three includes: the Laplacian matrix is divided into four block matrices, represented as:

L_A = [ L_aa  L_ak ; L_ka  L_kk ]

and the label classification of the test data is obtained by solving the following formula (4):

F_k = −L_kk^{-1} L_ka Y_a (4)

wherein Y_a represents the label matrix of the a training samples.
The beneficial technical effects of the invention are as follows:
compared with the existing classification method, the invention adopts the composition method based on the self-expression model, fully considers the mutual connection among the samples, and has better effect on the attribution of the same type of samples and the definition of different samples; not only the sparse autoencoder is used for carrying out feature extraction on the samples, but also the variational image autoencoder is used for optimizing the image structure, so that the spectral domain information redundancy of the samples is reduced, and the redundancy of the incidence relation among the samples is also reduced; a Gaussian random field and a harmonic function are used as a classification method, and a continuous space is used for replacing a discrete space, so that the prediction mark extends from a discrete value to a real number domain. The invention fully considers the mutual connection among the hyperspectral data, considers the spectral domain information and the spatial domain information of the hyperspectral data and can lead the hyperspectral data to be classified to reach higher accuracy under the condition of small samples.
Drawings
The invention may be better understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like or similar parts throughout the figures. The accompanying drawings, which are incorporated in and form a part of this specification, illustrate preferred embodiments of the present invention and, together with the detailed description, serve to further illustrate the principles and advantages of the invention.
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional real image of a hyperspectral dataset IndianPines in the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described hereinafter with reference to the accompanying drawings. In the interest of clarity and conciseness, not all features of an actual implementation are described in the specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals. It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the device structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
As shown in fig. 1, a hyper-spectral image semi-supervised classification method based on depth self-encoder composition includes the following steps:
step 1, inputting the hyperspectral image data Indian Pines (see FIG. 2), dividing the hyperspectral data into a training set X_a and a test set X_b, and dividing the test set into t groups:

(1a) Reducing the dimension of the original three-dimensional hyperspectral data, converting it into a two-dimensional hyperspectral data set X_0 = [X_aa, X_bb] ∈ R^(220×21025), wherein the hyperspectral data Indian Pines comprises 220 wave bands and 21025 pixels in total, including 10249 labeled samples in 16 classes;

(1b) From the hyperspectral data set X_0, randomly drawing 10% of each category as the training sample set X_aa ∈ R^(220×1024), with the rest as the test sample set X_bb ∈ R^(220×9225), wherein aa = 1024 is the number of training samples, bb = 9225 is the number of test samples, and aa + bb = 10249;

(1c) Normalizing all samples X_0i by X_i = X_0i / ‖X_0i‖_2, obtaining the normalized training set X_a ∈ R^(220×1024) and test set X_b ∈ R^(220×9225);

(1d) Dividing the normalized test set X_b into t = 41 groups, each containing k = 225 test samples, satisfying t × k = 9225;
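The preprocessing of step 1 can be sketched as follows. This is a minimal Python illustration with synthetic data standing in for the Indian Pines cube; the array sizes, the label generation and the random seed are placeholders, not part of the invention:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the Indian Pines cube: 145 x 145 pixels, 220 bands.
cube = rng.random((145, 145, 220))            # (rows, cols, bands)
labels = rng.integers(0, 17, size=145 * 145)  # 0 = unlabeled background (placeholder)

# (1a) Flatten the 3-D cube into a 2-D matrix X0 with one column per pixel.
X0 = cube.reshape(-1, 220).T                  # shape (220, 21025)

# (1b) Stratified split: 10% of each labeled class goes into the training set.
train_idx, test_idx = [], []
for c in range(1, 17):
    idx = np.flatnonzero(labels == c)
    rng.shuffle(idx)
    n_train = max(1, int(round(0.1 * idx.size)))
    train_idx.extend(idx[:n_train])
    test_idx.extend(idx[n_train:])

# (1c) Normalize every sample (column) to unit l2 norm.
X = X0 / np.linalg.norm(X0, axis=0, keepdims=True)
Xa, Xb = X[:, train_idx], X[:, test_idx]
```

With the real Indian Pines data, pixels with `labels == 0` (background) are simply never drawn into either set, matching the 10249 labeled samples out of 21025 pixels.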
step 2, constructing a sparse self-encoder for obtaining the spectral domain characteristics of the hyperspectral data:
(2a) Determining the number of nodes in each layer, the activation function and the sparsity of the sparse self-encoder, and initializing the weight w^(1) and bias b^(1) between the input layer and the hidden layer and the weight w^(2) and bias b^(2) between the hidden layer and the output layer;

(2b) Training the sparse self-encoder with the training set X_a to obtain weights and biases satisfying the sparsity condition ρ, and using the trained sparse self-encoder to reconstruct each test sample X_b(i) as the reconstructed test sample X'_b(i):

Encoding process: z = f(w^(1) X_b(i) + b^(1))

Decoding process: X'_b(i) = g(w^(2) z + b^(2))

wherein w^(1), b^(1) are the weight and bias between the input layer and the hidden layer, and w^(2), b^(2) are the weight and bias between the hidden layer and the output layer.

The loss function consists of a reconstruction loss and a KL-divergence sparsity regularization:

J_sparse(w, b) = J(w, b) + β Σ_{j=1}^{s_2} KL(ρ ‖ ρ̂_j), with KL(ρ ‖ ρ̂_j) = ρ log(ρ/ρ̂_j) + (1 − ρ) log((1 − ρ)/(1 − ρ̂_j))

wherein J(w, b) represents the reconstruction loss between the original input X_b(i) and the reconstruction g(f(X_b(i))), as a function of the weights w and biases b; β controls the weight of the sparsity penalty; and ρ̂_j is the average activation of the j-th hidden node.
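The encoding, decoding and sparsity-regularized loss of step 2 can be sketched as follows. This is a NumPy illustration under the assumption of sigmoid activations; the layer sizes and the β and ρ values are placeholders:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sparse_ae_loss(x, w1, b1, w2, b2, rho=0.05, beta=3.0):
    """One forward pass of a sparse autoencoder and its loss.

    Loss = reconstruction error + beta * sum_j KL(rho || rho_hat_j),
    where rho_hat_j is the mean activation of hidden unit j over the batch.
    """
    z = sigmoid(w1 @ x + b1)          # encoding: z = f(w1 x + b1)
    x_hat = sigmoid(w2 @ z + b2)      # decoding: x_hat = g(w2 z + b2)
    recon = 0.5 * np.mean(np.sum((x_hat - x) ** 2, axis=0))
    rho_hat = np.clip(z.mean(axis=1), 1e-8, 1 - 1e-8)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl, x_hat

# Toy example: 220-band inputs, 64 hidden units, batch of 32 samples.
rng = np.random.default_rng(1)
x = rng.random((220, 32))
w1, b1 = 0.01 * rng.standard_normal((64, 220)), np.zeros((64, 1))
w2, b2 = 0.01 * rng.standard_normal((220, 64)), np.zeros((220, 1))
loss, x_hat = sparse_ae_loss(x, w1, b1, w2, b2)
```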
step 3, constructing the graph structure with the method based on the self-expression model:

(3a) Each reconstructed test sample X'_b(i) is expressed as a linear combination of all reconstructed training samples X'_a(i); since the training samples correspond one-to-one to class labels while the relation between the test samples and the class labels is unknown, the reconstructed training samples X'_a(i) are used to linearly represent the reconstructed test samples X'_b(i);

(3b) The nodes of the graph are formed by the reconstructed training samples X'_a(i) and reconstructed test samples X'_b(i), the linear representation coefficients serve as the edge weights of the graph, and the coefficient matrix W is obtained by solving the model;

(3b1) The first round of training has a = 1024 labeled samples X'_a = [x_1, x_2, …, x_a] ∈ R^(220×1024) and k = 225 unlabeled samples X'_b = [x_(a+1), x_(a+2), …, x_(a+k)] ∈ R^(220×225); each sample can be represented as a linear combination of all the other samples:

min_W (1/2)‖X' − X'W‖_F^2 + λ_1‖W‖_1, s.t. diag(W) = 0, W ≥ 0

wherein X' = [X'_a, X'_b]; λ_1 is the parameter controlling the compromise between sparsity and reconstruction error; W = [W_1, W_2, …, W_{a+k}] is the coefficient matrix; W_i is the expression coefficient of sample point X'_(i); and diag(W) = 0 prevents trivial self-representation;
(3b2) Solving the model with the alternating direction method of multipliers (ADMM):

constructing the augmented Lagrangian function with an auxiliary variable J (constraint W = J):

L(W, J, Λ) = (1/2)‖X' − X'J‖_F^2 + λ_1‖W‖_1 + ⟨Λ, W − J⟩ + (μ/2)‖W − J‖_F^2

where Λ is the Lagrange multiplier and μ > 0 is a penalty parameter.

The function L is minimized by alternately updating the variables W, J and Λ with ADMM, yielding the optimal coefficient matrix W.
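The alternating updates of step (3b2) can be sketched as follows. This is a minimal NumPy illustration of the standard ADMM treatment of an l1-regularized least-squares problem with an auxiliary variable; the closed-form J-update, the soft-thresholding W-update and the parameter values are an assumed concrete instantiation, not the literal patented procedure:

```python
import numpy as np

def self_expression_admm(X, lam1=0.1, mu=1.0, n_iter=100):
    """ADMM sketch for min_W 0.5*||X - XW||_F^2 + lam1*||W||_1,
    s.t. diag(W) = 0, W >= 0, using an auxiliary variable J (W = J)."""
    n = X.shape[1]
    W = np.zeros((n, n))
    Lam = np.zeros((n, n))
    XtX = X.T @ X
    inv = np.linalg.inv(XtX + mu * np.eye(n))
    for _ in range(n_iter):
        # J-update: quadratic subproblem solved in closed form.
        J = inv @ (XtX + mu * W + Lam)
        # W-update: nonnegative soft-thresholding with zero diagonal.
        V = J - Lam / mu
        W = np.maximum(V - lam1 / mu, 0.0)
        np.fill_diagonal(W, 0.0)
        # Dual ascent on the Lagrange multiplier.
        Lam = Lam + mu * (W - J)
    return W

rng = np.random.default_rng(2)
X = rng.random((220, 40))   # 40 samples with 220 spectral features each
W = self_expression_admm(X)
```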
step 4, optimizing the graph structure with the variational graph self-encoder (VGAE):

(4a) The reconstructed samples X'_a(i) and X'_b(i) form the node feature matrix X, and the feature matrix X and the coefficient matrix W of the nodes are input into the variational graph self-encoder;

(4b) First the latent variable Z is obtained from the posterior probability, and then the coefficient matrix is reconstructed from the latent variable, obtaining the reconstructed coefficient matrix W';
(4b1) In the encoding process, a Gaussian distribution is determined by a graph convolutional network (GCN), and the latent variable Z is sampled from it. The variational graph encoder is defined as:

q(Z | X, W) = Π_{i=1}^{N} q(z_i | X, W), with q(z_i | X, W) = N(z_i | μ_i, diag(σ_i^2))

The mean and variance of the Gaussian distribution are obtained through two-layer GCNs:

μ = GCN_μ(X, W)

log σ = GCN_σ(X, W)

The two GCNs share the first-layer parameters W_0 but have different second-layer parameters W_1. After the Gaussian distribution is determined, the latent variable Z is obtained by sampling from the distribution;

(4b2) In the decoding process, the coefficient matrix is reconstructed from the inner product of the latent variable Z:

W′ = σ(ZZ^T)

where σ(·) denotes the sigmoid function;

(4b3) To ensure that the reconstructed coefficient matrix W′ is as similar as possible to the original coefficient matrix W, and that the distribution computed by the GCN is as close as possible to the standard Gaussian prior, the loss function consists of an expectation term and a KL divergence:

L = E_{q(Z|X,W)}[log p(W|Z)] − KL[q(Z | X, W) ‖ p(Z)]
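The VGAE forward pass of step 4 can be sketched as follows: a shared first GCN layer, separate μ and log σ heads, reparameterized sampling, and the inner-product decoder. This is a minimal NumPy illustration; the layer widths, weight scales and the numerical clipping are placeholder assumptions:

```python
import numpy as np

def normalize_adj(W):
    """Symmetrically normalized adjacency: D^{-1/2} (W + I) D^{-1/2}."""
    A = W + np.eye(W.shape[0])
    d = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d[:, None] * d[None, :]

def vgae_forward(X, W, W0, Wmu, Wsig, rng):
    """One VGAE forward pass: shared first GCN layer, separate mu / log-sigma
    heads, reparameterized sample Z, inner-product decoder W' = sigmoid(Z Z^T)."""
    A = normalize_adj(W)
    H = np.maximum(A @ X @ W0, 0.0)        # shared first layer (ReLU)
    mu = A @ H @ Wmu                       # GCN_mu(X, W)
    log_sig = A @ H @ Wsig                 # GCN_sigma(X, W)
    Z = mu + np.exp(log_sig) * rng.standard_normal(mu.shape)
    S = np.clip(Z @ Z.T, -30.0, 30.0)      # clip for numerical stability
    W_rec = 1.0 / (1.0 + np.exp(-S))       # sigmoid(Z Z^T)
    return W_rec, mu, log_sig

rng = np.random.default_rng(3)
n, f, h, d = 30, 220, 32, 16               # nodes, features, hidden, latent dims
X = rng.random((n, f))                     # node feature matrix (one row per sample)
W = np.abs(rng.random((n, n)))             # coefficient matrix used as graph weights
W0 = 0.01 * rng.standard_normal((f, h))
Wmu = 0.01 * rng.standard_normal((h, d))
Wsig = 0.01 * rng.standard_normal((h, d))
W_rec, mu, log_sig = vgae_forward(X, W, W0, Wmu, Wsig, rng)
```

Note that the reconstructed matrix is symmetric by construction, since ZZ^T is symmetric.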
step 5, setting a penalty term to correct the edge weights of the graph structure, so that spatially adjacent samples have expression coefficients as similar as possible and the edge weights between samples of different classes are as small as possible, obtaining the corrected coefficient matrix W*:

The spatial neighborhood of pixel X_(i) is defined as all pixels within a rectangular window of side length p centered on X_(i), denoted N_p(X_(i)).

The spatial-domain penalty term is set as:

λ_2 Σ_{i,j} C_ij ‖w_i − w_j‖_2^2

wherein w_i, w_j represent the expression coefficient vectors of samples X_(i) and X_(j), and C is the spatial-domain connection matrix defined as:

C_ij = 1 if X_(j) ∈ N_p(X_(i)), and C_ij = 0 otherwise;

C_ij is the spatial-domain connection flag of the i-th and j-th samples: C_ij = 1 means the two samples are spatially adjacent and have a high probability of belonging to the same class; C_ij = 0 means the two samples are not spatially adjacent.

The corrected graph structure model is:

min_W (1/2)‖X′ − X′W‖_F^2 + λ_1‖W‖_1 + λ_2 Σ_{i,j} C_ij ‖w_i − w_j‖_2^2, s.t. diag(W) = 0, W ≥ 0

wherein λ_1 is the parameter controlling the compromise between sparsity and reconstruction error, and λ_2 is the parameter controlling the spatial-domain penalty term.
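The spatial-domain connection matrix C and the penalty term of step 5 can be sketched as follows. This is a minimal NumPy illustration; the pixel coordinates, the window size p and the λ_2 value are placeholders:

```python
import numpy as np

def spatial_connection(coords, p=3):
    """C[i, j] = 1 if pixel j lies in the p x p window centered at pixel i.

    coords: (n, 2) array of (row, col) pixel positions.
    """
    r = p // 2
    dr = np.abs(coords[:, 0:1] - coords[:, 0:1].T)
    dc = np.abs(coords[:, 1:2] - coords[:, 1:2].T)
    C = ((dr <= r) & (dc <= r)).astype(float)
    np.fill_diagonal(C, 0.0)   # a pixel is not its own neighbor
    return C

def spatial_penalty(W, C, lam2=0.1):
    """lam2 * sum_{ij} C_ij * ||w_i - w_j||_2^2 over the columns of W."""
    n = W.shape[1]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if C[i, j]:
                total += np.sum((W[:, i] - W[:, j]) ** 2)
    return lam2 * total

# Pixels (0,0) and (0,1) are adjacent; pixel (5,5) is isolated.
coords = np.array([[0, 0], [0, 1], [5, 5]])
C = spatial_connection(coords, p=3)
```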
step 6, classifying with Gaussian random fields and harmonic functions (GRF):

(6a) Converting the corrected coefficient matrix W* into a symmetric matrix A, to enable application of the Gaussian random field and harmonic function classification method;

(6b) Constructing the Laplacian matrix L_A = D − A, where D is the diagonal degree matrix with D_ii = Σ_j A_ij;

(6c) According to the number of labeled training samples a = 1024 and the number of unlabeled test samples k = 225, dividing L_A into four block matrices L_aa, L_ak, L_ka, L_kk:

L_A = [ L_aa  L_ak ; L_ka  L_kk ]

The label classification [y_(a+1), y_(a+2), …, y_(a+k)] of the test samples is obtained by solving:

F_k = −L_kk^{-1} L_ka Y_a
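The harmonic solution of step 6 can be sketched as follows. This is a minimal NumPy illustration on a four-node toy graph with the labeled samples ordered first; `np.linalg.solve` is used instead of an explicit matrix inverse:

```python
import numpy as np

def grf_classify(A, Y_a, n_labeled):
    """Harmonic solution F_k = -L_kk^{-1} L_ka Y_a for label propagation.

    A: symmetric (n x n) affinity matrix, labeled samples first.
    Y_a: (n_labeled x n_classes) one-hot label matrix.
    Returns the predicted class index for every unlabeled sample.
    """
    D = np.diag(A.sum(axis=1))
    L = D - A                              # graph Laplacian L = D - A
    L_kk = L[n_labeled:, n_labeled:]
    L_ka = L[n_labeled:, :n_labeled]
    F_k = -np.linalg.solve(L_kk, L_ka @ Y_a)
    return F_k.argmax(axis=1)

# Samples 0-1 are labeled; sample 2 connects to 0, sample 3 connects to 1.
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], float)
Y_a = np.array([[1, 0], [0, 1]], float)
pred = grf_classify(A, Y_a, n_labeled=2)   # sample 2 -> class 0, sample 3 -> class 1
```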
step 7, taking the k = 225 classified test samples as a new round of training samples, returning to step 2 together with the original a = 1024 training samples for a new round of training, and performing t = 41 rounds in total until all the test samples are classified.
The effectiveness of the invention is verified as follows.

The overall accuracy (OA), average accuracy (AA) and Kappa coefficient are used to characterize the classification performance. The method is compared with a method based on sparse representation graphs (SA) and a method based on locally linear reconstruction graphs (LLR) on the Indian Pines hyperspectral data set; the comparison of OA, AA and Kappa is shown in Table 1. As can be seen from Table 1, compared with the SA-based and LLR-based methods, the overall accuracy OA is improved by 20.48% and 22.32% respectively, the average accuracy AA by 24.3% and 25.94% respectively, and the Kappa coefficient by 19.63% and 21.51% respectively.
TABLE 1
The invention discloses a hyperspectral image semi-supervised classification method based on depth self-encoder composition, which mainly solves the problems that after hyperspectral data features are extracted, the relevance of data is reduced and the classification effect is influenced in the prior art. The method combines a deep learning theory, applies a self-encoder to a graph learning technology for the first time, fully considers the mutual connection among the hyperspectral data, gives consideration to the spectral domain information and the spatial domain information of the hyperspectral data, and can enable the hyperspectral data to be classified to achieve high accuracy under the condition of small samples.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.
Claims (4)
1. A hyperspectral image semi-supervised classification method based on depth self-encoder composition is characterized by comprising the following steps:
firstly, acquiring hyperspectral image training and testing data; the training data comprises class labels, and the test data does not comprise the class labels;
secondly, preprocessing the hyperspectral image training and testing data;
thirdly, constructing a sparse self-encoder for obtaining spectral domain characteristics of the hyperspectral image data; the construction of the sparse self-encoder comprises the following steps:
step three-one, the sparse self-encoder comprises an input layer, a hidden layer and an output layer; firstly, the number of nodes of each layer, the activation function and the sparsity condition are determined, and the weight w_1 and bias b_1 between the input layer and the hidden layer and the weight w_2 and bias b_2 between the hidden layer and the output layer are initialized;
step three-two, training the sparse self-encoder with the hyperspectral image training data to obtain weights and biases satisfying the sparsity condition, and reconstructing the hyperspectral image test data with the trained sparse self-encoder; the specific calculation process comprises:

encoding process: z = f(w_1 x + b_1);

decoding process: x̂ = g(w_2 z + b_2);

wherein x represents the original input; f represents the encoder function; z represents the hidden variable; g represents the reconstruction function; x̂ represents the encoder reconstruction output; the loss function consists of a reconstruction loss and a KL-divergence regularization:

J_sparse(w, b) = J(w, b) + β Σ_{j=1}^{s_2} KL(ρ ‖ ρ̂_j)

wherein J(w, b) represents the reconstruction loss between the original input and the reconstruction, as a function of the weights w and biases b; β represents the weight controlling the sparsity penalty; s_2 represents the number of hidden nodes; ρ represents the set sparsity; ρ̂_j represents the average activation output of the j-th hidden node;
fourthly, constructing a graph structure with a method based on a self-expression model, specifically comprising: representing the reconstructed test samples as linear combinations of the reconstructed training samples; forming the nodes of the graph from the reconstructed training samples and reconstructed test samples, using the linear representation coefficients as the edge weights of the graph, and solving the self-expression-based model to obtain an optimal coefficient matrix; the self-expression-based model is specifically expressed by the following formula (1):

min_W (1/2)‖X − XW‖_F^2 + λ_1‖W‖_1, s.t. diag(W) = 0, W ≥ 0 (1)

wherein X represents the feature matrix; λ_1 is the parameter controlling the compromise between sparsity and reconstruction error; W = [W_1, W_2, …, W_{a+k}] is the coefficient matrix, wherein each vector W_i is the expression coefficient of the corresponding sample point;
solving formula (1) with the alternating direction method of multipliers (ADMM) specifically comprises: first constructing an augmented Lagrangian function with an auxiliary variable J (constraint W = J), expressed as the following formula (2):

L(W, J, Λ) = (1/2)‖X − XJ‖_F^2 + λ_1‖W‖_1 + ⟨Λ, W − J⟩ + (μ/2)‖W − J‖_F^2 (2)

wherein Λ represents the Lagrange multiplier and μ > 0 is a penalty parameter;

then minimizing formula (2) by alternately updating the coefficient matrix W, the auxiliary variable J and the Lagrange multiplier Λ with ADMM, thereby obtaining the optimal coefficient matrix;
fifthly, optimizing the graph structure with the variational graph self-encoder VGAE to obtain a reconstructed coefficient matrix; the specific steps comprise:

step five-one, forming the node feature matrix from the reconstructed training samples and reconstructed test samples, and inputting the feature matrix and the coefficient matrix of the nodes into the variational graph self-encoder;

step five-two, obtaining a latent variable from the posterior probability, and reconstructing the coefficient matrix from the latent variable to obtain the reconstructed coefficient matrix;
step six, correcting the reconstructed coefficient matrix to obtain a corrected graph structure; the edge weights of the graph structure are corrected by adding a penalty term, so that spatially adjacent samples have similar expression coefficients and the edge weights between samples of different classes are small, thereby correcting the reconstructed coefficient matrix; the corrected graph structure model is expressed as the following formula (3):

min_W (1/2)‖X − XW‖_F^2 + λ_1‖W‖_1 + λ_2 Σ_{i,j} C_ij ‖W_i − W_j‖_2^2, s.t. diag(W) = 0, W ≥ 0 (3)

wherein λ_2 is the parameter controlling the spatial-domain penalty term; C_ij is the spatial-domain connection flag of the i-th and j-th samples; and W_i, W_j are the expression coefficient vectors of the i-th and j-th samples, respectively;
step seven, classifying the test data by utilizing a Gaussian random field and a harmonic function GRF;
and step eight, using the classified test data and the initial training data as new training data, and iterating and circulating the step three to the step seven until all the test data are classified.
2. The hyperspectral image semi-supervised classification method based on depth self-encoder composition according to claim 1, wherein the preprocessing in step two comprises dimensionality-reduction processing and normalization processing; the dimensionality-reduction processing converts the original three-dimensional hyperspectral data into two-dimensional hyperspectral data.
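A minimal sketch of the claim-2 preprocessing: the H x W x B hyperspectral cube is flattened to a 2-D (H*W) x B matrix and each band is normalised. The band-wise min-max choice is an assumption, since the claim does not fix the normalization type:

```python
import numpy as np

def preprocess(cube):
    """Flatten an H x W x B hyperspectral cube to a (H*W) x B matrix,
    then min-max normalise each band to [0, 1] (assumed normalisation)."""
    h, w, b = cube.shape
    X = cube.reshape(h * w, b).astype(float)
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn + 1e-12)   # epsilon guards constant bands

# toy 2 x 3 scene with 4 spectral bands
X2d = preprocess(np.random.default_rng(0).random((2, 3, 4)))
```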
3. The hyperspectral image semi-supervised classification method based on depth self-encoder composition according to claim 1, wherein the specific steps of step seven comprise:
step seven-1, converting the corrected coefficient matrix into a symmetric matrix;
step seven-2, constructing a Laplacian matrix from the symmetric matrix;
step seven-3, dividing the Laplacian matrix into four block matrices according to the training data and the test data, so as to obtain the label classification of the test data.
4. The hyperspectral image semi-supervised classification method based on depth self-encoder composition according to claim 3, wherein the calculation process of obtaining the label classification of the test data in step seven-3 comprises: dividing the Laplacian matrix into four block matrices, represented as:

L = [ L_aa  L_ak
      L_ka  L_kk ]

wherein the subscript a indexes the training samples and the subscript k indexes the test samples; the label classification of the test data is obtained by solving formula (4):

F_k = -L_kk^{-1} L_ka Y_a (4)

wherein Y_a represents the label matrix of the training samples.
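Formula (4) is the standard harmonic-function solution of a Gaussian random field; a sketch, assuming one-hot training labels Y_a and a training-samples-first ordering of the affinity matrix:

```python
import numpy as np

def grf_classify(W_sym, Y_a, n_train):
    """GRF harmonic-function step of formula (4): F_k = -L_kk^{-1} L_ka Y_a.

    W_sym: symmetric affinity matrix over all samples, training samples first;
    Y_a: one-hot label matrix of the first n_train samples (assumed layout).
    """
    L = np.diag(W_sym.sum(axis=1)) - W_sym      # graph Laplacian L = D - W
    L_kk = L[n_train:, n_train:]                # test-test block
    L_ka = L[n_train:, :n_train]                # test-training block
    F_k = -np.linalg.solve(L_kk, L_ka @ Y_a)    # solves L_kk F_k = -L_ka Y_a
    return F_k.argmax(axis=1)                   # predicted class per test sample

# toy graph: test node 2 tied to training node 0, test node 3 to training node 1
W = np.zeros((4, 4))
W[0, 2] = W[2, 0] = 1.0
W[1, 3] = W[3, 1] = 1.0
pred = grf_classify(W, np.eye(2), 2)            # -> classes [0, 1]
```

Using `np.linalg.solve` rather than explicitly inverting L_kk is the usual numerically stable way to apply the -L_kk^{-1} factor.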
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110116366.5A CN112836736B (en) | 2021-01-28 | 2021-01-28 | Hyperspectral image semi-supervised classification method based on depth self-encoder composition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112836736A CN112836736A (en) | 2021-05-25 |
CN112836736B true CN112836736B (en) | 2022-12-30 |
Family
ID=75932173
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110116366.5A Active CN112836736B (en) | 2021-01-28 | 2021-01-28 | Hyperspectral image semi-supervised classification method based on depth self-encoder composition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112836736B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113723492B (en) * | 2021-08-25 | 2024-05-24 | Harbin University of Science and Technology | Hyperspectral image semi-supervised classification method and device based on improved active deep learning
CN116935121B (en) * | 2023-07-20 | 2024-04-19 | Harbin University of Science and Technology | Dual-drive feature learning method for cross-region spectral image ground object classification
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102867171A (en) * | 2012-08-23 | 2013-01-09 | Shandong Normal University | Label propagation and neighborhood preserving embedding-based facial expression recognition method
CN107392940A (en) * | 2017-06-12 | 2017-11-24 | Xidian University | SAR image change detection method based on a stacked semi-supervised adaptive denoising autoencoder
CN107609579A (en) * | 2017-08-25 | 2018-01-19 | Xidian University | Radar target classification method based on a robust variational autoencoder
CN108460326A (en) * | 2018-01-10 | 2018-08-28 | Huazhong University of Science and Technology | Hyperspectral image semi-supervised classification method based on a sparse representation graph
CN112084328A (en) * | 2020-07-29 | 2020-12-15 | Zhejiang University of Technology | Scientific paper cluster analysis method based on a variational graph self-encoder and K-Means
Non-Patent Citations (2)
Title |
---|
Dai Xiaoai. Hyperspectral image classification based on stacked sparse autoencoders. Wanfang Data, 2016. *
Shao Yuanjie. Research on semi-supervised classification methods for hyperspectral images based on sparse representation graphs. China Doctoral Dissertations Full-text Database, 2019. *
Also Published As
Publication number | Publication date |
---|---|
CN112836736A (en) | 2021-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109214452B (en) | HRRP target identification method based on attention depth bidirectional cyclic neural network | |
CN106203523B (en) | Hyperspectral image classification method based on semi-supervised algorithm fusion with gradient-boosted decision trees | |
CN108108854B (en) | Urban road network link prediction method, system and storage medium | |
CN107145836B (en) | Hyperspectral image classification method based on stacked boundary identification self-encoder | |
CN112836736B (en) | Hyperspectral image semi-supervised classification method based on depth self-encoder composition | |
CN112765352A (en) | Graph convolution neural network text classification method based on self-attention mechanism | |
CN103914705B (en) | Hyperspectral image classification and band selection method based on multi-objective immune clonal optimization | |
CN107944483B (en) | Multispectral image classification method based on dual-channel DCGAN and feature fusion | |
CN112949416B (en) | Supervised hyperspectral multiscale graph volume integral classification method | |
CN111144214B (en) | Hyperspectral image unmixing method based on multilayer stack type automatic encoder | |
CN112464004A (en) | Multi-view depth generation image clustering method | |
CN108460400B (en) | Hyperspectral image classification method combining various characteristic information | |
CN113157957A (en) | Attribute graph document clustering method based on graph convolution neural network | |
CN111783879B (en) | Hierarchical compressed graph matching method and system based on orthogonal attention mechanism | |
CN111027630B (en) | Image classification method based on convolutional neural network | |
CN113947725B (en) | Hyperspectral image classification method based on convolution width migration network | |
CN104050680B (en) | Based on iteration self-organizing and the image partition method of multi-agent genetic clustering algorithm | |
CN114937173A (en) | Hyperspectral image rapid classification method based on dynamic graph convolution network | |
Zhang et al. | Superpixel-guided sparse unmixing for remotely sensed hyperspectral imagery | |
CN116206158A (en) | Scene image classification method and system based on double hypergraph neural network | |
CN115496950A (en) | Neighborhood information embedded semi-supervised discrimination dictionary pair learning image classification method | |
CN111325259A (en) | Remote sensing image classification method based on deep learning and binary coding | |
CN113392871B (en) | Polarized SAR (synthetic aperture radar) ground object classification method based on scattering mechanism multichannel expansion convolutional neural network | |
CN114003900A (en) | Network intrusion detection method, device and system for secondary system of transformer substation | |
CN112560949B (en) | Hyperspectral classification method based on multilevel statistical feature extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||